TCRP Bus Route Evaluation Standards Synthesis 10 A Synthesis of Transit Practice

TRANSIT COOPERATIVE RESEARCH PROGRAM

SPONSORED BY

The Federal Transit Administration

TCRP

Synthesis 10

Bus Route Evaluation Standards

A Synthesis of Transit Practice

Transportation Research Board

National Research Council

TCRP OVERSIGHT AND PROJECT

SELECTION COMMITTEE

CHAIRMAN

WILLIAM W. MILLAR

Port Authority of Allegheny County

MEMBERS

SHARON D. BANKS

AC Transit

LEE BARNES

Barwood, Inc

GERALD L. BLAIR

Indiana County Transit Authority

MICHAEL BOLTON

Capital Metro

SHIRLEY A. DELIBERO

New Jersey Transit Corporation

ROD DIRIDON

Santa Clara County Transit District

SANDRA DRAGGOO

CATA

LOUIS J. GAMBACCINI

SEPTA

DELON HAMPTON

Delon Hampton & Associates

RICHARD R. KELLY

Port Authority Trans-Hudson Corp

ALAN F. KIEPPER

New York City Transit Authority

EDWARD N. KRAVITZ

The Flxible Corporation

ROBERT G. LINGWOOD

BC Transit

MIKE MOBEY

Isabella County Transportation Comm

DON S. MONROE

Pierce Transit

PATRICIA S. NETTLESHIP

The Nettleship Group, Inc

ROBERT E. PAASWELL

The City College of New York

JAMES P. REICHERT

Reichert Management Services

LAWRENCE G. REUTER

WMATA

VICKIE SHAFFER

The Tri-State Transit Authority

B. R. STOKES

ATE Management & Service Co

MICHAEL S. TOWNES

Peninsula Transportation Dist Comm

FRANK J. WILSON

New Jersey DOT

EX OFFICIO MEMBERS

GORDON J. LINTON

FTA

JACK R. GILSTRAP

APTA

RODNEY E. SLATER

FHWA

FRANCIS B. FRANCOIS

AASHTO

ROBERT E. SKINNER, JR

TRB

TDC EXECUTIVE DIRECTOR

FRANK J. CIHAK

APTA

SECRETARY

ROBERT J. REILLY

TRB

TRANSPORTATION RESEARCH BOARD EXECUTIVE COMMITTEE 1995

OFFICERS

Chair: LILLIAN C. LIBURDI, Director, Port Department, The Port Authority of New York and New Jersey

Vice Chair: JAMES W. VAN LOBEN SELS, Director, California Department of Transportation

Executive Director: ROBERT E. SKINNER, JR., Transportation Research Board, National Research Council

MEMBERS

EDWARD H. ARNOLD, Chairman & President, Arnold Industries, Inc

SHARON D. BANKS, General Manager, Alameda-Contra Costa Transit District, Oakland, California

BRIAN J. L. BERRY, Lloyd Viel Berkner Regental Professor & Chair, Bruton Center for Development Studies, University of Texas at Dallas

DWIGHT M. BOWER, Director, Idaho Transportation Department

JOHN E. BREEN, The Nasser I Al-Rashid Chair in Civil Engineering, The University of Texas at Austin

WILLIAM F. BUNDY, Director, Rhode Island Department of Transportation

DAVID BURWELL, President, Rails-to-Trails Conservancy

A. RAY CHAMBERLAIN, Vice President, Freight Policy, American Trucking Associations, Inc. (Past Chair, 1993)

RAY W. CLOUGH, Nishkian Professor of Structural Engineering, Emeritus, University of California, Berkeley

JAMES C. DELONG, Director of Aviation, Denver International Airport

JAMES N. DENN, Commissioner, Minnesota Department of Transportation

DENNIS J. FITZGERALD, Executive Director, Capital District Transportation Authority

JAMES A. HAGEN, Chairman & CEO, CONRAIL

DELON HAMPTON, Chairman & CEO, Delon Hampton & Associates

LESTER A. HOEL, Hamilton Professor, University of Virginia, Department of Civil Engineering

DON C. KELLY, Secretary and Commissioner of Highways, Transportation Cabinet, Kentucky

ROBERT KOCHANOWSKI, Executive Director, Southwestern Pennsylvania Regional Planning Commission

JAMES L. LAMMIE, President & CEO, Parsons Brinckerhoff, Inc

CHARLES P. O'LEARY, JR, Commissioner, New Hampshire Department of Transportation

JUDE W. P. PATIN, Secretary, Louisiana Department of Transportation and Development

CRAIG E. PHILIP, President, Ingram Barge Company

DARREL RENSINK, Director, Iowa Department of Transportation

JOSEPH M. SUSSMAN, JR East Professor and Professor of Civil and Environmental Engineering, Massachusetts Institute of Technology

MARTIN WACHS, Director, Institute of Transportation Studies, Department of Urban Planning, University of California, Los Angeles

DAVID N. WORMLEY, Dean of Engineering, Pennsylvania State University

HOWARD YERUSALIM, Vice President, KCI Technologies, Inc.

EX OFFICIO MEMBERS

MIKE ACOTT, President, National Asphalt Pavement Association (ex officio)

ROY A. ALLEN, Vice President, Research and Test Department, Association of American Railroads (ex officio)

ANDREW H. CARD, JR, President & CEO, American Automobile Manufacturers Association (ex officio)

THOMAS J. DONOHUE, President and CEO, American Trucking Associations, Inc (ex officio)

FRANCIS B. FRANCOIS, Executive Director, American Association of State Highway and Transportation Officials (ex officio)

JACK R. GILSTRAP, Executive Vice President, American Public Transit Association (ex officio)

ALBERT J. HERBERGER, Maritime Administrator, U.S. Department of Transportation (ex officio)

DAVID R. HINSON, Federal Aviation Administrator, U.S. Department of Transportation (ex officio)

GORDON J. LINTON, Federal Transit Administrator, U.S. Department of Transportation (ex officio)

RICARDO MARTINEZ, Administrator, National Highway Traffic Safety Administration (ex officio)

JOLENE M. MOLITORIS, Federal Railroad Administrator, U.S. Department of Transportation (ex officio)

DAVE SHARMA, Administrator, Research & Special Programs Administration, U.S. Department of Transportation (ex officio)

RODNEY E. SLATER, Federal Highway Administrator, U.S. Department of Transportation (ex officio)

ARTHUR E. WILLIAMS, Chief of Engineers and Commander, U.S. Army Corps of Engineers (ex officio)

TRANSIT COOPERATIVE RESEARCH PROGRAM

Transportation Research Board Executive Committee Subcommittee for TCRP

LESTER A. HOEL, University of Virginia

LILLIAN C. LIBURDI, Port Authority of New York and New Jersey (Chair)

GORDON J. LINTON, U.S. Department of Transportation

WILLIAM W. MILLAR, Port Authority of Allegheny County

ROBERT E. SKINNER, JR., Transportation Research Board

JOSEPH M. SUSSMAN, Massachusetts Institute of Technology

JAMES W. VAN LOBEN SELS, California Department of Transportation

TRANSIT COOPERATIVE RESEARCH PROGRAM

Synthesis of Transit Practice 10

Bus Route Evaluation Standards

HOWARD P. BENN

Barton-Aschman Associates, Inc.

TOPIC PANEL

BERT ARRILLAGA, Federal Transit Administration

MARY KAY CHRISTOPHER, Chicago Transit Authority

DAVID R. FIALKOFF, Metro-Dade Transit Agency

STEPHEN T. PARRY, Los Angeles County Metropolitan Transportation Authority

MILLARD L. SEAY, Washington Metropolitan Area Transit Authority

PETER L. SHAW, Transportation Research Board

STEVEN SILKUNAS, Southeastern Pennsylvania Transportation Authority

TRANSPORTATION RESEARCH BOARD

NATIONAL RESEARCH COUNCIL

Research Sponsored by the Federal Transit Administration in

Cooperation with the Transit Development Corporation

NATIONAL ACADEMY PRESS

Washington, D.C. 1995

TRANSIT COOPERATIVE RESEARCH PROGRAM

The nation's growth and the need to meet mobility, environmental, and energy objectives place demands on public transit systems. Current systems, some of which are old and in need of upgrading, must expand service area, increase service frequency, and improve efficiency to serve these demands. Research is necessary to solve operating problems, to adapt appropriate new technologies from other industries, and to introduce innovations into the transit industry. The Transit Cooperative Research Program (TCRP) serves as one of the principal means by which the transit industry can develop innovative near-term solutions to meet demands placed on it.

The need for TCRP was originally identified in TRB Special Report 213--Research for Public Transit: New Directions, published in 1987 and based on a study sponsored by the Federal Transit Administration (FTA). A report by the American Public Transit Association (APTA), Transportation 2000, also recognized the need for local, problem-solving research. TCRP, modeled after the longstanding and successful National Cooperative Highway Research Program, undertakes research and other technical activities in response to the needs of transit service providers. The scope of TCRP includes a variety of transit research fields, including planning, service configuration, equipment, facilities, operations, human resources, maintenance, policy, and administrative practices.

TCRP was established under FTA sponsorship in July 1992. Proposed by the U.S. Department of Transportation, TCRP was authorized as part of the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA). On May 13, 1992, a memorandum agreement outlining TCRP operating procedures was executed by the three cooperating organizations: FTA, the National Academy of Sciences, acting through the Transportation Research Board (TRB), and the Transit Development Corporation, Inc. (TDC), a nonprofit educational and research organization established by APTA. TDC is responsible for forming the independent governing board, designated as the TCRP Oversight and Project Selection (TOPS) Committee.

Research problem statements for TCRP are solicited periodically but may be submitted to TRB by anyone at any time. It is the responsibility of the TOPS Committee to formulate the research program by identifying the highest priority projects. As part of the evaluation, the TOPS Committee defines funding levels and expected products.

Once selected, each project is assigned to an expert panel, appointed by the Transportation Research Board. The panels prepare project statements (requests for proposals), select contractors, and provide technical guidance and counsel throughout the life of the project. The process for developing research problem statements and selecting research agencies has been used by TRB in managing cooperative research programs since 1962. As in other TRB activities, TCRP project panels serve voluntarily without compensation.

Because research cannot have the desired impact if products fail to reach the intended audience, special emphasis is placed on disseminating TCRP results to the intended end-users of the research: transit agencies, service providers, and suppliers. TRB provides a series of research reports, syntheses of transit practice, and other supporting material developed by TCRP research. APTA will arrange for workshops, training aids, field visits, and other activities to ensure that results are implemented by urban and rural transit industry practitioners.

The TCRP provides a forum where transit agencies can cooperatively address common operational problems. TCRP results support and complement other ongoing transit research and training programs.

TCRP SYNTHESIS 10

Project SA-1

ISSN 1073-4880

ISBN 0-309-05855-4

Library of Congress Catalog Card No. 95-60883

Price $12.00

NOTICE

The project that is the subject of this report was a part of the Transit Cooperative Research Program conducted by the Transportation Research Board with the approval of the Governing Board of the National Research Council. Such approval reflects the Governing Board's judgment that the project concerned is appropriate with respect to both the purposes and resources of the National Research Council.

The members of the technical advisory panel selected to monitor this project and to review this report were chosen for recognized scholarly competence and with due consideration for the balance of disciplines appropriate to the project. The opinions and conclusions expressed or implied are those of the research agency that performed the research, and while they have been accepted as appropriate by the technical panel, they are not necessarily those of the Transportation Research Board, the Transit Development Corporation, the National Research Council, or the Federal Transit Administration of the U.S. Department of Transportation.

Each report is reviewed and accepted for publication by the technical panel according to procedures established and monitored by the Transportation Research Board Executive Committee and the Governing Board of the National Research Council.

Special Notice

The Transportation Research Board, the Transit Development Corporation, the National Research Council, and the Federal Transit Administration (sponsor of the Transit Cooperative Research Program) do not endorse products or manufacturers. Trade or manufacturers' names appear herein solely because they are considered essential to the clarity and completeness of the project report.

Published reports of the

TRANSIT COOPERATIVE RESEARCH PROGRAM

are available from:

Transportation Research Board

National Research Council

2101 Constitution Avenue, N.W., Washington, D.C. 20418

Printed in the United States of America

PREFACE

A vast storehouse of information exists on many subjects of concern to the transit industry. This information has resulted from research and from the successful application of solutions to problems by individuals or organizations. There is a continuing need to provide a systematic means for compiling this information and making it available to the entire transit community in a usable format. The Transit Cooperative Research Program includes a synthesis series designed to search for and synthesize useful knowledge from all available sources and to prepare documented reports on current practices in subject areas of concern to the transit industry.

This synthesis series reports on various practices, making specific recommendations where appropriate but without the detailed directions usually found in handbooks or design manuals.

Nonetheless, these documents can serve similar purposes, for each is a compendium of the best knowledge available on those measures found to be successful in resolving specific problems. The extent to which these reports are useful will be tempered by the user's knowledge and experience in the particular problem area.

FOREWORD

By Staff

Transportation Research Board

This synthesis will be of interest to transit agency general managers, as well as operations, scheduling, maintenance, and planning personnel. Information on bus route evaluation standards and criteria used by transit agencies in the United States and Canada is summarized. The synthesis provides updated information to the 1984 United States Department of Transportation (U.S. DOT) report entitled Bus Service Evaluation Methods: A Review; however, the results are not directly comparable as the respondents, questions asked, and analytical procedures differ in the 1994 synthesis. It does report what agencies do in the area of bus route, not system, evaluation standards, and how they undertake these efforts.

Administrators, practitioners, and researchers are continually faced with issues or problems on which there is much information, either in the form of reports or in terms of undocumented experience and practice. Unfortunately, this information often is scattered or not readily available in the literature, and, as a consequence, in seeking solutions, full information on what has been learned about an issue or problem is not assembled. Costly research findings may go unused, valuable experience may be overlooked, and full consideration may not be given to the available methods of solving or alleviating the issue or problem. In an effort to correct this situation, the Transit Cooperative Research Program (TCRP) Synthesis Project, carried out by the Transportation Research Board as the research agency, has the objective of reporting on common transit issues and problems and synthesizing available information.

The synthesis reports from this endeavor constitute a TCRP publication series in which various forms of relevant information are assembled into single, concise documents pertaining to a specific or closely related issue or problem.

This report of the Transportation Research Board provides transit agency staff with a compilation of current activity and data to identify some new standards that have come into play in recent years with regard to route design, schedule design, economics and productivity, service delivery, and passenger comfort and safety. The status of service standards today and changes since 1984, as well as the organization and effectiveness of bus service evaluation activities, are presented.

To develop this synthesis in a comprehensive manner and to ensure inclusion of significant knowledge, available information was assembled from numerous sources, including a number of public transportation agencies. A topic panel of experts in the subject area was established to guide the researchers in organizing and evaluating the collected data, and to review the final synthesis report.

This synthesis is an immediately useful document that records practices that were acceptable within the limitations of the knowledge available at the time of its preparation. As the processes of advancement continue, new knowledge can be expected to be added to that now at hand.

CONTENTS

1 SUMMARY

3 CHAPTER ONE INTRODUCTION

Study Background, 3

5 CHAPTER TWO SYNTHESIS OVERVIEW

Methodology, 5

Project Objectives and Industry Review, 6

The Classifications of Various Evaluation Standards, 6

9 CHAPTER THREE RESPONDENTS' USE OF SERVICE STANDARDS AND PERFORMANCE CRITERIA

Route Design Standards, 9

Schedule Design Standards, 13

Economic and Productivity Standards, 17

Service Delivery Standards, 19

Passenger Comfort and Safety Standards, 22

23 CHAPTER FOUR THE ADMINISTRATION OF BUS SERVICE EVALUATION ACTIVITIES

The Status of Service Standards and Changes in Status Since 1984, 23

The Organization of Bus Service Evaluation Activities, 23

The Effectiveness of Bus Service Evaluation Activities, 24

25 CHAPTER FIVE CONCLUSIONS AND RECOMMENDATIONS FOR FUTURE RESEARCH

Emerging Trends, 25

27 REFERENCES

27 BIBLIOGRAPHY

28 APPENDIX A 1984 SURVEY INSTRUMENT

29 APPENDIX B 1994 SURVEY INSTRUMENT

35 APPENDIX C FREQUENCY RESPONSE TO EACH QUESTION

41 APPENDIX D THE STATUS OF STANDARDS AT RESPONDING AGENCIES

44 APPENDIX E CROSS-TABULATIONS, BY SYSTEM SIZE, OF VARIOUS ASPECTS OF THE TRAFFIC CHECKING AND CLERKING FUNCTIONS

46 APPENDIX F CROSS-TABULATION OF VARIOUS OPERATING DATA COLLECTION PRACTICES BY SYSTEM SIZE

49 APPENDIX G DOCUMENTS PROVIDED BY AGENCIES PARTICIPATING IN THIS SYNTHESIS

51 APPENDIX H GLOSSARY OF TERMS

TCRP COMMITTEE FOR PROJECT J-7

CHAIR

JACK REILLY

Capital District Transportation Authority

MEMBERS

GERALD BLAIR

Indiana County Transit Authority

KENNETH J. DUEKER

Center for Urban Studies

ALAN J. GIBBS

National Transit Institute

AMIR N. HANNA

Transportation Research Board

HENRY HIDE

Cole Sherman & Associates, Ltd.

MAXINE MARSHALL

ATE/Ryder Management

FRANK T. MARTIN

Metro-Dade Transit Agency

PATRICIA V. McLAUGHLIN

Los Angeles County Metropolitan Transportation Authority

BEVERLY G. WARD

Center for Urban Transportation Research

TRB LIAISON

ROBERT SPICHER

Transportation Research Board

COOPERATIVE RESEARCH PROGRAMS STAFF

ROBERT J. REILLY, Director, Cooperative Research Programs

STEPHEN J. ANDRLE, Manager, TCRP

GWEN CHISHOLM SMITH, Project Manager, TCRP

TCRP SYNTHESIS STAFF

STEPHEN R. GODWIN, Director for Studies and Information Services

SALLY D. LIFF, Manager, Synthesis Studies

DONNA L. VLASAK, Senior Program Officer

LINDA S. MASON, Editor

REBECCA B. HEATON, Assistant Editor

ACKNOWLEDGMENTS

Howard P. Benn, Chief Transit Operations/Planning Officer, Barton-Aschman Associates, Inc., was responsible for collection of the data and preparation of the report.

Valuable assistance in the preparation of this synthesis was provided by the Topic Panel, consisting of Bert Arrillaga, Director, Service Assistance Unit, Federal Transit Administration; Mary Kay Christopher, General Manager, Market Research, Chicago Transit Authority; David R. Fialkoff, Chief, Service Planning and Scheduling, Metro-Dade Transit Agency; Stephen T. Parry, Director of Scheduling and Operations Planning, Los Angeles County Metropolitan Transportation Authority; Millard "Butch" L. Seay, Director, Office of Planning, Washington Metropolitan Area Transit Authority; Peter L. Shaw, Public Transportation Specialist, Transportation Research Board; and Steven Silkunas, Director, Technical Services and Research, Southeastern Pennsylvania Transportation Authority.

The Principal Investigators responsible for the conduct of the synthesis were Sally D. Liff, Manager, Synthesis Studies, and Donna L. Vlasak, Senior Program Officer. This synthesis was edited by Linda S. Mason, assisted by Rebecca B. Heaton.

Valuable assistance to the Topic Panel and the synthesis staff was provided by Christopher W. Jenks and Gwen Chisholm Smith, Senior Program Officers, Transit Cooperative Research Program, Transportation Research Board.

Information on current practice was provided by many transit agencies. Their cooperation and assistance were most helpful.

SUMMARY

BUS ROUTE EVALUATION STANDARDS

Bus route evaluation standards are composed of criteria that measure the quality and quantity of service offered by a public transit system's bus routes, either individually or grouped together. This synthesis compiles current activity and assesses the state of the art of evaluating individual bus routes.

In 1984, the U.S. Department of Transportation (USDOT) published Bus Service Evaluation Methods: A Review (1). This synthesis revisits the topic and provides supplemental material for use by the transit industry in the area of route-level service delivery.

The need for bus route evaluation standards is universal. Even in the most organized and well-run bus operations, services and routes exist that are seriously out of conformance with the rest of the system. While there are sound public policy reasons to maintain selected services in such environments, they often prove to be a drain on system assets. Having standards for bus route evaluation provides an objective basis for the requisite decisions for sustained operation.

A survey of transit agencies in North America indicates that as many as 44 different evaluation criteria are currently used in the transit industry. These criteria cover activities related to bus route design and operation, ranging from location of bus stops to the hours of service. Based on results from the synthesis survey, more transit agencies are formally using standards in the evaluation of bus routes, particularly larger systems with over 500 buses. However, many agencies use standards or guidelines of some type that are not formally adopted to guide service delivery.

While the general direction of bus route evaluation standards has not changed since the 1984 study, the development of standards has evolved gradually, and will probably continue to evolve, especially as route-level data become more readily available. It is also expected that with new technology and more sophisticated reporting capabilities, these data will be collected more cost effectively and will be employed more often.

Although the development of bus route evaluation standards has evolved through modest changes since 1984, there have been new trends emerging in how transit systems validate the use of particular standards and their criteria within their agencies. Historically, transit systems confined their reviews to peer analysis, i.e., comparisons with other transit systems of similar size. A growing number of transit agencies now report comparisons with other transit operations of various sizes in their region, as well as other customer service industries in which minimizing waiting time is a goal. The Americans with Disabilities Act (ADA) and the Clean Air Act Amendments of 1990 (CAAA) have also had an impact on the application of route-level standards, but in a different manner.


CHAPTER ONE

INTRODUCTION

To varying degrees, transit agency service standards have been in place since the first horse-car line was operated. Decisions had to be made as to where the line would go, how frequently it would run, and what hours it would run. But such decisions were not made without some basis, and considerable private capital was involved.

As transit services developed into ongoing businesses, the need to maximize profitability required a system of standards that permitted the assessment of an operator's individual lines.

Even today, with virtually every transit system under public ownership, the need continues for a system of standards. In the most organized and well-run agencies, services and routes exist that are seriously out of conformance with the rest of the system. While there are sound public policy reasons to maintain selected services in such environments, they often prove to be a drain on system assets requiring inordinate amounts of attention, finances, and scarce resources. Establishing standards for bus route evaluation provides an objective basis for making the requisite decisions for sustained operation.

Bus route evaluation standards comprise several criteria that measure the quality and quantity of service offered by a public transit system's bus routes. The standards include a number of items that determine, as well as reflect, the manner in which transit systems offer service to the public, and are often directly related to the costs of service provision.

For this synthesis, industry practice in late 1993 to early 1994 was examined largely by means of a multiple-choice questionnaire sent to transit operators throughout North America. A previous study, Bus Service Evaluation Methods: A Review (1), was prepared by the Metropolitan Transit Authority of Harris County (Houston METRO) and published in 1984 by the U.S. Department of Transportation (USDOT), based on data obtained in 1981. While this synthesis provides updated information to the 1984 report, the results are not directly comparable as the agencies that responded were not the same, the questions differed in presentation and content, and the analytical procedures differed. The resources available for this synthesis were less than were available for the previous research study; thus this synthesis has a narrower scope, and extensive follow-up interviews to the questionnaire were not conducted. It is not the purpose of this effort to evaluate various transit systems, but to report what agencies do in the area of bus route, not system, evaluation standards and how they undertake these efforts.

Because certain standards that were considered applicable in 1984 no longer apply, or are at least not as broad in today's changed environment, revised and completely new standards have come into play in recent years, particularly in the areas of service delivery and performance monitoring. It is the intent of this synthesis to present a compilation of current activity and current data from transit properties in North America, both as an update of the 1984 study and in light of recent developments.

STUDY BACKGROUND

Except for the mid-1970s, most of the U.S. transit industry has experienced intense fiscal pressure for decades. Service equity issues between adjoining communities (e.g., urban versus suburban, developed versus undeveloped, minority versus non-minority) have caused tensions to rise. In response to these strains, many agencies began to structure their examinations of bus service so that the resources allocated to individual routes were evaluated with measures, often formal, that attempted to rationally gauge the route's efficiency and effectiveness.
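The kind of route-level screening described above can be sketched in a few lines. The following is a minimal, hypothetical illustration, not drawn from the synthesis itself: the threshold values, route data, and function names are invented, and the two measures used (passengers per revenue hour and net subsidy per passenger) are merely examples of common economic and productivity criteria.

```python
# Hypothetical route-screening sketch. Thresholds and data are invented
# for illustration only; actual standards vary by agency.

def screen_routes(routes, min_pax_per_hour=25.0, max_subsidy_per_pax=2.50):
    """Flag routes that fail either productivity standard."""
    flagged = []
    for r in routes:
        pax_per_hour = r["passengers"] / r["revenue_hours"]
        subsidy_per_pax = (r["cost"] - r["fare_revenue"]) / r["passengers"]
        if pax_per_hour < min_pax_per_hour or subsidy_per_pax > max_subsidy_per_pax:
            flagged.append((r["route"], round(pax_per_hour, 1), round(subsidy_per_pax, 2)))
    return flagged

routes = [
    {"route": "12", "passengers": 41000, "revenue_hours": 1300,
     "cost": 98000, "fare_revenue": 41000},
    {"route": "47", "passengers": 9000, "revenue_hours": 600,
     "cost": 52000, "fare_revenue": 9000},
]

# Route 47 falls below 25 passengers per revenue hour, so it is flagged
# for further review; route 12 passes both standards.
print(screen_routes(routes))
```

In practice, a flagged route would trigger review rather than automatic discontinuation, since, as the text notes, sound public policy reasons may justify retaining a low-performing service.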

The previous effort was part of an implementation program as well as a study and review of then-current bus service evaluation practices within the U.S. and Canadian transit industries. The effort therefore included other key project steps--developing a program of service standards and planning guidelines, and implementation of a line ridership data collection and analysis system. In the 1994 questionnaire, a large number of questions dealt with these areas, that is, not only how service is evaluated, but how data are collected and what the status of service standards is (e.g., formal versus informal, board adopted versus staff adopted). Thus, despite the fact that the context of this synthesis differs from the 1984 study, the results and project conclusions of both are largely similar because of the survey tools used.

Consistent with the similarity in study results is the fact that the general direction of bus route evaluation standards has not materially changed in the intervening 10 years. Standards have evolved gradually, typically becoming more discrete, and are expected to evolve in a similar manner in the foreseeable future. Especially as route level data become more readily available, particularly service reliability information, this trend is expected to continue. It is also expected that with new technology and more sophisticated reporting capabilities, these data will be collected more regularly and cost effectively, thereby being employed more frequently by the industry.

However, although the development of bus route evaluation standards has only gone through evolutionary changes since 1984, new trends have emerged in how transit systems validate the use of particular standards within their agencies. Historically, transit systems confined their reviews and comparisons to peer analysis, i.e., other transit systems of similar size. A growing number of systems, however, now report comparisons done with other transit operations of various sizes in their region, as well as benchmarking themselves against other businesses. The Americans with Disabilities Act (ADA) and the Clean Air Act Amendments of 1990 (CAAA) have also had an impact on the application of route-level standards, but in a manner somewhat different from traditional route evaluation.

Some systems have begun looking at themselves much the way their passengers do, comparing their service with that of another transportation business (typically an airline, taxi, or commuter railroad), a service business (such as a bank), or, in an increasing number of cities, other transit operators that can also serve their passengers. In such circumstances, passengers make their own comparisons between the public bus operators, effectively conducting their own comparative bus route evaluation. Rather than relying solely on the traditional peer review based on size, these operators therefore find themselves also assessing the bus route evaluation standards used by other operators in their region.

Similarly, some operators now measure themselves against evaluation standards used in other industries, typically those responsible for transporting customers. Other than noting that this phenomenon was mentioned during the course of conducting this synthesis, further research in this area is not included in this synthesis.

In light of the ADA complementary paratransit requirement, a number of transit systems are evaluating fixed-route performance to ascertain each route's suitability to remain a fixed-route service.

Given this ADA requirement, there are circumstances foreseen under which, based on the route's performance as measured by the evaluation standards selected by the transit system, it may be more effective to provide service in the corridor as a paratransit service available to the general public. It is anticipated that the general evaluation standards seen in 1984 and 1994 will evolve to include measures in this regard (paratransit suitability/substitution) as well.

It is not anticipated that the 1990 CAAA and the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) will have any direct impact on route-level evaluation and analyses because both of these acts are geared to system, rather than route-level, matters. However, because route-level evaluation standards are often used for routes that operate together in a common corridor (and the route-level evaluation standards are then used as corridor-level evaluation standards), it is both logical and likely that measures will be developed by transit systems to ascertain at the route/corridor level the interrelationship between the route and the 1990 CAAA.

This will be especially true for those systems with routes that operate in areas with air quality problems.

This synthesis consists of five chapters, starting with this brief introduction. Chapter 2 describes the basic objectives of the industry review, the methodology employed in the review, and the various classification schemes that were employed. Chapter 3 describes the performance criteria that are most often used to monitor and evaluate bus service performance by the agencies that responded to the questionnaire. Chapter 4 contains several discussions that relate to the general organization of service evaluation activities, the way data are collected, and the status of formal service standards and guidelines. Conclusions and recommendations for future research are provided in Chapter 5.


CHAPTER TWO

SYNTHESIS OVERVIEW

The approach used for this synthesis is described in this chapter. The methodology employed to contact transit systems is presented first, followed by a discussion of the objectives of the industry review; the chapter concludes with a description of the classification scheme used to categorize the review responses.

METHODOLOGY

An extensive multiple-choice questionnaire was prepared to obtain information for this synthesis on transit agencies' current activities and data with regard to bus route evaluation standards. This survey was developed based on a review of the 1984 study, which requested demographic and statistical information, narrative statements about the evaluation process and performance criteria, and transit agencies' perceptions of the impact of an evaluation process on service delivery, equity, ridership, and operating costs; in that study, follow-up telephone interviews were conducted by Houston METRO staff to obtain more detail. The 1994 study, in contrast, did not involve follow-up interviews, except for matters of clarification. Copies of the 1984 and 1994 surveys are provided in Appendices A and B, respectively. It is important to again note that although the contexts of the two studies were different, the survey instruments were similar in the types of information they attempted to extract, and their results and conclusions are largely similar.

Questionnaires were mailed to a list of 297 transit agencies provided by the American Public Transit Association (APTA). This distribution was somewhat smaller than the prior study's (345) because 1) not all transit systems belong to APTA, and 2) the prior study also included some non-conventional fixed-route operators, e.g., the University of Massachusetts campus system. A total of 111 "usable" replies were received, representing about 37 percent of the 297 agencies originally contacted. (The term "usable" refers to the fact that respondents were required to fill out optical scan sheets by darkening circles, and several returned answer sheets could not be properly scanned.) In 1984, 31 percent (109 of 345) of the transit agencies surveyed responded.

The representativeness of the systems participating in this synthesis is shown in Tables 1 and 2. Table 1 presents the distribution of responses by system size, and Table 2 shows the distribution by FTA region (see Figure 1 for a breakdown of FTA regions). The frequency of response to each question is provided in Appendix C. As shown in Table 1, the study group had a higher percentage of large systems (over 500 buses) than does the industry as a whole: while less than 8 percent of the industry's properties are considered large, 17 percent of the study respondents fit into that category.

TABLE 2

DISTRIBUTION OF RESPONSES BY FTA REGION

Region    Number of Respondents    Percent
I                   6                 5.4
II                  7                 6.3
III                10                 9.0
IV                 17                15.3
V                  14                12.7
VI                 12                10.8
VII                 4                 3.6
VIII                2                 1.8
IX                 21                18.9
X                   7                 6.3
Other              11                 9.9
Total             111               100.0

"Other" contains the six Canadian respondents and the five U.S. properties that were either rural and did not have a Section 15 ID number, or were "subsidiary" operations that shared their Section 15 ID with a larger property.

TABLE 1

DISTRIBUTION OF RESPONSES BY SYSTEM SIZE

System size          Number of
(in buses)           Respondents    Percent
Under 50                 42           37.8
51 to 200                34           30.6
201 to 500               16           14.4
501 to 1,000              9            8.1
Over 1,000               10            9.0
Total                   111          100.0

The system size definitions differed between the two surveys: the first study had three groupings (under 100 buses; 100 to 399; and 400 or more), while the current study had the five groupings in the above table.


FIGURE 1 FTA regional boundaries.

However, this over-representation is not considered a problem, because the sample sizes are large enough to permit, as appropriate, analyses within comparable system size groups.

PROJECT OBJECTIVES AND INDUSTRY REVIEW

In addition to a review of bus route evaluation practices, this project had a number of informational objectives concerning transit systems' uses of different performance criteria, the development of those criteria, the collection of data, and the application of service standards. (Service standards are also sometimes referred to as service guidelines; for the purposes of this synthesis, the terms are interchangeable.) An outline of the specific objectives follows.

What performance criteria are examined in the course of monitoring or evaluating bus routes?

What service standards are currently used?

What is the official status of any service standards that are used to evaluate bus services?

What data are collected and used to evaluate the various performance criteria that are used? How are they collected?

What are the data collection and review/report cycles?

What specific departments are responsible for collecting and analyzing the evaluation data collected?

How does service performance compare with how service was planned? Which variables, internal or external, influence it?

THE CLASSIFICATIONS OF VARIOUS EVALUATION STANDARDS

There are numerous performance and service criteria used in the bus route evaluation process. In and of themselves, these criteria initially serve as indicators that gauge the quality and quantity of service offered by a public transit system's bus routes. They also include a number of items that determine, as well as reflect, the manner in which transit systems offer service to the public, and are often directly related to the costs of service provision. Survey respondents were asked to report which ones they used and how they were used.

The questionnaire divided the bus route evaluation standards into five broad categories:

Route design (included bus stop siting and spacing)

Schedule design

Economics and productivity

Service delivery monitoring

Passenger comfort and safety.

Service standards taken collectively, encompassing the whole range of criteria, were also examined in the context of their formality, i.e., whether they were formally adopted and, if so, how. In this synthesis, the performance criteria reported by survey respondents are listed under one of the above categories, and are reported similarly in the tabular summaries.

Route Design Standards

Under the route design category, 15 criteria are used in designing or redesigning a routing. These criteria, which help determine and establish the buses' pathway, are as follows:

Population density

Employment density

Spacing between other bus routes and corridors

Limitations on the number of deviations or branches

Equal (geographic) coverage throughout the local tax base

System design considerations such as enhancement of timed transfers

Streamlining/reduction of routing duplications

Network connectivity

Service equity

Route directness

Proximity to residences

Proximity to non-residential generators

Limitation on the number of transfers required of riders

Bus stop siting requirements

Bus stop spacing requirements.

Each of these relates to the basic structure and design of a transit system's route network. Factors such as the location of transit services, the structure and configuration of transit routes, and patron accessibility to transit services are measured by these criteria.

Schedule Design Standards

The criteria under schedule design are used in designing or redesigning a route's frequency, and help determine and establish the scheduled interval between buses as well as the starting and ending time of service on a given day (span of service). These criteria include the following:

Differing levels of service, i.e., local service versus express service

Differing character of service, e.g., crosstown versus feeder

Maximum number of standees

Maximum intervals

Peak periods versus off-peak periods

Minimum intervals

Standees versus no standees

Duration of standee time

Timed meets, or time to be spent waiting at a transfer

Use of clock-face schedules

Span of service.


These criteria relate to the basic frequency and the hours and days in which a route will run. To many riders, they are the items that largely determine and reflect service quality. In the 1984 study, the schedule factors discussed were grouped along with others under the heading "Service Quality Criteria."

Economic and Productivity Standards

The first two categories of standards, route design and schedule design, dealt with criteria that lead to the design or redesign of a service. Economic and productivity standards include criteria that measure the performance of an already existing service. The criteria are as follows:

Passengers per hour

Cost per passenger

Passengers per mile

Passengers per trip

Passenger miles

Revenue per passenger per route (either in absolute dollars or as a percentage of variable cost)

Subsidy per passenger

Route level minimum variable cost recovery ratio

Route level minimum costs that also include semivariable and/or fully allocated/fixed costs

Route level performance relative to other routes in the system.

While each performance criterion is used to monitor or evaluate the financial and ridership performance of individual bus routes, the first nine can be viewed in two ways: 1) as absolute measures against some predetermined number(s)/standard(s), or 2) as relative measures, where the subject route's performance is measured against the performance of other routes. The last criterion, "route level performance relative to other routes in the system," by definition can only be viewed as a relative measure. Note that there is no criterion measuring performance simply "per route." Because routes vary widely--some 0.5 mi (0.805 km) long (e.g., a parking lot shuttle), others as long as 20 mi (32.2 km)--there is no way to meaningfully evaluate a route simply as a "route." A standard unit of measurement against which to apply the criteria is required: hence the use of passengers, miles, and hours.
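To illustrate how these absolute measures might be computed, here is a minimal sketch using entirely hypothetical route data; the figures and the 25 percent recovery floor are assumptions for illustration, not values reported by any survey respondent.

```python
# Hypothetical per-route data, illustrative only.
routes = {
    "Route 1": {"passengers": 2400, "rev_hours": 120, "rev_miles": 1500,
                "revenue": 1800.0, "variable_cost": 6000.0},
    "Route 2": {"passengers":  300, "rev_hours":  40, "rev_miles":  700,
                "revenue":  240.0, "variable_cost": 2400.0},
}

MIN_RECOVERY = 0.25  # assumed route-level minimum variable cost recovery ratio

def route_metrics(r):
    """Compute the absolute economic/productivity measures for one route."""
    return {
        "pax_per_hour":    r["passengers"] / r["rev_hours"],
        "pax_per_mile":    r["passengers"] / r["rev_miles"],
        "cost_per_pax":    r["variable_cost"] / r["passengers"],
        "subsidy_per_pax": (r["variable_cost"] - r["revenue"]) / r["passengers"],
        "recovery_ratio":  r["revenue"] / r["variable_cost"],
    }

for name, r in routes.items():
    m = route_metrics(r)
    flag = "OK" if m["recovery_ratio"] >= MIN_RECOVERY else "review"
    print(f"{name}: {m['pax_per_hour']:.1f} pax/hr, "
          f"${m['cost_per_pax']:.2f}/pax, recovery {m['recovery_ratio']:.0%} ({flag})")
```

Ranking routes on any one of these metrics turns the same numbers into the relative measure described above: a route's standing against its peers rather than against a fixed floor.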

Service Delivery Standards

The criteria for this category of standards measure service reliability. Service delivery criteria include the following: on-time performance and headway adherence (evenness of interval). These criteria measure a route's service as actually delivered to a passenger.

As will be reported later in this synthesis, some transit systems use these criteria at a system, not bus route, level.
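As a sketch of how these two criteria could be quantified, consider the following; the on-time window and the observations below are assumptions for illustration, as agencies define their own windows.

```python
# Illustrative sketch of the two service delivery measures, using made-up
# observations: on-time performance (here, 0 to 5 minutes late counts as
# on time -- the window is an assumption) and headway adherence measured
# as the coefficient of variation of observed intervals.

from statistics import mean, pstdev

def on_time_pct(deviations_min, early=0.0, late=5.0):
    """Percent of observed departures within the on-time window.
    Positive deviations are minutes late; negative are minutes early."""
    on_time = [d for d in deviations_min if -early <= d <= late]
    return 100.0 * len(on_time) / len(deviations_min)

def headway_cv(observed_headways_min):
    """Coefficient of variation of headways: lower means more even service."""
    return pstdev(observed_headways_min) / mean(observed_headways_min)

deviations = [0.5, 2.0, 6.5, -1.0, 4.0]   # minutes late (negative = early)
headways   = [8, 12, 10, 9, 11]           # observed gaps; schedule calls for 10

print(f"on-time: {on_time_pct(deviations):.0f}%")   # 3 of 5 fall in the window
print(f"headway CV: {headway_cv(headways):.2f}")
```

On a frequent route, riders experience headway evenness more directly than schedule adherence, which is why the two measures are tracked separately.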

Passenger Comfort and Safety Standards

The following criteria measure the ambiance that greets a rider using the bus system:


Passenger complaints

Missed trips

Unscheduled extra buses or trips

Accidents

Passenger environment conditions (e.g., vehicle cleanliness, vehicle condition, missing stanchions, blank destination signs)

Special information (or special intervals) in areas where riders do not feel secure waiting for buses.

In addition to the above classification scheme, which categorizes the various criteria based on their foci, a second classification system was used to sort the agencies based on the adoptive status of evaluation practices at each system, i.e., whether the standards were formally adopted by the board, or by the executive director, general manager, or staff. Given the nature of the inquiry, the level of detail reported, and the primary method of reporting (response to the questionnaire), this part of the report is presented in tabular fashion. Appendix D contains a listing of the responding agencies and the status of standards at each agency. Table 3 is a cross-tabulation of the use of formal standards by system size.

TABLE 3

USE OF FORMAL STANDARDS BY SYSTEM SIZE

                          System size in buses
                       Under   51 to   201 to   501 to    Over
Status                    50     200      500    1,000   1,000   Total
Formally adopted          24      27       13        9       8      81
Not formally adopted      18       7        3        0       1      29

Number of properties not reporting: 1


CHAPTER THREE

RESPONDENTS' USE OF SERVICE STANDARDS AND PERFORMANCE CRITERIA

This chapter examines the respondents' use of performance criteria in the evaluation of local bus services. Responses are representative of the U.S. and Canadian transit industries. The discussion for each category describes the type of criteria that are employed by the industry.

ROUTE DESIGN STANDARDS

The criteria for route design standards are used in designing or redesigning a routing, and help in determining and establishing the pathway for the bus. More than 86 percent of the respondents use route design standards. As discussed in Chapter 2, there are 15 criteria for route design; respondents were asked to select all that applied. Table 4 shows, by size grouping, which of these criteria are used by the respondents. Sixteen transit agencies, all with fewer than 500 buses, did not claim the use of route design standards.

Basic Criteria

Of the criteria related to network design considerations, the following five are generally considered the basics of route design standards: population density, employment density, spacing between other routes/corridors, limits on the number of deviations or branches, and equal (geographic) coverage through the local tax base. The use of each of these criteria, as well as a brief discussion of each, is provided next.

Population Density

Population density represents the number of people residing per square mile. It is the best representation of the potential, in terms of daily trips, at the point of origin. Data are available from the respective national censuses in the United States and Canada, as well as from local planning agencies. Population density is the most elemental of factors. Given that the fundamental purpose of bus mass transit is to carry passengers, in volume, this indicator reveals how many people live where. Almost 74 percent (82) of the respondents use population density as a criterion in route design.

Clearly, as the old expression goes, "run the bus where the people are." Data for this criterion are found in the census and may be augmented by local data. Other demographic factors such as auto ownership may also be used in this analysis.

Employment Density

Employment density represents the number of jobs per square mile. Typically, work trips account for well over one-half of a transit system's ridership (2). Almost 66 percent (73) of the respondents use employment density as a criterion in route design.

The main source of information for this criterion is the traffic analysis zone (TAZ) version of the census transportation planning package (CTPP). However, because the CTPP reduces only to the TAZ level, which is usually not fine enough for detailed route planning, planners usually turn to metropolitan planning organization (MPO) or regional databases.

Route Coverage (Spacing Between Other Bus Routes and Corridors)

This refers to the spacing distance between adjoining routings. The route coverage criterion guides spacing between bus services, geographically distributing them within the service area. This is done to maximize patron accessibility to transit service within the resources available to the agency. Use of this design criterion was reported by 66 percent (73) of the respondents. Of these 73 systems, 81 percent (59) had formal service standards. In comparison, in the 1984 study, 65 percent (60 of 109 systems) used this criterion, and 43 percent (28) of these had formal service standards at that time.

This is one of the few criteria that are directly comparable between the two studies. The fact that, among systems using this criterion, the percentage with formal standards nearly doubled between the two studies reflects both the importance of the criterion and the increase in the adoption of formal standards.

In 1984, some agencies required the spacing between bus routes to be maintained at between 0.5 and 2 mi (0.805 and 3.22 km). Spacing further depends on such factors as the population density of an area, the proximity of an area to the central business district (CBD), and the type of bus services or routes in operation within an area (e.g., grid versus feeder). One-half-mile (0.805-km) spacings are usually required in areas with high density and close proximity to the CBD; wider spacings of 1 mi (1.61 km) or more are generally reserved for commuter or express-type routes that serve less densely populated rural or suburban areas. By establishing ideal distances between bus routes, transit agencies attempt to ensure that routes do not overlap covered areas and that transit services are well distributed throughout the jurisdiction.
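The kind of density-based spacing guideline described above can be sketched as a simple lookup; the thresholds below are invented for illustration and are not values reported by respondents.

```python
# Illustrative only: a lookup that mimics the kind of route spacing
# guideline described in the text. The density thresholds are assumed.

def target_route_spacing_mi(pop_density_per_sq_mi: float,
                            near_cbd: bool) -> float:
    """Return a target spacing between parallel routes, in miles."""
    if near_cbd or pop_density_per_sq_mi > 10000:
        return 0.5          # dense, close-in areas: half-mile spacing
    if pop_density_per_sq_mi > 4000:
        return 1.0          # moderate density: one-mile spacing
    return 2.0              # low-density coverage: widest spacing

print(target_route_spacing_mi(12000, near_cbd=True))    # 0.5
print(target_route_spacing_mi(2500, near_cbd=False))    # 2.0
```

A real standard would layer in the service type (grid versus feeder, local versus express) noted in the text before settling on a spacing value.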

Limitations On the Number of Deviations or Branches

In this criterion, deviation, or branching, involves selected trips leaving the main-line of the route; the deviation is


TABLE 4

ROUTE DESIGN STANDARDS: SELECTED CRITERIA BY SYSTEM SIZE

                                         Use of Criterion by System Size
                                      Under   51 to   201 to   501 to    Over
Criterion                                50     200      500    1,000   1,000   Total
                                       (42)    (34)     (16)      (9)    (10)
Population density                       29      25       13        6       9      82
Employment density                       25      22       13        5       8      73
Spacing between other bus routes
  and corridors                          25      23       11        7       7      73
Limitations on the number of
  deviations or branches                  9      12        5        5       4      35
Equal (geographic) coverage
  throughout the local tax base           7       4        2        1       2      16
System design considerations              6       9        2        1       2      20
Streamlining/reduction of
  routing duplications                    5       4        1        3       3      16
Network connectivity                      2       3        1        1       3      10
Service equity                            3       0        1        0       0       4
Route directness                         12      15        8        3       7      45
Proximity to residences                   8       6        4        1       0      19
Proximity to non-residential
  generators                              5       7        1        3       2      18
Limitation on the number of
  transfers                               4       3        0        0       1       8
Bus stop siting requirements             33      30       14        9      10      96
Bus stop spacing requirements            34      31       14        9      10      98

Numbers in parentheses are the number of systems responding to the synthesis survey in that size category.

viewed with regard to the routing of the main bus route, not the streets over which the main bus route operates. This criterion provides for regularity in the pattern of a bus routing, whatever the (street) directness of the main routing may be. This criterion was reported by 31 percent (35) of the responding transit systems.

There was some confusion concerning what exactly was meant by this measure in the survey, as it can be mistaken for "route directness." Compared with the 43 percent reporting use of a route deviation criterion in 1984, there is a marked reduction in usage. However, because of the potential confusion, discussed later under "Route Directness," these percentages are not directly comparable.

Equal (Geographic) Coverage Throughout the Local Tax Base Area

Bus routes operate in jurisdictions or other political subdivisions based on local tax base contribution considerations. Some transit systems operate a network design based on geographic considerations of local tax contributions. This criterion was reported by 14 percent (16) of the respondents.

This is the least used criterion of the five basic ones and reflects a policy decision more than planning practice. Because routes operate at different levels of intensity, use of this criterion by no means reflects service levels. A route that operates to provide geographic coverage may not provide meaningful service at all. Indeed, it may be reminiscent of the old franchise trip, i.e., a token trip operated to establish presence on the route, if not service. This was quite common in the era of private operation, where such operations were required of a carrier if it wanted to preserve its franchises on profitable routes. No public carriers reported any such operations in their comments.

Secondary Criteria

The remaining ten criteria for route design standards, which can be considered a secondary level, are as follows:

System design considerations such as enhancement of timed transfers

Streamlining/reduction of routing duplications

Network connectivity

Service equity

Route directness

Proximity to residences

Proximity to non-residential trip generators

Limitation on the number of transfers required of riders

Bus stop siting requirements

Bus stop spacing requirements.

Although important, these criteria constitute a level of refinement in the area of route design standards. The first four were treated as design elements in the questionnaire, along with "service to unserved areas" (which is an objective, but does not constitute a criterion). Respondents were asked to select their primary design element from among the five choices. The criterion most often selected was "service to unserved areas" (49 percent). After that, favored route design criteria were selected in the following order: system design considerations, streamlining/reduction of duplication, network connectivity, and service equity. Most systems use two or more of these criteria.

System Design Considerations

This criterion refers to the relationship between a new routing and existing routes in the system. Such aspects as whether there will be a timed transfer where a new route intersects with an existing one, or whether routes share common terminals and bus loops, are considerations under this criterion. Preference for this design criterion, from among the five mentioned above, was reported by 18 percent (20) of the systems responding to this question. This criterion is similar to "network connectivity," but focuses on the route's design in relation to other routes in the system and how a rider would use the route itself in relation to other routes. Network connectivity, which is discussed later, focuses on the system as a whole. Typically, the system design criterion includes such aspects as the enhancement of timed transfers, or the use of adjoining loading bays in a passenger terminal for routes with the same destination so that riders can board the first departing bus.

Streamlining/Reduction of Duplication

This criterion refers to a situation where two or more distinct routings that serve the same passenger market(s) appear within close or overlapping proximity. Streamlining/reduction is designed to control the duplication of bus routings to ensure that transit services are adequately distributed geographically within a service area. By ensuring that overlapping coverage areas for different bus routes are minimized, services can be more widely dispersed within an agency's jurisdiction. Just over 14 percent (16) of the respondents considered this to be their primary criterion. However, for the larger transit systems, this was a more significant concern, with more than 30 percent noting it as their primary design element. In the 1984 review, 30 percent of the respondents reported use of this criterion to control the amount of route duplication within their route networks (along with any other design element measures used).

The data collected to evaluate this criterion are obtained through analyses of route or system maps and various mileage calculations and comparisons. The caveat in focusing on route duplication, especially with the use of maps, is that two close, parallel lines on a map may be serving two totally different markets, and one route cannot do the job of two. One route may be a limited, or street express, operating at a relatively high rate of speed over an arterial, while the other may be serving a more localized neighborhood function. Combining the two routes may not save any resources because of a reduction in operating speeds and an overlaying of proximate maximum load points, which may serve to incur the displeasure of regular riders who see either their trip slowed or their bus stop dislocated.

Network Connectivity

This criterion refers to the physical relationship of a new routing to the existing route system already in place at the agency. When a new routing is being introduced into a system, its relationship to the system as a whole must be considered (e.g., is a radial route being introduced into a grid system?). This criterion was selected as a primary design element by 9 percent (10) of the respondents, evenly distributed by system size.

Network connectivity, although similar to "system design considerations," focuses on the route's relation to the system as a whole and not specifically to any other individual route or group of routings. This criterion represents those opportunities where 2 + 2 = 5, i.e., the whole can be greater than the sum of its parts. For example, this may mean designing a route that connects two others, thereby creating a through route and providing a one-seat ride for customers on what would otherwise be three routes. Even directly combining two poorly performing routes into one stronger route, with improved efficiency, is an example of how this measure strengthens the system as a whole.


Service Equity

Service equity can mean many things. To some, it is compliance with Title VI of the Civil Rights Act, which provides for equitable distribution of transit resources. To others, it is simply the distribution of service, or the use of population-based criteria. Service equity was not defined in the questionnaire; rather, it was deliberately left undefined so that agencies could comment on the subject if they wished. Fewer than five systems reported this criterion as their primary design element, and none commented.

The next four secondary criteria for route design standards, considered design tactics, are route directness, service proximate to as many residences as possible, service to as many non-residential trip generators as possible, and limitation on the number of transfers required of riders. Again, respondents were requested to select from among the four as preferred. Many, of course, use more than one.

Route Directness

For transit systems that use this criterion, a mathematical assessment is used to measure a route's deviation from a linear path in one or more of the following ways:

Additional travel time for a one-way bus trip

Additional travel time required over an automobile making the same trip

Mileage limitations that permit a maximum deviation per route

Time limit increase in average travel times per passenger

An absolute limit to the total number of pathway deviations

The pathway deviation(s) must not lower the average productivity of the route or the deviation should have a higher productivity rating than that for the line as a whole.

This criterion was selected by 41 percent (45) of the respondents as their primary design tactic. In the 1984 study, 43 percent of the systems reported use of this criterion.

To some, this criterion is substantially similar to "limitations on the number of deviations or branches." However, it is a matter of how deviation is viewed. If one views deviation as the main line veering from the most direct path/streets possible, then the 1984 and 1994 questions could be interpreted the same way. But if route deviation is viewed as selected trips leaving the main line of the route--the deviation's relationship being with the main bus route (the deviating segment rejoins the main line in due course, unlike a branch, which stays away from the main line after diverging), not the streets over which the deviation operates--then these criteria are different. The view here was that this criterion's direct, noncircuitous routing dealt with the streets themselves, not with branches or deviations from the bus route. Given the response rate to this question, and the close parallel with the 1984 response, it is likely that most respondents also viewed this question in relation to the street network, not the main line, of the bus route.

This criterion was discussed at length in the 1984 study. The standards that were reported placed various limits or conditions on bus routing to control the extent to which buses in regular service leave major arterial streets to serve residential pockets or activity centers. Several types of controls or limitations are employed in standards designed to evaluate this criterion. These are:

Some standards limit route deviations to a maximum number of minutes (5 to 8 min) of additional travel time for a oneway bus trip.

Other standards limit such deviations by requiring that transit travel distances not exceed automobile travel distances for the same trip by more than 20 to 40 percent.

Mileage limits are used by other agencies, which permit a maximum of 1 mi (1.61 km) of deviation per route.

Other standards limit increases in average travel times per passenger to a maximum of 10 to 25 percent as a result of any deviations along a route.

Route deviation is also controlled by limiting transit travel times to a maximum of twice the automobile travel time for the same trip regardless of the number of such route deviations that exist.

Some standards limit to two the number of route deviations that will be allowed per route.

Some agency standards require that route deviations must not lower the average productivity of a line or that the deviated segment(s) of route should have a higher productivity rating than that for the line as a whole.

It is important to note that several smaller systems have standards that either encourage considerable route deviation or permit it to serve activity centers or to correct route coverage deficiencies that exist in the transit network.
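Several of the 1984-style limits listed above can be expressed as simple pass/fail checks. In the sketch below, the specific thresholds (40 percent extra distance, twice the auto travel time, two deviations) are drawn from the ranges in the text, but the choices within those ranges are assumptions, and any agency would set its own values.

```python
# Illustrative route directness checks. Thresholds come from the ranges
# described in the 1984 standards; the specific values are assumed.

def directness_checks(transit_min, auto_min, transit_mi, auto_mi, n_deviations):
    return {
        # transit distance no more than 40% longer than the auto trip
        "distance_ratio_ok": transit_mi <= 1.4 * auto_mi,
        # transit travel time at most twice the auto travel time
        "time_ratio_ok": transit_min <= 2.0 * auto_min,
        # at most two route deviations per route
        "deviation_count_ok": n_deviations <= 2,
    }

# A hypothetical trip: 6.5 transit miles vs. 5.0 auto miles,
# 34 transit minutes vs. 20 auto minutes, one deviation.
checks = directness_checks(transit_min=34, auto_min=20,
                           transit_mi=6.5, auto_mi=5.0, n_deviations=1)
print(checks)  # every check passes for this trip
```

Expressing the standards this way makes the trade-off explicit: a proposed deviation can be screened against all the limits at once before it is scheduled.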

Service Proximate to as Many Residences as Possible

There is no mathematical definition for this criterion; "proximate" is defined by the respondents themselves. The objective is to get as close as possible to a rider's residence without unnecessarily delaying or detouring other riders. (In some respects, this criterion can be viewed as a cross between route directness and bus stop spacing.) In designing the residential segment of a service's routing, use of this criterion involves trade-offs among factors such as population density, market identification (e.g., income and auto availability of potential riders, and their destinations), and the ability of the vehicle to navigate streets (often, residential streets, even in high-density neighborhoods, have geometries that do not permit use by 40-ft, or even 35-ft, buses). The question posed to survey respondents asked if this criterion was their primary design tactic; 17 percent (19) reported affirmatively.

Service to as Many Non-Residential Trip Generators as Possible

There is no mathematical definition for this criterion either. As with the previous criterion, "proximate" is defined by the respondents themselves. The objective here is to get as close as possible to a rider's non-home destination without unnecessarily delaying or detouring other riders. The principal factor in this particular trade-off is employment density, although market identification and vehicle operation are also issues. Vehicle navigation is an issue in non-residential areas as well, because many suburban office park roadways have geometries that do not permit use by large vehicles. The question posed to survey respondents asked if this criterion was their primary design tactic; 16 percent (18) reported that it was.

Limitation on the Number of Transfers Required of Riders

This criterion considers whether the design of the route calls for a significant number of its users to transfer, an important consideration in designing or redesigning a bus route. An area could be served with a feeder route, where virtually all riders would be required to transfer to complete their trip, or the route itself could be designed to operate the full distance, for example to the CBD, obviating the need for transfers for many riders. Only 7 percent (8) of the respondents stated that limiting the number of transfers was their primary design tactic.

Seven of the eight respondents were smaller systems. Given that transfers are a tool that planners need to use judiciously, it is not surprising that limiting transfers was the least likely primary design tactic. While it is an important design tactic, clearly it is secondary for the majority of systems.

The last two secondary criteria under route design standards concern a matter as fundamental as the bus itself--the stop. Two design issues for bus stops are the siting, or placement of a stop, and the spacing between stops.

Bus Stop Siting

The site of a bus stop can be near-side (just before the cross street at the intersection), far-side (just after the intersection), or midblock (on the block face between two intersections). Far-side siting drew a plurality (43 percent of the 111 properties surveyed; 48 percent of those that responded to the question), while near-side was reported by more than 30 percent. Almost 25 percent reported no preference, and 24 percent gave no answer at all.

Bus stop siting has always been one of the more controversial issues within the transit industry--should the stop be near-side or far-side? The results of the survey are consistent with the controversy; there was no majority answer. The key practice that underlies sound siting is support from the responsible party (typically a municipality) to properly designate the stop's length, to mark and sign it, and to enforce against encroachments. Stops that are too short, and cars parked in the zone, prevent buses from pulling fully to the curb.

The need for adequate length translates into a preference by municipalities, where parking space is at a premium, for near-side stops. At 80-ft (24.4-m) lengths, less parking is lost from the street for near-side stops than is required for 105-ft (32-m) far-side stops.

The near-side stop takes advantage of the intersecting street width for pull-out and lane merge requirements. Where parking is not at a premium, a case can be made for preferring far-side. Municipal traffic engineers prefer far-side stops because they facilitate right-turn-on-red movements. They are deemed safer for both alighting passengers (rear doors will likely be closer to the curb, with less chance of passengers missing the curb, especially at night) and crossing pedestrians, who will not be darting in front of a bus.

Given the likelihood that at signalized intersections one street is favored over the other, if the bus route runs with the favored direction, the stop should logically be far-side to take advantage of the greater green time it is likely to encounter. Signal preemption also creates this condition, i.e., the street/route with preemption will be favored over cross streets, and thus far-side stops are preferred.

Conversely, the stop should be near-side if the route is on the street not favored with greater green time.

Bus Stop Spacing

Bus stop spacing is the distance between adjoining service stops of a route. Transit operators have developed standards regarding bus stop spacing as part of their effort to balance the tradeoff between rider convenience (stops with easy walking distances) and speed. Use of this design criterion was reported by over 85 percent (95) of the responding systems. In 1984, 62 percent (68) of the respondents reported using this criterion.

There are almost as many practices as there are operators. Combining this criterion with bus stop siting reveals that there is no predominant practice in either siting or spacing. The most common non-CBD spacing, reported by 48 percent of operators, was 6 to 8 stops per mi (1.61 km), and 48 percent of operators preferred far-side stops. Further, of those who preferred far-side stops, one-half also preferred non-CBD spacing of 6 to 8 stops per mi (1.61 km).

Spacing has always been controversial precisely because of the seemingly opposing arguments of convenience versus speed. The convenience argument is not as strong as it might appear, however, unless the base condition were a bi-directional bus route network with service on every street and a stop on every corner. In virtually all other circumstances the majority of riders will walk from their origin to the closest corner, and it follows that the fewer the stops, the greater the number of people who will walk. A 1992 study conducted by MTA New York City Transit determined that in a local bus stop relocation, where the change went from approximately 10 stops per mi (1.61 km) (530 ft/161.5 m between stops) to 7 per mi (750 ft/228.6 m), a 42 percent increase in distance between stops, the number of walkers increased by only about 12 percent (memorandum, J. Gaul, MTA New York City Transit, February 1992). Further, it does not follow, with an existing route undergoing change, that fewer stops means that each and every individual will have to walk farther. Often, it only means reorienting to the new stop, with the walking distance unchanged for most patrons.
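The arithmetic behind these spacing figures is straightforward; a small sketch (the function name is illustrative, and the cited 530-ft and 750-ft figures are taken from the study quoted above):

```python
# Convert bus stop density (stops per mile) to average spacing in feet,
# and compute the percentage change between two spacing policies.

FEET_PER_MILE = 5280

def spacing_ft(stops_per_mile):
    """Average distance between stops, in feet."""
    return FEET_PER_MILE / stops_per_mile

old = spacing_ft(10)   # 528 ft, close to the 530 ft cited
new = spacing_ft(7)    # about 754 ft, close to the 750 ft cited

# Percentage increase, using the study's own figures:
increase = (750 - 530) / 530 * 100
print(round(old), round(new), round(increase))  # 528 754 42
```

The 42 percent figure in the text is simply this ratio of the before-and-after spacings.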

SCHEDULE DESIGN STANDARDS

The criteria for schedule design standards, which relate to the basic frequency, hours and days, and times of departure at which a route will run, were reported as used by almost 75 percent (83) of the respondents. The eleven criteria under this category of standards are as follows:

14

• Differing levels of service, e.g., local service versus express service

• Differing character of service, e.g., crosstown versus feeder

• Maximum number of standees

• Maximum intervals

• Peak periods versus off-peak periods

• Minimum intervals

• Standees versus no standees

• Duration of standee time

• Timed meets, or time to be spent waiting at a transfer point

• Use of clock-face schedules

• Span of service (the hours at which service runs).

By definition, as all fixed-route buses must have schedules, the 25 percent of respondents not reporting use of these standards reflects systems that operate strictly on a policy basis, which is typical of smaller systems. (It should be noted that 24 of the respondents, 23 of which had fewer than 200 buses, had policy headways as formal schedule standards.)

Schedule design standards are based on 1) waiting time between buses (policy schedule); 2) per vehicle occupancy standards (standees versus no standees, or different levels/percents of seat occupancy ranging from 25 to 150 percent, where, e.g., 150 percent means riders equal to 50 percent of the number of seats will stand); or 3) both. Of the 83 respondents, 58 percent (48) reported using both; almost 29 percent (24) reported a waiting time standard only, and approximately 13 percent (11) stated that schedules were based on loading levels. It should be noted that in some states, such as California and Pennsylvania, the choice of standard is influenced, if not regulated, by a state public utilities commission or highway patrol.
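Where a system uses both bases, the scheduled frequency is governed by whichever is more demanding: the loading standard or the policy headway. A minimal sketch of that interaction (the 43-seat bus, 150 percent load factor, and function name are illustrative assumptions, not survey data):

```python
import math

def peak_headway_min(peak_pax_per_hr, seats, max_load_factor, policy_headway_min):
    """Headway that satisfies both a loading standard and a policy headway.

    max_load_factor: e.g., 1.5 means up to 150 percent of seated capacity
    (riders equal to 50 percent of the seats may stand).
    """
    capacity = seats * max_load_factor                     # riders allowed per bus
    buses_needed = math.ceil(peak_pax_per_hr / capacity)   # buses per hour to meet loads
    load_headway = 60 / buses_needed                       # minutes between buses
    # Run at least as often as the policy headway requires:
    return min(load_headway, policy_headway_min)

# 400 riders/hr past the peak point, 43 seats, 150 percent loading, 30-min policy:
print(round(peak_headway_min(400, 43, 1.5, 30), 1))  # 8.6 min between buses
```

With light demand the same function falls back to the policy headway, which is how the 24 "policy headway only" systems effectively operate.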

Differing Levels of Service

This criterion refers to the fact that there are different standards for different levels of service. Some systems have guidelines that differ for each level of service. For example, express buses or buses that operate over limited access roadways might not be scheduled to carry standees, whereas those operating as local buses over regular streets would carry standees. Almost 58 percent of the respondents stated that they had different guidelines for different levels of service.

The questionnaire also asked if there were surcharges for nonlocal bus service. The affirmative response to the question concerning different standards for differing levels of service was more pronounced if there was a premium surcharge on the express bus. Of the systems that reported having different standards for different levels of service and responded to the question concerning premium surcharges, 61 percent had both different guidelines for local and express (or limited) services and different fares, i.e., surcharges for the non-local bus services.

Differing Character of Service

Some systems have standards that differ for different characters of service. Examples of this are crosstown, feeder, radial, and shuttle services. Forty-three percent (48) of the respondents had differing guidelines for differing characters of service.

Larger systems (over 500 buses) were the most likely to have different standards for different characters of service; 55 percent of the larger systems (11 of 20) had differing standards. Forty percent of the smaller systems also reported using different standards for various types of services.

With regard to the next five criteria--maximum number of standees, maximum intervals, peak periods versus off-peak periods, minimum intervals, and standees versus no standees--transit systems were asked to identify all that applied to them. They are discussed in the order of use. Many systems used more than one criterion; however, 24 stated that they used none. Table 5 shows the use of these schedule design criteria by system size. All five were used by only three properties--two between 51 and 200 buses (one in FTA Region I and one in Region IX) and one over 1,000 buses (in Region I).

It is worth noting that in the 1984 study, the area of scheduling standards, which included only two criteria at the time, vehicle loads and vehicle headways, was considered a subset of "Quality Criteria."

Schedules underlie the area of customer satisfaction (as does service reliability, discussed later) and it is the whole area of standards in customer satisfaction that is emerging as the major trend in bus service evaluation.

Although the survey included no explicit customer satisfaction questions, some systems reported comparing themselves with other services, transit and otherwise, that they found their customers using, rather than making the traditional peer comparisons with systems of similar size. This is because while a passenger rides a bus one route at a time, the passenger is riding part of a system. In the case of transit, these non-peer comparisons were most pronounced in metropolitan areas with more than one public operator (i.e., more than one operator in the municipality (New York and Los Angeles), or one operator in the central city and other operators in the suburbs (Los Angeles and Washington)). In these environments, passengers ride both systems and make customer satisfaction comparisons between the systems that they ride. In turn, operators find themselves comparing their services not just with peers of comparable size, but with other operators in their region that may be substantially different in size. Additionally, one operator mentioned conducting customer market research that focused on comparisons not just with urban transit (bus and rail) but with other industries, including airlines. It must be noted, however, that these are system level issues, and not those of individual route evaluations, where comparisons are typically made between one route and the rest of the routes in that particular system. Nevertheless, in the area of customer satisfaction, these are matters that often transcend whether an analysis is at the system or route level.

Maximum Standees

There is a maximum number of standees scheduled per trip. This means that the transit system will schedule additional service (trips) to keep the average number of standees per bus on the route below whatever benchmark, e.g., 20 standees, or 50 percent of the number of seats, has been selected. Seventy-two percent (60) of the respondents reported use of this criterion, including all 10 responding properties over 1,000 buses. Although not explicitly requested, the 1984 study asked about vehicle loads. Seventy-six percent of those respondents reported standards for vehicle loading.

TABLE 5
SCHEDULE DESIGN STANDARDS SELECTED CRITERIA BY SYSTEM SIZE

                                           Use of Criterion by System Size
Criterion                       Under 50    51 to 200   201 to 500   501 to 1,000   Over 1,000   Total
                                buses (42)  buses (34)  buses (16)   buses (9)      buses (10)
Maximum number of standees          11          20          12            7             10         60
Maximum intervals                   13          21           9            6             10         59
Peak versus off-peak periods         8          17           5            6              9         45
Minimum intervals                    6           8           1            1              3         19
Standees versus no standees          3           7           3            0              4         17

Numbers in parentheses are the number of systems responding to the synthesis survey in that size category.

Respondents were asked if they permitted standees and, if so, whether there was an upper (maximum) limit on the number of standees. They were not asked for the number of standees (as an absolute number) or the square footage allotted per standee. This criterion, as applied by systems, is heavily influenced by the interior configuration of the bus. Systems that do not schedule for standees will order buses with wider seats and narrower aisles.

Maximum Intervals

Maximum intervals is the working definition of the maximum amount of time an agency will permit to lapse between scheduled buses, e.g., there will be a bus at least every 30 min or every 2 hr. The transit operator (or oversight board) will frequently stipulate a policy directive on service frequency; specifically, buses will run no less frequently than every so many minutes. Fifty-three percent (59) of the respondents reported use of this measure. Although not explicitly requested, the 1984 study asked about vehicle headways. Sixty-eight percent of those respondents said that they had vehicle headway standards. This was one of the most commonly reported schedule design criteria, and all systems with 1,000 or more buses reported using it.

Peak Periods Versus Off-peak Periods (Rush Hours that are Different from Non-rush Hours)

Some systems with loading guidelines have different guidelines for rush hours than for non-rush periods. Schedule guidelines that are different in the rush hours than in the non-rush hours were reported by 41 percent (45) of the respondents. Nine of the 10 systems with 1,000 or more buses engage in this practice.

Minimum Intervals

Minimum intervals is the working definition of the minimum amount of time an agency will require between scheduled buses, e.g., buses will operate at least 2 min apart. In the case of minimum intervals, it is a matter of buses not running more frequently than "x" minutes apart. Only 17 percent (19) of the respondents reported having minimum intervals. Seventy-four percent of these respondents were from properties under 200 buses.

Standees Versus No Standees

This criterion refers to service being scheduled in such a manner that seats will be provided to all riders and there will be no scheduled standing. Some systems, by policy, attempt to guarantee a seat to every rider, which translates into a policy of no standees. Only 15 percent (17) of the responding systems reported use of this criterion.

Four of the 17 respondents were systems with 1,000 or more buses, i.e., 40 percent of the largest systems, and each of the four reported having surcharges for non-local bus services. For these properties, a no-standee policy likely reflects no standing on express bus/premium services.

The next four criteria reflect scheduling practices that may be viewed as guidelines in themselves.


TABLE 6
USE OF TIMED TRANSFERS BY SYSTEM SIZE

             Under 50   51 to 200   201 to 500   501 to 1,000   Over 1,000   Total
             buses      buses       buses        buses          buses
Use             20          16           5            4              3         48
Don't Use       15          15           9            5              7         51
Total           35          31          14            9             10         99

Number of properties not reporting: 12

Duration of Standee Time

This criterion is defined as the maximum amount of time the majority of standees will stand. Some transit systems schedule for standees, but do so in a manner that limits the amount of time a rider will stand. Twelve percent (13) of the respondents reported using standee time.

It is possible to determine standee time through traffic checks. On feeder and express routes, this is a relatively simple task: inbound, the elapsed period of standing begins at the point of the first standee, and outbound it ends at the location at which there are no more standees. Crosstown buses are more problematic. It is possible for there to be standees from terminal to terminal, but because of rider turnover, and under the presumption that once a seat is freed a standee will take it, no one rider will stand for very long.

Determining the duration of standee time under such circumstances is much more difficult. Riding checks that are graphed can pictorially reveal the character of standee riding, at least under the assumption that the standee of longest duration is the rider who takes the first seat when it becomes available. (Checks are taken on board buses by staff riding the bus, typically from terminal to terminal. Boarding and alighting volumes are recorded by time at each bus stop used. The checker sits in the seats behind the operator or in seats directly across from the operator in the front of the bus. Some agency staff sit in seats behind the rear set of doors where vision is not obscured.) Four of the 13 systems reporting use of this criterion were systems with 1,000 or more buses, which is reflective of the longer routes in the larger metropolitan areas in which they operate. On shorter routes, in any size metro area, this is not a meaningful measure.
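The reduction of riding-check data to a standee profile can be sketched mechanically; the function name and the per-stop counts below are illustrative, not from the synthesis:

```python
def standee_profile(boardings, alightings, seats):
    """Running load and standee count after each stop of a one-way trip.

    boardings/alightings: per-stop counts recorded by a checker riding the bus.
    Returns a list of (load, standees) tuples, one per stop.
    """
    load, profile = 0, []
    for on, off in zip(boardings, alightings):
        load += on - off
        # Anyone beyond the seated capacity is standing:
        profile.append((load, max(load - seats, 0)))
    return profile

# Illustrative inbound feeder-route check with a 43-seat bus;
# standing begins at the first stop where the load exceeds the seats.
print(standee_profile([30, 20, 10, 0], [0, 5, 5, 50], 43))
# [(30, 0), (45, 2), (50, 7), (0, 0)]
```

Graphing the standee column of such a profile is the pictorial reveal the text describes; on a crosstown route the profile shows turnover rather than a single build-up.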

Timed Meets, or Time Spent Waiting at Transfer Points

This refers to the scheduled meeting of two (or more) trips on different routings for the purpose of serving riders who are transferring between trips. Because transferring is prevalent, some systems attempt to have trips on intersecting bus routes converge at the transfer point at the same time and meet. Where this occurs at one location, but with a large volume of services, it is also known as a pulse meet or timed transfer. Of the 99 respondents to this question, 49 percent (48) said they schedule for timed transfers.

Respondents were thus split almost evenly (48 yes, 51 no) on use of the criterion, which can be quite complicated to implement; the responses reflect this. Table 6 shows the cross-tabulation of timed transfer use versus size of property. There is basically a steady progression of "don't use" as property size increases. Forty-three percent of the small properties (fewer than 50 buses) responding to this question reported that they don't use this criterion; 70 percent of the large properties don't use it.

Use of Clock-Face Schedules

Clock-face schedules are intervals that divide evenly into 60. Some systems specifically write their schedule intervals (when greater than 10 min) so that they divide evenly into 60 min. For example, 12- or 15-min schedules are used, but 13- and 17-min schedules are not. Some systems call these memory schedules. Ninety-four systems responded to this question, with 47 saying they use clock-face schedules and 47 saying they do not.

The use of clock-face schedules, when buses operate on intervals greater than every 10 to 12 min, is somewhat controversial in the transit industry. Marketing/sales staff believe that it is a powerful tool in terms of promoting the product; buses show up at the same time (in minutes) past each and every hour. From the cost accounting viewpoint, however, schedules written around this criterion may be expensive, depending on the route-specific circumstances. Clock-face schedules in grid systems with lengthy routes and numerous route intersections are difficult to write because of vagaries in running time, and it has not been proven that these schedules attract more ridership. Of the systems responding to this question, the smaller ones were more likely to use these schedules, while the larger properties (200 or more buses) were not; eight of the 10 systems with 1,000 or more buses said that they did not use them.
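The clock-face rule reduces to a divisibility test: an interval qualifies if it divides evenly into 60 minutes. A trivial sketch (the function name is illustrative):

```python
def is_clock_face(headway_min):
    """True if buses arrive at the same minutes past every hour."""
    return 60 % headway_min == 0

# 12- and 15-min headways qualify; 13 and 17 do not:
print([h for h in (12, 13, 15, 17, 20) if is_clock_face(h)])  # [12, 15, 20]
```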

Span of Service (the Hours at which Service Runs)

Span of service is the hours and days when service operates, i.e., the start of the service day until the end of the service day (or 24-hr service), as well as weekdays and/or Saturday and/or Sunday. Systems have different policies concerning the temporal coverage that they will provide. Eighty-five percent of the systems responded to the span of service question. Twenty percent (19) of the respondents stated that they had no such guideline; a similar number said that all, or virtually all, regular routes operate the same hours whenever the system is operational.

The remaining 60 percent were split on whether or not a route's span was based on system considerations. Fourteen percent reported system considerations, e.g., a daytime 0.5-mi grid might become an overnight, or "owl," 1-mi grid; 17 percent reported route level considerations; 29 percent reported both.

ECONOMIC AND PRODUCTIVITY STANDARDS

The transit agencies were asked to report on criteria they use to evaluate economic and ridership performance at the route level.

These criteria reflect use of five key variables that are required for a comprehensive assessment of ridership productivity and financial performance--vehicle hours, vehicle miles, passengers, line revenues, and operating costs. The criteria considered are:

• Passengers per hour

• Cost per passenger

• Passengers per mile

• Passengers per trip

• Passenger miles

• Revenue per passenger per route (either in absolute dollars or as a percentage of variable cost)

• Subsidy per passenger

• Route level minimum variable cost recovery ratio

• Route level minimum costs that also include semi-variable and/or fully allocated/fixed costs

• Route level performance relative to other routes in the system.

Given the particular sensitivity of this category of standards, it is worth repeating that the criteria reported in this review, as in the other categories, reflect performance objectives, requirements, and evaluations for individual routes, or in some cases groups of services, rather than for the transit system as a whole. A number of the criteria reported explicitly link line performance objectives and requirements to the attainment of a desired systemwide performance level.

However, system level evaluations were not the focus of this inquiry. Instead, the emphasis is placed on discovering how agencies compare the performance of individual lines and determine when lines are performing satisfactorily.

The first five criteria in this category were presented in one question to the transit systems, with respondents requested not to limit themselves to one answer but to indicate all that applied. Over 90 percent reported use of at least one, with 79 percent (88) reporting use of two or more. Table 7 shows the use of these five economic and productivity standards criteria by system size. When later asked to pick the most critical standard, 47 percent selected passengers per hour.

As previously noted, many transit properties use measurements that are time or distance related. These measurements relate to whether or not the vehicle was in revenue service, i.e., actually moving and scheduled to carry riders, or not in service. The non-revenue measurements pertain to the time or distance consumed when a bus travels between its garage and route (often called pull-out/pull-in), the time or distance traveled between routes (interlining), or the time spent standing in a terminal (layover). These measures are called vehicle hours (or miles), platform hours, service hours, revenue hours, non-revenue hours, etc. Because the realities of local practices vary, no questions regarding these measurements were asked.

Passengers Per Hour

This criterion is the number of passengers carried in one bus hour. (One bus operating one hour is one bus hour. If there were five bus hours, the total number of passengers would be divided by five. Five bus hours could be one bus operating five hours, or five buses operating one hour each.) This is the single most used productivity criterion. Its near universality reflects the fact that wages, typically 80 percent of an operating budget, are paid on an hourly basis. This criterion provides a common basis when examining costs. Use of this criterion was reported by 78 percent (86) of the reporting systems. In the 1984 study, 71 percent (77 of the 109 agencies) reported using it.
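The definition can be written out directly; a sketch with illustrative figures (the function name is not from the synthesis):

```python
def passengers_per_hour(total_passengers, bus_hours):
    """Unlinked boardings divided by (revenue) bus hours."""
    return total_passengers / bus_hours

# Five bus hours -- one bus for five hours, or five buses for one hour each --
# carrying 150 boardings in total:
print(passengers_per_hour(150, 5))  # 30.0
```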

Precise definition of this criterion depends on the subcategory employed. There are two common methods of accounting for passengers. The first, unlinked passenger trips, counts each time a person boards a bus as a trip or passenger carried. The second, linked passenger trips, does not count transfers and reflects only the number of completed trips made by riders regardless of the number of times they must transfer buses to reach their final destination. As discussed previously, there are also two common ways of reporting the vehicle hours for a route. The first is to count only the hours that buses actually spend in service, excluding non-revenue hours that are not capable of generating ridership. Fashioned this way, this variable is often referred to as service or revenue hours. When called total vehicle hours, it includes non-revenue hours for vehicles assigned to each line. In the 1984 interviews, most of the systems reported using unlinked (total) passenger trips and revenue hours as the variables to establish performance requirements and objectives.

TABLE 7
ECONOMIC AND PRODUCTIVITY STANDARDS SELECTED CRITERIA BY SYSTEM SIZE

                                        Use of Criterion by System Size
Criterion               Under 50    51 to 200   201 to 500   501 to 1,000   Over 1,000   Total
                        buses (42)  buses (34)  buses (16)   buses (9)      buses (10)
Passengers per hour         29          29          12            9             7          86
Cost per passenger          23          22          11            7             6          69
Passengers per mile         28          20           6            6             4          64
Passengers per trip         19          16          11            7             5          58
Passenger miles              8           3           2            1             3          17

Numbers in parentheses are the number of systems responding to the synthesis survey in that size category.

Cost Per Passenger

Broadly defined, this criterion is the route's costs divided by its ridership. In trying to ascertain the productivity of a route, a key measure for a system is determining the cost per passenger on the route in question. Sixty-two percent (69) of the responding transit systems use this criterion. In 1984, it was reported by 39 percent (43) of the systems.

Because of the wide variances in systems' costs, and especially because some systems focus on variable (also known as marginal or out-of-pocket) costs while others use fully allocated (fixed plus variable) costs, respondents, when later asked for their minimum acceptable standard for cost per passenger, were asked to reply in terms of the route's relationship to their system's average.

Passengers Per Mile

This criterion is defined as the number of passengers carried in one bus mile. (One bus operating one mile is one bus mile. If there were five bus miles, the total number of passengers would be divided by five. Five bus miles could be one bus operating five miles, or five buses operating one mile each.) For bus routes that make relatively frequent stops, and have the potential of turnover, i.e., as one rider gets off another gets on, passengers per mile is one of the productivity measures used by transit operators. This performance criterion was reported by 58 percent (64) of the responding agencies.

In the 1984 review, 63 percent (69) of the respondents reported using this measure.

The practices reported for this criterion are very similar to the practices employed to evaluate the passengers per hour criterion, with most of the standards used based on the unlinked (total) passenger trips and revenue (service) miles variables.

Passengers Per Trip

Passengers per trip is the total number of riders on a one-way trip. Some systems analyze and publish the resulting data on a passengers per trip basis. This criterion was reported by 52 percent (58) of the agencies. About 38 percent (42) of the respondents reported its usage in 1984.

There were no detailed follow-up questions asked in the survey concerning this criterion, and there is no clear explanation for the increase in use. Seventy-two percent of the systems with between 200 and 500 buses used this measure; the others were split 50/50. Because there is no denominator to this computation other than the trip (no miles, hours, or cost), the significance of this criterion is confined to the transit system. It is useful for routes/trips where there is no on-off traffic, such as an express route. Counterintuitive as it may seem, this criterion may indeed be the easiest one to explain to the public, i.e., "there were 21 people on that trip." That alone may bespeak its use.

Passenger Miles

Passenger miles are the total passengers carried multiplied by the miles over which they are carried; for a one-way trip with little turnover, this approximates the load times the trip length. For bus routes that have relatively little turnover, or that operate over relatively great distances with significant loadings, passenger miles are a good measure of a route's productivity. Fifteen percent of the systems reported use of this criterion.

This criterion is best applied to bus routes that are relatively long and are not expected to have much turnover. This is characteristic of express routes and long-distance feeder routes. It is an excellent example of a criterion that is best measured internally to a system; it is best not used to compare two routes in disparate conditions or systems.

The next four productivity measures are financial criteria and can be considered refinements of the previous five.

Revenue Per Passenger (Per Route)

Revenues collected on a route divided by total boardings equal revenue per passenger. Although productivity can be measured simply by counting passengers (and then dividing by a unit, such as an hour or a mile), this is not always an accurate reflection, in a qualitative sense, of a route's activity, because different fares are paid. Some routes can have disproportionate transfer traffic, such as heavy school riding (with correspondingly reduced fares). In turn, many systems use this measure either as actual revenue or relative to a system average. Of the 69 systems that reported this standard, 87 percent (60) indicated that their minimum acceptable revenue per passenger per route was either 25 or 33 percent of the system average.

This criterion can be evaluated either as actual revenue or relative to other routes in the system. For the purpose of this synthesis, actual revenue dollars have little value outside the transit property itself. Respondents were therefore afforded the opportunity to answer as a percentage of variable cost.

Subsidy Per Passenger

The subsidy per passenger criterion is the public funding required to make up the difference between cost per passenger and revenue per passenger. Just over 50 percent (56) of the systems responded to this question.

In most cases, this criterion is simply the reverse of cost per passenger and is redundant. The measure has value in two respects, however. In systems with variable fares, e.g., express bus premiums, the cost per passenger can be constant across services, but the premium collected makes the subsidy less for the route in question. It is also easier to explain to the public than cost per passenger when discussing subsidy requirements. Of the systems that answered this question, almost 86 percent indicated that 2 to 3 times their system average was the acceptable subsidy per passenger.
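The relationship among the three per-passenger figures is simple subtraction; a sketch with illustrative figures (the function name and dollar amounts are assumptions, not survey data):

```python
def subsidy_per_passenger(route_cost, route_revenue, boardings):
    """Public funding needed per rider: cost/passenger minus revenue/passenger."""
    return (route_cost - route_revenue) / boardings

# Illustrative route: $2,000 daily cost, $700 fare revenue, 1,000 boardings
print(subsidy_per_passenger(2000, 700, 1000))  # 1.3 -- vs. a cost per passenger of 2.00
```

With a premium fare the revenue term rises and the computed subsidy falls even at constant cost per passenger, which is the distinction the text draws for express services.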

Route Level Minimum Variable Cost Recovery Ratio

This ratio is the percentage of direct operating costs for a route that are recovered through the fares paid by its ridership. Most systems have a minimum standard for the percentage of direct operating costs that they expect to be recovered on the route. This financial performance criterion was reported by about 75 percent (83) of the responding agencies, the same percentage as in the 1984 study.

Of the route level minimum variable cost recovery ratio values reported, below 25 percent was the most common, with 25 to 40 percent next. Only 9 percent (10) of the systems use a minimum at the route level greater than 40 percent.

Direct operating costs in transit normally include such expenses as operator wages and benefits and variable maintenance costs such as fuel and consumables (oil, tires) that are required to put buses into service. Capital costs and general administrative expenses for such support functions as accounting and marketing are generally excluded from the calculation of direct operating costs and are considered fully allocated/fixed costs. There is also a middle area called semi-variable costs, which is discussed in the next section.
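A minimal sketch of the route level calculation, with the cost split following the description above; all figures are hypothetical, and the 25 percent floor is used only because values below 25 percent were the most commonly reported minimums:

```python
def direct_operating_cost(operator_wages_benefits, fuel, consumables):
    """Variable costs required to put buses into service. Capital
    costs and general administrative expenses (e.g., accounting,
    marketing) are excluded as fully allocated/fixed costs."""
    return operator_wages_benefits + fuel + consumables

def variable_cost_recovery_ratio(fare_revenue, direct_cost):
    """Percentage of direct operating costs recovered through fares."""
    return 100.0 * fare_revenue / direct_cost

def meets_minimum(ratio_percent, minimum_percent=25.0):
    return ratio_percent >= minimum_percent

cost = direct_operating_cost(1500.0, 300.0, 200.0)  # 2000.0
ratio = variable_cost_recovery_ratio(600.0, cost)   # 30.0 percent
```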

The 1984 report, based on the interviews then conducted, discussed minimum acceptable cost recovery rates for different types of lines; the answers ranged from 10 to 100 percent. The 100 percent requirement was applied by certain agencies to commuter and suburban express-type services or to lines that operate outside of the agency's geographical jurisdiction.

Route Level Minimum Costs that also Include Semivariable and/or Fully Allocated/Fixed Costs

This criterion is the percentage of assigned costs for a route that are recovered through the fares paid by its ridership. The assigned costs will, in addition to variable or marginal costs, include semivariable and/or fully allocated/fixed costs. In addition to the minimum standards for the percentage of direct operating costs that systems expect to be recovered on the route, there are times when system level costs are included in the allocation as well. Sixty-eight percent (76) of the systems indicated that on occasion they will use semi-variable and/or fully allocated costs in their calculations.

There are times when, although an individual route is being examined, the examination is taking place in an environment where more than one route is under study. A typical scenario is an economy program where, because of budget requirements, numerous routes are under scrutiny. In such cases there is a synergy of savings.

Maintenance is an example where such synergy takes place. Discontinuation of a one-bus shuttle route, in and of itself, will not reduce a system's requirement for mechanics, but five such savings will (if all the buses were housed, or could have been housed, at the same maintenance facility). Such savings that accrue in a step-like fashion are known as semi-variable costs. Conversely, a small system undergoing a major expansion, for example from 75 buses to 150, may incur new fixed overhead costs, such as an assistant in the purchasing department.
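The step-like behavior of semi-variable costs can be sketched as a simple step function. The five-buses-per-mechanic ratio is purely illustrative, echoing the example above rather than any surveyed agency's staffing formula:

```python
def mechanics_no_longer_needed(shuttle_buses_cut, buses_per_mechanic=5):
    """Semi-variable costs accrue in steps: cutting one one-bus
    shuttle frees no mechanic, but cutting five (all housed at the
    same maintenance facility) frees one."""
    return shuttle_buses_cut // buses_per_mechanic
```

Cutting one bus yields zero mechanic savings; cutting five yields one; cutting nine still yields only one, since the next step has not been reached.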

Route Level Performance Relative to Other Routes in the System

A route's percentile rank in an overall ranking of a transit system's routes is its route level performance relative to other routes in the particular system. Seventy-five percent of the systems said that they use this measure.

There are two ways to view the first nine economic and productivity criteria: as absolute measures against some predetermined number(s)/standard(s), or as relative measures, where the subject route's performance is measured against the performance of other routes. At some point in a transit system's history, this criterion will be used. Of the responses, 37 percent said that they looked at the bottom 10 percent of their routes, and 36 percent looked at the bottom 20 percent. This is not necessarily an economy-driven measure. Some systems, as part of their evaluation standards, will routinely, on an annual basis, look at their relatively weak performers and undertake a route level analysis that results in recommendations to strengthen the weaker routes.
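The relative ranking described above can be sketched as follows; the scores are hypothetical, and higher is assumed to mean better:

```python
def bottom_performers(route_scores, fraction=0.10):
    """Return the routes in the bottom `fraction` of a performance
    ranking (37 percent of respondents examined the bottom 10
    percent, 36 percent the bottom 20 percent)."""
    ranked = sorted(route_scores, key=route_scores.get)  # worst first
    count = max(1, int(len(ranked) * fraction))
    return ranked[:count]

# Hypothetical scores (e.g., passengers per hour) for ten routes
scores = {f"Route {i}": 10.0 * i for i in range(1, 11)}
```

With ten routes, a 10 percent cut flags one route and a 20 percent cut flags two, which would then receive route level analysis.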

SERVICE DELIVERY STANDARDS

The transit agencies participating in the survey were asked to report on the criteria they use in measuring a route's service as actually delivered to a passenger. Two criteria were reported: 1) on-time performance and 2) headway adherence (evenness of interval).

In 1984, of these two measures, only on-time performance (called schedule adherence) was discussed. It is acknowledged that because of sampling requirements to have statistically valid data, some systems, especially in the area of headway adherence, may sample at the system, but not the route, level. In other words, they may have collected enough data to get a good picture of system level activity and performance, but in and of itself, the quantity of data is too small to be meaningful at the route level. This is reflective of the fact that often a system is measuring itself as part of good business practice.

On-Time Performance

The definition of on-time performance varies by operator, but usually deals with deviation of actual operation from the schedule (see Tables 8 and 9, which define parameters used by systems). The traditional method for determining service reliability in the transportation industries has been on-time performance; transit is no different in its historic use of this measure. This criterion was reported by 77 percent (85) of the systems. In 1984, 84 percent (92) of reporting systems indicated its use.
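A sketch of how such a definition is applied; because the deviation windows vary by operator, the zero-to-five-minute window below is only an illustration, not any particular system's standard:

```python
def classify_observation(deviation_min, late_after=5.0, early_allowed=0.0):
    """Classify a timepoint observation by its deviation from the
    schedule, in minutes (positive = behind schedule). Many systems
    allow no early running at all, hence early_allowed=0."""
    if deviation_min > late_after:
        return "late"
    if deviation_min < -early_allowed:
        return "early"
    return "on time"

def on_time_percentage(deviations, **windows):
    results = [classify_observation(d, **windows) for d in deviations]
    return 100.0 * results.count("on time") / len(results)
```

For observations of 0, 3, 6, and -1 minutes, this window counts two of four as on time, i.e., 50 percent.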


TABLE 8

DEFINITION OF LATE BUS BY SYSTEM SIZE

Reported definitions of a late bus ranged from 1 minute to more than 5 minutes behind schedule. Of the 86 properties reporting, 31 operated under 50 buses, 29 operated 51 to 200 buses, 12 operated 201 to 500 buses, 7 operated 501 to 1,000 buses, and 7 operated over 1,000 buses. Number of properties not reporting: 25.

TABLE 9

DEFINITION OF EARLY BUS BY SYSTEM SIZE

Reported definitions of an early bus likewise ranged up to more than 5 minutes ahead of schedule, with the same distribution of reporting properties by system size (31, 29, 12, 7, and 7, for a total of 86). Number of properties not reporting: 25.

The reduction in use of on-time performance from 1984 to 1994 is explained, in part, by the questionnaire's requirement of deliberate collection of data for performance monitoring rather than for supervising/dispatching bus service. As Tables 8 and 9 show, smaller operators are far more liberal with their definitions, reflecting the wider intervals that they operate. Tables 10 and 11 show on-time performance by system size, Table 10 for rush hours and Table 11 for non-rush hours.

There is also an issue of "between-day" consistency, which is not readily adapted to conventional on-time performance monitoring. If a specific bus trip is consistently 5 min late, this is preferable to that trip being on time one day, 5 min late the next, etc. While it is difficult to collect this information because of the requirement of repeated daily observations, it is quite useful when examined from the customer's viewpoint. This is a matter that is much easier to monitor with automated equipment such as automatic vehicle location (AVL).
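The between-day idea can be sketched by summarizing one trip's daily lateness with both its average and its spread; the figures below are hypothetical:

```python
from statistics import mean, pstdev

def trip_consistency(daily_minutes_late):
    """Between-day consistency of one scheduled trip: the day-to-day
    spread matters as much as the average lateness."""
    return {"average_late": mean(daily_minutes_late),
            "day_to_day_spread": pstdev(daily_minutes_late)}

consistent = trip_consistency([5, 5, 5, 5])  # always 5 min late
erratic = trip_consistency([0, 5, 0, 5])     # on time one day, late the next
```

The consistently late trip has zero spread, while the erratic trip, though less late on average, has a large spread and is harder for the customer to plan around.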

Headway Adherence (Evenness of Interval)

Headway adherence, or evenness of interval, is the service reliability criterion that measures reliability much the way a customer would see it. The question is whether, within an agency's parameters, the interval scheduled was the interval operated, without regard to whether or not the trips were made by the properly scheduled runs. Trips are monitored at a location based on arrival time, without regard to whether the bus trip that arrived was scheduled for that time slot. For instance, buses are due at 5:03, 5:08, 5:13, and 5:18 p.m. No bus arrives at 5:03 p.m., but buses do arrive at 5:09, 5:14, and 5:18½ p.m. However, it was the 5:03 that arrived at 5:09, the 5:08 that arrived at 5:14, and the 5:13 that arrived at 5:18½ p.m. From the customer's perspective, the scheduled headway, 5 min, was maintained in two of three instances, even though all four trips were late or missing. Twenty-eight percent (31) of the respondents reported using this criterion.
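The customer-side interval check can be sketched as follows; the one-minute tolerance is an illustrative assumption, and the function is a hypothetical helper rather than a surveyed agency's method:

```python
def headway_adherence(arrival_minutes, scheduled_headway, tolerance=1.0):
    """Count successive arrival intervals within `tolerance` minutes
    of the scheduled headway, regardless of which scheduled trip
    each bus actually was."""
    intervals = [b - a for a, b in zip(arrival_minutes, arrival_minutes[1:])]
    kept = sum(1 for gap in intervals
               if abs(gap - scheduled_headway) <= tolerance)
    return kept, len(intervals)

# Arrivals from the example above, in minutes past 5:00 p.m.
kept, total = headway_adherence([9, 14, 18.5], scheduled_headway=5)
```

For the example arrivals, both observed intervals (5 and 4½ min) fall within a minute of the scheduled 5-min headway, even though every trip was late; the same check would score badly bunched arrivals as zero.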

From a rider's perspective, it is possible for a schedule to be adhered to and yet on-time performance from the Transportation Department perspective will be quite poor. This situation comes about when individual buses are each approximately as late as the scheduled interval and the scheduled interval is greater than the agency's on-time performance definition. For example, on a 5-min schedule, every bus is individually 5 min late. With a 4-min benchmark of on-time

TABLE 10

RUSH HOUR PERFORMANCE BY SYSTEM SIZE

Percent on   Under 50   51 to 200   201 to 500   501 to 1,000   Over 1,000   Total
Time         buses      buses       buses        buses          buses
98 to 100%       4          7           0             0              1          12
94 to 97%        6          8           1             1              1          17
90 to 94%        8          8           5             2              1          24
85 to 89%        2          2           2             2              0           8
80 to 84%        5          1           2             0              1           9
75 to 79%        2          2           0             0              0           4
70 to 74%        2          1           1             1              1           6
Below 70%        1          1           1             0              0           3
Total           30         30          12             6              5          83

Number of properties not reporting: 28

TABLE 11

NON-RUSH HOUR PERFORMANCE BY SYSTEM SIZE

Percentage   Under 50   51 to 200   201 to 500   501 to 1,000   Over 1,000   Total
on Time      buses      buses       buses        buses          buses
98 to 100%       6          7           0             0              1          14
94 to 97%       13         11           1             3              1          29
90 to 94%        4          6           7             2              1          20
85 to 89%        3          1           2             1              0           7
80 to 84%        1          0           1             1              0           3
75 to 79%        1          1           0             0              0           2
70 to 74%        2          0           0             0              2           4
Below 70%        1          3           0             0              0           4
Total           31         29          11             7              5          83

Number of properties not reporting: 28

performance, from the agency's viewpoint every bus is late and on-time performance is 0 percent; from the passengers' perspective, performance is 100 percent. Alternatively, with a 2-min frequency, three buses could come bunched together after 4 min. In this case, on-time performance was 100 percent, but from the passenger's viewpoint service was poor.

This criterion began to be commonly used on rapid transit systems, where schedule monitoring and remedial measures are easier to undertake than on bus systems (APTA Multimodal Operations Planning Workshop, Miami, Florida, December 1990). Another major reason for its limited use is the difficulty of providing a clear, concise description of the standard's performance and meaning. Headway adherence, while quite meaningful, is difficult to describe. In the example given above, where on-time performance was 100 percent but service was poor due to bunching, it is possible, with one simple interpretation, to say that service was off 300 percent. (The first bus was 2 min late; the net interval was thus 4 min, or 200 percent of 2 min. The second bus was on time. The third bus was 2 min early, so it is 100 percent off (200 + 0 + 100 = 300).) Those that use headway adherence use it largely for internal purposes, i.e., it is not necessarily publicly reported as other measures are. Over 80 percent of those who do use it have been using it for more than 3 years. The inroads made in the last several years in the area of AVL will make the data more accessible. This, in turn, will bring more attention to the criterion.

PASSENGER COMFORT AND SAFETY STANDARDS

There are potentially six criteria in this category. In 1984, this category was called "Service Quality" and covered factors in the other categories as well, including schedule adherence, vehicle loads, and vehicle headways. For this synthesis, the six criteria in this category are:

Passenger complaints

Missed trips

Unscheduled extras

Accidents

Passenger environment conditions (e.g., vehicle cleanliness, vehicle condition, missing stanchions, blank destination signs)

Special information (or special intervals) in areas where riders do not feel secure waiting for buses.

Passenger Complaints

Passenger complaints are records kept on route level complaints over some unit of measure, such as complaints per 1,000 boardings. A measure of a route's performance is the volume of complaints drawn from customers. In 1984, this criterion was reported by 55 percent (60) of the agencies participating in the study.

In the current survey, the question was phrased: "Do you keep route level information on complaints in a manner other than just a listing, i.e., complaints per mile, or per 1000 boardings?" Sixty-five percent (72) of the reporting systems said they did not.

Missed Trips and Unscheduled Extras

This criterion is defined as trips that are either added to, or removed from, the daily schedule, other than routine schedule changes (e.g., picks, sign-ups, line-ups). Daily operations are dynamic, and although there is an established predetermined schedule, often scheduled trips are either missed (e.g., due to mechanical failure or inordinate driver absences) or trips are added (e.g., for special events at arenas or early school discharges; these are commonly called extras). More than 60 percent (67) of the systems responded affirmatively to the question regarding data collection on missed trips and unscheduled extras. In 1984, 47 percent (51) of the responding agencies said that they monitored this service quality criterion; they were not asked about unscheduled extras in 1984.

Accidents

An accident involves either personal injury or property damage.

Evaluation of accidents on the route level, whether by mile or passenger, can reveal relative anomalies in a routing. For example, high accident rates on a clockwise turn-around operation might be reflective of difficult right turn movements; counter-clockwise turn movements might reduce the hazard. Sixty-five percent (72) of the transit systems stated that they collect route level accident data. In 1984, 58 percent (63) of the agencies used this criterion.

Safety is a universal concern in the transit industry, pervasive through every department, and is certainly monitored by all agencies in some way. Thus, it has a role in the bus route evaluation process.

Passenger Environment Conditions

This bus route evaluation criterion for passenger comfort and safety measures the overall physical environment a passenger will encounter on board the bus, including vehicle cleanliness, vehicle condition, missing stanchions, and blank destination signs. Use of this criterion was reported by almost 50 percent (55) of the agencies.

Approximately one-half of the agencies seek to collect data on this standard with either field surveys conducted by administrative/management staff, structured surveys conducted by traffic checkers with formal training in the survey instrument, or market research, either general or conducted by focus groups.

Special Information (or Special Intervals) in Areas Where Riders Do Not Feel Secure Waiting for Buses

This criterion is defined as bus operations on given routes that are different than they would be otherwise because of security needs.

With increases in street crime (not on board vehicles), some systems operate their buses differently at certain time periods or in certain areas to either minimize passenger waiting at regular bus stops or to avoid regular bus stops altogether. About 7 percent (8) of the respondents reported engaging in this practice.

This measure, which could be viewed as a routing/bus stop or schedule criterion rather than a passenger safety measure, has come into practice in the last 6 or 8 years. Some systems call it "request-a-stop," where alighting riders can request to be let off closer to their home or job. Three of the eight respondents to this criterion were Canadian (one-half of the Canadian respondents). This practice can also be used for safety, rather than security related matters, such as different stops in inclement weather, so as to avoid boarding and alighting on ice patches.


CHAPTER FOUR

THE ADMINISTRATION OF BUS SERVICE EVALUATION ACTIVITIES

This chapter examines transit industry bus service evaluation practices from an organizational perspective. The chapter begins with a discussion of the status of service standards and the change in this status since 1984. This is followed by a section on the basic organization of service evaluation activities in terms of departmental/institutional responsibilities. A discussion of the benefits and results obtained by agencies examined in the industry review concludes the chapter.

THE STATUS OF SERVICE STANDARDS AND CHANGES IN STATUS SINCE 1984

The service standards criteria reported by agencies participating in the industry review have either formal or informal status. As shown in Table 4 (see Chapter 3), the larger the system the more likely its formal adoption of service standards; adoption was also more likely on the West Coast than in most other regions (except Federal region V, Ohio to Minnesota).

Because there was only a 28 percent repeat response rate between the two studies, i.e., agencies that participated in both the 1984 and 1994 surveys, it is not possible to reliably track formal standards adoption other than at the general system size level. Even then, comparison is difficult because the size groupings are somewhat different. Nevertheless, there is a pattern.

Since 1984, the adoption of standards by small and medium-sized systems is virtually unchanged (59 and 79 percent, respectively, in 1984; and 57 and 80 percent, respectively, in 1994).

Large systems, however, changed from 70 to 95 percent. Again, the comparison cannot be precise; for example, in 1984, when adoption of discrete criteria was reported, the large systems reported some criteria at the 100 percent level. Still, with a 25 percent increase in overall adoption, the trend in large (over 500 bus) systems is clear.

Reasons for this are numerous and anecdotal, but they basically deal with the necessity of a publicly open process--something that began after the prior report's data collection efforts. Large systems, in intense media markets, felt the need for formal adoption of standards to withstand the scrutiny of public interest (3).

THE ORGANIZATION OF BUS SERVICE EVALUATION ACTIVITIES

An important objective of this synthesis was to examine the organization and administration of bus service data collection, evaluation, and monitoring activities. Responsibility for the various tasks and activities that are required to conduct evaluations of bus service performance criteria has historically been spread over several work units (directorates, departments, divisions, branches, or offices), with no single unit having total responsibility for all criteria. This is not surprising given that the agency's mission is to operate service. Evaluation, or oversight, will be spread throughout the agency as well, although not expansively. In some instances, separate departments have considerable responsibility for certain criteria and rely little on other departments for assistance or coordination. In other cases, departments within an agency cooperate and share the responsibility for the monitoring and evaluation of performance criteria. One department may be required to collect performance data that is forwarded to another department for processing. The processed data may then be sent to another department where it is assessed and evaluated. An exception to this pattern occurs within a number of small systems where staff sizes do not lend themselves readily to a great deal of decentralization. These agencies tend to centralize activities within one or two departments such as operations, finance, or the office of the director or general manager. A number of organizational structures were reported by transit systems to describe the administration of their bus service evaluation activities. It is possible, however, to identify several general patterns.

Route Design Criteria

It is common for planning departments to assume the major responsibility for evaluation activities relating to route design criteria. Planning departments are responsible for obtaining the population and land use data that are required to evaluate these criteria. These departments frequently lead in the administration of other data collection efforts, such as passenger surveys and cordon counts, which are required on a special or infrequent basis. Planning departments work closely with other areas such as scheduling, transportation, and marketing because they frequently provide supportive data to conduct evaluations of route design criteria and their functions are likely to be impacted by route design decisions.

Where separate planning departments do not exist, as at some smaller agencies, the evaluation of these criteria is normally the responsibility of the agency's schedulers and/or transportation department. In large agencies, planning (or at least service planning) and scheduling are often in one department, typically called operations planning. In some agencies these responsibilities are assumed by an administrative staff or by the system's executive office staff.

Schedule Criteria

Depending on the specific criteria under consideration, different departments are charged with responsibility for performance evaluations. Vehicle loads, vehicle headways, and span of service criteria are usually the responsibility of an agency's schedulers and/or planners. Where planners are responsible for conducting evaluations, they rely to a large extent on data obtained by the scheduling department. Passenger transfer criteria are often the evaluation responsibility of an agency's planning area, which may rely on data collected by transportation and scheduling departments.

The service reliability criterion of on-time performance is often the responsibility of transportation or scheduling departments, with occasional support from maintenance records to evaluate the missed trips criterion.

Economic and Productivity Criteria

The two functional areas most often assigned major responsibilities for evaluating bus services' financial and ridership performance are planning and finance/accounting. The major responsibility for data collection, however, often falls to transportation and, sometimes, scheduling departments. Even those systems that rely on revenue-based ridership estimation methods usually involve bus operators or other transportation personnel in the collection of these data. A common example of this is the use of bus operators to record the meter readings from revenue-registering fareboxes. Vehicle mileage and operating hours data are normally compiled by scheduling departments for each pick or service change, with the mileage and hours lost due to such things as breakdowns and missed trips usually reported by an agency's transportation or maintenance area.

Service Delivery Criteria

Data collection, not for supervision or dispatch purposes, is done by regular street dispatchers/supervisors (starters) conducting a special assignment, dedicated transportation department personnel, or traffic checkers. This process can be automated via automatic passenger counters (APC) or AVL. The data are analyzed in either transportation, scheduling, or planning departments.

Passenger Comfort and Safety Criteria

Evaluation of passenger comfort and safety criteria is conducted by the groups listed for the four previous criteria (except for finance). Safety and marketing staff are also involved in this process.

In trying to ascertain which work units were involved in the evaluation process, the questionnaire also asked numerous questions about the process of collecting and analyzing data. Historically, from the industry's earliest days, staff called traffic checkers collected data, and these staff were often from the schedule department. At many systems, the schedules work unit used to be called the Schedules and Traffic Department, or simply Traffic.

In the medium and large systems, as policy intervals were placed on numerous routes, the traffic checking force was reduced.

With tight budgets, departments were reduced in size before service itself was cut back, and this contributed to further reductions in the checking force. Beginning in the 1970s, some systems became automated. Initially, this impacted the clerking force, i.e., the individuals who summarized the raw data collected by the checkers. In some locations, checkers began filling out optical scan sheets that could be machine read; hand-held devices that could be downloaded into computers (pre-PC) also came into use. The checking function itself was automated at some systems with APCs.

More than automating existing staff functions, however, these activities permitted a re-expansion of the amount of data collected.

Weekend surveying, which is infrequent at best, became more practical. Appendix E consists of cross-tabulations, by system size, of various aspects of the traffic checking and clerking functions. Not surprisingly, except for the largest systems, there are very few sizeable (6 or more) full-time checking forces. Of the 111 respondents to the 1994 survey, only 15 percent (17) had 6 or more full-time checkers. Appendix F consists of a cross-tabulation of selected operating data collection practices by system size.

THE EFFECTIVENESS OF BUS SERVICE EVALUATION ACTIVITIES

An objective of this synthesis was to survey the effectiveness of bus service evaluation efforts being undertaken by U.S. and Canadian transit systems. To report this finding, the questionnaire had a stated preference scale, identical to the one used in 1984, but with five rather than 10 gradations. These questions were employed to measure an agency's perceived effectiveness of these programs in improving four service related factors for its system: 1) service delivery, 2) service equity, 3) ridership, and 4) operating costs.

Systems were asked to provide a rating or score for each of these factors, ranging from one through five, with one reflecting the greatest degree of perceived effectiveness for a particular factor.

In 1994, 84 percent (93) of the surveyed agencies responded to the scaling question on these four factors. Of these 93 systems there were 30 small, 45 medium, and 18 large systems. (In 1984, of the 109 respondents, 60 percent (65) responded to these questions: 38 small, 15 medium, and 12 large agencies.)

Analysis of these four factors reveals an average ratings range of between 1.8 and 3.3, or mid-range responses. In both studies, the highest ratings for effective standards were received in the areas of service delivery and operating costs factors. Of the various systems, the medium-sized group (between 201 and 500 buses) reported the highest average rating score for all of the factors. It is interesting to note that in 1984, the ridership factor experienced the lowest scores for the groupings; in 1994, it was service equity. While the scales are not precisely comparable, trends are discernible. Excepting service equity, all of the factors rated relatively higher in 1994 than their 1984 counterparts. Service equity not only scored lower than the ridership factor in 1994, it scored lower than it did in 1984.


CHAPTER FIVE

CONCLUSIONS AND RECOMMENDATIONS FOR FUTURE RESEARCH

Evaluation standards have progressed considerably from the days of horse-car or omnibus lines, when the main criteria considered were how much money a trip brought in, whether the trip was profitable, and whether the vehicle was able to accommodate passengers.

In 1984, 15 criteria for bus route evaluation standards were studied; in 1994, the number increased to 44. A growing number of transit operators, particularly at the larger systems, have formally adopted evaluation standards. Agencies are using evaluation tools to provide to both themselves and the public a better understanding of where resources are being applied, how well services are being delivered, and how well these services are used. Along with assisting in service deployment decisions, including providing justification for service changes, standards are also providing the requisite analytical tools that agencies need to upgrade the quality of existing service.

Bus route evaluation standards are evolutionary. The more the industry deals in their use, the better the industry understands their individual application; the increase in the number of evaluation criteria, as presented in this synthesis, largely reflects this change in use. An "old" (1984) criterion might now be used as two or three discrete criteria; e.g., minimum intervals and duration of standee time, two separate criteria in 1994, were grouped together under "Service Quality" in 1984. These changes in use, as well as legislation such as the Clean Air Act Amendments of 1990 (CAAA) and the Americans with Disabilities Act (ADA), have caused trends in route level evaluation to emerge in the past 10 years.

EMERGING TRENDS

It is apparent from the expansion in individual criteria for schedule design standards, service delivery standards, and passenger comfort and safety standards that there has been an increase in customer orientation and satisfaction. While these are generally system level concerns, they are only meaningful to riders when carefully applied at the route level. This in turn requires route level assessment and evaluation to ensure proper application. It is useful to reemphasize that there are system level standards that, while not applicable or seemingly meaningful at the route level (e.g., the relationship of route level evaluation standards to the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA)), are produced by the service performance of individual routes taken together. An example of such a measure would be transit's market share. But market share can only increase if customer satisfaction derived at the route (actually trip) level increases.

Understanding the need to focus on the customers' perspective, systems have begun measuring customer satisfaction against dissimilar transit systems. More systems in multi-operator regions are comparing their performance with operators of other sizes and even with other customer service industries. Because passengers judge their satisfaction by what they see on other carriers, cross-size system evaluations have begun to emerge, with the adoption of route level standards that previously seemed appropriate only for systems of other sizes. Because such standards are being employed at the route level, where the customers are, such comparisons and evaluation techniques can be applied. It is thus anticipated that the industry will adapt its waiting time standard to be more akin to those in industries where customers are won by minimizing waiting time.

Given the ADA's complementary paratransit requirement, circumstances can be foreseen under which, based on the route's performance as measured by the evaluation standards selected by the transit system, it may be more effective to provide service in the corridor as a paratransit service available to the general public. It is anticipated that, as an emerging trend, the general evaluation standards seen in 1984 and 1994 may evolve to include measures for paratransit suitability/substitution as well.

Because route level evaluation standards are often used for routes that operate together in a common corridor (and the route level evaluation standards are then used as corridor level evaluation standards), it is both logical and likely that measures will be developed by transit systems to ascertain at the route/corridor level the interrelationship between the route and the 1990 CAAA. This will be especially true for those systems with routes that operate in designated areas with air quality problems.

Lastly, it can be expected that evaluations will take place more often. Especially as route level data become more readily available, particularly with regard to service reliability information, the trend of increasing data collection is expected to continue. It is anticipated that with new technology and more sophisticated reporting capabilities, this information will be collected more often and more cost effectively, thereby being employed more frequently by the industry. An example of the application of new, smart technology to improve service performance monitoring will be its use to address such issues as "between-day" consistency.

Much like a census, it is important to monitor progress in this field through periodic updates such as this synthesis. In addition to a general update of the state of the art and a close watch on emerging trends, a particular area to monitor is that of waiting time versus route level real time information. Because drivers can get into their cars spontaneously, they have come to expect much the same immediacy of transit.

Although from a policy standpoint it appears to be a system level issue, transit's increased use of automatic vehicle location (AVL) systems will create another route evaluation issue and new opportunities.

AVL will bring real time information down to the bus stop level. Bus route evaluation issues will likely include how to select among routes when initially installing the system, what route level standards will determine this, and, further, whether a wait becomes more tolerable to an intending passenger when real time information is provided. Particularly in light of the impending proliferation of real time information, new research on passenger waiting attitudes may be appropriate. Because congestion in many systems shows no sign of decreasing, which may lead to increased use of transit in some form, systemwide increases reflected at the route level may be explored in detail in the future using this technology.

Simply because changes, foreseen and unforeseen, routinely happen, it is suggested that bus route evaluation standard reviews be undertaken more routinely in the future.

REFERENCES

1. Metropolitan Transit Authority of Harris County (METRO), Bus Service Evaluation Methods: A Review, Office of Planning Assistance, Urban Mass Transit Administration, U.S. Department of Transportation, Washington, D.C., 1984.

2. Koski, R.W., "Bus Transit," in Public Transportation (G.E. Gray and L.A. Hoel, eds.), Prentice Hall, New Jersey, 1992.

3. Barton-Aschman Associates, Inc., Comprehensive Operations Analysis of the Bus System, Honolulu Public Transit Authority, Hawaii, 1993.

BIBLIOGRAPHY

Canadian Transit Handbook, 2nd ed., Canadian Urban Transit Association and the Roads and Transportation Association of Canada, 1985.

Levinson, H.S., NCTRP Synthesis of Transit Practice 15: Supervision Strategies for Improved Reliability of Bus Routes, Transportation Research Board, National Research Council, Washington, D.C., 1991.

NCHRP Synthesis of Highway Practice 69: Bus Route and Schedule Planning Guidelines, Transportation Research Board, National Research Council, Washington, D.C., 1980.


APPENDIX A

1984 Survey Instrument

THE 1984 QUESTIONNAIRE

AUTHORITY

DESCRIPTION

SIZE OF SERVICE AREA [sq. miles]

POPULATION OF SERVICE AREA

TOTAL NUMBER OF BUSES

NUMBER OF UNLINKED PASSENGER TRIPS [FY80]

NUMBER OF REVENUE BUS MILES [FY80]

NUMBER OF BUS ROUTES

SIZE OF TRAFFIC CHECKER FORCE

REVIEW OF EVALUATION PROCEDURES

Property's perceived effectiveness of evaluation efforts in improving service delivery, promoting equity in service distribution, increasing ridership, and reducing operating costs.

SERVICE DELIVERY ..... 1 2 3 4 5 6 7 8 9 10

EQUITY .......................... 1 2 3 4 5 6 7 8 9 10

RIDERSHIP...................... 1 2 3 4 5 6 7 8 9 10

OPERATING COSTS ..... 1 2 3 4 5 6 7 8 9 10

COMMENTS

PROBLEMS ENCOUNTERED

PERFORMANCE CRITERIA UTILIZED

CRITERIA    FORMAL STANDARD    INFORMAL STANDARD    PROPOSED STANDARD    MONITOR ONLY


APPENDIX B

1994 Survey Instrument

TRANSIT COOPERATIVE RESEARCH PROGRAM

SYNTHESIS PROJECT SA-1

BUS ROUTE EVALUATION STANDARDS

QUESTIONNAIRE

Bus Route Evaluation Standards are indicators that measure the quality and quantity of service offered by a public transit system's bus routes. They include a number of items that determine, as well as reflect, the manner in which transit systems offer service to the public, and are often directly related to the costs of service provision.

This project will compile current industry practice in the form of a synthesis. It will be an update of Bus Service Evaluation Methods: A Review, published in 1984 by the U.S. Department of Transportation and based largely on data obtained in 1981. That study analyzed then-current practices in the industry, and was itself an update of a 1979 report, Bus Service Evaluation Procedures: A Review.

Certain of the 1984 standards are no longer applicable in today's changed environment, and revised and completely new standards have come into play in recent years, particularly in the areas of service delivery and performance monitoring. It is the intent of this synthesis to collect data from transit properties from around the country, to present these data in light of recent developments, and to produce a report that will serve the industry with 'tools' until the next update. It is not the purpose of this effort to critique a transit property. Rather, we wish to determine, in part from this questionnaire, what it is that you/your property does in the area of bus route (not system) evaluation standards and how you undertake these efforts. Additionally, by comparing the results of this effort with those of the prior study, we will try to determine trends in bus route evaluation methods and procedures.

In order to reduce questionnaire processing costs, we are using general purpose bubble/optical scan sheets that are normally geared to an individual's response, not that of an organization.

Please ignore the lower left-hand corner where demographic information is requested, except for the identification number. In the identification number boxes, A - J, in a right-justified fashion, please enter your Section 15 ID number. Where last name/first name is requested in the upper left-hand corner, please place, left-justified, your organization's name.

Please use a number 2 'lead' pencil or equivalent. In addition, we are requesting certain additional materials and your comments (requests for comments or materials are interspersed throughout the questionnaire). Comments should be typewritten/word-processed on your agency's stationery. An extra optical scan answer sheet is provided if a mistake is made on the first one.


A. NAME OF AGENCY 1

______________________________________________________

Contact person _______________________________________________________________

Address _____________________________________________________________________

_____________________________________________________________________

_____________________________________________________________________

_____________________________________________________________________

Telephone no.______________________ Fax no ____________________________________

Section 15 ID number ___________________________________

The number of each item or question is keyed to the answer sheet. As appropriate, please draw your answers from your Section 15 FY 92 submittal. Where more than one circle should be darkened in an answer, it is indicated; there are times when it will be appropriate for you to leave a question blank, or you may not wish to answer a question.

We are genuinely interested in hearing about your system and your use of standards or, as some call them, guidelines (in this questionnaire the terms are used interchangeably). We strongly invite your comments. Please tell us whether, in developing your standards, they were keyed to your best performers, your median performers, or something else. Tell us about measures that you use that we didn't ask about. Tell us what you would like to see asked next time, and where transit research in this area should head. Tell us what's useful and what is not. Are standards effective in improving transit systems? Do they effectively measure your routes' (and system's) performance?

This page (number 2), your comments, materials you wish to transmit and completed scan sheets should be sent to:

Howard Benn

Chief Transit Operations/Planning Officer

Barton-Aschman Associates, Inc.

The Parsons Transportation Group

1133 15th St., NW

Washington, DC 20005-2701 phone (202) 775-6066 fax (202) 775-6080

___________________________

1 Where service is provided by a contractor, public or private, the name, address and phone numbers of the contracting public agency should be provided if it is they who establish, by either board or agency staff action, the service standards or guidelines. Otherwise, the operator/contractor is requested to complete it.


B. SERVICE AREA/PROPERTY DESCRIPTION

1] Size of service area (square miles) a]under 50 sq. mi. b]50-100 sq. mi.

c]100-200 sq. mi. d]200-500 sq. mi.

e]over 500 sq. mi.

2] Population of service area (1990 census) a]under 50,000 b]50,000-100,000 c]100,000-250,000 d]250,000-750,000 e]over 750,000

3] Total number of annual unlinked passenger trips (boardings) [Section 15 FY 92] a]under 500,000 b]500,000-5,000,000 c]5,000,000-25,000,000 d]25,000,000-75,000,000 e]over 75,000,000

4] Total number of annual linked passenger trips (boardings) 2 [Section 15 FY 92] a]under 500,000 b]500,000-5,000,000 c]5,000,000-25,000,000 d]25,000,000-75,000,000 e]over 75,000,000

5] Annual number of revenue bus miles [Section 15 FY 92] a]under 125,000 b]125,000-1,250,000 c]1,250,000-6,250,000 d]6,250,000-18,750,000 e]over 18,750,000

6] Annual number of "passengers × average passenger trip length in miles" (i.e., passenger-miles, not passengers per mile) a]under 3,000,000 b]3,000,000-30,000,000 c]30,000,000-150,000,000 d]150,000,000-500,000,000 e]over 500,000,000
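The distinction drawn in question 6 (passenger-miles, not passengers per mile) can be illustrated with a short sketch; all figures below are hypothetical, not drawn from any respondent's Section 15 data:

```python
# Passenger-miles vs. passengers per mile, using hypothetical figures.
# Passenger-miles = annual unlinked boardings x average passenger trip length.

annual_boardings = 4_000_000          # unlinked passenger trips (assumed)
avg_trip_length_miles = 3.5           # average passenger trip length (assumed)
annual_revenue_bus_miles = 2_500_000  # revenue bus miles (assumed)

# A measure of service consumed:
passenger_miles = annual_boardings * avg_trip_length_miles

# A productivity measure (compare questionnaire item 53/54):
passengers_per_mile = annual_boardings / annual_revenue_bus_miles

print(f"passenger-miles: {passenger_miles:,.0f}")
print(f"passengers per mile: {passengers_per_mile:.2f}")
```

The two quantities answer different questions: the first sizes total travel consumed on the route, while the second relates boardings to the service supplied.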

7] Total number of required peak period buses a]under 50 b]51-200 c]201-500 d] 501-1000 e]over 1000

8] Spare ratio (as a percentage of total number of required peak period buses) a]under 10% b]10-12% c]13-17% d]17-20% e]over 20%

9] Number of individual bus routes 3 a]under 10 b]10-25 c]26-50 d]51-75 e]over 75

10] Number of bus operators/mechanics/first level supervision 4 a]under 50 b]50-199 c]200-499 d]500-1499 e]over 1500

_______________________

2 Please use multi-modal linked boardings if known.

3 As a rider would count them. If, for example, schedules for two routes that are through-routed are counted by the schedules department as one product but a rider on the street would count it as two, please report it as two routes.

4 Maintenance supervisors, street supervisors/dispatchers, control center/command center supervisors, etc.

11] Are you a multimodal property (bus and rail)?

a] yes b] no

C. DATA COLLECTION

Staffing/Automation

12] Please darken the appropriate circle: a]data is collected by in-house staff b]data is collected by contract services c]data not collected

13] Please darken the appropriate circle: a]all traffic checking is manual b]manual traffic checking is assisted by hand held devices c]automated passenger counters (apc) are used entirely d]apc's are supplemented by traffic checkers and/or hand held devices e]fare boxes

14] Size of full time traffic checker force a] 0 b] 1-5 c] 6-11 d] 12-20 e]over 20

15] Size of part time traffic checker force a] 0 b] 1-5 c] 6-11 d] 12-20 e]over 20

16] Do you have a dedicated traffic clerking force 5 a]yes - under 4 b]yes - over 4 c]no

If you answered yes, a or b, please go to question #18

17] Is your traffic clerking function combined with the duties of a]service analysts/planners b]schedulemakers c]traffic checkers -or- d]totally automated, with a completed product going to planners, analysts and schedulemakers

________________________________

5 Traffic clerking is the function of summarizing and analyzing traffic checks when the checks come into the office.


Operating Data Collection Practices

18] Frequency of typical route level weekday point checks 6 a]1 set 7 per pick per year b]2 sets per pick per year c]1 set per alternate pick d]once per year e]every other year or less frequently

19] Your property's definition of a set a]1 b]2 c]3 d]4

20] Frequency of route level Saturday point checks a]1 per year b]2 per year c]every other year d]every third year e] hardly ever

21] Frequency of route level Sunday point checks a] 1 per year b]2 per year c]every other year d]every third year e] hardly ever

22] Frequency of typical route level weekday ride checks 8 a]1 set per pick per year b]2 sets per pick per year c]1 set per alternate pick d]once per year e]every other year or less frequently

23] Frequency of route level Saturday ride checks a] 1 per year b]2 per year c]every other year d]every third year e] hardly ever

24] Frequency of route level Sunday ride checks a] 1 per year b]2 per year c]every other year d]every third year e] hardly ever

25] Frequency of statistically valid 9 'ride' checks on what your property considers to be a major route a]2 per year b]once per year c]every other year d]every third year or less

26] Frequency of route level weekday running time checks a]1 set per pick per year b]2 sets per pick per year c]1 set per alternate pick d]once per year e]every other year or less frequently

27] Frequency of route level weekend running time checks a] 1 per year b]2 per year c]every other year d]every third year e] hardly ever

________________________________

6 At individual locations.

7 Some properties will check a schedule only once per pick. Some will check twice per pick and average them if they are reasonably close to each other. In this case 'two' individual checks constitute one set. Other properties use 3 or 4 individual checks to constitute a set. In question 19 you are asked what constitutes a set at your property.

8 A check where either a person or device rides a bus trip in its entirety, from terminal to terminal (or designated location to designated location).

9 On extremely frequent routes, i.e., 2 to 4 minute frequencies, a 25% to 30% sample size would be considered statistically valid.


28] Treatment of holiday schedules a] as a Saturday schedule b] as a Sunday schedule c]both Saturday and Sunday as appropriate d]special schedules written e] c and d

29] Data (ridership and/or running time) for holiday schedules, specially written or otherwise, is collected a] yearly b]every other year c]every third year d] hardly ever

30] In collecting route level information/data, does your property regularly use any of the following techniques with riders (darken all that apply) a] focus groups b] general market research (telephone, post card) c]on-board surveys d]media based surveys e]meetings with organized riders' groups or community councils/boards

31] In collecting route level information/data, does your property regularly use any of the following techniques with bus operators (darken all that apply) a]operator quality 'circles' b]TQM (Total Quality Management) c]special union-management meetings d]route or garage based teams or task forces e]employee suggestion plans or comment process

In the comments section please tell us if you handle peak hours differently than non-peak hours and how you handle your requirements for section 15.

D. ADOPTION PROCESS FOR STANDARDS (GUIDELINES)

32] Does your property have formal guidelines in any of the following areas: route design, schedule design, productivity or service delivery?

a] yes b]no

If you answered yes, go to question 33; if you answered no go to #35

33] Were they adopted by your governing board?

a] yes b]no

34] If you answered no to #33, who were they adopted by a]Executive Director/General Manager b]head of transportation/operations c]head of schedules/service planning d]other

35] If you answered no to #32 do you have a]documented informal standards 'adopted' by either schedules/service planning or transportation/operations b]policy from the Executive Director/General Manager c]proposed standards/guidelines d]none of the preceding

STANDARDS (GUIDELINES)

(Answer these sections only if, formally or informally, your property uses guidelines/ standards. If not skip to section H.)


E. ROUTE DESIGN STANDARDS

36] Which of the following factors does your property use in its route design (geographic coverage) guidelines (answer all that apply) a]population density b]employment density c]spacing between routes/corridors d]limitation on the number of deviations e]equal coverage through local tax base

37] In initially approaching the design of individual routes, routes are: a] first designed on their own, and then integrated into the system b] existing system solutions are sought first, from which a new routing may evolve c] either a] or b]

38] Which one answer best describes the primary design element of new and/or revised routings in your system: a]service to unserved areas b]network connectivity c]system design considerations (enhancement of pulse meets, etc.) d]streamlining/reduction of routing duplications e]service equity 10

40] For long 11 routes, do you have a criterion for maximum local operation, and then express operation: a]yes, 20 minutes b]yes, 30 minutes c]yes, 45 minutes d]yes, 60 minutes e]no

If your interval is not specified, please pick the closest answer and detail your reply in the comments

41] What is the typical non-CBD bus stop spacing in your system: a]less than 4 per mile b]4 per mile c]6-8 per mile d]10-12 per mile e]more than 12 per mile

42] What is the typical CBD bus stop spacing in your system: a]4 per mile b]6-8 per mile c] 10-12 per mile d]13-16 per mile e]more than 16 per mile

________________________________

10 Service equity is deliberately not defined here. Should you wish, in the comments section, please provide us with your definition.

11 Time or distance or both.


43] Which bus stop siting does your property/municipality 12 prefer: a]near side b]far side c]mid-block d]no preference

In your comments, please tell us how passenger traffic demand influences route design in your service area and how you would rank the various criteria in terms of usefulness. Please send us copies of your guidelines as well.

F. SCHEDULE DESIGN STANDARDS

44] Does your property have differing guidelines for local, limited and express buses?

a] yes b] no

If no, go to #46

45] Do these various services have different basic fares, i.e., are there surcharges for the non-local bus services?

a] yes b] no

46] Does your property have schedule guidelines for (darken all circles that apply): a]maximum intervals b]minimum intervals c]rush hours that are different than non-rush hours d]maximum number of standees e]no standees

47] Do different character local routes (crosstown, feeder, grid, shuttle, etc.) have different guidelines?

a] yes b] no

48] Are your guidelines based on: a] waiting time between buses b] loading standards c] both

49] When schedule intervals are greater than 10-12 minutes, do you schedule only clockface (memory) schedules 13? a]yes b]no

50] Does your property have a duration limit for the amount of time a rider is expected to stand on board a bus?

a] yes b] no

51] Does your property have guidelines for transfers/meets?

a] yes b] no

If you answered yes, and if they vary by time of day or service frequency, please discuss them in your comments.

________________________________

12 The answer should be that of the organization responsible for placement of the stop.

13 That is, intervals that are evenly divisible into 60 -- 12, 15, 20, 30 and 60 minutes. For this questionnaire, 45, 90 and 120 are also considered memory schedules.


52] Do you have guidelines/standards for 'span of service'? 14 a]yes, they're based on system considerations b]yes, they're based on route level considerations c]a] and b] d]all/virtually all regular (local) routes operate the same hours whenever the system is up and running, without a policy basis e]no

You are requested to send in your scheduling/ loading standards (guidelines).

G. PRODUCTIVITY STANDARDS/MEASURES

53] Which of the following productivity measures does your property regularly utilize in your (local) bus route analysis (please darken all that apply): a]passengers per mile b]passenger miles c]passengers per hour d]passengers per trip e]cost per passenger

54] Generally, what is the minimally acceptable standard for passengers per mile?

a]1 b]2-3 c]3-5 d]5-8 e]over 8

55] Generally, what is the minimally acceptable standard for passengers per hour? a]4 b]5-10 c]11-20 d]21-35 e]36 or over

56] Generally, what is the minimally acceptable standard for revenue per passenger per route 15? a]25% of variable cost b]33% c]50% d]67% e]75%

57] Generally, what is the minimally acceptable standard for subsidy per passenger?

a] 2 times the system average b] 3 times the system average c] 4 times the system average d] 5 times the system average e] greater than 5 times

58] Generally, what is the minimally acceptable standard for cost per passenger?

a] 2 times the system average b] 3 times the system average c] 4 times the system average d] 5 times the system average e] greater than 5 times

59] Generally, on a route level, minimum acceptable (variable) recovery ratios are: a] below 25% b] 26-40% c]41-60% d] 61-75% e] over 75%

60] Overall, taking the above criteria together, in doing systemic route level assessments, where do you start: a] bottom 10% b] bottom 20% c] bottom 25% d] bottom 33 1/3%

61] Of the following criteria, which is the most critical in your assessment of route performance: a]passengers per mile b]passengers per hour c]cost per passenger d]subsidy per passenger e]political feasibility

________________________

14 The span of service is defined as the hours when service runs, i.e., the start of the service day until the end of the service day (or 24 hour service).

15 If the selection is not exactly what you use, round down to the closest selection.


62] In doing a route level cost analysis, do you use 16: a]variable costs b]semi-variable costs c]fully allocated/fixed costs d]a mix of a, b or c e]at differing times, all of a, b and c

63] At your property, variable costs are what percent of fully allocated costs in your system: a]below 20% b]20-35% c] 36-50% d] 50-65% e]over 65%

64] Who is ultimately responsible for the selection of which cost numbers to use: a]Budget Department (or some other financial department) b]Operations/transportation department c]service planning/schedules department d]other department e]combination of departments

You are requested to send in your productivity standards and to tell us how you would rank the various criteria in terms of usefulness.

H. SERVICE DELIVERY MONITORING/PERFORMANCE AUDITS

65] Are there formal procedures for engaging in service monitoring/performance audits 17? a]yes b]no

If you answered no, please skip to question 82

66] Which department/division undertakes these efforts?

a]Transportation/Operations b]Schedules/Service Planning c]Internal Audit d]a and b e]other

67] What is monitored/audited on the bus route 18 level: a]On-time performance b]Headway adherence (evenness of interval) c]both d]neither

Please answer questions #68-73 only if you answered a, b or c to #67

On-Time Performance

68]A bus is officially late when it is more than _____ minute(s) behind schedule a] 1 b]2 c]4 d]5 e]over 5

__________________________

16 In your comments, please tell us which cost components you include in each of a], b] and c]. For example, some properties in 'a] variable costs' include operator wages, fuel and wear & tear items. Others include prorated mechanic wages.

17 For the purpose of collecting data rather than supervising/dispatching bus service.

18 If your property monitors either of these measures on a system level but not on a route level, please tell us in your comments. If different standards are used on a route level basis versus the system level, e.g., 1 minute route level versus 5 minute system level, please tell us that also.


69]A bus is officially early when it is more than _____ minute(s) ahead of schedule a] 1 b]2 c]4 d]5 e]over 5

70] Route level data is collected at (darken all that apply): a]departing terminals b]mid-line time points c]various mid-line locations (other than time points) d]arriving terminals

Please darken the circle for the answer that best fits your property in 1992. Systemwide rush hour performance was:

71] a]98-100% b]94-97% c]90-94% d]85-89% e]80-84%

72] a]75-79% b]70-74% c]60-69% d]below 60%

73] Does the answer to #71-72 reflect an average of wide variations, i.e., a wide range of individual route performances, or are the routes in the system fairly consistent?

a] wide range b]the routes in the system are fairly consistent with each other

Please darken the circle for the answer that best fits your property in 1992. Systemwide non-rush hour performance was:

74] a]98-100% b]94-97% c]90-94% d]85-89% e]80-84%

75] a]75-79% b]70-74% c]60-69% d]below 60%

76] Does the answer to #74-75 reflect an average of wide variations, i.e., a wide range of individual route performances, or are the routes in the system fairly consistent?

a] wide range b]the routes in the system are fairly consistent with each other

Headway Adherence (Evenness of Interval)

77] How does your property measure headway adherence a] as a percent of interval b]as an absolute measure, in seconds/minutes c] in another manner (please describe in comments) d] not measured (go to #79)
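The two measurement approaches in question 77 can be sketched as follows; the tolerance values used here are hypothetical illustrations, not thresholds drawn from the synthesis:

```python
# Headway adherence measured two ways, with hypothetical tolerances:
# (a) as a percent of the scheduled interval, (b) as an absolute number of minutes.

def adheres_percent(actual_headway_min, scheduled_headway_min, tolerance_pct=50):
    """Within tolerance if the deviation is <= tolerance_pct of the scheduled interval."""
    deviation = abs(actual_headway_min - scheduled_headway_min)
    return deviation <= scheduled_headway_min * tolerance_pct / 100

def adheres_absolute(actual_headway_min, scheduled_headway_min, tolerance_min=3):
    """Within tolerance if the deviation is <= a fixed number of minutes."""
    return abs(actual_headway_min - scheduled_headway_min) <= tolerance_min

# A 14-minute gap on a scheduled 10-minute headway:
print(adheres_percent(14, 10))   # 4-minute deviation vs. a 5-minute (50%) tolerance
print(adheres_absolute(14, 10))  # 4-minute deviation vs. a 3-minute tolerance
```

As the example shows, the percent-of-interval measure scales with service frequency, while the absolute measure applies the same tolerance to a 10-minute headway and a 30-minute headway.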

78] How long has your property used this measure a]less than 1 year b] 1 to 3 years c]more than 3 years

Please darken the circle that best fits from either #79 or #80.

If you had to pick one primary cause of poor on-time performance or poor headway adherence, it would be:

79] a]obsolete, outdated schedules b]erratic day-to-day operations, due to inconsistent passenger loadings, for which it is impossible to write reliable schedules c]erratic day-to-day street traffic for which it is impossible to write reliable schedules d]inadequate supervisory resources

80] a]other forces beyond the control of the operating property b]poor operator performance c]lack of up to date data

81] If you are a multi-modal property, are there some measures used on one mode but not the other? Darken all circles that apply a] no b]yes; headway adherence for rail but not for bus c]late on-time performance (and/or headway adherence) more stringent for rail than bus d] early arrivals not permitted on rail; bus permitted up to limits in answer #69.


Please discuss other measures you use in your comments.

Passenger Comfort and Safety

82] Do you have a formal mechanism whereby passenger comfort is evaluated? a]yes, field surveys by planning/marketing/management personnel b]yes, "Passenger Environment Surveys" 19 c]yes, focus groups/general market research d]yes, parts of a and/or b and/or c e]no

83] Do you keep route level information on complaints in a manner other than just a listing, i.e., complaints per mile, or per 1000 boardings?

a] yes b] no

84] Do you keep route level information on missed trips or unscheduled extras?

a] yes b] no

85] Do you keep route level information on accidents?

a] yes b] no

86] Do you provide special 20 intervals, or special information about schedules, in areas where your riders do not feel secure waiting for buses?

a] yes b] no

The evaluation efforts identified above are used to improve SERVICE DELIVERY, promote EQUITY in service distribution and availability, increase RIDERSHIP, and reduce OPERATING COSTS. As a summary, please indicate the effectiveness of the evaluation efforts upon each. Please rank the evaluation efforts' impacts, with a] as greatest and e] as least influential.

87] SERVICE DELIVERY..... a b c d e

88] EQUITY ............... a b c d e

89] RIDERSHIP ............ a b c d e

90] OPERATING COSTS ...... a b c d e

___________________________

19 Structured surveys done by traffic checkers with formal training in the particular survey instrument.

20 Out of the ordinary for your system.

APPENDIX C

Frequency Response to Each Question


APPENDIX D

THE STATUS OF STANDARDS AT RESPONDING AGENCIES

______________________________________________________________________________________________

Formally Adopted Standards

______________________________________

Agency Yes No

______________________________________________________________________________________________

More Than 1,000 Buses (9 respondents)

MTA NYC TRANSIT    New York, NY
NJ TRANSIT    Newark, NJ
WMATA    Washington, DC
CHICAGO TRANSIT AUTHORITY    Chicago, IL
METRO HARRIS COUNTY TRANSIT    Houston, TX
LACMTA    Los Angeles, CA
MUNI OF METRO SEATTLE    Seattle, WA
TORONTO TRANSIT COMMISSION    Toronto, ON
STCUM    Montreal, PQ

(SEPTA, Philadelphia, PA, did not respond to this question.)

X

X

X

X

X

X

X

X

X

501 to 1,000 Buses (9 respondents)

MBTA    Boston, MA
PORT AUTHORITY TRANSIT    Pittsburgh, PA
MASS TRANSIT ADMIN    Baltimore, MD
GREATER CLEVELAND RTA    Cleveland, OH
PACE SUBURBAN BUS    Suburban Chicago, IL
DALLAS AREA RAPID TRANSIT    Dallas, TX
RTD    Denver, CO
SF MUNI    San Francisco, CA
BC TRANSIT    Vancouver, BC

201 to 500 Buses (16 respondents)

MTA LONG ISLAND BUS    Long Island, NY
WESTCHESTER COUNTY, NY    White Plains, NY
METRO DADE TRANSIT AUTHORITY    Miami, FL
MILWAUKEE COUNTY TRANSIT    Milwaukee, WI
SORTA    Cincinnati, OH
VIA METRO TRANSIT    San Antonio, TX
RTA    New Orleans, LA
CAPITAL METRO    Austin, TX
THE BUS    Honolulu, HI
SAMTRANS    San Carlos, CA
SCCTA    San Jose, CA
GOLDEN GATE TRANSIT    Suburban SF, CA
SAN DIEGO TRANSIT    San Diego, CA
TRI MET    Portland, OR
MISSISSAUGA TRANSIT    Mississauga, ON

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

CALGARY TRANSIT    Calgary, AB    X

______________________________________________________________________________________________


51 to 200 Buses (34 respondents)

CONNECTICUT TRANSIT    New Haven, CT
CDTA    Albany, NY
CENTRO    Syracuse, NY
PENTRAN    Newport News, VA
TIDEWATER TRANS DISTRICT    Norfolk, VA

X

X

X

X

X

LANTA    Allentown, PA
MATA    Memphis, TN
CTS    Charlotte, NC
TANK    Fort Wright, KY
BCTA    Broward County, FL
COTRAN    Palm Beach County, FL
METRO AREA EXPRESS    Birmingham, AL
VALLEY TRANSIT    Appleton, WI
MADISON METRO    Madison, WI
GRATA    Grand Rapids, MI
AATA    Ann Arbor, MI
METRO IPTC    Indianapolis, IN
CHAMPAIGN-URBANA MTD    Urbana, IL
RTA    Corpus Christi, TX
METRO AREA TRANSIT    Omaha, NE
KCATA    Kansas City, MO
GOLDEN EMPIRE TRANSIT    Bakersfield, CA
MTD    Santa Cruz, CA
RTD    Sacramento, CA
MTD    Santa Barbara, CA

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

X

LONG BEACH TRANSIT    Long Beach, CA
FRESNO AREA EXPRESS    Fresno, CA
OMNITRANS    San Bernardino, CA
RIVERSIDE TRANSIT AGENCY    Riverside, CA
CENTRAL CONTRA COSTA TRANSIT    Bay Area, CA
SPOKANE TRANSIT    Spokane, WA
PIERCE TRANSIT    Tacoma, WA
C-TRAN    Vancouver, WA
GO TRANSIT    Suburban Toronto, ON

Under 50 Buses (42 respondents)

X

X

X

X

X

X

X

X

X

X

X

X

X

GBTD    Bridgeport, CT
HOUSATONIC AREA REGIONAL TRANSIT DISTRICT    Danbury, CT
MART    Fitchburg, MA
CCRTA    Dennis, MA

X

X

X

X

TRIBORO COACH CORP.

Queens, NY X

______________________________________________________________________________________________

43

______________________________________________________________________________________________


BLACKSBURG TRANSIT                       Blacksburg, VA
ALEXANDRIA TRANSIT                       Alexandria, VA
COUNTY OF LEBANON TRANSIT                Lebanon, PA
RED ROSE TRANSIT                         Lancaster, PA
YORK COUNTY TRANS. AUTHORITY             York, PA
RURAL TRANSIT                            Kittanning, PA
LEXTRAN                                  Lexington, KY
OWENSBORO TRANSIT SYSTEM                 Owensboro, KY
AUGUSTA PUBLIC TRANSIT                   Augusta, GA
CHATHAM AREA TRANSIT                     Savannah, GA

VOTRAN                                   Volusia, FL
MONTGOMERY AREA TRANSIT                  Montgomery, AL
SARASOTA CO. AREA TRANSIT                Sarasota, FL
GTA                                      Greenville, SC
SPACE COAST AREA TRANSIT                 Melbourne, FL
GTA                                      Greensboro, NC
CATA                                     Lansing, MI
GPTC                                     Gary, IN
MUNCIE TRANSIT SYSTEM                    Muncie, IN
CITIBUS                                  Lubbock, TX

WTS                                      Waco, TX
PAT                                      Port Arthur, TX
COTPA                                    Oklahoma City, OK
CENTRAL ARKANSAS TRANSIT                 Little Rock, AR
BRAZOS TRANSIT SYSTEM                    Brazos, TX
CITY UTILITIES                           Springfield, MO
MTA                                      Topeka, KS
LOGAN TRANSIT DISTRICT                   Logan, UT
SAN JOAQUIN RTD                          Stockton, CA
SOUTH COAST AREA TRANSIT                 Oxnard, CA

CULVER CITY MUNI BUS                     LA County, CA
M-ST                                     Monterey-Salinas, CA
F/STS                                    Fairfield, CA
SANTA CLARITA TRANSIT                    Santa Clarita, CA
SCAT                                     San Luis Obispo, CA

MUNI                                     Anchorage, AK
SALEM AREA MASS TRANSIT DISTRICT         Salem, OR

APPENDIX E

CROSS-TABULATIONS, BY SYSTEM SIZE, OF VARIOUS ASPECTS OF THE TRAFFIC CHECKING AND CLERKING FUNCTIONS

TABLE E-1
DATA COLLECTION METHOD BY SYSTEM SIZE

                         Under 50  51 to 200  201 to 500  501 to 1,000  Over 1,000
                         buses     buses      buses       buses         buses        Total
Collected In-House          38        32         13            7            9          99
Collected by Contractor      3         2          2            2            1          10
Total                       41        34         15            9           10         109

Number of Properties Not Reporting: 2

TABLE E-2
METHOD OF CHECKING BY SYSTEM SIZE

[The row labels for the checking-method categories, and the Under-50-buses column, did not survive extraction. Column entries, in the order printed:]

51 to 200 buses: 16, 9, 0, 5, 4 (column total 34)
201 to 500 buses: 2, 1, 8, 4, 0 (column total 15)
501 to 1,000 buses: 1, 0, 5, 3, 0 (column total 9)
Over 1,000 buses: 1, 0, 7, 2, 0 (column total 10)
Category totals across all sizes: 61, 23, 1, 9, 14 (grand total 108)

Number of Properties Not Reporting: 3

TABLE E-3
SIZE OF FULL-TIME CHECKER FORCE BY SYSTEM SIZE

            Under 50  51 to 200  201 to 500  501 to 1,000  Over 1,000
            buses     buses      buses       buses         buses        Total
None           33        18          4            1             2         58
1 to 5          6        16          8            1             2         33
6 to 11         1         0          3            3             1          8
12 to 20        0         0          0            3             1          4
Over 20         0         0          0            1             4          5
Total          40        34         15            9            10        108

Number of Properties Not Reporting: 3

TABLE E-4
SIZE OF PART-TIME CHECKER FORCE BY SYSTEM SIZE

            Under 50  51 to 200  201 to 500  501 to 1,000  Over 1,000
            buses     buses      buses       buses         buses        Total
None           27        11          7            6             7         58
1 to 5         12        22          5            0             0         39
6 to 11         1         1          2            1             1          6
12 to 20        0         0          1            1             0          2
Over 20         0         0          0            1             2          3
Total          40        34         15            9            10        108

Number of Properties Not Reporting: 3

TABLE E-5
DEDICATED TRAFFIC CLERKING FORCES BY SYSTEM SIZE

            Under 50  51 to 200  201 to 500  501 to 1,000  Over 1,000
            buses     buses      buses       buses         buses        Total
None           36        23          9            7             0         75
Under 4         4         8          5            2             4         23
Over 4          1         2          1            0             6         10
Total          41        33         15            9            10        108

Number of Properties Not Reporting: 3

TABLE E-6
BY SYSTEM SIZE, INTO WHAT OTHER POSITIONS ARE TRAFFIC CLERKING FUNCTIONS COMBINED?

                    Under 50  51 to 200  201 to 500  501 to 1,000  Over 1,000
                    buses     buses      buses       buses         buses        Total
Service Analysts       15        15          6            3             1         40
Schedulemakers          5         6          4            0             0         15
Traffic Checkers        4         7          1            0             3         15
Totally Automated       4         2          0            2             0          8
Total                  28        30         11            5             4         78

Number of Properties Not Reporting: 33

APPENDIX F

CROSS-TABULATION OF VARIOUS OPERATING DATA COLLECTION PRACTICES BY SYSTEM SIZE

TABLE F-1
WEEKDAY POINT CHECK FREQUENCY BY SYSTEM SIZE

Response categories: 1 set per pick per year; 2 sets per pick per year; 1 set per alternate pick; once per year; every other year or less frequently.

Under 50 buses: 9, 5, 1, 11, 7 (column total 33)
51 to 200 buses: 1, 10, 7, 3, 9 (column total 30)
201 to 500 buses: 2, 1, 2, 1, 9 (column total 15)
501 to 1,000 buses: 4, 2, 2, 1 (column total 9)
Over 1,000 buses: 2, 5, 3 (column total 10)
(Entries are listed in the order printed; row alignment was lost in extraction.)

Some properties will check a schedule only once per pick. Some will check twice per pick and average the two counts if they are reasonably close to each other; in this case, the two individual checks constitute one set. Other properties use three or four individual checks to constitute a set.

Number of Properties Not Reporting: 14
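The averaging described in the note above can be sketched as a small routine. The function name and the 10 percent tolerance are assumptions for illustration, not values taken from the synthesis:

```python
# Sketch (assumed names and tolerance): combine the individual passenger
# counts in a "set" by averaging them when they are reasonably close,
# otherwise return None so the property can schedule a re-check.

def combine_checks(counts, tolerance=0.10):
    """Average a set of individual counts taken for the same schedule
    if their spread is within `tolerance` of the mean."""
    if not counts:
        raise ValueError("need at least one check")
    mean = sum(counts) / len(counts)
    if mean and (max(counts) - min(counts)) / mean > tolerance:
        return None  # checks disagree; the set is not usable as-is
    return mean

print(combine_checks([480, 500]))  # close counts -> averaged into one set: 490.0
print(combine_checks([400, 520]))  # too far apart -> None
```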

TABLE F-2
SATURDAY POINT CHECK FREQUENCY BY SYSTEM SIZE

Response categories: 1 per year; 2 per year; every other year; every third year; hardly ever.

Under 50 buses: 11, 5, 2, 2, 10 (column total 30)
51 to 200 buses: 10, 4, 1, 16 (column total 31)
201 to 500 buses: 12, 1, 1, 1 (column total 15)
501 to 1,000 buses: 4, 1, 2, 2 (column total 9)
Over 1,000 buses: 2, 5, 2, 1 (column total 10)
Category totals: 28, 12, 6, 4, 45 (grand total 95)
(Entries are listed in the order printed; row alignment was lost in extraction.)

Number of Properties Not Reporting: 16

TABLE F-3
SUNDAY POINT CHECK FREQUENCY BY SYSTEM SIZE

Response categories: 1 per year; 2 per year; every other year; every third year; hardly ever.

Under 50 buses: 12, 1, 2, 7, 1 (column total 23)
51 to 200 buses: 16, 7, 3, 1 (column total 27)
201 to 500 buses: 12, 2, 1, 1 (printed column total 15)
501 to 1,000 buses: 4, 1, 2, 2 (column total 9)
Over 1,000 buses: 1, 2, 6 (printed column total 10)
Category totals: 21, 6, 5, 4, 48 (grand total 84)
(Entries are listed in the order printed; row alignment was lost in extraction, and one entry appears misplaced between the 201-to-500 and over-1,000 columns.)

Number of Properties Not Reporting: 27

Category totals for Table F-1: 24, 16, 4, 24, 29 (grand total 97)

TABLE F-4
WEEKDAY RIDE CHECK FREQUENCY BY SYSTEM SIZE

Response categories: 1 set per pick per year; 2 sets per pick per year; 1 set per alternate pick; once per year; every other year or less frequently.

Under 50 buses: 9, 6, 8, 8 (column total 31)
51 to 200 buses: 10, 3, 2, 5, 13 (column total 33)
201 to 500 buses: 1, 1, 5, 8 (column total 15)
501 to 1,000 buses: 1, 1, 4, 3 (column total 9)
Over 1,000 buses: 1, 4, 5 (column total 10)
(Entries are listed in the order printed; row alignment was lost in extraction.)

Number of Properties Not Reporting: 13

TABLE F-5
SATURDAY RIDE CHECK FREQUENCY BY SYSTEM SIZE

Response categories: 1 per year; 2 per year; every other year; every third year; hardly ever.

Under 50 buses: 11, 4, 2, 4, 8 (column total 29)
51 to 200 buses: 13, 5, 1, 7, 7 (column total 33)
201 to 500 buses: 2, 5, 3, 1, 4 (column total 15)
501 to 1,000 buses: 1, 1, 2, 1, 4 (column total 9)
Over 1,000 buses: 1, 3, 6 (column total 10)
(Entries are listed in the order printed; row alignment was lost in extraction.)

Number of Properties Not Reporting: 15

TABLE F-6
SUNDAY RIDE CHECK FREQUENCY BY SYSTEM SIZE

Response categories: 1 per year; 2 per year; every other year; every third year; hardly ever.

Under 50 buses: 9, 2, 2, 5, 3 (column total 21)
51 to 200 buses: 10, 4, 8, 1, 5 (column total 28)
201 to 500 buses: 5, 4, 2, 3, 1 (column total 15)
501 to 1,000 buses: 1, 4, 1, 2, 1 (column total 9)
Over 1,000 buses: 1, 3, 6 (column total 10)
(Entries are listed in the order printed; row alignment was lost in extraction.)

Number of Properties Not Reporting: 28

Category totals: Table F-4: 21, 11, 3, 26, 37 (grand total 98); Table F-5: 29, 12, 11, 17, 27 (grand total 96); Table F-6: 20, 10, 11, 13, 29 (grand total 83)


TABLE F-7
WEEKDAY RUNNING TIME CHECK FREQUENCY BY SYSTEM SIZE

Response categories: 1 set per pick per year; 2 sets per pick per year; 1 set per alternate pick; once per year; every other year or less frequently.

Under 50 buses: 6, 9, 8 (column total 29)
51 to 200 buses: 9, 2, 4, 6 (column total 31)
201 to 500 buses: 1, 1, 5 (column total 15)
501 to 1,000 buses: 2, 5 (column total 9)
Over 1,000 buses: 2, 3 (column total 10)
Every other year or less frequently: 6, 10, 8, 2, 5 (row total 31)
Category totals for the first four categories: 18, 13, 5, 27
Total: 29, 31, 15, 9, 10 (grand total 94)
(Entries for the first four categories are listed in the order printed; row alignment was lost in extraction. Column totals include the every-other-year row, which is shown separately.)

Number of Properties Not Reporting: 17

TABLE F-8
TECHNIQUES USED TO COLLECT ROUTE LEVEL INFORMATION/DATA FROM RIDERS

                                   Under 50   51 to 200  201 to 500  501 to 1,000  Over 1,000
                                   buses (42) buses (34) buses (16)  buses (9)     buses (10)   Total
Focus groups                           7          9          6            5             4         31
General market research               12         18          8            3             5         46
On-board surveys                      33         29         13            7             8         90
Media-based surveys                    2          3          3            0             2         10
Meetings with organized riders'
groups, community councils, etc.      13         13          7            5             6         44

Numbers in parentheses are the number of systems in the synthesis in that size category.

TABLE F-9
TECHNIQUES USED TO COLLECT ROUTE LEVEL INFORMATION/DATA FROM BUS OPERATORS

                                   Under 50   51 to 200  201 to 500  501 to 1,000  Over 1,000
                                   buses (42) buses (34) buses (16)  buses (9)     buses (10)   Total
Operator quality circles               8          7          3            2             3         23
TQM (Total Quality Management)         9          5          1            4             1         20
Special union-management meetings     10         12          5            5             7         39
Route or garage based teams or
task forces                            4          8          6            5             6         29
Employee suggestion plans or
comment process                       29         30         13            9             7         88

Numbers in parentheses are the number of systems in the synthesis in that size category.

APPENDIX G

DOCUMENTS PROVIDED BY AGENCIES PARTICIPATING IN THIS SYNTHESIS

Name of Agency

MAX, Birmingham, AL

Los Angeles County Metropolitan Transportation Authority,

Los Angeles, CA

Sacramento Regional Transit District, Sacramento, CA

OMNITRANS, San Bernardino, CA

San Diego Metropolitan Transit Development Board,

San Diego, CA

Santa Clara County Transportation Agency, San José, CA

San Luis Obispo City Transit, San Luis Obispo, CA

Santa Cruz MTD, Santa Cruz, CA

South Coast Area Transit, Oxnard, CA

CTTransit, New Haven, CT

Washington Metropolitan Area Transit Authority,

Washington, DC

COTRAN, W. Palm Beach, FL

CTA, Chicago, IL

Champaign-Urbana Mass Transit District, Urbana, IL

Transit Authority of Northern Kentucky, Ft. Wright, KY

Ann Arbor Transportation Authority, Ann Arbor, MI

Charlotte Transit System (Charlotte DOT), Charlotte, NC

Westchester County The Bee-Line System, White Plains, NY

Greater Cleveland Regional Transit Authority (RTA),

Cleveland, OH

Tri-Met, Portland, OR

Lehigh and Northampton Transportation Authority,

Allentown, PA

Southeastern Pennsylvania Transportation Authority,

Philadelphia, PA

Port Authority Transit, Pittsburgh, PA

Name of Document

Service Standards

Consolidated Transit Service Policies

Transit Master Plan: Goals and Objectives

Standards of Services of OMNITRANS

Standards for Provisions of Service

Short Range Transit Plan

Service Guidelines and Policies

Operations Report

Service Analysis Report

Short Range Transit Plan Evaluation Procedures

O/D & Systemwide Review

Service Standards

Rules of Practice and Procedure (Load Standards)

Proposed WMATA Service Standards Bus and Rail

Metrobus Service Productivity Report

Transit System Surveillance Report

Service Standards

Bus Route Performance Period 7, 1993

Mission Statement, Goals, and Objectives

Standards for Existing Service

Performance Monitoring at the Ann Arbor Transportation

Authority

Service Standards

Performance Indicator Report

Management Report

Service Policy

Tri-Met Service Standards

Tri-Met Scheduling Practices

Service Standards and Performance Evaluation

Measures (Draft)

SEPTA Service Standards (Draft)

Service Standards and Performance Evaluation Measures


Name of Agency Name of Document

Corpus Christi Regional Transportation Authority,

Corpus Christi, TX

Service Standards

Summary of Current Service Standards

Metropolitan Transit Authority of Harris County, Houston, TX Quarterly Ridership and Route Performance Report

Flowchart

Disaggregate Cost Effectiveness Indicators

Explanation of Service Level Default Values

Service Standards and Planning Guidelines

The Transit Services Program

VIA, San Antonio, TX

Logan Transit District, Logan, UT

Tidewater Transportation District Commission, Norfolk, VA

Municipality of Metropolitan Seattle, Seattle, WA

Service Design Standards

Service Goals and Standards

Transit Service Performance Evaluation

Metro Transportation Service Guidelines

Americans with Disabilities Act Supplemental

Service Plan

Metro Transportation Facility Design Guidelines

Spokane Transit, Spokane, WA

Pierce Transit, Tacoma, WA

Service Standards

Service Development Guidelines and Standards

Calgary Transit Transportation Department, Calgary, Alberta Calgary Transit Service Planning Guide

Vancouver Regional Transit System, Vancouver, BC

Toronto Transit Commission, Toronto, Ontario

Service Planning Process and Service Design Guidelines

1992/93 Annual Route Ridership and Performance Review

Technical Guidelines for TTC Stops Administration

Technical Background Papers

Service Standards Process

1994 Service Plan

APPENDIX H

GLOSSARY OF TERMS

TERM

A.M. PEAK

AMERICANS WITH

DISABILITIES ACT

OF 1990 (ADA)

ARTERIAL STREET

AUTOMATIC PASSENGER

COUNTERS (APC)

(predates “smart technology”)

AUTOMATIC

VEHICLE LOCATION

(AVL)

BRANCH

BUS HOURS

DEFINITION

The portion of the morning service period when the greatest level of ridership is experienced and service is provided. The A.M. peak period has typically coincided with the morning rush hour and, depending on the system, generally falls between the hours of 5:00 AM and 9:00 AM. In large systems with long routes, the peak may occur at different times on the various routes. The advent of flex time and alternative work shifts has affected the time and duration of the A.M. peak at some systems, generally flattening but lengthening the peak.

The law passed by Congress in 1990 that makes it illegal to discriminate against people with disabilities in employment, services provided by state and local governments, public and private transportation, public accommodations, and telecommunications.

A major thoroughfare, used primarily for through traffic rather than for access to adjacent land, that is characterized by high vehicular capacity and continuity of movement.

A technology installed on transit vehicles that counts the number of boarding and alighting passengers at each stop while also noting the time. Passengers are counted using either pulse beams or step treadles located at each door. Stop location is generally identified through use of either global positioning systems (GPS) or signpost transmitters in combination with vehicle odometers.

A smart technology that monitors the real-time location of transit vehicles (generally non-rail modes) through the use of one or more of the following: global positioning systems (GPS), Loran-C, or signpost transmitters in combination with vehicle odometers. Most installations integrate the AVL system with a geographic information system (GIS) or computer mapping system. The monitoring station is normally located in the dispatch/radio communications center.

One of multiple route segments served by a single route

The total hours of travel by bus, including both revenue service and deadhead travel

BUS MILES

CENTRAL BUSINESS

DISTRICT (CBD)

CROSSTOWN ROUTE

DEADHEAD

The total miles of travel by bus, including both revenue and deadhead travel

The traditional downtown retail, trade, and commercial area of a city; an area of very high land valuation, traffic flow, and concentration of retail businesses, offices, theaters, hotels, and services.

Non-radial bus service which normally does not enter the Central Business District (CBD).

There are two types of deadhead or non-revenue bus travel time:

1) Bus travel to or from the garage and a terminus point where revenue service begins or ends;

2) A bus's travel between the end of service on one route and the beginning of service on another.

SYNONYMS

A M Rush

Early Peak

Morning Peak

Morning Rush

Morning Commission

Hour

Smart Counters

Vehicle Hours

Vehicle Miles

Non-Revenue Time

51

52

TERM

EXPRESS SERVICE

FEEDER SERVICE

GARAGE

HEADWAY

INTERLINING

LAYOVER

LIMITED SERVICE

LINKED PASSENGER

TRIPS

DEFINITION

Express service is deployed in one of two general configurations:

1) A service generally connecting residential areas and activity centers via a high speed, non-stop connection, e.g., a freeway, or exclusive right-of-way such as a dedicated busway, with limited stops at each end for collection and distribution. Residential collection can be exclusively or partially undertaken using park-and-ride facilities.

2) Service operated non-stop over a portion of an arterial in conjunction with other local services. The need for such service arises where passenger demand between points on a corridor is high enough to separate demand and support dedicated express trips.

SYNONYMS

Rapids (1 or 2)

Commuter Express (1)

Flyers (1)

Service that picks up and delivers passengers to a regional mode at a rail station, express bus stop, transit center, terminal, Park-and-Ride, or other transfer facility.

The place where revenue vehicles are stored and maintained, and from which they are dispatched and recovered for the delivery of scheduled service.

Barn

Base

Depot

District

Division

O/M Facility (ops/Maint)

Yard

The scheduled time interval between any two revenue vehicles operating in the same direction on a route. Headways may be load driven, that is, developed on the basis of demand and loading standards, or policy based, i.e., dictated by policy decisions such as service every 30 minutes during the peak periods and every 60 minutes during the base period.
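The load-driven case reduces to simple arithmetic: buses per hour follow from peak demand and the loading standard, and the headway is the inverse. This sketch uses a standard formulation assumed for illustration (function names and the example values are not from the synthesis):

```python
# Sketch (assumed formulation): a load-driven headway derived from peak
# demand and a loading standard, compared against a policy headway.

def load_driven_headway(peak_pax_per_hour, seats_per_bus, max_load_factor):
    """Minutes between buses so the peak-point load stays within the
    loading standard (max_load_factor x seats_per_bus per trip)."""
    buses_per_hour = peak_pax_per_hour / (seats_per_bus * max_load_factor)
    return 60.0 / buses_per_hour

def scheduled_headway(peak_pax_per_hour, seats_per_bus, max_load_factor,
                      policy_headway_min):
    # Operate the more frequent (smaller) of the load-driven and policy headways.
    demand = load_driven_headway(peak_pax_per_hour, seats_per_bus, max_load_factor)
    return min(demand, policy_headway_min)

# 300 peak passengers/hour on 40-seat buses with a 1.25 loading standard
# requires 6 buses/hour, i.e. a 10-minute load-driven headway; a thin route
# would instead fall back to the 30-minute policy headway.
print(scheduled_headway(300, 40, 1.25, 30))  # 10.0
print(scheduled_headway(60, 40, 1.25, 30))   # 30
```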

Frequency

Schedule

Vehicle Spacing

Interlining is used in two ways:

Interlining allows the use of the same revenue vehicle and/or operator on more than one route without going back to the garage.

Interlining is often considered a means to minimize vehicle requirements as well as a method to enhance transfers for passengers. For interlining to be feasible, two (or more) routes must share a common terminus or be reasonably proximate to each other (see DEADHEAD).

Through Routes

Interlock Routes

Interlocking

Layover time serves two major functions: recovery time for the schedule, to ensure on-time departure for the next trip, and, in some systems, operator rest or break time between trips. Layover time is often determined by labor agreement, requiring "off-duty" time after a certain amount of driving time. (Synonym: Recovery)

Higher speed arterial service that serves only selected stops. As opposed to express service, there is not usually a significant stretch of non-stop operation.

A linked passenger trip is a trip from origin to destination on the transit system. Even if a passenger must make several transfers during a one-way journey, the trip is counted as one linked trip on the system. Unlinked passenger trips count each boarding as a separate trip, regardless of transfers.
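The distinction can be shown with a small sketch; the data shape (a one-way journey recorded as the list of routes boarded) is assumed for illustration:

```python
# Sketch (assumed data shape): each journey is a list of the routes boarded.
# Each boarding is one unlinked trip; each journey is one linked trip,
# however many transfers it involves.

journeys = [
    ["Route 7"],                          # no transfer
    ["Route 7", "Route 12"],              # one transfer
    ["Route 3", "Route 12", "Route 40"],  # two transfers
]

linked_trips = len(journeys)                    # one per journey
unlinked_trips = sum(len(j) for j in journeys)  # one per boarding
transfers = unlinked_trips - linked_trips

print(linked_trips, unlinked_trips, transfers)  # 3 6 3
```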

TERM

MAXIMUM LOAD

POINT

MISSED TRIP

OWL

PASSENGER CHECK

PASSENGER MILES

PEAK HOUR/PEAK

PERIOD

PICK

PULL-IN TIME

PULL-OUT TIME

RECOVERY TIME

REVENUE HOUR

REVENUE SERVICE

DEFINITION

The location(s) along a route where the vehicle passenger load is the greatest. The maximum load point(s) generally differ by direction and may also be unique to each of the daily operating periods. Long or complex routes may have multiple maximum load points.

A scheduled trip that did not operate for any of a variety of reasons, including operator absence, vehicle failure, dispatch error, traffic, accident, or other unforeseen causes.

Service that operates during the late night/early morning hours or all night service

A check (count) made of passengers arriving at, boarding and alighting, leaving from, or passing through one or more points on a route. Checks are conducted by riding (ride check) or at specific locations (point check). Passenger checks are conducted to obtain information on passenger riding that will assist in determining both appropriate directional headways on a route and the effectiveness of the route alignment. They are also undertaken to meet FTA Section 15 reporting requirements and to calibrate revenue-based ridership models.

A measure of service utilization that represents the cumulative sum of the distances ridden by each passenger. It is normally calculated by summing the passenger load times the distance between individual bus stops. For example, ten passengers riding in a transit vehicle for two miles equals 20 passenger miles.
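The calculation in the definition can be sketched directly (function and variable names assumed for illustration):

```python
# Sketch (assumed names): passenger miles as the sum of load x distance
# over each inter-stop segment, per the definition above.

def passenger_miles(segment_loads, segment_miles):
    """Sum of (passengers on board x segment length) over all segments."""
    return sum(load * miles for load, miles in zip(segment_loads, segment_miles))

# Ten passengers carried over two one-mile segments: 20 passenger miles,
# matching the example in the definition.
print(passenger_miles([10, 10], [1.0, 1.0]))  # 20.0
```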

The period with the highest ridership during the entire service day, generally referring to either the peak hour or peak several hours (peak period)

The selection process by which operators select new work assignments, i.e., a run or the Extra Board, in the next (forthcoming) schedule.

The non-revenue time assigned for the movement of a revenue vehicle from its last scheduled terminus or stop to the garage

The non-revenue time assigned for the movement of a revenue vehicle from the garage to its first scheduled terminus or stop

Recovery time is distinct from layover, although the two are usually combined. Recovery time is a planned time allowance between the arrival time of a just-completed trip and the departure time of the next trip, to allow the route to return to schedule if traffic, loading, or other conditions have made the trip arrive late. Recovery time is considered reserve running time; typically, the operator remains on duty during the recovery period.
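Running time, recovery/layover, and headway together determine how many vehicles a route needs. The following is standard scheduling arithmetic implied by these definitions, not a formula stated in the synthesis (names assumed):

```python
import math

# Sketch (standard scheduling arithmetic, assumed for illustration):
# round-trip cycle time = running time + recovery/layover, and the
# vehicles required on a route is the cycle divided by the headway,
# rounded up to a whole bus.

def vehicles_required(round_trip_running_min, recovery_min, headway_min):
    cycle = round_trip_running_min + recovery_min
    return math.ceil(cycle / headway_min)

# 84 minutes of round-trip running time plus 6 minutes of recovery on a
# 15-minute headway requires 6 buses; widening to 20 minutes needs 5.
print(vehicles_required(84, 6, 15))  # 6
print(vehicles_required(84, 6, 20))  # 5
```

Note that adding recovery time can tip the cycle over a multiple of the headway and cost an extra vehicle, which is one reason recovery allowances are planned carefully.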

The measure of scheduled hours of service available to passengers for transport on the routes. It excludes deadhead hours but includes recovery/layover time. Calculated for each route.

When a revenue vehicle is in operation over a route and is available to the public for transport

SYNONYMS

Hawk

Tally

Commission Hour

Bid

Mark-up

Line-up

Shake-up

Sign-up

Turn-In Time

Deadhead Time

Run-off Time

Deadhead Time

Run-on Time

Layover Time


TERM

ROUTE

RUNNING TIME

SCHEDULE

SERVICE AREA

SERVICE SPAN

TIMED TRANSFER

TOTAL MILES

TRAVEL TIME

TRIP

UNLINKED

PASSENGER TRIPS

DEFINITION

An established series of streets and turns connecting two terminus locations

The time assigned for the movement of a revenue vehicle over a route, usually set on a [route] segment basis and varying by time of day.

From the transit agency (not the public timetable): a document that, at a minimum, shows the time of each revenue trip through the designated time points. Many properties include additional information such as route descriptions, deadhead times and amounts, interline information, run numbers, block numbers, etc.

The square miles of the agency's operating area. Service area is now defined consistent with ADA requirements.

The span of hours over which service is operated, e.g., 6 a.m. to 10 p.m., or 24 hours (owl). Service span often varies among weekdays, Saturdays, and Sundays.

A point or location where two or more routes come together at the same time to provide positive transfer connections. A short layover may be provided at the timed transfer point to enhance the connection. Timed transfers have seen increasing application as service frequencies have been reduced below one bus every 15 to 20 minutes and hub-and-spoke network deployment has grown.

The total miles includes revenue, deadhead, and yard (maintenance and servicing) miles.

The time allowed for an operator to travel between the garage and a remote relief point

The one-way operation of a revenue vehicle between two terminus points on a route. Trips are generally noted as inbound, outbound, eastbound, westbound, etc., to identify directionality when discussed or printed.

The total number of passengers who board public transit vehicles. A passenger is counted each time he or she boards a revenue vehicle, even though the boarding may be the result of a transfer from another route to complete the same one-way journey. Where linked or unlinked is not designated, unlinked is assumed.

SYNONYMS

Line

Travel Time

Headway

Master Schedule

Timetable

Operating Schedule

Recap/Supervisor’s Guide

Span of Service

Service Day

Pulse Transfer

Positive Transfer

Relief Time

Travel Allowance

Journey

One-Way Trip

Passengers

Passenger Trips

THE TRANSPORTATION RESEARCH BOARD is a unit of the National Research Council, which serves the National Academy of Sciences and the National Academy of Engineering. It evolved in 1974 from the Highway Research Board, which was established in 1920. The TRB incorporates all former HRB activities and also performs additional functions under a broader scope involving all modes of transportation and the interactions of transportation with society. The Board's purpose is to stimulate research concerning the nature and performance of transportation systems, to disseminate information that the research produces, and to encourage the application of appropriate research findings. The Board's program is carried out by more than 270 committees, task forces, and panels composed of more than 3,300 administrators, engineers, social scientists, attorneys, educators, and others concerned with transportation; they serve without compensation. The program is supported by state transportation and highway departments, the modal administrations of the U.S. Department of Transportation, the Association of American Railroads, the National Highway Traffic Safety Administration, and other organizations and individuals interested in the development of transportation.

The National Academy of Sciences is a private, nonprofit, self-perpetuating society of distinguished scholars engaged in scientific and engineering research, dedicated to the furtherance of science and technology and to their use for the general welfare. Upon the authority of the charter granted to it by the Congress in 1863, the Academy has a mandate that requires it to advise the federal government on scientific and technical matters. Dr. Bruce Alberts is president of the National Academy of Sciences.

The National Academy of Engineering was established in 1964, under the charter of the National Academy of Sciences, as a parallel organization of outstanding engineers. It is autonomous in its administration and in the selection of its members, sharing with the National Academy of Sciences the responsibility for advising the federal government. The National Academy of Engineering also sponsors engineering programs aimed at meeting national needs, encourages education and research, and recognizes the superior achievements of engineers. Dr. Robert M. White is president of the National Academy of Engineering.

The Institute of Medicine was established in 1970 by the National Academy of Sciences to secure the services of eminent members of appropriate professions in the examination of policy matters pertaining to the health of the public. The Institute acts under the responsibility given to the National Academy of Sciences by its congressional charter to be an adviser to the federal government and, upon its own initiative, to identify issues of medical care, research, and education. Dr. Kenneth I. Shine is president of the Institute of Medicine.

The National Research Council was organized by the National Academy of Sciences in 1916 to associate the broad community of science and technology with the Academy's purposes of furthering knowledge and advising the federal government. Functioning in accordance with general policies determined by the Academy, the Council has become the principal operating agency of both the National Academy of Sciences and the National Academy of Engineering in providing services to the government, the public, and the scientific and engineering communities. The Council is administered jointly by both Academies and the Institute of Medicine. Dr. Bruce Alberts and Dr. Robert M. White are chairman and vice chairman, respectively, of the National Research Council.
