CONTENTS

News Release

I. Findings

Table One: Straphangers Campaign Line Ratings
Table Two: How Does Your Subway Line Rate?
Table Three: Best to Worst Subway Lines
Table Four: Line Ratings: Current and Last Year

II. Summary of Methodology

III. Why a Report Card on the State of the Subways?

IV. Profiles of 20 Subway Lines

Credits



I. Findings
Riders want to know how their line performs. Do their trains break down more or less often than average for New York's subways? Is there a better or worse chance of getting a seat? How clean are the cars? Do trains come more or less often? Do trains arrive irregularly or with few gaps in service? How good or bad are the announcements?

That's why the Straphangers Campaign is issuing its third annual "state of the subways" report card. We look at six key measures of subway performance for the city's 20 major subway lines, using recent data compiled by MTA New York City Transit, mostly for the last half of 1998. Much of this information has not been released publicly before on a line-by-line basis.

Our report has three parts. First, we compare service on the 20 lines, as detailed in the attached charts. Second, we give an overall "line rating" to each of 19 lines. Third, we provide one-page profiles of each of the 20 lines. These are intended to give riders, officials, and communities an easy-to-use summary of how their lines perform compared to others.

This report is a follow-up to our last two state of the subways reports, which rated performance for the last half of 1996 and the last half of 1997.

Our key findings include:

1. For the third year in a row, the best subway line is far and away the 7--although its line rating dropped from $1.20 last year to $1.05. The line ranked high because there is much more scheduled service on the 7 than on most lines; riders have a greater chance of getting a seat at rush hour; and its cars break down much less often than average. The line rating dropped since last year because the line's performance worsened on four measures: fewer miles traveled between breakdowns, fewer clean cars, more crowding and worse announcements. The 7 runs between Flushing, Queens and Times Square.

2. The worst subway lines are the A, B and M, which replace the N, the worst line in our last report. The three lines tied for worst, each receiving the lowest line rating, 65 cents. The lines scored poorly because:


3. Line ratings grew worse on nine of 19 lines, improved on three and stayed the same on seven. The lines with worse line ratings are the 2, 5, 7, A, B, E, F, M and R. This is in stark contrast to last year's report card, in which 14 of 19 lines had improved. The three lines with better ratings in this report card are the C, N and Q. The unchanged lines are the 1/9, 3, 4, 6, D, J/Z, and L.

4. Despite a massive increase of 590,000 riders a day since 1997, there has been virtually no increase in scheduled service. As of March 1999, there were 590,000 more riders using the subways each weekday than in March 1997. Yet there has been virtually no change in the scheduled intervals between rush-hour trains over the last two years. (Transit officials do plan to add a modest amount of rush-hour service to five lines--the A, B, L, N and R--starting in October 1999. See Appendix II for a fact sheet on the lag between transit ridership gains and service increases.)

5. System-wide in the last year:

6. The most improved line is the N, which was the worst line in last year's survey. Its overall line rating went from 65 cents to 80 cents. The N showed improvement on four measures: greater regularity, a lower car breakdown rate, less crowding, and cleaner cars. The 7 line had the biggest drop in performance, going from a line rating of $1.20 last year to $1.05 this year.

7. There are great disparities in how subway lines perform. For example, the 4 had the best record on delays caused by car mechanical failures: once every 162,718 miles. The R line had the worst, experiencing breakdown delays nearly four times as often: once every 43,826 miles. The same wide disparities among lines could be seen for all our measures:

The disparities among lines are detailed in Table Two. Table One lists line ratings, Table Three ranks lines from best to worst on each measure, and Table Four compares current line ratings with those of last year.

II. Summary of Methodology

The Straphangers Campaign reviewed extensive MTA New York City Transit data on the quality and quantity of service on 20 subway lines. We used the latest data available, largely for the second half of 1998. Several of these measures have not been released before on a line-by-line basis. We then calculated a Line Rating for 19 subway lines, intended as a shorthand tool to allow comparisons among lines, as follows:

First, we formulated a scale of the relative importance of measures of subway service. This was based on a survey we conducted of a panel of transit experts and riders, and an official survey of riders by MTA New York City Transit. The six measures were weighted as follows:

AMOUNT OF SERVICE
    scheduled amount of service                         30%
DEPENDABILITY OF SERVICE
    percent of trains arriving at regular intervals     22.5%
    breakdown rate                                      12.5%
COMFORT/USABILITY
    chance of getting a seat                            15%
    interior cleanliness                                10%
    adequacy of in-car announcements                    10%

Second, for each measure we compared each line's 1998 performance to the best- and worst-performing lines of 1996. Performance in 1996--the first year for which we calculated line ratings--serves as a baseline for service. As we stated in our 1997 report, the line rating will allow us to use the same formula for ranking service on subway lines in the future. As such, it will be a fair and objective barometer for gauging whether service has improved, stayed the same, or deteriorated over time.

A line in 1998 equaling the 1996 system best would receive a score of 100 for that indicator, while a 1998 line matching the 1996 system low would receive a score of 0. Thus most lines in 1998 received a score between 0 and 100 for each measure. However, in some cases a line was awarded a score outside that range, if it performed better than the 1996 best line or worse than the 1996 worst line.

These scores were then multiplied by the percentage weight of each indicator, and added up to reach an overall raw score.
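
To make this arithmetic concrete, the sketch below works through the scoring step in Python. The six weights come from the table above, and the 0-to-100 scaling follows the 1996-baseline description; the function names, the example figures, and the assumption that every measure is expressed so that a larger number is better are illustrative assumptions of this sketch, not MTA New York City Transit or Campaign data.

    # Illustrative sketch of the scoring step described above.
    # The weights are those listed in this section; everything else
    # (names, example figures) is hypothetical, not report data.

    WEIGHTS = {
        "scheduled_service": 0.30,
        "regularity": 0.225,
        "breakdown_rate": 0.125,   # expressed as miles between breakdowns
        "seat_availability": 0.15,
        "cleanliness": 0.10,
        "announcements": 0.10,
    }

    def indicator_score(value_1998, best_1996, worst_1996):
        """Scale a line's 1998 value against the 1996 best and worst lines.

        Matching the 1996 best yields 100; matching the 1996 worst yields 0.
        Scores outside that range are allowed, as the report notes.
        Assumes each measure is expressed so that larger numbers are better.
        """
        return 100.0 * (value_1998 - worst_1996) / (best_1996 - worst_1996)

    def raw_score(line_1998, baseline_1996):
        """Weight each indicator score and sum to an overall raw score."""
        total = 0.0
        for measure, weight in WEIGHTS.items():
            best_1996, worst_1996 = baseline_1996[measure]
            total += weight * indicator_score(line_1998[measure], best_1996, worst_1996)
        return total

    # Hypothetical example: miles between breakdowns scored against invented
    # 1996 extremes (placeholder numbers, not figures from this report).
    print(indicator_score(value_1998=90000, best_1996=120000, worst_1996=40000))  # 62.5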

Third, the summed totals were then placed on a scale which emphasizes the relative differences between scores nearest the top and bottom of the scale. (A copy of the scale can be found in Appendix I.)

Finally, we converted each line's summed raw score to a Straphangers Campaign Line Rating, using a formula created with assistance from independent transit experts. A line scoring, on average, at the 50th percentile of the 19 lines for all six performance measures in 1996 (the baseline year) would receive a Line Rating of 75¢. A line matching the 95th percentile of this range would be rated $1.50.
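
The full conversion scale appears in Appendix I and, as noted above, it emphasizes differences near the top and bottom of the range. As a rough illustration only, the sketch below interpolates in a straight line between the two anchor points quoted here (50th percentile of the 1996 baseline = 75¢, 95th percentile = $1.50); the linear form and the placeholder percentile values are simplifying assumptions, not the Campaign's actual formula.

    # Rough illustration of converting a summed raw score to a Line Rating.
    # The two anchors (50th percentile -> 75 cents, 95th percentile -> $1.50)
    # are quoted in the report; the straight-line interpolation between them
    # is a simplification -- the actual scale in Appendix I is not linear.

    def line_rating(raw, pct50_raw, pct95_raw):
        """Map a summed raw score to a dollar rating using two anchor points.

        raw       -- a line's summed, weighted raw score
        pct50_raw -- raw score at the 50th percentile of the 1996 baseline
        pct95_raw -- raw score at the 95th percentile of the 1996 baseline
        """
        slope = (1.50 - 0.75) / (pct95_raw - pct50_raw)
        return 0.75 + slope * (raw - pct50_raw)

    # Hypothetical baseline percentiles (placeholders, not report data):
    # if the 50th-percentile raw score were 50 and the 95th were 80,
    # a raw score of 62 would map to a rating of about $1.05.
    print(round(line_rating(62, pct50_raw=50, pct95_raw=80), 2))  # 1.05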

Officials at MTA New York City Transit reviewed the line profiles and ratings in 1997. They were also sent a copy of the findings for this year's state of the subways report. Their comments can be found in Appendix I (which presents a more detailed description of our methodology) and in Appendix II.


III. Why A Report Card on the State of the Subways?

Why does the Straphangers Campaign publish a yearly report card on the subways?

First, riders want information on the quality of their trips. That's what public opinion polls conducted by transit officials show. "Customers have interest in knowing how their line, as well as the overall system, is doing," according to an MTA New York City Transit telephone survey of 950 riders in 1998.

Indeed, the poll found that 55% of customers would like service information to be posted at subway stations--even when asked to weigh posting in the context of competing spending priorities. Riders expressed strong interest in getting such information as "how well the line keeps to schedules, how much service is scheduled and how well announcements are made." State legislation is pending in Albany to require posters at subway stations with statistics on how that station's line(s) are performing on basic measures of service. The bill passed the State Assembly last year.

Unfortunately, the legislation--Assembly Bill 2236--has stalled because of the opposition of officials at MTA New York City Transit. They say that "performance numbers are already available to our riders upon request or at regularly scheduled public meetings" and they do not support "the routine public posting of route specific performance measures throughout the transit system."

Second, we want to give a picture of where the subways are headed. Our findings tell a mixed story.

The subways are clearly struggling to handle a flood of new riders attracted by fare discounts and a solid local economy. Line ratings went down on 9 of the 19 lines we rated. On most of these lines, subway cars broke down more often, grew dirtier and had poorer announcements. Systemwide, service grew slightly more irregular.

The increase in ridership is staggering: more than a million additional riders now crowd onto packed city subways and buses each weekday, compared with just two years ago. Daily city transit ridership went up from 3.7 million a day in March 1997 to 4.3 million in March 1999, fueled by popular fare discounts and a good local economy.

But a 14% increase in subway ridership in two years will be met by less than a 4% increase in subway service over three years. As now planned, transit officials will add 240 daily subway trips between 1997 and the end of 1999, increasing total trips from 6,400 subway trips in 1997 to 6,590 in 1999. (See fact sheet on ridership and service at Appendix II.)

Amazingly, rush-hour "headways"--the scheduled intervals between trains--have remained virtually unchanged throughout the boom in subway ridership. This coming October, transit officials do plan to add a modest amount of rush-hour service to five lines--the A, B, L, N and R. This will help, but it is too little, too late.

What would be a good, attractive level of service? The system's current standards for service manage to be both ungenerous and, in some cases, unachievable.

The standard is for the system to provide a minimum of three square feet per rider during rush hour, according to MTA New York City Transit's published "loading guidelines." Consider that three square feet is a tight square measuring about 1.7 feet on each side. That's why riders so often travel with someone's elbow in their ribs. And several lines don't meet the standard.

The guidelines also say that "seats will be provided for all customers" on most lines during weekday middays and evenings and on weekends. Much of the time this is simply untrue.

Should there be, for example, a systemwide guarantee of scheduled rush-hour waits of no more than four minutes between trains? Riders on 14 of the 20 major subway lines have scheduled rush-hour waits of five minutes or more.

Should the loading guidelines provide more ample elbow room? Can the system make good on its promise for seats for all customers outside the rush hour?

How much more service could be provided given existing constraints, including signal systems, availability of subway cars and levels of crowding? What would added service cost? What's the most productive allocation of current and likely resources?

The Straphangers Campaign asked the New York City Independent Budget Office to estimate the cost of a standard of no more than a four-minute rush-hour wait anywhere in the subway system. The IBO pegged the cost at $33 million annually.

Transit officials have disputed this estimate but have not, to date, provided their own. The Straphangers Campaign has called on them to produce a figure of their own and an assessment of what is operationally possible. (See attached correspondence in Appendix III.)

What's not in dispute is that there are financial resources to provide more service. Transit officials put more than $100 million of last year's $379 million budget surplus in an unspecified reserve fund.

This budget surplus was built on the misery of riders crammed into too few trains. A significant portion of those funds should be used to rescue riders from elbow-in-the-ribs crowding and irregular service.

Lastly, our report aims to help riders and communities win better service and hold transit managers accountable. At the Straphangers Campaign, we hear from many riders and neighborhood groups. Often they'll say "Our line has got to be the worst" or "We must be on the most crowded line" or "My line is much better than others."

For riders and officials on lines receiving poor service, our report will help make the case for improvements, ranging from increases in service to major repairs. For those on better lines, the report will highlight areas for improvement, or spark debate or agreement on what constitutes decent service.

It is our hope that the thousands of New Yorkers who care about the city's transit system will use this report to hold transit managers accountable. That is why each of the profiles of 20 lines contains the telephone number for the superintendent responsible for that line.

This report is part of a series of studies on subway and bus service. For example, in March and May of this year, we issued major reports on city buses. These found a poor quality of overall service and documented cuts on bus routes with growing ridership.

Our plans call for continuing to issue major state of the subways and buses reports in the coming years, along with field surveys of specific aspects of service, such as car cleanliness, announcements, and station conditions.

We hope that these efforts--combined with the concern and activism of thousands of city transit riders--will win better subways and buses for New York City.
