I. Findings

What do subway riders want?

They want short waits, regular and reliable service, a chance for a seat, a clean car and announcements that tell them what they need to know. That’s what MTA New York City Transit’s own polling of its riders shows.1

This "State of the Subways" Report Card tells riders how their lines do on key aspects of service. We look at six measures of subway performance for the city’s 22 major subway lines, using recent data compiled by MTA New York City Transit.2 Much of the information has not been released publicly before on a line-by-line basis.

Most of the measures are for all or the last half of 2004. Unfortunately, comparisons to prior years could not be made for most measures, due to changes in methodology by New York City Transit. Several lines had major route changes in February 2004 (B, D, N, Q, and W), but ratings were made for these lines.3

Our Report Card has three parts:

First is a comparison of service on 22 lines, as detailed in the attached charts.

Second, we give an overall “MetroCard Rating”4 to each of 21 lines.5

Third, the report contains one-page profiles on each of the 22 lines. These are intended to provide riders, officials, and communities with an easy-to-use summary of how their lines perform compared to others.

These profiles can also be found on our website.

This is the eighth Subway Report Card issued by the Straphangers Campaign since 1996.6

Our findings show:

1. The best subway line in the city is the 6, with a “MetroCard Rating” of $1.35.
The 6 ranked high because of its frequently scheduled service and above-average performance on three other measures: arriving with regularity, car breakdowns, and announcements, where it had a perfect record. The line did not get a higher rating because it performed below average on two measures: the chance of getting a seat during rush hour and cleanliness. This is the second time in a row that the 6 line has ranked first in the Straphangers Campaign Report Card. The top performance is due in part to new technology subway cars, which began replacing the line’s aging fleet in recent years. For example, automated announcements account for the line’s perfect 100% announcement record, and new cars break down less often. The 6 runs between Pelham Bay Park in the Bronx and the Brooklyn Bridge subway station in lower Manhattan.

2. The worst subway line is the N, with a MetroCard Rating of 60 cents.
The N line has a low level of scheduled service, and it performs below average on four other measures: arriving with regularity, seat availability, cleanliness and announcements. The N line did not receive a lower rating because its cars break down less often than the system average. The N was also the worst-performing line in our 2004 Straphangers Campaign Report Card. We are disappointed that New York City Transit did not take action after our 2004 report. Many riders had waited nearly two decades for express service to midtown to be restored to the line, which happened in February 2004. However, our report shows that the quality of the new service remains poor: trains arrive less regularly, cars are dirtier and more crowded, and announcements are worse. That is a direct reflection of New York City Transit’s decision not to adequately address cleanliness and announcements on the N, as well as to run crowded service. The N line operates between Astoria, Queens and Coney Island, Brooklyn.

3. Unfortunately, changes in methodology and new data collection by MTA New York City Transit make it impossible to directly compare our findings for this year with previous reports. (For the differences in how these measures were calculated in 2003 and 2004, see section on methodology.)

4. There are great disparities in how subway lines perform:7

  • Breakdowns: Cars on the 5 line had the best record on delays caused by car mechanical failures: once every 400,791 miles. The G line cars had the worst, experiencing breakdown delays more than seven times as often: once every 53,795 miles.
  • Cleanliness: The 1 & 9 and the W were the cleanest lines, with only 6% of their cars having moderate or heavy dirt, while 33% of cars on the dirtiest lines — the L and N — had moderate or heavy dirt, a much worse performance.
  • Chance of getting a seat: We rate a rider’s chance of getting a seat at the most congested point on the line. We found the best chance is on the V line, where riders had an 84% chance of getting a seat during rush hour. The L ranked worst and was much more overcrowded, with riders having only a 26% chance of getting a seat.
  • Amount of scheduled service: The 7 line had the most scheduled service, with two to three minute intervals between trains during rush hours. The M ranked worst, with ten-minute intervals between trains during this period.
  • Regularity of service: The 6 line had the greatest regularity of service, with trains arriving within two to four minutes of their scheduled interval 95% of the time. The most irregular line is the 5, which arrived with regularity only 80% of the time.
  • In-car announcements: The 2, 5 and 6 lines had the highest rate of adequate announcements in their subway cars, 100% of the time. The N was the worst, at 82%.

5. Some variations among lines are to be expected. But the big contrasts between lines we found show either the need for improved management (in the cases of cleanliness and in-car announcements) or the result of unfair distribution of resources (car breakdowns and chance of getting a seat). Some results are the subject of debate, such as what the maximum waiting times on a line during rush hour should be. And some are based on the nature of a line, such as the higher regularity of the 6, G, and J/Z, which do not merge with other lines.

6. MTA New York City Transit’s own basic data indicate an ongoing trend of fewer breakdowns as new technology cars come on line: the fleet-wide 12-month moving average breakdown rate improved from once every 139,960 miles in December 2003 to once every 156,815 miles in December 2004. However, again, this report cannot make a direct comparison of the subway car breakdown rate with past years due to the change in methodology. The finding is not surprising given the hundreds of millions of dollars New York City Transit has invested in modernizing its transit fleet.

All the findings described above are detailed in the attached charts and profiles:

Chart One lists the MetroCard Ratings for 21 subway lines.
Chart Two details the differences in performance on all six measures among 22 lines.
Chart Three ranks lines from best to worst on each measure.

Detailed one-page profiles for each of the 22 subway lines are available on our website.

II. Summary of Methodology

The NYPIRG Straphangers Campaign reviewed extensive MTA New York City Transit data on the quality and quantity of service on 22 subway lines. We used the latest comparable data available, largely the second half of 2004. Several of the data items have not been publicly released before on a line-by-line basis.

We then calculated a “MetroCard Rating”—intended as a shorthand tool to allow comparisons among lines—for 21 subway lines, as follows:

First, we formulated a scale of the relative importance of measures of subway service. This was based on a survey we conducted of a panel of transit experts and riders, and an official survey of riders by MTA New York City Transit. The six measures were weighted as follows:

Amount of service
• scheduled amount of service (30%)

Dependability of service
• percent of trains arriving at regular intervals (22.5%)
• breakdown rate (12.5%)
• chance of getting a seat (15%)
• interior cleanliness (10%)
• adequacy of in-car announcements (10%)

Second, for each measure, we compared each line’s performance to the best- and worst-performing lines in this rating period.

A line equaling the system best in 2004 would receive a score of 100 for that indicator, while a line matching the system low in 2004 would receive a score of 0.

These scores were then multiplied by the percentage weight of each indicator, and added up to reach an overall raw score. Below is an illustration of calculations for a line, in this case the 4.

Figure 1

Indicator          | 4 line value (system best and worst shown)    | 4 line score (of 100) | Weight | Adjusted raw score
Scheduled service  | AM rush—4 min; midday—5 min; PM rush—4:15 min | 79                    | 30%    | 24
Service regularity | 87% (best—95%; worst—80%)                     | 46                    | 22.5%  | 10
Breakdown rate     | 250,395 miles (best—400,791; worst—53,795)    | 54                    | 12.5%  | 7
Crowding           | 29% seated (best—84%; worst—26%)              | 5                     | 15%    | 1
Cleanliness        | 86% clean (best—94%; worst—67%)               | 70                    | 10%    | 7
Announcements      | 97% adequate (best—100%; worst—82%)           | 83                    | 10%    | 8

Adjusted score total: 4 line—57 pts.
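The weighting arithmetic in Figure 1 can be sketched in a few lines of code. This is only an illustration: the per-measure scores and weights below are copied from Figure 1, and each weighted term is rounded to the nearest whole point to match the figure's rounding.

```python
# Weights for the six measures, as given in Figure 1.
weights = {
    "scheduled service": 0.30,
    "service regularity": 0.225,
    "breakdown rate": 0.125,
    "crowding": 0.15,
    "cleanliness": 0.10,
    "announcements": 0.10,
}

# The 4 line's 0-100 scores, where 100 matches the system best
# and 0 the system worst for 2004 (values from Figure 1).
scores_4_line = {
    "scheduled service": 79,
    "service regularity": 46,
    "breakdown rate": 54,
    "crowding": 5,
    "cleanliness": 70,
    "announcements": 83,
}

# Multiply each score by its weight, round to whole points as the
# figure does, then sum to get the overall adjusted raw score.
adjusted = {m: round(scores_4_line[m] * weights[m]) for m in weights}
raw_score = sum(adjusted.values())
print(raw_score)  # 57, matching the Figure 1 total for the 4 line
```

Note that the weights sum to 100%, so a line matching the system best on every measure would score 100.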

Third, the summed totals were then placed on a scale which emphasizes the relative differences between scores nearest the top and bottom of the scale. (See Appendix I.)

Finally, we converted each line’s summed raw score to a MetroCard Rating. We created a formula with assistance from independent transit experts. A line scoring, on average, at the 50th percentile of the lines in 2004 for all six performance measures would receive a MetroCard Rating of $1.00. A line which matched the 95th percentile of this range would be rated $2.00.
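The two published anchor points (50th percentile of lines rates $1.00; 95th percentile rates $2.00) suggest the shape of the conversion. The campaign's actual formula appears in Appendix I; the sketch below is only an approximation that assumes a straight-line interpolation between those two anchors.

```python
def metrocard_rating(percentile):
    """Convert a line's percentile rank (0-100) among the rated lines
    into an approximate dollar rating, assuming a linear fit through
    the two published anchor points: 50th percentile -> $1.00 and
    95th percentile -> $2.00. (Illustrative only; the campaign's
    actual formula is described in Appendix I.)"""
    # Slope: $1.00 spread across 45 percentile points.
    return round(1.00 + (percentile - 50) * (1.00 / 45.0), 2)

print(metrocard_rating(50))  # 1.0
print(metrocard_rating(95))  # 2.0
```

Under this assumed linear mapping, a below-median line falls under $1.00, which is consistent with low-ranked lines receiving ratings like 60 cents.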

New York City Transit officials reviewed the profiles and ratings in 1997. They concluded: "Although it could obviously be debated as to which indicators are most important to the transit customer, we feel that the measures that you selected for the profiles are a good barometer in generally representing a route’s performance characteristics... Further, the format of your profiles... is clear and should cause no difficulty in the way the public interprets the information."

Their full comments can be found in Appendix I, which presents a more detailed description of our methodology. Transit officials were also sent an advance summary of the findings for this year's State of the Subways Report Card. For our first six surveys, we used 1996—our first year for calculating MetroCard Ratings—as a baseline. As we said in our 1997 report, our ratings “will allow us to use the same formula for ranking service on subway lines in the future. As such, it will be a fair and objective barometer for gauging whether service has improved, stayed the same, or deteriorated over time.”

However, in 2001, 2003 and 2004, transit officials made major changes in how the performance indicators are derived. The Straphangers Campaign unsuccessfully urged MTA New York City Transit to reconsider its new methodologies, because of our concerns about the fairness of these measures and the loss of comparability with past indicators. Transit officials also rejected our request to re-calculate measures back to 1996 in line with their adopted changes. As a result, we were forced to redefine our baseline with 2004 data, and much historical comparability has been lost.

III. Why A Report Card on the State of the Subways?

Why does the Straphangers Campaign publish a yearly report card on the subways?

First, riders are looking for information on the quality of their trips. That’s what public opinion polls conducted by transit officials show. “Customers have an interest in knowing how their line, as well as the overall system, is doing,” according to an MTA New York City Transit telephone survey of 950 riders in 1998.

Indeed, the poll found that 55% of customers would like service information to be posted at subway stations—even when asked to weigh posting in the context of competing spending priorities. Riders expressed strong interest in getting such information as “how well the line keeps to schedules, how much service is scheduled and how well announcements are made.” The MTA has, unfortunately, opposed posting such information in past years. In part, our reports have filled this gap, especially the line-by-line profiles we post on our website.

In a step forward in June 2003, the MTA began posting its quarterly performance information on its website, www.mta.info. However, none of this information is broken down by line. The information is now reported semi-annually.

Second, we aim to give community groups and public officials the information they need to win better service and hold transit managers accountable. At the Straphangers Campaign, we hear from many riders and neighborhood groups. Often they will say “Our line has got to be the worst.” Or “We must be on the most crowded line.” Or “Our line is much better than others.”

For riders and officials on lines receiving a poor level of service, our report will help them make the case for improvements, ranging from increases in service to major repairs. That’s not just a hope. In past years, we’ve seen riders on some of the worst lines demand and win improvements, such as on the B, N and 5 lines.

For those on better lines, the report will either highlight areas for improvement—or spark discussion on what constitutes decent service. For example, riders on the 7, once the best line in the system, have pointed to the line’s slipping performance and won increased service. New Yorkers who care about the city’s transit system can use this report to hold transit managers accountable. That is why each of the profiles of 22 lines contains the telephone number for the superintendent responsible for that line.

This report is part of a series of studies on subway and bus service. For example, we have issued annual surveys on payphone service in the subways, subway car cleanliness, and the pace of bus service.

Our reports can be found at www.straphangers.org/reports.html, as can our profiles.

We hope that these efforts—combined with the concern and activism of many thousands of city transit riders—will win better subways and buses for New York City.

www.straphangers.org | www.nypirg.org