I. Findings

What do subway riders want?

They want short waits, trains that arrive regularly, a chance for a seat, a clean car and understandable announcements that tell them what they need to know. That’s what MTA New York City Transit’s own polling of rider satisfaction measures.1

This “State of the Subways” Report Card tells riders how their lines do on these key aspects of service. We look at six measures of subway performance for the city’s 20 major subway lines, using recent data compiled by MTA New York City Transit.2 Much of the information has not been released publicly before on a line-by-line basis. Most of the measures are for all or the last half of 2010.

Our Report Card has three parts:

First, we compare service on the 20 lines, as detailed in the attached tables.

Second, we give an overall “MetroCard Rating”3 to 18 of the 20 lines.4

Third, the report contains one-page profiles on each of the 20 lines. These are intended to provide riders, officials and communities with an easy-to-use summary of how their line performs compared to others.

This is the fourteenth Subway Report Card by the Straphangers Campaign since 1997.5   

Our findings show the following picture of how New York City’s subways are doing:

1. The best subway line in the city was the J/Z, with a “MetroCard Rating” of $1.45. The J/Z ranked number one in the system for the first time since the Straphangers Campaign Report Card began in 1997. The J/Z ranked highest because it performs best in the system on regularity of service. It also performs above average on three measures: delays caused by mechanical breakdowns, seat availability at the most crowded point during rush hour and subway car announcements. The line did not get a higher rating because it performed only average on subway car cleanliness and amount of scheduled service. The J/Z runs between Broad Street in Manhattan and Jamaica Center in Queens.

2. The 2 was ranked the worst subway line, with a MetroCard Rating of 90 cents, tying with the C line for last. This was the first time in fourteen annual Straphangers Campaign Report Cards that the 2 came in last. The 2 performs worst in the system on seat availability at the most crowded point during rush hour and next to worst on regularity of service. The line also performs below average on subway car cleanliness. The line did not get a lower rating because it performs above average on three measures: amount of scheduled service, delays caused by mechanical breakdowns and subway car announcements. The 2 runs between Flatbush Avenue-Brooklyn College in Brooklyn and Wakefield-241st Street in the Bronx.

3. For the third year in a row, the C was ranked the worst subway line, with a MetroCard Rating of 90 cents, tying with the 2. The C line performs worst in the system on three measures: amount of scheduled service, delays caused by mechanical breakdowns and subway car announcements. The line did not get a lower rating because it performs best in the system on subway car cleanliness and above average on service regularity and chance of getting a seat at rush hour. The C operates between Euclid Avenue in Brooklyn and Washington Heights in Manhattan.

4. Systemwide, for the 20 lines, we found the following on the three of the six measures we can compare over time: car breakdowns, car cleanliness and announcements. (We cannot compare the three remaining measures due to changes in definitions by New York City Transit. Also, the M’s routing changed so substantially in mid-2010 that comparisons with the previous year are not possible on some indicators.)

  • The car breakdown rate improved from an average of one mechanical failure every 148,002 miles to one every 170,217 miles during the 12-month period ending December 2010, a gain of 15%. This positive trend reflects the arrival of new model subway cars in recent years and better maintenance of Transit’s aging fleet. We found fourteen lines improved (2, 3, 7, A, B, C, E, F, J/Z, L, M, N, Q and R), while six lines worsened (1, 4, 5, 6, D and G).
  • Subway cars went from 95% rated clean in our last report to 94% in our current report, essentially unchanged (a decrease of 1.1%). We found that twelve lines declined (1, 3, 4, 5, 7, A, E, G, L, M, N and Q) and eight improved (2, 6, B, C, D, F, J/Z and R).
  • Accurate and understandable subway car announcements declined slightly, going from 91% in our last report to 87% in the current report. We found twelve lines worsened (1, 2, 4, 5, 7, B, C, D, G, J/Z, L and N), four improved (3, F, Q and R) and four did not change (6, A, E and M).

5. There are large disparities in how subway lines perform.

  • Breakdowns: The M had the best record on delays caused by car mechanical failures: once every 843,598 miles. The C was worst, with a car breakdown rate fifteen times higher: every 54,838 miles.
  • Cleanliness: The C and E were the cleanest lines, with only 4% of cars having moderate or heavy dirt, while 13% of cars on the dirtiest line, the G, had moderate or heavy dirt, a rate more than three times higher.
  • Chance of getting a seat: We rate a rider’s chance of getting a seat at the most congested point on the line. We found the best chance is on the 7, where riders had a 70% chance of getting a seat during rush hour at the most crowded point. The 2 ranked worst and was much more overcrowded, with riders having only a 28% chance of getting a seat.
  • Amount of scheduled service: The 6 line had the most scheduled service, with two-and-a-half-minute intervals between trains during the morning and evening rush hours. The C ranked worst, with nine- or ten-minute intervals between trains throughout the day.
  • Regularity of service: The J/Z line had the greatest regularity of service, arriving within 25% of its scheduled interval 85% of the time. The most irregular line is the 5, which performed with regularity only 66% of the time.
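
The regularity figures above come from the wait assessment indicator described in footnote 2. As an illustration only, the minimal sketch below shows one way such a percentage could be computed; it assumes, as our reading rather than Transit's official definition, that a gap between trains counts as regular when it is no more than 25% longer than the scheduled interval. The function and the sample numbers are hypothetical.

    # Illustrative sketch (not Transit's code): share of gaps between trains
    # that are no more than 25% longer than the scheduled interval.
    def wait_assessment(actual_gaps_minutes, scheduled_interval_minutes):
        allowed = scheduled_interval_minutes * 1.25
        regular = sum(1 for gap in actual_gaps_minutes if gap <= allowed)
        return 100.0 * regular / len(actual_gaps_minutes)

    # Hypothetical example: a 5-minute scheduled interval with one long gap
    # caused by bunching; four of the five gaps pass, so the score is 80%.
    print(wait_assessment([5, 4, 6, 11, 5], 5))  # 80.0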

II. Summary of Methodology

The NYPIRG Straphangers Campaign reviewed extensive MTA New York City Transit data on the quality and quantity of service on 20 subway lines. We used the latest comparable data available, largely from 2010.6 Several of the data items have not been publicly released before on a line-by-line basis. MTA New York City Transit does not conduct a comparable rider count on the G line, which is the only major line not to go into Manhattan. As a result, we could not give the G line a MetroCard Rating, although we do issue a profile for the line. In addition, major changes were made to the route pattern of the M line in June of 2010; since then no comparable rider count data has been made available. For this reason, we could not give the M line a MetroCard Rating, although we do issue a profile for the line.

We then calculated a MetroCard Rating — intended as a shorthand tool to allow comparisons among lines — for 18 subway lines, as follows:

First, we formulated a scale of the relative importance of measures of subway service. This was based on a survey we conducted of a panel of transit experts and riders, and an official survey of riders by MTA New York City Transit. The six measures were weighted as follows:

Amount of service
  • scheduled amount of service: 30%

Dependability of service
  • percent of trains arriving at regular intervals: 22.5%
  • breakdown rate: 12.5%

Comfort/usability
  • chance of getting a seat: 15%
  • interior cleanliness: 10%
  • adequacy of in-car announcements: 10%

Second, for each measure, we compared each line’s performance to the best- and worst-performing lines in this rating period.

A line equaling the system best in 2010 would receive a score of 100 for that indicator, while a line matching the system low in 2010 would receive a score of 0. Under this rating scale, a small difference in performance between two lines translates to a small difference between scores.
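
In other words, a line's value on each measure is interpolated linearly between the system worst and the system best for that measure. A minimal sketch of that calculation follows; the function and variable names are ours, for illustration only.

    # Linear 0-100 score between the system's worst and best values on a
    # measure where a higher value is better (e.g., miles between breakdowns).
    def indicator_score(value, system_worst, system_best):
        return 100.0 * (value - system_worst) / (system_best - system_worst)

    # Example using the 4 line's breakdown rate from Figure 1 below:
    print(round(indicator_score(167534, 54838, 843598)))  # 14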

These scores were then multiplied by the percentage weight of each indicator, and added up to reach an overall raw score. Below is an illustration of calculations for a line, in this case the 4.

Figure 1

For each indicator, the 4 line's value is shown (with the system best and worst where available), followed by its score out of 100, its percentage weight and its adjusted raw score.

  • Scheduled service: AM rush 4 min; noon 8 min; PM rush 4 min. Score: 71. Weight: 30%. Adjusted raw score: 21.
  • Service regularity: 68% (best 85%; worst 66%). Score: 12. Weight: 22.5%. Adjusted raw score: 3.
  • Breakdown rate: 167,534 miles (best 843,598; worst 54,838). Score: 14. Weight: 12.5%. Adjusted raw score: 2.
  • Crowding: 33% seated (best 70%; worst 28%). Score: 13. Weight: 15%. Adjusted raw score: 2.
  • Cleanliness: 91% clean (best 96%; worst 87%). Score: 44. Weight: 10%. Adjusted raw score: 4.
  • Announcements: 97% adequate (best 100%; worst 72%). Score: 89. Weight: 10%. Adjusted raw score: 9.

Adjusted score total: 4 line, 41 points.
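
As a cross-check, the 41-point total above can be reproduced by multiplying each 0-100 score by its weight and summing, as in the rough sketch below (the scheduled-service score of 71 is taken as given, since its system best and worst are not shown in the figure).

    # Weights from the scale above and the 4 line's 0-100 scores from Figure 1.
    weights = {"scheduled service": 0.30, "service regularity": 0.225,
               "breakdown rate": 0.125, "crowding": 0.15,
               "cleanliness": 0.10, "announcements": 0.10}
    scores = {"scheduled service": 71, "service regularity": 12,
              "breakdown rate": 14, "crowding": 13,
              "cleanliness": 44, "announcements": 89}
    raw_score = sum(scores[k] * weights[k] for k in weights)
    print(round(raw_score))  # 41, matching the 4 line's adjusted score total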


Third, the summed totals were then placed on a scale that emphasizes the relative differences between scores nearest the top and bottom of the scale. (See Appendix I.)

Finally, we converted each line’s summed raw score to a MetroCard Rating. We created a formula with assistance from independent transit experts. A line scoring, on average, at the 50th percentile of the lines for all six measures would receive a MetroCard Rating of $1.15. A line that matched the 95th percentile of this range would be rated $2.25, the current base fare. The 4 line, as shown above, falls at the 41st percentile over six measures, corresponding to a MetroCard Rating of $1.00. 
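
The exact conversion curve is described in Appendix I. As a rough illustration only, and not the Campaign's actual formula, the sketch below interpolates linearly between the three anchor points quoted in this section (41st percentile at $1.00, 50th at $1.15 and 95th at $2.25) and rounds to the nearest nickel; lines rated below $1.00, such as the 90-cent lines, fall outside what this simplified sketch covers.

    # Illustrative only: piecewise-linear interpolation through the anchor
    # points quoted above. The Campaign's actual scale is given in Appendix I.
    ANCHORS = [(41, 1.00), (50, 1.15), (95, 2.25)]  # (percentile, rating)

    def metrocard_rating(percentile):
        if percentile <= ANCHORS[0][0]:
            return ANCHORS[0][1]
        for (p0, r0), (p1, r1) in zip(ANCHORS, ANCHORS[1:]):
            if percentile <= p1:
                rating = r0 + (r1 - r0) * (percentile - p0) / (p1 - p0)
                return round(rating * 20) / 20  # round to the nearest $0.05
        return ANCHORS[-1][1]

    print(metrocard_rating(41))  # 1.0  (the 4 line in this report)
    print(metrocard_rating(95))  # 2.25 (the current base fare)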

New York City Transit officials reviewed the profiles and ratings in 1997. They concluded:  "Although it could obviously be debated as to which indicators are most important to the transit customer, we feel that the measures that you selected for the profiles are a good barometer in generally representing a route’s performance characteristics… Further, the format of your profiles… is clear and should cause no difficulty in the way the public interprets the information." 

Their full comments can be found in Appendix I, which presents a more detailed description of our methodology. Transit officials were also sent an advance summary of the findings for this year's State of the Subways Report Card.

For our first five surveys, we used 1996 — our first year for calculating MetroCard Ratings — as a baseline. As we said in our 1997 report, our ratings “will allow us to use the same formula for ranking service on subway lines in the future. As such, it will be a fair and objective barometer for gauging whether service has improved, stayed the same, or deteriorated over time.”

However, in 2001, 2003, 2004, 2005, 2008, 2009 and 2010, transit officials made changes in how performance indicators are measured and/or reported. The Straphangers Campaign unsuccessfully urged MTA New York City Transit to re-consider its new methodologies, because of our concerns about the fairness of these measures and the loss of comparability with past indicators. Transit officials also rejected our request to re-calculate measures back to 1996 in line with their adopted changes. As a result, in this report we were forced to redefine our baseline with current data, and considerable historical comparability was lost.

III. Why A Report Card on the State of the Subways?

Why does the Straphangers Campaign publish a yearly report card on the subways?

First, riders are looking for information on the quality of their trips. In the past, the MTA resisted putting detailed line-by-line performance measures on its website. That has been gradually changing. In 2009, for example, the MTA began posting monthly performance data for subway car breakdown rates on its website, www.mta.info. In 2010, it made some of its performance-measurement databases publicly available on its developer resources page. Our profiles seek to provide this information in a simple and accessible form.

Second, our report cards provide a picture of where the subways stand. On the three measures we can compare over time (car breakdowns, car cleanliness and announcements), the picture is the one summarized in our findings above: the car breakdown rate improved by 15%, car cleanliness was essentially unchanged and announcements declined slightly. We were unable to compare the other three measures due to changes in methodology by transit officials.

Maintaining that performance will be a challenge given the MTA’s tight budget.

Lastly, we aim to give communities the information they need to win better service. We often hear from riders and neighborhood groups. They will say, “Our line has got to be the worst.” Or, “We must have the most crowded trains.” Or, “Our line is much better than others.”

For riders and officials on lines receiving a poor level of service, our report will help them make the case for improvements, ranging from increases in service to major repairs. That’s not just a hope. In past years, we’ve seen riders win improvements, such as on the B, N and 5 lines.

For those on better lines, the report can highlight areas for improvement. For example, riders on the 7 — now a front runner in the system — have pointed to past declines and won increased service.

This report is part of a series of studies on subway and bus service. For example, we issue annual surveys on payphone service in the subways, subway car cleanliness, and subway car announcements, as well as give out the Pokey Awards for the slowest city bus routes.

Our reports can be found online at www.straphangers.org, as can our profiles. We hope that these efforts — combined with the concern and activism of many thousands of city transit riders — will win better subway and bus service for New York City.

 

1New York City Residents’ Perceptions of New York City Transit Service, 2010 Citywide Survey, prepared for MTA New York City Transit.

2 The measures are: frequency of scheduled service; how regularly trains arrive; delays due to car mechanical problems; chance of getting a seat at peak period; car cleanliness; and in-car announcements. Regularity of service is reported in an indicator called wait assessment, a measure of gaps in service or bunching together of trains.

3 We derived the MetroCard Ratings with the help of independent transportation experts. Descriptions of the methodology can be found in Section II and Appendix I. The rating was developed in two steps. First, we decided how much weight to give each of the six measures of transit service. Then we placed each line on a scale that permits fair comparisons. Under a formula we derived, a line whose performance fell exactly at the 50th percentile in this baseline would receive a MetroCard rating of $1.15 in this report. Any line at the 95th percentile of this range would receive a rating of $2.25, the current base fare.

4 We were unable to give an overall MetroCard Rating to the system’s three permanent shuttle lines — the Franklin Avenue Shuttle, the Rockaway Park Shuttle, and the Times Square Shuttle — because data is not available. The G line does not receive a MetroCard Rating as reliable data on crowding for that line is not available. The M line did not receive a MetroCard rating because the route was dramatically restructured after the most recent crowding data was available.

5 We did not issue a report in 2002. Because of the severe impact on the subways from the World Trade Center attack, ratings based on service at the end of 2001 would not have been appropriate.

6 See Appendix I for a complete list of MTA New York City Transit data cited in this report.