Cross Country Ratings Index (CCRI)

Last updated: March 31, 2020

At the 2017 USTFCCCA Convention, the coaches of NCAA Division I Cross Country voted strongly in favor (75 percent of the membership vote) of an objective ranking system to be produced by the national office on a bi-weekly basis, starting with the period after the Pre-National Invitational weekend.

The original proposal was rooted in an algorithm adapted from the RPI (Ratings Percentage Index) model. You may recall the RPI mostly from its role in NCAA at-large selections for basketball, volleyball, soccer, and other sports.

In preparation for this implementation, we found a method that takes some of the basic concepts of the RPI model — head-to-head performance and strength of schedule — and adds elements relevant to cross country — margin of victory, individual performance, and potential head-to-head matchups based on existing results.


If RPI was the tip of the iceberg, the CCRI is 95% of the iceberg.

The RPI was found to be rigid, using strict win-loss records from full-field team scores (e.g., finishing fifth in a 20-team meet yields a record of 15-4 for that meet). Calculations for season winning percentage and strength of schedule used that concept.

The CCRI is rooted first in the season-long individual performance of team members, paired with actual head-to-head team results, scored in a dual-meet fashion with added components of margin of victory or defeat. The scoring concepts in the mix include elements from the Elo rating system used to rate chess players and from GolfStat, the NCAA’s rating system for golf.

All performances are rated as reported to TFRRS.

The CCRI Team Rating is a one-to-one combination of a team’s potential, as measured against every team in the division, and its results in “actual varsity contests” (AVC).


Measuring a team’s potential begins with obtaining a season score for each of the team’s individuals. An individual’s score, in a simple sense, is a measure of an athlete’s above- or below-average performance against above- or below-average competition. The basis of this scoring is that every performance is weighed against the field’s average time and against the average season-long performance of the opponents faced.

Only performances against NCAA Division I competition are included; all non-DI results are excluded from the analysis. For a competition to be considered intra-divisional for these purposes, there must have been finishers from more than one NCAA DI institution.
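As a minimal sketch of that eligibility filter (the function and field names here are illustrative, not from the official implementation):

```python
def is_intra_divisional(di_finishers):
    """di_finishers: (athlete, school) pairs for the NCAA DI finishers in a
    race, after non-DI results have already been excluded. The race counts
    only when more than one DI institution is represented."""
    return len({school for _, school in di_finishers}) > 1
```

Non-DI finishers would already be stripped out before this check, per the exclusion rule above.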

Every finisher in a race starts with 1000 points. That score is adjusted based on performance against the average time of the race — the “gap score.” For every standard deviation faster or slower than the mean, 100 points are added or subtracted, respectively. So, in an 8k race with an average field time of 25:00 and a standard deviation of 30 seconds, an athlete finishing in 24:00 would earn an additional 200 points. Time is ultimately used as part of the analysis, but only in terms of the collective field and the field’s response to the course and the conditions on that day.
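The gap-score rule above can be sketched as follows (names are illustrative; this is not the official code):

```python
def gap_points(time_s, field_avg_s, field_sd_s):
    """Gap-score points for one finisher: a 1000-point base, plus or minus
    100 points per standard deviation below or above the field's average
    time (finishing faster than average earns points)."""
    return 1000 + 100 * (field_avg_s - time_s) / field_sd_s

# The example from the text: an 8k field averaging 25:00 with a 30-second
# standard deviation, and a finisher two standard deviations under the mean.
print(gap_points(24 * 60, 25 * 60, 30))  # 1200.0
```

A runner exactly at the field average keeps the 1000-point base, and a runner one standard deviation slower drops to 900.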

The resulting score is again adjusted by the collective gap scores of the field throughout the season (“race score”) and further adjusted by the field’s strength of schedule. This is the individual contest performance rating (ICPR).

The athlete then obtains an individual season performance rating (ISPR): the athlete’s ICPRs are averaged and adjusted based on the number of races finished during the qualifying season (minimum race distances of 7500 meters for men and 4500 meters for women, beginning September 7).

The FINAL individual CCRI is a combination of the athlete’s collective gap score versus all opponents and the ICPRs of their opponents. So far in 2018, there have been 2.2 million individual matchups in NCAA Division I.


Once we have individual scores, we then take the top seven scores from each team and analyze that “team” versus every other squad in the division in a mock dual meet. That’s 110,000 potential dual meets with 1.5 million points of data produced.
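Assuming standard cross country dual-meet scoring (all 14 runners are placed, each team sums the places of its top five, and runners six and seven can displace), the mock dual can be sketched like this; the names are illustrative, not the official implementation:

```python
def mock_dual(team_a, team_b):
    """Score a hypothetical dual between two top-7 squads given each
    runner's individual rating (higher is better). All 14 runners are
    placed 1-14; each team's score is the sum of its top-5 places, so
    results range from a perfect 15 to a worst-case 50."""
    field = [(rating, "A") for rating in team_a] + [(rating, "B") for rating in team_b]
    field.sort(key=lambda entry: -entry[0])  # best rating takes place 1
    places = {"A": [], "B": []}
    for place, (_, team) in enumerate(field, start=1):
        places[team].append(place)
    return sum(places["A"][:5]), sum(places["B"][:5])

# A sweep: every Team A runner outrates every Team B runner -> 15-50.
print(mock_dual([1140, 1130, 1120, 1110, 1100, 1090, 1080],
                [1070, 1060, 1050, 1040, 1030, 1020, 1010]))  # (15, 50)
```

Running every pair of top-7 squads through a function like this is what produces the roughly 110,000 mock duals mentioned above.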

The Team Potential Index (TPI) is based on a formula that weighs the opponent with the calculated margin of victory or defeat (“win share”). For example, a mock dual-meet score of 15-50 for a Team A victory would produce 1000 win share points for Team A, because the score was perfect. A 20-35 Team A win would yield 714 win share points: the 15-point margin of victory, divided by the maximum 35-point margin, multiplied by a 500-point coefficient and added to a 500-point baseline. Inversely, a Team A loss by a 25-30 margin would still produce 429 win share points for Team A’s cause.

Ties are possible and not resolved. A “draw” would yield 500 win share points for both squads.
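Taken together, the examples above reduce to one linear rule: a 500-point baseline plus 500 times the margin of victory (negative for a defeat) divided by the maximum possible margin of 35. A sketch, with illustrative names:

```python
def win_share(own_score, opp_score):
    """Win-share points from one dual-meet result. The margin is positive
    when the team wins (the lower score wins in cross country): a perfect
    15-50 win yields 1000, a draw 500, and a perfect loss 0."""
    margin = opp_score - own_score
    return 500 + 500 * margin / 35

print(round(win_share(15, 50)))  # 1000 (perfect win)
print(round(win_share(20, 35)))  # 714
print(round(win_share(30, 25)))  # 429 (a loss by a 25-30 margin)
print(round(win_share(25, 25)))  # 500 (draw)
```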


The second, equally weighted half of a team’s final CCRI is its actual performance in “actual varsity contests” (AVC). For NCAA at-large qualification, the “A” team concept relies on squad composition from the regional championships.

Since traditional “A” squads are not known until then, the algorithm considers only contests in which at least four “varsity” individuals competed for both the team and its opponent.

For these purposes, “varsity” is defined as an individual who is ranked among the team’s top 7 in individual CCRI or who finished among the team’s top 9 at the conference championships. Obviously, this current ranking has only the individual CCRI criterion available.

The top-7 finishers from each team in each head-to-head matchup are then rescored as a dual. The resulting AVC score is an average of the team’s actual win shares, each weighed against the opponent’s TPI.


The final team rating combines a team’s TPI with its AVC Index on a one-to-one basis, resulting in the Team CCRI.


As with all ranking or rating systems, it’s important to consider the product as a whole, knowing that teams earn a ranking that places one in front of another in a “big picture” analysis.

In other words, we are confident that it’s best to view these teams in five-to-ten-team chunks — the top 10, the next 10, and so on. A single result over an opponent ranked slightly above or below the team in question isn’t in itself enough to sway the algorithm. This is still an improvement over the RPI system, in which comparisons were most valid only in 25-team chunks when measured against final national, regional, and conference finishes.

As with the USTFCCCA’s track & field ranking system, our philosophy is that in-season objective measures of programs should show satisfactory validity versus actual future outcomes. That shouldn’t be confused with absolute predictions. Given the complexity of track & field and cross country, the most realistic goal is to give an overview of what has happened and, with all things staying on a consistent plane, what has the highest likelihood of happening next.

As this is an introductory product, all NCAA Division I teams — regardless of USTFCCCA member status — are ranked, provided five or more athletes have finished a race this season.