# Time to monkey around with BCS?

September 13, 2009

Begin the countdown: With two weeks of college football under our belts, it's just five weeks until the first Bowl Championship Series (BCS) rankings are released.

If 2009 follows the trend of other seasons, the BCS will undoubtedly have its share of controversy in how it ranks teams for the bowl games.

In particular, many fans (and coaches and Congressmen) claim that only the top teams in big conferences are able to qualify for the highly lucrative BCS bowl games because the BCS gives heavy weighting to how good a team's opponents are.

Yet the BCS was instituted because not enough teams had access to the big bowl games, which were previously determined by the conference a team played in.

## Problem: Slide rule required

The BCS has also been criticized for its mind-numbing complexity. It's calculated by averaging three equally weighted components: two human-generated polls and a composite of four computer rankings (there are six computer polls, but, like Olympic gymnastics scoring, each team's highest and lowest computer rankings are dropped before the final BCS calculation).

For each of the two human polls, a team receives a value from zero to one equal to the share of votes it received out of the total votes possible in that poll.

For each of the computer polls, a team receives 0.25 points for being ranked first, 0.24 points for being second, 0.23 points for being third, and so on. The highest and lowest values for each team are dropped, and the remaining four polls are then summed, which means that the computer polls' final output is also on a scale of zero to one. A team ranked first in every poll would receive a score of one.

A team's final BCS score is calculated by averaging the values from the first and second human polls with the average of the four computer polls.
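The arithmetic above can be sketched in a few lines of code. This is a hypothetical illustration of the scoring scheme as described, not the official BCS software; the function names and example numbers are made up.

```python
def computer_score(ranks):
    """Convert a team's ranks in the six computer polls to points
    (0.25 for 1st, 0.24 for 2nd, and so on, down to zero), drop the
    highest and lowest values, and sum the remaining four."""
    points = sorted(max(26 - r, 0) / 100 for r in ranks)
    return sum(points[1:-1])  # trim one low and one high value

def bcs_score(human1, human2, computer_ranks):
    """Average the two human-poll shares (each on a zero-to-one scale)
    with the trimmed sum of the computer polls."""
    return (human1 + human2 + computer_score(computer_ranks)) / 3

# A team ranked first in every poll scores a perfect 1.0:
print(bcs_score(1.0, 1.0, [1, 1, 1, 1, 1, 1]))  # 1.0
```

Note that because the best and worst computer rankings are discarded, the four that remain sum to at most 4 × 0.25 = 1, keeping all three components on the same zero-to-one scale.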

Of the six computer polls that are used, just two have formulas that are publicly known; the other four might as well be state secrets.

The closed-door aspect of the BCS computer rankings has led to several disputed outcomes.

In 2003, the University of Southern California was ranked first in all the human polls, but was left out of the national championship game because the BCS computer polls favored Oklahoma. In 2001, third-ranked Colorado was left out of the national championship game despite having spanked second-ranked Nebraska by a score of 62-36.

## Solution: monkeys?

The BCS's emphasis on secret, complicated computer rankings prompted a Georgia Tech student and two of his math professors to suggest that monkeys could rank teams about as well as the BCS does.

In 2004, Thomas Callaghan, Dr. Peter Mucha (who's now teaching at the University of North Carolina), and Dr. Mason Porter wrote a paper showing that theoretical monkeys (in reality, a computer choosing outcomes at random) could rank teams simply by each casting a vote for the team it thought was the best.

Each monkey got a single vote for the entire season. The two teams with the most votes would be picked for the national championship.

In the study, each monkey tended to abandon its favorite team as soon as that team lost a game, and so the monkeys constantly switched allegiances.

After a while, certain teams were getting all the monkeys' votes: the teams that tended to win most of their games and that beat other good teams. That is, the monkeys' votes yielded a pretty good ranking of teams.
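The voting process described above can be sketched as a simulation. This is a loose, hypothetical rendering of the idea -- many independent random voters, each usually (but not always) siding with winners -- not the paper's actual algorithm; the team names, game results, and parameters are all made up.

```python
import random
from collections import Counter

def monkey_rankings(games, n_monkeys=2000, n_steps=50, p=0.75, seed=0):
    """Simulate independent random voters ('monkeys'). Each monkey backs
    one team; at every step it looks at a random game its current team
    played and, with probability p, switches its vote to that game's
    winner -- so it usually abandons a loser, but sometimes makes an
    irrational choice. Final vote counts give the ranking."""
    rng = random.Random(seed)
    teams = sorted({t for game in games for t in game})
    by_team = {t: [g for g in games if t in g] for t in teams}
    votes = Counter()
    for _ in range(n_monkeys):
        team = rng.choice(teams)
        for _ in range(n_steps):
            winner, loser = rng.choice(by_team[team])
            team = winner if rng.random() < p else loser
        votes[team] += 1
    return votes.most_common()

# hypothetical season results, as (winner, loser) pairs
season = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("B", "D")]
print(monkey_rankings(season))
```

In this toy season, the undefeated team "A" ends up with the most votes, for exactly the reason the article gives: monkeys drift toward teams that win their games, especially wins over other teams the monkeys like.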

The monkey method gives a great deal of weight to the idea that "my team beat your team, so my team should have a higher ranking" -- a concept that the BCS computers snubbed when it came to Colorado and Nebraska in 2001.

The monkey study's purpose was to show that even a voter who considers only who won and who lost -- and who sometimes makes irrational choices -- can still yield a pretty good ranking of football teams.

By the way, if you're interested in charting the performance of the monkeys once the BCS rankings come out, you can check out Dr. Mucha's blog at rankings.amath.unc.edu/.