Halite Home

Faster ranking in matchmaking


The Problem

I have a top 50 bot. When I submit a revision, it can take many hours until the revision gets to play against opponents at its actual skill level. I think there is a simple fix for this.

Currently, matchmaking selects opponents by picking from other bots near your bot's rank. That sounds great, but it's actually wrong.

Matching / Ranking Terms

Trueskill's player scoring works by keeping track of two numbers for each bot:

  • Mu - Trueskill's best estimate of how good your bot is
  • Sigma - how sure Trueskill is about that estimate.

Halite takes these two numbers from Trueskill and computes two more:

  • Score - essentially the worst plausible current estimate of your ability: Mu minus 3 × Sigma
  • Rank - your bot's position when all bots are ordered by Score.
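As a concrete sketch of the two derived numbers (in Python, with illustrative field names rather than Halite's actual code):

```python
def score(mu, sigma):
    """Conservative estimate of ability: mu minus three standard deviations."""
    return mu - 3 * sigma

def ranked(bots):
    """Order bots by Score, best first; Rank is the 1-based position."""
    ordered = sorted(bots, key=lambda b: score(b["mu"], b["sigma"]), reverse=True)
    return [dict(b, rank=i + 1) for i, b in enumerate(ordered)]

bots = [
    {"name": "veteran", "mu": 30.0, "sigma": 1.0},    # Score = 27.0
    {"name": "fresh",   "mu": 25.0, "sigma": 8.333},  # Score ~ 0.0
]
print(ranked(bots)[0]["name"])  # veteran
```

Note how the fresh bot's large sigma drags its Score (and therefore its Rank) far below its mu.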

Current Matchmaking

Matchmaking selects opponents by picking from other bots near your bot's rank.

When you submit a new bot, it gets a huge starting Sigma, or uncertainty. This makes your Score go very low, so you play the worst bots in the system, which is fine for a first game.

In the games after that, however, your matches are still created with players near your rank, which is well below Trueskill's estimate of your skill. As a result, you spend most of your time playing bots that are far worse than you.

When you play bots worse than you, Trueskill expects you to stomp them easily. When you win, Trueskill hasn't learned much about you that it didn't already know, so it barely changes your Sigma after the game.

In effect, you have to grind your way through lots of low-ranked bots before you start playing near your skill level. My bot update this morning played about 40 games before it finished anywhere other than first place.

The Fix

The fix is very simple: instead of matchmaking by selecting bots of nearby rank, choose bots of nearby Mu that have a lowish Sigma.

Since Mu is Trueskill's best guess at your current ability, playing bots of similar Mu gives Trueskill the most ranking information possible, and you as a player spend most of your time watching your bot play others of similar skill.

By matchmaking against bots of lowish Sigma, Trueskill can be much more confident about how your rating should change as a result of each match. (Avoiding high-sigma opponents also prevents newly submitted bots from only playing against each other.)

The one gotcha here is that you need to make sure you have enough bots for the game. There are always guaranteed to be bots within ±5 ranks of you, but there is no such guarantee when using Mu. You might just maintain an internal mu_rank for this purpose.
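A hypothetical sketch of the proposed selection, where `max_sigma`, the field names, and the example values are illustrative assumptions, not anything from Halite's actual code:

```python
def pick_opponents(seed, bots, needed, max_sigma=4.0):
    """Choose `needed` opponents whose mu is closest to the seed's,
    skipping high-sigma (still-uncertain) bots."""
    candidates = [b for b in bots if b is not seed and b["sigma"] <= max_sigma]
    candidates.sort(key=lambda b: abs(b["mu"] - seed["mu"]))
    return candidates[:needed]

seed = {"name": "mine", "mu": 40.0, "sigma": 2.0}
pool = [
    seed,
    {"name": "peer",   "mu": 41.0, "sigma": 1.5},
    {"name": "weak",   "mu": 20.0, "sigma": 1.0},
    {"name": "newbie", "mu": 40.5, "sigma": 8.3},  # skipped: sigma too high
]
print([b["name"] for b in pick_opponents(seed, pool, 2)])  # ['peer', 'weak']
```

Note that `weak` is still eligible as a fallback when nobody of similar mu exists, which is exactly the "enough bots" gotcha mentioned above.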


Current matchmaking pits bots mostly against bots worse than them. To fix it, change line 97 of ManagerAPI.php to use mu instead of rank.


Has anyone made this PR yet?


Don't believe so.


@DanielVF can you take a look at https://github.com/HaliteChallenge/Halite/pull/257


Thanks! Commented.


I think you want to be a bit careful here to make sure there aren't any unintended consequences. I don't think there is any harm in waiting a while to converge to your true ranking rather than getting instant feedback; you should have a good idea whether your bot is better or worse as you develop it.

However, I do agree that we can make convergence more efficient by matching bots with similar mu. As mentioned in the pull request, though, there are problems with not being able to find enough opponents with similar mu. Instead, why don't we use a rank based on mu (order by mu) only to create matches, while keeping the current rank (order by mu - 3 * sigma) for the leaderboard?


Yes, that's exactly what I'm proposing. Two ranks, one rank for the leaderboard based on the current system, and one mu_rank based on Trueskill's mu for match making.
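A minimal sketch of the two-rank idea, with illustrative field names (Halite's actual schema may differ): the leaderboard rank orders by the conservative score, while mu_rank orders by mu alone and is used only to build matches.

```python
def with_ranks(bots):
    """Attach both ranks: `rank` (leaderboard, by mu - 3*sigma)
    and `mu_rank` (matchmaking, by mu alone)."""
    by_score = sorted(bots, key=lambda b: b["mu"] - 3 * b["sigma"], reverse=True)
    by_mu = sorted(bots, key=lambda b: b["mu"], reverse=True)
    for i, b in enumerate(by_score):
        b["rank"] = i + 1       # shown on the leaderboard
    for i, b in enumerate(by_mu):
        b["mu_rank"] = i + 1    # used only to pick opponents
    return bots

bots = with_ranks([
    {"name": "steady", "mu": 28.0, "sigma": 1.0},  # score 25.0
    {"name": "fresh",  "mu": 30.0, "sigma": 8.0},  # score  6.0
])
fresh = next(b for b in bots if b["name"] == "fresh")
print(fresh["rank"], fresh["mu_rank"])  # 2 1
```

The fresh bot sits at the bottom of the leaderboard but is immediately matched near the top, which is the whole point of the proposal.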


In addition to the mu ranking change, I'm interested in improving the selection of the seed player.

50% of the time we try to pick a seed bot that is relatively new (high sigma). Right now we pick the user with the biggest rand() * pow(sigma, 2) value. However, really new bots completely crowd out semi-new ones (~60 games). We would like the reduction in game frequency to be more gradual. We've fiddled around with the coefficient of the pow term, but haven't found a perfect fit.
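The draw described above can be sketched as follows; this is an illustration of the described behavior, not the actual ManagerAPI.php code:

```python
import random

def pick_seed(bots):
    """Each bot draws rand() * sigma^2; the biggest draw becomes the seed."""
    return max(bots, key=lambda b: random.random() * b["sigma"] ** 2)
```

Because the weight is quadratic in sigma, a fresh bot (sigma ~8.333) almost always outdraws a semi-new one (sigma ~2): for two uniform draws the smaller-weighted bot wins only about sigma2²/(2 · sigma1²) of the time, roughly 3% here, which is the crowding-out effect described.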


Personally I think the fairest method is simply to always use the least recently played user1 as the seed. At one point with aichallenge we used a method that would pick new submissions first (I don't remember specifically what we based it on, whether it was sigma or not), but this encouraged and rewarded the poor behavior of resubmitting a bot many, many times in order to get more games.

1 With the implementation ensuring a new user is right at the head of the list with infinite time since their last game :slight_smile:


So we play games to

1) Determine rank
2) Have games for the player to see so they can know what to improve / how their bot is doing

Both of those are far more valuable right after a new submission than later on. When someone is actively playing the game, that's the best time for them to get game feedback. Game 30 is much more valuable to a player than game 300.

When a high-sigma bot plays, it's also playing a couple of other bots of all sigmas. So to me, high-sigma seeding is much more valuable than last-played seeding.

So I would actually recommend:

  • Increasing the percentage of high-sigma seeding from 50% to around 60-70%
  • For last-played seeding, excluding bots that have more than 200(?) games and whose owners have not logged in recently.

And then just for fun, maybe have a "Play some games" button that works once a day? and gives you ten extra matches.

I think that some version of the "Wall" is going to occur no matter what - it's simply a matter of there being 50 times more bots at that "medium" sigma level. The only solution is more games overall, or fewer games for someone else (and that someone else shouldn't be new submissions, but should rather be taken from "abandoned" submissions).


Oops, there was one other very important detail that I forgot about in the last post. It was actually the least recently played active user chosen for the seed. A user becomes active upon making a fresh submission, or by clicking an 'activate' button on their user page. They then remain active for X hours (I think we used 48 or 72 hours) or until the user voluntarily deactivated the submission. Deactivated submissions would still be pulled into games through the pairing process; they just were never chosen as a seed.


What do others think of this?


I think there are two main problems.

First, the majority of people submitted the starter kit or the bot from the tutorial and stopped there. A lot of simulated games are like TutorialBot vs StarterKit. Luckily, TutorialBot makes its moves instantly, but it's still a waste of resources.

Second, you use this creepy casino trick where a fresh submission is rewarded with a sequence of unfair wins against weaker bots. If you need that to attract new players, fine, but I'm pretty sure the developers of stronger bots value quick, accurate placement and interesting, balanced games more than the cheap thrill of beating TutorialBot 50 times in a row.


I wouldn't characterize it like that.

All we do is start you at mu = 25.000 and sigma = 8.333 (aka 0 points) and match you against people who are close to your rank. This is very common for a Trueskill system.
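The "aka 0 points" follows directly from the score formula quoted earlier in the thread:

```python
# A new bot's starting values, and the conservative leaderboard score
# (score = mu - 3 * sigma) they produce.
mu, sigma = 25.000, 8.333
score = mu - 3 * sigma
print(round(score, 3))  # 0.001 -- effectively zero points
```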

Soon, we will use your mu ranking (mu is Trueskill's best guess for your skill) for matchmaking. This should hopefully make ranking for high-skill bots faster.


The mu ranking change is live on the site. Would someone with a high ranking bot be willing to resubmit it and report back on the speed at which it moves up the rankings?


I'll try it out. I've got a bot that's currently in the top 25.


This is fantastic! After only 25 games, I was playing against a rank-60 bot, and by 35 games, against a rank-10 bot.

I'm now playing players of my rank after only 10 minutes. Way better than the four to six hours this used to take.



Here's another spot check of the new ranking speed: @nmalaguti made it to rank 39 with only 21 games played.


And rank 6 by game 70, but that isn't even the primary benefit.

He was playing the rank 9 and 10 players in his 12th game back, and ranks 3, 5, and 8 in his 13th. So there wasn't a large number of games wasted on players with a huge skill disparity.