Round 10 of the Pro 12 and Aviva Premiership competitions, and Round 13 of the Top 14, are now complete, and we’re heading into a two-week hiatus while we make the switch to the European Champions and Challenge Cup games for December. It seems like a good time to take stock of how each of the teams is performing compared to how the numbers suggest they ought to be doing.

Pro 12

Let’s start by taking a summary look at the predictions produced by the Rugbayes model for the Pro 12 in round 10, and the actual results from those games.

| Home Team | Prediction (%) | Home Score | Away Score | Prediction (%) | Away Team |
| --- | --- | --- | --- | --- | --- |
| Glasgow Warriors | 88.5 | 15 | 16 | 8.7 | Munster Rugby |
| Ospreys | 97.1 | 31 | 22 | 2.0 | Edinburgh Rugby |
| Zebre | 10.7 | 24 | 31 | 86.4 | Scarlets |
| Connacht | 100.0 | 47 | 8 | 0.0 | Benetton Treviso |
| Cardiff Blues | 53.3 | 22 | 35 | 40.9 | Ulster Rugby |
| Leinster | 99.3 | 28 | 15 | 0.4 | Newport Gwent Dragons |

Overall this wasn’t a great week for our model, with Glasgow failing to win their tie against Munster, and Ulster beating Cardiff. It’s worth noting that this isn’t the first week that Glasgow have failed to meet Rugbayes’s expectations of them: a win against Ospreys was forecast back in round 9 too. Glasgow’s ongoing poor performance is also pushing them further down the probabilities for finishing in the top four.
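We won’t reproduce the Rugbayes internals here, but as a rough illustration of how win percentages like the ones in the table above could be produced, here’s a minimal sketch assuming a simple Poisson scoring model driven by per-team attack and defence strengths. The function, strength values, and home-advantage factor below are all placeholders for illustration, not the model’s actual parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def win_probabilities(home_attack, home_defence, away_attack, away_defence,
                      home_advantage=1.1, n_sims=100_000):
    """Estimate home-win / draw / away-win probabilities by simulating many
    matches with Poisson-distributed scores.  Each side's expected score is
    its attack strength scaled by the opponent's defence strength (a lower
    defence value means a better defence, so fewer points conceded)."""
    home_mean = home_attack * away_defence * home_advantage
    away_mean = away_attack * home_defence
    home_scores = rng.poisson(home_mean, n_sims)
    away_scores = rng.poisson(away_mean, n_sims)
    return {
        "home_win": np.mean(home_scores > away_scores),
        "draw": np.mean(home_scores == away_scores),
        "away_win": np.mean(home_scores < away_scores),
    }

# Hypothetical strengths, purely for illustration.
print(win_probabilities(home_attack=25, home_defence=0.8,
                        away_attack=20, away_defence=1.0))
```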

Let’s also have a quick look at the form of each team over their last ten games played in any competition: so both the Pro 12 and the two tiers of the European competition.

| Team | Form (last 10 games) |
| --- | --- |
| Leinster | WWWWWWWWLW |
| Treviso | LWLLLLLLLL |
| Ospreys | WLLWWWWLWW |
| Zebre | LLLLLLLWLL |
| Ulster | WWLLLWWLLW |
| Newport | LLLLWLLWWL |
| Scarlets | LWWWWWWWWW |
| Munster | WWLWWWWWWW |
| Connacht | WWWWWWLLWW |
| Glasgow | WWWWLLWLLL |
| Cardiff | WWLLWWLWLL |
| Edinburgh | LLLWWWLWLL |

Again, these are statistics which won’t make for comfortable reading for Glasgow fans, but probably will for Scarlets, Munster, and Leinster supporters. The Rugbayes model updates its parameters each week according to new data; that is to say, it tries to become more accurate after it sees the results of each week’s games. If the teams were performing exactly as their past results imply they should, we’d expect the predictions from the model to be the same (give or take a few points) as the scores those teams actually achieve. A team performing better than expected should outperform the predictions, while teams who are struggling this season should underperform compared to the model. The model will attempt to correct for this the following week, but a team on a real trend may continue to defy expectations one way or the other throughout the competition. The relative performances for the first ten rounds of the Pro 12 are plotted below.

We can see from this that Scarlets have really managed to outperform compared to previous seasons in the last few rounds, with the model struggling to correct quickly enough. Treviso are also consistently better than in previous years, while the two Scottish teams are performing roughly as we’d expect (this is perhaps the first piece of good news for Glasgow fans, as it suggests that, for the most part, it’s the opposition who have improved rather than Glasgow who have regressed). Ospreys are probably the real winner on this metric, outperforming the model in nearly every game this season.
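For anyone curious about the numbers behind these plots, the week-by-week differential being discussed is straightforward to compute; here’s a rough sketch, with entirely invented fixtures and predictions rather than the model’s real output.

```python
from collections import defaultdict

# Invented example rows: (round, team, predicted points, actual points).
results = [
    (9,  "Team A", 27, 13),
    (10, "Team A", 24, 15),
    (9,  "Team B", 18, 26),
    (10, "Team B", 22, 31),
]

# Positive differential = the team outscored the model's prediction that week.
differentials = defaultdict(list)
for rnd, team, predicted, actual in results:
    differentials[team].append((rnd, actual - predicted))

for team, track in sorted(differentials.items()):
    print(team, track)
```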

We can see how the model has attempted to cope with the actual performance of the teams by examining how it has changed their relative rankings over time.

A team wants to have a lower score in “Defence Strength”, and a higher one in “Attack Strength”, so a team moving towards the bottom-right corner is becoming both more aggressive and better at defending. A pretty good example is Ospreys, who are taking a zig-zagging path towards the lower right, which is currently dominated by Ulster, Leinster, and Glasgow. Glasgow supporters might take some comfort from this graph: there are strong suggestions that they haven’t become especially worse over the season, but that other teams have simply become better. Of course, how much difference that really makes is a matter for debate! As an aside, this plot quite clearly shows how the Pro 12 is divided into very definite tiers (though we probably didn’t need a plot to work that out!). Treviso and Zebre languish up in the upper-left corner, Newport and Cardiff bring up the edge of the competitive group, while Glasgow, Leinster, and Ulster inhabit the lower-right corner.
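If you wanted to reproduce this kind of ratings plot yourself, a bare-bones matplotlib sketch is below; the team names and weekly attack/defence values are placeholders rather than anything taken from the model.

```python
import matplotlib.pyplot as plt

# Placeholder weekly ratings per team: a list of (attack, defence) pairs,
# one pair per round, in chronological order.
tracks = {
    "Team A": [(20, 1.05), (22, 1.00), (23, 0.95), (25, 0.90)],
    "Team B": [(14, 1.30), (13, 1.32), (14, 1.28), (13, 1.31)],
}

fig, ax = plt.subplots()
for team, points in tracks.items():
    attack = [a for a, _ in points]
    defence = [d for _, d in points]
    ax.plot(attack, defence, marker="o", label=team)

# With these axes the "good" corner (high attack, low defence) is bottom right.
ax.set_xlabel("Attack Strength")
ax.set_ylabel("Defence Strength")
ax.legend()
plt.show()
```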

Aviva Premiership

Again, let’s start by having a look at the success rate of the Rugbayes model on the round 10 games from the English Premiership.

| Home Team | Prediction (%) | Home Score | Away Score | Prediction (%) | Away Team |
| --- | --- | --- | --- | --- | --- |
| Sale Sharks | 42.8 | 3 | 21 | 51.6 | Exeter Chiefs |
| Bath | 53.6 | 14 | 11 | 40.0 | Saracens |
| Gloucester | 100.0 | 26 | 18 | 0.0 | Bristol Rugby |
| Leicester Tigers | 74.8 | 19 | 11 | 20.2 | Northampton Saints |
| Newcastle Falcons | 66.9 | 38 | 32 | 28.0 | Harlequins |
| Worcester Warriors | 6.2 | 12 | 26 | 92.0 | London Wasps |

Here, in contrast to a rather mediocre performance on the Pro 12, Rugbayes chalks up a 100% successful set of predictions! This is a fairly strong indication that the English competition is playing out in a manner consistent with previous seasons. Looking at the form for each team we can get an impression of why the model suggested each outcome; there aren’t really any surprises to be had here, either.

| Team | Form (last 10 games) |
| --- | --- |
| Gloucester | WLLDWWLWLW |
| Leicester | WWLWLWLWWW |
| Newcastle | LWLWLWLLWW |
| Sale | WLDWLLWLLL |
| Saracens | WWLWWWWWWL |
| Worcester | LDWLLLLLLL |
| Northampton | WLLWLWWWLL |
| Bath | WLWWWWWWLW |
| Harlequins | LWLWWLWLWL |
| Bristol | LLLLLLLLLL |
| Wasps | WWWLWDWLWW |
| Exeter | WWLDLLLWWW |

Looking at how each team has scored over the season compared to the predictions from the model gives us an idea of how it is performing relative to its historical fixtures. The model is updated every week with the games played over the previous weekend, so we’d expect teams who are doing really well to outpace the model’s ability to update itself, and to score consistently higher than we predicted.
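The weekly update itself is Bayesian, and we won’t reproduce it here, but the general idea of nudging a team’s ratings towards whatever the latest scores imply can be sketched with a much cruder learning-rate update. All of the names, numbers, and the update rule itself below are purely illustrative.

```python
def update_ratings(ratings, home, away, home_score, away_score, lr=0.1):
    """Crude illustrative update: nudge each team's attack rating towards
    the points it just scored, and its defence rating towards the points
    it just conceded (lower defence = better), scaled by a learning rate."""
    for team, scored, conceded in ((home, home_score, away_score),
                                   (away, away_score, home_score)):
        attack, defence = ratings[team]
        ratings[team] = (attack + lr * (scored - attack),
                         defence + lr * (conceded - defence))
    return ratings

# Placeholder ratings: (attack, defence), both roughly in points per game.
ratings = {"Team A": (25.0, 15.0), "Team B": (15.0, 30.0)}
update_ratings(ratings, "Team B", "Team A", 18, 26)
print(ratings)
```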

It looks like the model has a pretty good handle on the performances of the majority of teams (the very-nearly flat lines from Saracens, Exeter, and Northampton are testament to that), while Bristol, Bath, and Wasps have proved harder to characterise. A lot of information about the teams is hidden in this graph by the model’s updates. To really gauge how a team is performing it’s probably best to look at how the model updates itself after each round. The plot below shows the tracks of each of the ratings which the model assigns to a team, and how they change over time. A team gets a higher score on the attack rating by being more aggressive, and a lower one on the defence score by conceding fewer points, so a team will want to aim for the lower right-hand corner.

We can see from this that there are a few stand-out teams. The first is Bristol, who were promoted into the league this season. We don’t currently track the lower league, so the model didn’t really know how to cope with this new entry, and took the first couple of rounds to “figure them out”. Saracens really stand out through their successful defence of the lower right-hand corner all season. The very flat profile of the Saints’, Wasps’, and Exeter’s tracks is testament to their successfully maintaining a strong defensive team despite variation in their attack strength.

Top 14

With only four out of seven predictions correct, this week didn’t really go Rugbayes’s way in the Top 14. The biggest surprise was Pau’s success against Clermont (the team which Rugbayes considers the strongest in the league), while the most painful miss by the model was the draw between Racing and La Rochelle, which the model missed by a single point, predicting a 21-22 victory for Racing 92.

| Home Team | Prediction (%) | Home Score | Away Score | Prediction (%) | Away Team |
| --- | --- | --- | --- | --- | --- |
| Pau | 6.5 | 40 | 35 | 91.3 | Clermont |
| Stade Français | 94.0 | 51 | 5 | 4.4 | Bayonne |
| Lyon OU | 70.4 | 19 | 23 | 24.8 | Castres Olympique |
| Grenoble | 40.5 | 37 | 51 | 54.2 | Montpellier |
| La Rochelle | 44.1 | 23 | 23 | 49.8 | Racing 92 |
| Toulouse | 98.5 | 30 | 12 | 0.9 | Brive |
| Toulon | 98.6 | 37 | 10 | 1.0 | Bordeaux Bègles |

A quick glance at the teams’ recent form gives us some insight into the way the league is shaping up; in contrast to the Pro 12, where there are a few teams with clear and consistent records, the Top 14 supports fewer long runs (except Grenoble’s seven-loss streak…).

| Team | Form (last 10 games) |
| --- | --- |
| Montpellier | WWLWLWLWLW |
| Bordeaux | WWWWLLWWWL |
| Racing | WLLWLWWLWD |
| Castres | WWLLLWLWWW |
| Pau | WLLLLLLWLW |
| La Rochelle | WLDWWLWWLD |
| Clermont | LWWWWWWLWL |
| Bayonne | LDLLLLLWWL |
| Toulon | LWWDLWLWLW |
| Stade | LWLLWWLLWW |
| Grenoble | LLWLLLLLLL |
| Lyon | LDWWLLWLLL |
| Brive | LWLLWLWLWL |
| Toulouse | WWLLDWWLLW |

We can examine the ability of the model to update itself to changing team performance by comparing its prediction each week to the actual outcome of each match. As with the Pro 12 and the English Premiership, the model attempts to adjust for a team’s changing strengths and weaknesses week on week, and if it were able to do this perfectly we’d expect these differences to be consistently close to zero. If a team is over- (or under-) performing, or just performing erratically, we’d expect that to show up as greater variation in this metric. The Top 14 teams show a reasonable amount of movement on the plots below, with Racing really standing out as outperforming expectations early in the season, compared to Toulouse and Stade, which the model has adapted to fairly consistently.
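One simple way of quantifying that variation is to take, for each team, the spread of its weekly prediction errors; a quick sketch with invented numbers follows.

```python
import statistics

# Invented per-round prediction errors (actual points minus predicted points).
errors = {
    "Team A": [9, 12, 7, -2, 3, 1],   # erratic, outperforming early on
    "Team B": [1, -2, 0, 3, -1, 2],   # well characterised by the model
}

for team, errs in errors.items():
    print(f"{team}: mean error {statistics.mean(errs):+.1f}, "
          f"spread {statistics.stdev(errs):.1f}")
```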

To get a handle on the actual strengths of each team, and how they change, we can look at the values the model assigns to each team’s attack and defence strength each week after observing the most recent set of results. That way we can see how each team is progressing (or indeed regressing) over the course of the season.

A team wants to have a lower score in “Defence Strength”, and a higher one in “Attack Strength”, so a team moving towards the bottom-right corner is becoming both more aggressive and better at defending. The close competition at the top of the league can clearly be seen between Racing 92, Clermont, Toulouse, and Toulon, sitting together in the lower-right corner.
