What Is xG (Expected Goals)? Definition, Calculation & Examples
By Wandrille P., March 30, 2026
Tags: what stats really mean
Quick Answers
What is xG in football? xG (Expected Goals) is a statistical metric that measures the probability of a shot resulting in a goal, expressed as a number between 0 and 1. It evaluates the quality of a chance based on where and how it was taken — not whether it actually went in.
How is xG calculated? Each shot is compared to thousands of similar historical attempts. Analysts weigh factors including distance from goal, shooting angle, body part used, type of assist (cross, through ball, cutback), and — in advanced models — the positions of defenders and the goalkeeper at the moment of the shot.
Does xG actually predict results? Over a large sample, yes. Research has shown that xG is a better predictor of future goals than actual goals scored. A single match is too small a sample to draw firm conclusions, but over a full season, a team’s xG difference is one of the most reliable indicators of their true quality.
Key Takeaways
- xG measures chance quality, not finishing ability
- It is more reliable over multiple matches than a single game
- Different models produce different values
- It should be used to inform analysis, not replace it
- It does not say who deserved to win
See xG in action in our match analysis: Bayer Leverkusen 6-3 VfL Wolfsburg
1. The One-Line Definition
xG measures the probability that a shot will result in a goal, based on the specific characteristics of that attempt and the historical frequency of goals scored from similar situations.
2. Why It Was Invented
For most of football’s history, the standard measures of attacking performance were goals, shots, and shots on target. The problem: these metrics treat a speculative 35-yard effort and a tap-in from two yards as equivalent data points. They tell you how much a team attacked — not how dangerously.
The analytical community needed a way to quantify chance quality. Early foundations were laid as far back as 1997, when researchers Pollard and Reep used hand-charted data from the 1986 World Cup to establish that distance and angle were the primary drivers of scoring frequency. But the modern version of xG only took shape around 2012–2013, with early models developed by analysts such as Salvador Carmona at Driblab, building on rapidly growing databases of digitised shot data.
xG entered mainstream football coverage around 2017–2018, when broadcasters like Sky Sports and the BBC began displaying it on screen during and after matches. By then, major clubs had been using it internally for recruitment and tactical analysis for several years. The key insight that drove its adoption: xG has been mathematically shown to be more predictive of future goals than actual goals scored in previous matches — meaning it strips away luck more effectively than the scoreboard does.
3. How It’s Calculated
No heavy formulas here, but the logic is worth understanding properly.
The foundation is a large database — leading providers train their models on 300,000 shots or more. For each new shot, the model looks at all historical attempts taken from comparable situations and asks: what percentage of those ended up as goals? That percentage becomes the xG value.
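The lookup logic can be sketched in a few lines. This is a deliberately minimal toy, not any provider's actual method: real models are trained on hundreds of thousands of shots with many more features, typically using logistic regression or gradient boosting rather than raw bucket counts.

```python
from collections import defaultdict

def fit_bucket_model(shots):
    """Estimate xG as the historical goal rate within each feature bucket.

    `shots` is a list of (features, scored) pairs, where `features` is a
    hashable summary of the attempt (e.g. distance band, body part).
    """
    goals = defaultdict(int)
    attempts = defaultdict(int)
    for features, scored in shots:
        attempts[features] += 1
        goals[features] += int(scored)
    return {f: goals[f] / attempts[f] for f in attempts}

# Toy historical data: (distance band, body part) -> did it go in?
history = [
    (("six-yard box", "foot"), True),
    (("six-yard box", "foot"), False),
    (("six-yard box", "foot"), True),
    (("outside box", "foot"), False),
    (("outside box", "foot"), False),
]
model = fit_bucket_model(history)
# Close-range foot shots scored 2 of 3 times in this toy sample,
# so the model assigns them xG of roughly 0.67.
```

The principle scales: with enough historical shots and finer-grained features, the goal rate of "shots like this one" becomes a stable probability estimate.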
Distance from goal is the single strongest predictor. The further a shot is taken from goal, the lower the probability of scoring. Shots from inside the six-yard box can carry values above 0.40; shots from outside the 18-yard line typically fall below 0.05. The relationship isn’t linear — probability drops off sharply as distance increases.
Shooting angle matters almost as much. The wider the angle to the goal, the more of the goal the player has to aim at. A striker central and 12 yards out has a very different chance than one at the same distance but tight to the byline, even though the raw distance is identical. Advanced models measure this as the exact angle between the two goalposts from the shooter’s position.
Body part used affects conversion rates significantly. Foot shots are consistently more accurate than headers, which are harder to direct precisely and carry less power. The same position in the box will carry a lower xG value if the opportunity is headed rather than struck with the foot.
Assist type — the action that created the shot — is a powerful contextual variable. A through ball that splits the defensive line and results in a one-on-one with the goalkeeper generates a very different kind of chance than a cross into a crowded penalty area. Through balls and low cutbacks across the six-yard line tend to produce the highest xG values; high crosses and long balls are typically much lower.
Situation — open play, counter-attack, corner kick, free kick, or penalty — also influences the model. Counter-attacks, for example, tend to produce higher xG shots because defences haven’t yet had time to recover their shape.
What separates advanced models from basic ones is the addition of defensive context. Since 2018, StatsBomb has captured “freeze frames” — a spatial snapshot of every player’s position at the exact moment a shot is struck. This allows their model to factor in how many defenders are between the ball and the goal, whether the goalkeeper is in a standard position or out of place, and exactly how much of the goal is actually visible to the shooter. A shot with the same distance and angle carries a very different probability depending on whether the goalkeeper is on their line or has rushed five yards off it.
4. What It Tells You
xG’s power is greatest in three specific situations.
Evaluating performance independently of luck. Football’s low-scoring nature means results are heavily influenced by randomness. A team can dominate a match entirely, generate 2.8 xG, and lose 1–0 to a single breakaway goal. The scoreline suggests a comfortable win for the opposition; the xG tells a different story. Over time, results tend to converge toward what the underlying xG would predict — which is why teams that consistently lose “despite the xG” eventually stop losing those games.
Comparing players across different contexts. A striker scoring 15 goals in a weaker league might be generating them from very high-quality chances (high individual xG per shot), or might be overperforming a more modest shot profile. xG allows scouts to distinguish between a genuinely clinical finisher and one who is simply in the right place at the right time in a tactically easier environment.
Assessing tactical systems rather than just outcomes. A defensive team might concede very few goals across ten matches — but if their xGA (expected goals against) is high, their goalkeeper is probably carrying the team. Conversely, a pressing system can be validated by its ability to force opponents into low-xG shots, even before that shows up in clean sheets.
At a seasonal level, Expected Goal Difference (xGD — the gap between xG generated and xGA conceded) is one of the most reliable predictors of where a team will finish in the table, outperforming actual goal difference as a measure of underlying quality.
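A toy illustration of why the xGD ranking and the actual goal-difference ranking can disagree, with entirely invented team numbers:

```python
# Hypothetical season totals: (team, xG, xGA, goals for, goals against)
teams = [
    ("Team A", 60.0, 35.0, 50, 40),
    ("Team B", 48.0, 46.0, 58, 36),
]

def xgd(t):
    return t[1] - t[2]  # expected goal difference

def gd(t):
    return t[3] - t[4]  # actual goal difference

by_gd = max(teams, key=gd)[0]    # Team B: better scoreboard record
by_xgd = max(teams, key=xgd)[0]  # Team A: stronger underlying play
```

When the two rankings diverge like this, the xGD one has historically been the better guide to the following season.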
5. What It Doesn’t Tell You
This is the section that determines whether you’re using xG correctly or not.
It treats every player as equally skilled. By design, xG models assume the “average professional footballer” is taking each shot. A 0.25 xG chance is the same number whether it falls to Erling Haaland or a centre-back. That’s intentional — it lets you measure how much a player exceeds or falls short of the average. But it means that teams with elite finishers will consistently “overperform” their xG, and this should be expected, not treated as a mystery. xG tells you about the chance, not about the person taking it.
It ignores pre-shot decision-making. Two shots from identical positions can come from completely different situations. One might be the product of a 25-pass move that systematically dismantled a low block; the other might be an opportunistic strike after a defensive error. The model assigns them the same value. The quality of the build-up, the intelligence of the movement that created the space, and the tactical context are invisible to standard xG.
It cannot account for the specific goalkeeper. xG models are built on historical averages across thousands of goalkeepers. They cannot know in advance that a particular shot is being faced by a world-class shot-stopper or a reserve keeper playing their first match. A goalkeeper who consistently concedes fewer goals than the PSxG (post-shot xG) of the shots they face is outperforming the average, and that gap is one of the ways analysts identify elite keepers — precisely because the model doesn’t “see” them.
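The keeper calculation itself is simple enough to sketch. The PSxG values below are invented for illustration; the logic (PSxG faced minus goals conceded) follows the text:

```python
def shot_stopping_value(psxg_faced, goals_conceded):
    """Goals prevented relative to the average keeper: total post-shot
    xG of on-target shots faced, minus goals actually conceded.
    Positive means above-average shot-stopping."""
    return sum(psxg_faced) - goals_conceded

# Toy season: on-target shots faced, each with its post-shot xG...
faced = [0.76, 0.31, 0.12, 0.55, 0.48, 0.22, 0.90, 0.35]
# ...of which only 2 went in.
prevented = shot_stopping_value(faced, 2)
# Total PSxG faced is 3.69, so roughly +1.69 goals prevented
# compared with the model's "average" goalkeeper.
```

A single positive season can be variance; several in a row is evidence of an elite shot-stopper.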
It ignores intent. If a striker in a great position deliberately squares the ball for a teammate rather than shooting, the resulting chance is credited entirely to the receiver. The creator’s decision — and its quality — is invisible to xG.
Rebounds and sequences inflate the numbers. xG models typically treat every shot as independent. In a rapid sequence where a player hits the post (0.65 xG) and a teammate hits the rebound (0.80 xG), the model adds both to give 1.45 xG for a passage of play that can only produce one goal. This is a known structural problem, particularly in high-tempo phases, and means cumulative match xG totals can be technically impossible to realise.
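The independence problem is easy to demonstrate with arithmetic, using the post-and-rebound numbers above:

```python
def naive_total_xg(shots):
    """How match xG is usually totalled: every shot treated as independent."""
    return sum(shots)

def prob_at_least_one_goal(shots):
    """Probability the sequence produces a goal, given that at most one
    can count: 1 minus the chance that every shot in the sequence misses."""
    p_all_miss = 1.0
    for p in shots:
        p_all_miss *= 1 - p
    return 1 - p_all_miss

sequence = [0.65, 0.80]  # shot off the post, then the rebound
naive = naive_total_xg(sequence)              # 1.45 "expected goals"
realistic = prob_at_least_one_goal(sequence)  # 1 - 0.35 * 0.20 = 0.93
```

The passage of play is worth at most one goal, so 0.93 is the honest ceiling; the naive 1.45 is the structurally inflated figure that ends up in match totals.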
Different providers give different numbers — sometimes significantly. Opta, StatsBomb, and Understat all use their own models with different variables and data inputs. For a high-scoring team in a busy match, the difference between providers can reach 3 or 4 xG. This is not an error — it reflects genuine methodological divergence. xG is a model, not a measurement. Citing a specific number as if it were a fixed fact misrepresents how the metric works.
One match is almost never enough. This is perhaps the most important caveat. A single match gives you a sample of perhaps 10–20 shots. The statistical noise at this scale is enormous. xG becomes genuinely informative across 6–10 matches, and truly reliable over a full season. Using a single game’s xG to make sweeping claims about a team’s quality is one of the most common misuses of the metric in football media.
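The scale of single-match noise can be shown with a quick Monte Carlo simulation. The shot values below are invented for illustration:

```python
import random

def simulate_goals(shot_xgs, n_sims=10_000, seed=42):
    """Monte Carlo: sample each shot as an independent Bernoulli trial
    and return the distribution of total goals across simulations."""
    rng = random.Random(seed)
    return [sum(rng.random() < p for p in shot_xgs) for _ in range(n_sims)]

# A dominant performance: 15 shots worth 2.2 xG in total
shots = [0.45, 0.30, 0.25, 0.20, 0.15] + [0.085] * 10
totals = simulate_goals(shots)
blanked = sum(t == 0 for t in totals) / len(totals)
# Even at 2.2 xG, this team scores zero in roughly 8% of simulations,
# and the spread of outcomes is wide: one match proves very little.
```

Run the same simulation over a 38-match season's worth of shots and the noise shrinks dramatically, which is exactly why xG becomes trustworthy at that scale.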
6. Two Real Examples — Where xG Was Right, and Where It Wasn’t
Two matches from this blog illustrate xG’s power and its limits simultaneously.
In PSG 3-1 Toulouse, PSG generated an xG of 1.79 against Toulouse’s 0.16 — an 11-to-1 ratio. The model predicted a comfortable home win. It was right about the result. What it could not account for was Matvei Safonov racing off his line to claim a corner, missing the ball entirely, and gifting Toulouse a goal their 0.16 xG had no framework to produce legitimately. A goalkeeper error handed them a goal the model had correctly assessed as almost impossible. xG got the story right. It missed the footnote that became the match’s main talking point.
In Switzerland 3-4 Germany, Germany won the xG battle 2.50 to 0.57 — yet trailed twice. Switzerland scored three goals from seven shots, converting at 43%. Germany needed Florian Wirtz to score twice from low-probability positions — one from a difficult angle, one curled into the top corner — to complete the comeback. The model said Germany would win comfortably. It was right about who controlled the game and wrong about almost everything in between.
Both matches are covered in full in the Where the Stats Lie series.
7. The Bottom Line
xG is a model, not a measurement. It is one of the most useful lenses in modern football analysis — not because it tells you who played better, but because it tells you how dangerous what they created actually was.
Use it to ask better questions, not to settle arguments.
This article is part of Ultrivia’s data-driven football analysis, combining statistical models with tactical understanding of the game.
Written by Wandrille P — football analyst specializing in data-driven match analysis and creator of Ultrivia.