Three Things to Check Before You Trust Your GPS Report
Monday morning. You open the weekend match report. Your centre-back topped the team’s running distance. Good sign — or evidence the defensive shape collapsed? Before you can answer, you need to understand what happened to that number before it reached your screen.
Football performance data passes through three filters: measurement error in collection, arbitrariness in threshold setting, and absence of context in interpretation. Think of it as a water purification system. If the first filter is contaminated, it does not matter how good the remaining ones are — what comes out the other end is compromised. A survey of 41 elite clubs found 56 different training load variables in use, yet the gap between expected and actual monitoring effectiveness was striking (Akenhead & Nassis, 2016). Numbers everywhere, useful decisions nowhere. The problem is not data volume. It is pipeline integrity.
How Accurate Is Tracking Technology?
Every number in a match report begins with a position estimate. Global Navigation Satellite Systems (GNSS), Local Positioning Systems (LPS), and Optical Tracking Systems (OTS) each carry a distinct error structure. LPS showed the strongest overall validity and reliability, with static position error of 23 cm compared to 96 cm for GNSS and 56 cm for OTS (Pino-Ortega et al., 2021).
Here is the paradox. Total Distance (TD) is relatively accurate across all systems — one OTS validation reported just 0.3% deviation (Linke et al., 2020). But the variables practitioners actually care about — High-Speed Running (HSR), acceleration, deceleration — are precisely where error escalates. Segment-level distance deviations reached 19%. Imagine a thermometer that reads body temperature accurately on the whole but whose scale wobbles exactly in the fever range. The variables practitioners depend on most are the least stable (Buchheit & Simpson, 2017).
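To get a feel for what a 19% deviation means in metres, a quick back-of-envelope calculation helps. The 600 m HSR total below is an assumed example value, not a figure from the cited studies:

```python
# Rough arithmetic: the band of plausible true values around a reported
# figure, given a relative error. The 600 m HSR total is illustrative.

def error_band(value, relative_error):
    """Return the (low, high) range implied by a symmetric relative error."""
    return (value * (1 - relative_error), value * (1 + relative_error))

lo, hi = error_band(600, 0.19)  # 19% segment-level deviation
print(f"A reported 600 m of HSR could plausibly be {lo:.0f}-{hi:.0f} m")
```

A band that wide swallows most week-to-week fluctuations a practitioner might be tempted to act on.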
So can these numbers be compared across systems?
Same Number, Different Meaning
Training tracked by GNSS, matches tracked by OTS — a common setup. Placing the two sets of numbers side by side is also common. But the comparison is more dangerous than it looks.
Across 38 official matches in Spain’s second division, distance variables ran higher on OTS while speed variables ran higher on GNSS (Pons et al., 2019). The bias flips direction depending on which variable you look at. One system says the player covered more ground; the other says the player moved faster. Same player, same match, different story.
It gets worse. Even within the same GNSS brand, inter-unit variability can reach 50% (Buchheit & Simpson, 2017). The recommendation to assign each player a dedicated unit is well known but easily broken by battery swaps and equipment rotation. Switching tracking providers mid-season, or pooling training data (GNSS) with match data (OTS) without a validated conversion equation, is like adding measurements taken with different rulers. The numbers combine, but the meaning has already drifted apart.
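A club-specific conversion equation is usually derived from sessions where the same players are tracked by both systems at once. The sketch below fits a simple least-squares line to hypothetical paired HSR values — the numbers are invented for illustration, not from any published dataset, and a real workflow would fit one equation per variable:

```python
# Sketch: deriving a club-specific conversion between tracking systems
# from dual-instrumented sessions (players wear GNSS while the stadium
# OTS records the same drills). All paired values below are invented.

def fit_linear_conversion(gnss, ots):
    """Ordinary least squares fit: ots ~ a * gnss + b, one variable at a time."""
    n = len(gnss)
    mean_g = sum(gnss) / n
    mean_o = sum(ots) / n
    cov = sum((g - mean_g) * (o - mean_o) for g, o in zip(gnss, ots))
    var = sum((g - mean_g) ** 2 for g in gnss)
    a = cov / var
    b = mean_o - a * mean_g
    return a, b

# Hypothetical paired HSR distances (m) from the same sessions.
gnss_hsr = [612, 540, 705, 488, 660, 590]
ots_hsr  = [648, 575, 742, 520, 700, 628]

a, b = fit_linear_conversion(gnss_hsr, ots_hsr)

def to_ots_scale(gnss_value):
    """Apply the fitted conversion to a GNSS value before pooling with OTS data."""
    return a * gnss_value + b

print(f"ots = {a:.3f} * gnss + {b:.1f}")
```

Only after GNSS values pass through such an equation — validated per variable and per provider — can training and match numbers sit in the same table.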
Even if measurement were perfect, though, another problem waits. Who decides what counts as “fast”?
Whose High Intensity?
The absolute sprint threshold of 25.2 km/h makes life simple. One number, universal comparison. But simplicity has a cost.
Absolute thresholds are one-size-fits-all clothing. Some players are squeezed; others swim in fabric. That 25.2 km/h line corresponded to anywhere between 70.4% and 81.8% of individual Maximal Sprint Speed (MSS) (Silva et al., 2024). For a slow centre-back, it is near-maximal effort. For a fast full-back, comfortable cruising. Under absolute thresholds, full-backs logged 5.8 times more sprint distance than centre-backs. Recalculate at 80% of MSS and the ratio drops to 3.1. Absolute thresholds underestimate load for slower players and overestimate it for faster ones.
Individualised thresholds — anchored to MSS, Maximal Aerobic Speed (MAS), or Anaerobic Speed Reserve (ASR) — address this by reflecting each player’s physiological ceiling. But they introduce a new problem: standardisation. A scoping review of 36 studies found terminology and criteria varied from study to study, and a third of studies had fitness assessments sitting more than four weeks from the data collection window (Clemente et al., 2023). A pre-season anchor may not reflect mid-season capacity.
When both methods were applied to the same training dataset, rank order held but absolute values diverged dramatically — bias ranged from -69% to +70% (Rago et al., 2020). The two approaches are rank-compatible but value-incompatible. The practical answer is not to choose one over the other. Use absolute thresholds for team-level benchmarking, individualised thresholds for player-level load management, and never confuse the two.
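The dual-threshold logic is easy to operationalise. The sketch below applies both an absolute 25.2 km/h line and an individualized 80%-of-MSS line to the same short speed traces; the players, MSS values, and 10 Hz excerpts are all invented:

```python
# Sketch: the same speed trace scored under absolute vs individualized
# sprint thresholds. Players, MSS values, and traces are illustrative.

SAMPLE_DT = 0.1          # seconds per sample (10 Hz trace)
ABSOLUTE_SPRINT = 25.2   # km/h, team-level benchmark threshold

def sprint_distance(speeds_kmh, threshold_kmh):
    """Distance (m) accumulated while speed is at or above the threshold."""
    return sum(v / 3.6 * SAMPLE_DT for v in speeds_kmh if v >= threshold_kmh)

players = {
    # name: (Maximal Sprint Speed km/h, speed trace km/h)
    "centre_back": (30.0, [20, 23, 24.5, 25.5, 26, 24, 22]),
    "full_back":   (35.0, [24, 26, 28, 30, 31, 27, 25]),
}

for name, (mss, trace) in players.items():
    abs_d = sprint_distance(trace, ABSOLUTE_SPRINT)
    rel_d = sprint_distance(trace, 0.8 * mss)  # individualized: 80% of MSS
    print(f"{name}: absolute {abs_d:.2f} m, individualized {rel_d:.2f} m")
```

Even on these toy traces the pattern from the studies appears: the absolute threshold puts the full-back far ahead of the centre-back, while the individualized one pulls the two much closer together.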
Measurement calibrated. Thresholds tailored. Does higher running distance now mean better performance?
Does More Running Mean Better Performance?
This is where the most common misreading begins. Match running output is not a direct expression of fitness. It is a product of tactical role, match state, opposition quality, and environment.
In the English Premier League (EPL), top-ranked teams covered less high-intensity distance than lower-ranked teams (Paul et al., 2015). The explanation is tactical: dominant teams control possession, reducing recovery runs and defensive sprints. Attacking formations raised in-possession high-intensity running by 30-40% compared to defensive setups, while total distance stayed flat. Temperatures above 20 degrees Celsius reduced HSR by 8.5% (Trewin et al., 2017). The number on the screen can be identical for completely different reasons.
The most direct challenge to running as a Key Performance Indicator (KPI) comes from three seasons of Italian professional club data. The highest win probability occurred when players had high fitness and freshness but covered relatively low total distance. Draws — not wins — produced the highest running volumes (Mandorino et al., 2025). This is a single-club dataset, so association rather than causation. But the direction is clear. Running distance is an outcome of the game, not a driver of the result.
So how do you extract real meaning from the data?
Making Data Meaningful
Traditional Time-Motion Analysis (TMA) answers “how far did the player run?” The Integrated Physical-Tactical Approach answers “why did the player run that distance?” (Bradley & Ade, 2018). Consider: a centre-back and a centre-forward both log roughly 600 m of high-intensity distance. The number is the same. Code each effort by tactical purpose and the picture splits. The centre-forward concentrated high-intensity running on pressing and breaking into the box. The centre-back spent it on covering. Same distance, entirely different job.
Refining positional categories sharpens the lens further. Lump full-backs and wing-backs into “wide defenders” and you blur the signal. Full-backs covered 34% less High-Intensity Running (HIR) distance than the broad category average, while wing-backs covered 15% more (Ju et al., 2023). A word of caution, though: match-to-match variability for contextualised HIR was high, with a Coefficient of Variation (CV) of 62-67%. Judging a player from a single match is like grading a student from a single test.
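The bookkeeping behind purpose-coded running is straightforward once efforts are labelled. The sketch below tallies high-intensity distance per tactical purpose per match and computes a Coefficient of Variation — the efforts and labels are invented, and real coding requires synchronized video and event data:

```python
# Sketch: aggregating purpose-coded high-intensity efforts and checking
# match-to-match variability. Efforts and labels are invented; real
# tactical coding is done against synchronized video/event data.

from statistics import mean, stdev

# (match_id, tactical_purpose, distance_m) per high-intensity effort
efforts = [
    (1, "pressing", 28), (1, "covering", 35), (1, "break_into_box", 22),
    (2, "pressing", 60), (2, "covering", 30),
    (3, "pressing", 15), (3, "covering", 33), (3, "break_into_box", 40),
]

def per_match_totals(efforts, purpose):
    """Total HIR distance per match for one tactical purpose."""
    totals = {}
    for match, p, d in efforts:
        if p == purpose:
            totals[match] = totals.get(match, 0) + d
    return list(totals.values())

for purpose in ("pressing", "covering"):
    totals = per_match_totals(efforts, purpose)
    cv = stdev(totals) / mean(totals) * 100  # Coefficient of Variation
    print(f"{purpose}: per-match HIR {totals} m, CV {cv:.0f}%")
```

A CV of 60%-plus for a category, as in the toy "pressing" numbers here, is exactly why a single match is too noisy a sample to judge a player on.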
External Training Load alone leaves the puzzle incomplete. The same external stimulus produces different internal responses depending on fitness, fatigue, and psychological state (Impellizzeri et al., 2019). Judging a player’s condition from GPS numbers alone is like diagnosing an illness from a thermometer reading — technically relevant, practically insufficient. Tracking the ratio of external to Internal Training Load, building drill databases, and maintaining staff communication protocols complete the pipeline (Pillitteri et al., 2024).
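One simple way to read the two load types together is the ratio of an external measure to session-RPE load (RPE multiplied by duration). The values below are invented, and such a ratio is only meaningful tracked longitudinally within one player, not compared across players:

```python
# Sketch: an external-to-internal load ratio. External load here is HSR
# distance; internal load is session RPE x duration. Values are invented.

def srpe_load(rpe, duration_min):
    """Session RPE load in arbitrary units (AU)."""
    return rpe * duration_min

def efficiency_index(external_m, rpe, duration_min):
    """External output per unit of internal load. A falling trend across
    comparable sessions can flag accumulating fatigue."""
    return external_m / srpe_load(rpe, duration_min)

sessions = [
    {"day": "Tue", "hsr_m": 420, "rpe": 6, "mins": 75},
    {"day": "Wed", "hsr_m": 410, "rpe": 8, "mins": 75},  # same output, felt harder
]

for s in sessions:
    idx = efficiency_index(s["hsr_m"], s["rpe"], s["mins"])
    print(f'{s["day"]}: {idx:.2f} m/AU')
```

In the toy example, external output is nearly identical across the two days, but the ratio drops because the second session felt much harder — the kind of divergence a GPS-only view would miss entirely.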
Practical Implications
- Calibrate across systems. When switching tracking providers or comparing training (GNSS) and match (OTS) data, develop club-specific conversion equations. Assign each player a dedicated unit.
- Respect the error hierarchy. HSR, acceleration, and deceleration carry the widest confidence intervals. Require a change to exceed both measurement error and the Smallest Worthwhile Change (SWC) before acting on small fluctuations.
- Use both threshold types deliberately. Absolute thresholds for team-level benchmarking, individualised thresholds for player-level load management. Reassess fitness anchors regularly throughout the season.
- Stop treating running distance as a KPI. Running is an outcome of tactics and match state, not a cause of results. Readiness — fitness and freshness — is the precondition.
- Integrate physical and tactical data. Code high-intensity efforts by tactical purpose, refine positional categories, and read external and internal load together.
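The error-hierarchy rule in the bullets above can be sketched as a simple decision gate. SWC is commonly taken as 0.2 times the spread of a player's past values; the history and the assumed error band below are illustrative:

```python
# Sketch of an SWC-plus-noise decision gate. A change is flagged only if
# it clears both the SWC (0.2 x SD of past values, one common convention)
# and an assumed measurement-error band. All numbers are illustrative.

from statistics import mean, stdev

def swc(history):
    """Smallest worthwhile change: 0.2 x SD of the player's past values."""
    return 0.2 * stdev(history)

def flag_change(history, new_value, measurement_error):
    """Act only when the shift exceeds both SWC and measurement noise."""
    delta = abs(new_value - mean(history))
    return delta > max(swc(history), measurement_error)

hsr_history = [540, 610, 575, 590, 560]   # past matches, HSR metres
# Assume +/-60 m of HSR noise (error is widest for high-speed variables).
print(flag_change(hsr_history, 620, measurement_error=60))  # small rise: ignore
print(flag_change(hsr_history, 750, measurement_error=60))  # large rise: investigate
```

For stable, low-error variables the SWC dominates the gate; for HSR and acceleration counts, the measurement-error term usually does — which is the point of the bullet.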
At the end of all this work, one question remains. Is your data pipeline turning numbers into decisions — or just turning numbers into more numbers?
References
- Akenhead, R., & Nassis, G. P. (2016). Training load and player monitoring in high-level football: Current practice and perceptions. International Journal of Sports Physiology and Performance, 11(5), 587–593. https://doi.org/10.1123/ijspp.2015-0331
- Bradley, P. S., & Ade, J. D. (2018). Are current physical match performance metrics in elite soccer fit for purpose or is the adoption of an integrated approach needed? International Journal of Sports Physiology and Performance, 13(5), 656–664. https://doi.org/10.1123/ijspp.2017-0433
- Buchheit, M., & Simpson, B. M. (2017). Player-tracking technology: Half-full or half-empty glass? International Journal of Sports Physiology and Performance, 12(s2), S2-35–S2-41. https://doi.org/10.1123/ijspp.2016-0499
- Clemente, F., Ramirez-Campillo, R., Beato, M., Moran, J., Kawczynski, A., Makar, P., Sarmento, H., & Afonso, J. (2023). Arbitrary absolute vs. individualized running speed thresholds in team sports: A scoping review with evidence gap map. Biology of Sport, 40(3), 919–943. https://doi.org/10.5114/biolsport.2023.122480
- Hoppe, M. W., Baumgart, C., Polglaze, T., & Freiwald, J. (2018). Validity and reliability of GPS and LPS for measuring distances covered and sprint mechanical properties in team sports. PLOS ONE, 13(2), e0192708. https://doi.org/10.1371/journal.pone.0192708
- Impellizzeri, F. M., Marcora, S. M., & Coutts, A. J. (2019). Internal and external training load: 15 years on. International Journal of Sports Physiology and Performance, 14(2), 270–273. https://doi.org/10.1123/ijspp.2018-0935
- Ju, W., Lewis, C., Evans, M., Laws, A., & Bradley, P. (2022). The validity and reliability of an integrated approach for quantifying match physical-tactical performance. Biology of Sport, 39(2), 253–261. https://doi.org/10.5114/biolsport.2022.104919
- Ju, W., Doran, D., Hawkins, R., Evans, M., Laws, A., & Bradley, P. (2023). Contextualised high-intensity running profiles of elite football players with reference to general and specialised tactical roles. Biology of Sport, 40(1), 291–301. https://doi.org/10.5114/biolsport.2023.116003
- Linke, D., Link, D., & Lames, M. (2020). Football-specific validity of TRACAB’s optical video tracking systems. PLOS ONE, 15(3), e0230179. https://doi.org/10.1371/journal.pone.0230179
- Mandorino, M., Lacome, M., Verheijen, R., & Buchheit, M. (2025). Time to drop running as a KPI in elite football: Football fitness and freshness as match-day preconditions. Sport Performance and Science Reports.
- Paul, D. J., Bradley, P. S., & Nassis, G. P. (2015). Factors affecting match running performance of elite soccer players: Shedding some light on the complexity. International Journal of Sports Physiology and Performance, 10(4), 516–519. https://doi.org/10.1123/ijspp.2015-0029
- Pillitteri, G., Clemente, F. M., Sarmento, H., Figueiredo, A., Rossi, A., Bongiovanni, T., Puleo, G., Petrucci, M., Foster, C., Battaglia, G., & Bianco, A. (2024). Translating player monitoring into training prescriptions: Real world soccer scenario and practical proposals. International Journal of Sports Science & Coaching, 20(1), 388–406. https://doi.org/10.1177/17479541241289080
- Pimenta, R., Antunes, H., Maia, F., Ribeiro, J., & Nakamura, F. Y. (2025). Sprint and high-speed running in soccer: Should we use absolute or normalized thresholds? Journal of Human Kinetics. https://doi.org/10.5114/jhk/209540
- Pino-Ortega, J., Oliva-Lozano, J. M., Gantois, P., Nakamura, F. Y., & Rico-González, M. (2021). Comparison of the validity and reliability of local positioning systems against other tracking technologies in team sport: A systematic review. Proceedings of the Institution of Mechanical Engineers, Part P: Journal of Sports Engineering and Technology, 236(2), 73–82. https://doi.org/10.1177/1754337120988236
- Pons, E., García-Calvo, T., Resta, R., Blanco, H., López Del Campo, R., Díaz García, J., & Pulido, J. J. (2019). A comparison of a GPS device and a multi-camera video technology during official soccer matches: Agreement between systems. PLOS ONE, 14(8), e0220729. https://doi.org/10.1371/journal.pone.0220729
- Rago, V., Brito, J., Figueiredo, P., Krustrup, P., & Rebelo, A. (2020). Application of individualized speed zones to quantify external training load in professional soccer. Journal of Human Kinetics, 72(1), 279–289. https://doi.org/10.2478/hukin-2019-0113
- Silva, H., Nakamura, F. Y., Loturco, I., Ribeiro, J., & Marcelino, R. (2024). Analyzing soccer match sprint distances: A comparison of GPS-based absolute and relative thresholds. Biology of Sport, 41(3), 223–230. https://doi.org/10.5114/biolsport.2024.133663
- Trewin, J., Meylan, C., Varley, M. C., & Cronin, J. (2017). The influence of situational and environmental factors on match-running in soccer: A systematic review. Science and Medicine in Football, 1(2), 183–194. https://doi.org/10.1080/24733938.2017.1329589