Episode 10: Athlete Tracking – How Much Is Too Much?
For modern-day professional athletes, the idea of your every waking (and sleeping) moment being tracked and measured isn't so far-fetched. On today's episode, we dig into when, why and how athlete tracking can be done effectively, and when it's taken too far.
This week's expert guests are Stuart Morgan, the Lead of Artificial Intelligence, Machine Learning and Data Innovation at the Australian Institute of Sport; and Steve McCaig, the Athlete Health Consultant at the English Institute of Sport.
Together, Sam, Stuart and Steve dig deeper into the rapid evolution of athlete tracking, where we get things right, and where we get things oh so wrong.
Full Episode Transcript
10. Athlete Tracking – How Much Is Too Much?
Intro
[00:00:00] Sam Robertson: In his classic novel, 1984, George Orwell dreamt up the all-seeing, ever-present, omnipotent entity, Big Brother. This concept of tracking someone's every waking - and sleeping - moment was once a fictional warning of a dystopian future. Now, I'm far from the first to comment on just how accurate some of the book's predictions turned out to be, but one of its eerier parallels is just how comfortable and accepting people can become of certain conditions when everyone is experiencing them, or when there is limited intervention from outsiders.
[00:00:34] For some, the notion of being tracked every second of every day is not such a far-fetched reality. In modern day sport, many athletes are being monitored in some way, shape or form from the second they step into the workplace, to the moment they leave again, and sometimes even beyond. In the space of merely a few decades, we've started measuring the athlete for just about everything we can think of. Whether it's self-reported monitoring of their health and wellness, detailed physiological assessments, or prolific camera tracking of their every move - a condition we're all too familiar with in broader society too - no stone is left unturned.
[00:01:10] Undoubtedly, these developments have been transformational to the industry, providing new insights, refining current practice, and improving overall athlete health. But how much information is really needed in order to understand the athlete? Are the obvious privacy and ethical considerations being addressed? And how much say does the athlete really have in what they sign up to?
[00:01:33] I'm Sam Robertson and this is One Track Mind.
Interview One - Stuart Morgan
[00:01:43] Hello and welcome to One Track Mind, a podcast about the real issues, forces and innovations shaping the future of sport. On today's episode, we're asking: athlete tracking - how much is too much?
[00:01:55] My first guest is associate professor Stuart Morgan. Stuart is lead of the Australian Institute of Sport's Machine Learning, Artificial Intelligence and Data Innovation group. He completed his PhD in sensory neuroscience at Swinburne University in 1999, and has worked as a sports analyst at the Victorian Institute of Sport and the AIS, including with the successful Australian men's hockey team at the 2008 Olympic Games.
[00:02:20] Stuart is also involved in research, having co-authored a competition-winning paper at the MIT Sloan Sports Analytics Conference, and his current work focuses on research and development in computer vision for sports, including using data mining and machine learning to gain competition and training insights.
[00:02:38] Stuart, thanks for joining us.
[00:02:39] Stuart Morgan: Very nice to be with you.
[00:02:41] Sam Robertson: Now, before we get into the detail about how computer vision is being used around the world and why it's so fantastic, it's understandable that some of our listeners may not be completely familiar with what it is and some of its current applications. The first comment I'd make is that it's moved incredibly quickly. I remember, in the football codes, which I'm most heavily involved with, going from being an utter skeptic maybe just a handful of years ago about how good it could be for positional tracking, the position of a player on the field, to now being almost completely won over that it's maybe a better option than other technologies we've got out there.
[00:03:18] So, as someone who's been involved with it for a very long time now, not saying you're old, Stuart, but you've been around for a while, what are some of the main applications that we're seeing, and what indeed is computer vision?
[00:03:29] Stuart Morgan: Well, there's a couple of questions there, isn't there? Firstly, computer vision really, at its essence, is the task of image understanding, and we want to use computers to try to help us understand something about what's happening, in this case in the sporting environment. So what are athletes doing? How fast are they running? What actions are they performing? What kind of techniques are they executing? And so, at its essence, computer vision is simply about understanding imagery, and video is really just a stream of images. And what we ask a computer to do is, at each frame, understand something about what's going on.
[00:04:03] At a high-performance level, though, the problem becomes more sophisticated. We're not just asking questions about, is there an athlete in the scene? Are they holding a tennis racket? We're asking more sophisticated questions about what are they doing? And once we arrive at the high-performance level, we're really asking our computer vision systems to give us some kind of actionable intelligence, actionable insight about what's going on. The principal objective, of course, of using computer vision as opposed to, as you alluded to in your question, sensors - maybe GPS sensors, or accelerometers, or other kinds of devices that you might attach to the athlete - is exactly that: that you have to attach a device to the athlete. So the inspiration behind investing in computer vision in sport has been its non-invasive nature.
[00:04:50] And it comes with a whole range of strengths and weaknesses, of course, which we can dive into, but at its essence, what we're trying to do with computer vision in sport is to understand something about what athletes are doing. Maybe it's in the training environment, the daily training environment, where we might be interested in monitoring their workload. It could be in the competition environment, where we're interested in monitoring the actions and behaviors of our opponents and understanding in a game analysis sense or a sports analytics sense, deriving information that we can use for competition insight.
[00:05:20] But the principal difference between computer vision and sensors, and the reason we use one and not the other, is that it is noninvasive, and so it means that we can derive information about our opponents in competition. And we're not generally going to be able to do that with sensors, unless they're willing to share their GPS data with us or put sensors on for us and help us out in that way. But generally speaking, we have an objective where we'd like to know more about our opponents than they know about themselves, and so that necessarily means doing things that are noninvasive, which is where the computer vision comes in.
[00:05:53] But there is a cost and the cost is that computer vision is more difficult to do than simply putting on a sensor, and so it comes with a whole raft of additional computational processes we need to go through to make sure that the data we're getting is clean, it's accurate, it's valid, and that we can work with it, and sensors are often a lot easier to work with on that front. So it comes with...there's a trade-off, but essentially what we want to do with computer vision is learn something about our opposition without them knowing the least bit about what we're doing.
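To make the "video is just a stream of images" idea concrete, here's a minimal Python sketch using OpenCV's stock pedestrian detector. The video path is a placeholder, and a real system like the one Stuart describes would use far more capable models; this just shows the shape of the frame-by-frame pipeline.

```python
import cv2

# Treat video as a stream of images and ask a simple question of
# each frame: "where are the people?" The file path is a placeholder.
cap = cv2.VideoCapture("training_session.mp4")

# OpenCV's built-in HOG pedestrian detector: a classical,
# pre-deep-learning model, but enough to show the pipeline's shape.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of stream (or missing file)
    # Detect person-shaped regions in this single frame.
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    print(f"frame {frame_idx}: {len(boxes)} person-shaped detections")
    frame_idx += 1
cap.release()
```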
[00:06:24] Sam Robertson: You mentioned some of the challenges and some of the things that have progressed over time, and I do want to dig into those, you're right, but before we do, I'm interested in just setting the scene: just how pervasive has this become across sport? When I speak to people who aren't familiar with computer vision in sport, it can seem a bit like a black box to them. That's not to say that other forms of sensors aren't either, but I'm just wondering, even though it's a lot more feasible now than it has been, is it really that pervasive across sports, particularly in some of these individual sports? I think the most publicized example, I would imagine, is sports like basketball, because they probably provide a very user-friendly environment for computer vision, don't they? It's an indoor controlled environment, fairly even dimensions, a low number of people on the court. But just how pervasive is this across sports? Do you have a take on that?
[00:07:10] Stuart Morgan: Yeah, it's becoming increasingly pervasive for two principal reasons. Firstly, the complexity of computer vision algorithms has enabled us in recent years, particularly with the advent of artificial intelligence, to solve much more difficult problems. So it used to be that, for instance, Hawk-Eye is a computer-vision-based system, and for a system like Hawk-Eye you're dealing with a very small number of players and a very homogenous environment - a nice blue court or a nice green court - tracking the ball is relatively easy, the lines don't move, and so it's a very constrained environment to do computer vision in, and quite a tractable problem space.
[00:07:51] It gets progressively harder when you start moving into team sports, where you've got lots of players that might cluster together and they become hidden by one another, and becomes even more difficult when you start trying to create mobile and ad hoc systems, maybe even handheld systems, that can derive something meaningful. But artificial intelligence has become a pipeline, like an algorithmic pipeline, or a technical solution to solving a whole range of things in computer vision that were just unthinkable even five or six years ago. So that means a couple of things. That firstly means that the sophistication in the systems is increasing and the capacity to deploy these systems in really complicated world spaces, in the wild, is increasing, but we're also seeing changes in the technology that's available to deploy these as well.
[00:08:41] If I can indulge you, or indulge myself, with a small story... we're going back, I mean, you did allude to my age earlier, and so going back nearly 10 years...
[00:08:49] Sam Robertson: I didn't mention it though! (Both laugh)
[00:08:52] Stuart Morgan: Going back nearly 10 years now, one of our first significant deployments of a player tracking system, which we call Vision Kit, was deployed in Dublin at an international hockey competition. And we did that in partnership with Disney Research, thankfully, because it was an expensive proposition, and the system consisted of a shipping container that was dropped pitch-side, behind the grandstand, with roughly a hundred thousand dollars' worth of computing equipment on board, cameras mounted up in different places around the stadium and on the light posts, and it was a substantial investment in resources. And that was the scale of the computing resources that we needed at the time in order to track the players as they ran around the hockey pitch. And that wasn't even an automated system. That was a semi-automated system that required a lot of human input at the end of the pipeline.
[00:09:42] But we jump forward a few years and we have roughly the same capability now embedded on iOS devices. So we can carry around an iPad Pro and do many of the same things with increasing sophistication. These iPad Pros now have these neural engines on board, which allow a significant amount of computer vision and artificial intelligence to happen in real time on a handheld device. And that transition's happened in under 10 years.
[00:10:08] So I think what we're going to see is this evolution of pervasiveness across sport really grow as the computational resources available to us on mobile devices increase. You may be familiar with, or many of your listeners may be familiar with, HomeCourt, which is an iOS app that runs entirely on an iPhone. You can take it to your local basketball court with your daughter or your son and film their training session or film their game, and you can map out on a court space where their shots have been taken from, and the shooting percentage, and a whole range of really cool features. And that lives now on an iOS device that you carry around in your pocket. And that's only 10 years after a similar kind of operation would have been tens if not a hundred thousand dollars' worth of computing equipment and a shipping container and a whole lot of infrastructure.
[00:10:53] So, your question is about the pervasiveness, and I think as the technology evolves and the sophistication of the algorithms evolve, it's going to be increasingly the domain of amateur and participation sport. Whereas previously Hawk-Eye, or the basketball examples that you talked about, were really the domain of the big professional sports that could afford that kind of infrastructure.
[00:11:15] Sam Robertson: I've always kind of had in my head that GPS and other athlete tracking technologies and computer vision are converging on the same space and being used for the same purpose, but it occurred to me just listening to you then that in many ways they're actually separating out, aren't they? I mean, potentially there are things that computer vision can now be used for which, you know, we always knew maybe it could be eventually, and that is becoming a reality.
[00:11:37] And even in the way that they're being marketed they're separating out. For example, GPS systems now are being marketed at cheaper rates than ever before - they're still 10-hertz tracking systems, but extremely accessible - and in a sense probably starting to be used for different things to what a computer vision system would be used for. Even though, yes, they're still going to be used for tracking the position of an athlete, or maybe technique, and certainly velocities, these types of characteristics, maybe they're separating out a little bit. And that really ties into something I wanted to talk to you about, which is not so much what's been constraining the development of computer vision, which doesn't sound like it's been too much given the growth we've seen in 10 years, but what does the future look like in terms of the challenges to be overcome?
[00:12:19] Feel free to rip apart my metaphorical buckets here, but I guess I always view this problem in a couple of different buckets: the hardware, in terms of what's available to us - and I'll separate the computer from the camera, so to speak, for the example we're about to use - and then the algorithms themselves. And to me, and I might be wrong here and I'm interested in your take, it doesn't seem like the algorithms are the limiting factor right now; it'll literally be the computation. Is that correct? Are the cameras moving quicker and the algorithms up to speed? Is it more the computation that's going to need to continue to keep up, so to speak?
[00:12:54] Stuart Morgan: Yeah, that's a really interesting question. I mean, ironically, we actually use sensors in many cases as part of the training environment for computer vision. We start from the assumption that a sensor is as close as we can get to a gold standard in estimating some kind of physiological parameter - maybe it's ground reaction forces, or maybe it's running speed, although I'm always hesitant to say that in relation to GPS specifically, because there are other more accurate methods, of course. We often use sensors as the principal source of data that we then use algorithmically to train an AI to predict what the sensor output would be, if an athlete was wearing a sensor, in an environment where the athlete's not wearing a sensor.
[00:13:41] So that relates, of course, to what we talked about before with opposition analysis. If we want to know something about what the opposition is doing and we don't have sensors on them, then we want to predict what their sensors would be telling us if they were wearing them. And there are other cases, of course, where sensors are just not even safe to wear. We do a lot of work with diving, for instance, and we're interested in some of the forces that act on the body. Divers can train for three hours in a high-performance session and maybe do as many as 200 dives, and if they're diving from the 10-meter platform, they're hitting the water at 80 kilometers an hour. So the physical forces that run through the body are immense, and you can't put sensors on a diver in that scenario, because it's just too dangerous to have things strapped to bodies when you hit the water at that speed. So in those kinds of instances, we're trying to predict what a sensor would tell us if we were able to wear one, and so we often use sensor-based data to train AI models to predict that.
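A rough sketch of the approach Stuart describes might look like the following: learn a mapping from vision-derived features (say, pose keypoints) to the signal a worn sensor recorded, then apply that mapping where no sensor can be worn. Everything here, including the synthetic data, is an illustrative stand-in rather than the AIS pipeline.

```python
# Sketch: train a model to predict what a sensor *would* say, from
# features we can extract without one (e.g., pose keypoints estimated
# by computer vision). All data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row is a flattened set of 17 (x, y) pose keypoints for
# one video frame, paired with the accelerometer magnitude a worn
# sensor recorded at the same instant during training sessions.
X = rng.normal(size=(5000, 34))                               # vision features
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=5000)   # "sensor" signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # learn the pose -> sensor mapping

# For an opponent (or a diver) wearing no sensor, predict what the
# sensor would have read from vision features alone.
print("R^2 on held-out frames:", model.score(X_test, y_test))
```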
[00:14:36] Jacqueline Alderson is doing some interesting work at UWA, again using AI and computer vision to try to predict ground reaction forces, where they have large scale resources in terms of Vicon and force plate data. So a really highly sensored environment, a laboratory environment, and then trying to transport that out into the real world so that you could make reliable predictions about what would a force plate be telling you about the ground reaction forces of an athlete doing a particular action in an environment where you don't have that as a sensor resource.
[00:15:09] And the other step from that is what we call sensor fusion, where we actually take the best of both worlds. We take the really interesting things about sensors and the information we get from them, but then we fill in the gaps with the things that sensors can't give us. Sensors are not very good at giving us information about pose, for instance. If you're wearing a GPS unit and we're interested in what you're doing with the football, the GPS unit gives us nothing in terms of understanding anything other than how fast you're running, how hard you accelerated, and where you are when you receive the ball. And so computer vision can add to that information network. By fusing both sources of information together, you can go a long way towards building a much richer, more granular and more representative picture of what people are actually doing in sport. So far from it being, as you were alluding to, one versus the other, you can often bring them together in many ways to arrive at a better overall solution.
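A simple version of that fusion step is just time alignment: pair each GPS sample with the nearest vision-derived estimate. The column names, values and tolerance below are invented for illustration.

```python
# Sketch: fuse a GPS stream with a computer-vision pose/action stream
# by aligning them on timestamp. Column names are illustrative.
import pandas as pd

gps = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01 10:00:00.0",
                                 "2024-01-01 10:00:00.1"]),
    "speed_mps": [6.8, 7.1],
})
pose = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01 10:00:00.04",
                                 "2024-01-01 10:00:00.12"]),
    "action": ["kick", "stride"],
})

# merge_asof pairs each GPS sample with the nearest pose estimate
# within a tolerance, giving one fused, richer record per instant.
fused = pd.merge_asof(gps.sort_values("timestamp"),
                      pose.sort_values("timestamp"),
                      on="timestamp",
                      direction="nearest",
                      tolerance=pd.Timedelta("50ms"))
print(fused)
```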
[00:16:07] Sam Robertson: You mentioned the example there of using the sensor, maybe not as a gold standard, but at least as a criterion measure to compare against the vision output. And when you talk about opposition analysis, for example, in not only team sports but individual sports - say some of the combat sports - it does invoke images of being able to get a very, very good insight into the behaviours of your individual, not only their preferences but also their movement patterns, in a way that we've never seen before. Of course the really beneficial part there, but also the scary part for the opposition, is your ability potentially to get this information without asking their consent, in this case just through filming them.
[00:16:46] And that obviously opens up some ethical questions, which I do want to ask you about, but before we get to that, how much further can this go in the short term? When I look at this, there's an operational piece to it, but there's also an applications piece, and you just talked about a fantastic one from Jacqueline Alderson's group. I had the good fortune of marking one of the PhD theses from that work, which was a really interesting read. But my mind goes to things like automated refereeing, maybe using computer vision eventually to detect gaze behaviours, which I'd imagine right now is probably beyond what it's capable of but I'm interested in your thoughts on that.
[00:17:23] But then from an operational standpoint, I also think of things like completely instrumented training bases for organisations, where the player can literally be tracked inside and outside an entire facility, 24 hours a day, or for the entire period they're at that training base. Which, again, sounds like a very scary proposition, but there are massive advantages to it. Now these might seem far-fetched, but I get the feeling you might suggest that they're not that far-fetched. Are there some things that you think of? I know you spend a lot of time in this space - most of your time.
[00:17:53] Stuart Morgan: Yeah, there's still a lot to do. I mentioned the advent of AI; it might be worth just spending a couple of minutes so your listeners have a clear sense of what the real meaning of that is. Historically, when we built computer vision systems, we did it in a programmatic way. We'd write a program, line by line, that says: okay, if we want to understand an image that has a tennis player in it, and we want to find the tennis ball, or we want to find the athlete, then we have to write code that maybe breaks apart the colors to separate the athlete from the foreground and the background. And then we have to maybe build another part of the program which works out what are the straight lines that might approximate part of the tennis racket, or what are the circles that might be used to find the ball.
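For a flavour of that handcrafted era, here's roughly what those hand-written rules might look like with OpenCV's classical tools. The synthetic frame, colour ranges and thresholds are all invented for illustration, and every one of them is hand-tuned by a human, which is exactly the fragility Stuart is pointing at.

```python
import cv2
import numpy as np

# Stand-in for a broadcast frame: a solid "blue court" with a painted
# line and a "ball". A real pipeline would read actual video frames.
frame = np.full((360, 640, 3), (160, 80, 10), dtype=np.uint8)  # BGR, blue-ish
cv2.line(frame, (50, 300), (590, 300), (255, 255, 255), 3)     # a court line
cv2.circle(frame, (320, 150), 8, (0, 255, 255), -1)            # a "ball"

# Rule 1: colour-threshold the court away, leaving candidate
# athlete/ball pixels. The HSV range is chosen by hand.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
court_mask = cv2.inRange(hsv, (90, 50, 50), (130, 255, 255))
foreground = cv2.bitwise_and(frame, frame, mask=cv2.bitwise_not(court_mask))

# Rule 2: Hough transforms to find straight lines (court lines, maybe
# racket edges) and circles (the ball) -- again, all hand-tuned.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                           param1=100, param2=20, minRadius=3, maxRadius=15)

print("lines found:", 0 if lines is None else len(lines))
print("circles found:", 0 if circles is None else circles.shape[1])
```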
[00:18:35] So line by line and piece by piece, the historical way of doing computer vision has been to build these very handcrafted computer programs to try to solve these problems. You can, kind of, arrive at fairly good outcomes doing it that way, but artificial intelligence takes a different approach where the idea is that the algorithm is a learned approximation between what it is that you're putting in, which might be an image, and what it is that you want to get out, which might be the location of the tennis player and the orientation of the tennis racket.
[00:19:07] And if you have that information - so if you have the vision and you have knowledge of what the answer should be - artificial intelligence is the algorithmic process of a computer, over thousands and thousands, often millions, of trials, working out some kind of neural network, a set of numerical weights if you like, such that when you multiply all those numbers together, you get the answer that you want. And when you talk about a black box, that's exactly what it is: humans can't really interpret what all these numbers mean, because there can be millions of numbers in these numerical weights that, when they're multiplied together in certain ways, give you the particular kind of response that you're searching for.
[00:19:48] I'm conscious that I don't want to muddy the waters here with too much AI talk, but the end result of this process is that we end up with systems that are much more robust and much more reliable, if you have a lot of training data to go on. So in other words, if you have hundreds and hundreds of examples of a tennis player holding a racket, then the AI learning process is able to build its own model construct to predict what a tennis player would look like and what they'd be doing with a tennis racket in some new examples.
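By way of contrast with the handcrafted sketch above, here's a toy illustration of the "learned weights" idea: a minimal model, on fake data, nudging a set of numbers over thousands of trials until multiplying them against the input gives the answers you want. Real vision networks do the same thing with millions of weights, not four.

```python
# Sketch: "learning" as finding numerical weights that map inputs to
# the answers you want, rather than hand-coding rules. Toy data only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                    # toy "image features"
true_w = np.array([1.0, -2.0, 0.5, 3.0])          # hidden rule to recover
y = (X @ true_w > 0).astype(float)                # toy labels ("answers")

w = np.zeros(4)                                   # the model's weights
for _ in range(2000):                             # thousands of trials
    p = 1 / (1 + np.exp(-(X @ w)))                # current guess (0..1)
    w -= 0.1 * (X.T @ (p - y)) / len(y)           # nudge weights toward answers

# After training, the weights -- not human-written rules -- encode the
# input-to-answer mapping, and they are just opaque numbers.
print(np.round(w, 2))
```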
[00:20:21] Now that's important for two reasons. One of them is that it completely changes the game for computer vision, and it means a whole bunch of things that were incredibly hard five or six years ago, before this AI revolution really took hold, are much easier now, whether it relates to position tracking or pose detection or action recognition or face detection or number recognition, these kinds of things. They're all really quite easily solvable problems now, because these AIs are able to solve them with the right training data.
[00:20:54] But that's the second part of the equation. The training data is a really critical part, and it's the barrier at the moment to anything we might do in the future. If we want an AI that can go off into the future and deal with new examples robustly, we need to have a lot of those initial examples as training cases. That's a barrier because, for instance, if you take the model that you've trained to find tennis players and you take it off the hard court and put it on grass, suddenly the environment changes and the model needs to change. So you need to have a lot of data covering a whole range of different venues and a whole range of different camera angles and perspectives and color balances, all of these kinds of things, that the AI is then able to take into account, so it learns exactly what you want it to predict and finds the actions or understands the images in the way that you want it to.
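The hard-court-to-grass problem Stuart mentions is one reason practitioners augment training data. Here's a hedged sketch, using torchvision's standard transforms, of how each training image can be randomly varied to stand in (imperfectly) for new venues, lighting conditions and camera angles; the dummy green image is a placeholder for a real frame.

```python
# Sketch: broadening training data so a model trained on one venue
# generalises better to others (hard court vs grass, new lighting).
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224),        # vary framing / camera zoom
    transforms.RandomHorizontalFlip(),        # vary side of court
    transforms.ColorJitter(brightness=0.4,    # vary lighting and
                           saturation=0.4,    # surface colour balance
                           hue=0.1),
    transforms.ToTensor(),
])

dummy = Image.new("RGB", (640, 360), color=(20, 120, 40))  # a "grass" frame
tensor = augment(dummy)  # every epoch sees a slightly different version
print(tensor.shape)      # torch.Size([3, 224, 224])
```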
[00:21:47] Now, that's a long-winded answer and I apologise for that, but I'm coming back to the original point, which is where the barriers are and what still needs to be done. One of the really active areas of research at the moment in AI and computer vision is how we can get models to learn to predict what we want them to predict with many fewer cases. If the barrier to building a new system tomorrow that tracks people kicking footballs is that you need hundreds or thousands of examples of people kicking footballs, and you need either the sensor data or manually annotated examples of what it is that you want, then it could be quite some time before you actually have a working model.
[00:22:26] And AI is inspired partly by the way the human brain works, but humans are very good: we only have to see somebody kick a football once to know that that's the act of kicking a football, whereas a computer needs to see it hundreds and maybe even thousands of times. So the really active part of research at the moment is how we can get computers to learn and to build these AIs without having to first collect hundreds or thousands of examples to train on. That then opens up a whole new cornucopia of opportunities in sports science, where a computer only has to see something once in order to recognise what it is.
[00:23:04] So imagine a more sophisticated scenario where maybe we're filming a football match and we see a particular kind of play. If we had thousands of examples of that kind of play, we could teach an AI to recognise it, so that any time it sees it in the future, it could automatically tag it, recognise it, log it away, and create a video highlights package of all of those kinds of examples. And maybe it's a tactical consideration, maybe it's a centre clearance with certain kinds of features around it. A skilled human can see that once and know it again, whereas a computer at the moment can't. The idea that we, or the community, are working towards is that a computer should be able to see an example of a centre clearance, with a particular kind of tactical constraint around it, only once, and then be able to find it. And I think that's where the next big area of work is happening that will be important for sport, at a higher level than just action recognition or player tracking. Those things are solved. What we want to do now is go to that next level, where we're looking at the more high-level tactical constraints or tactical considerations. And it's a much harder proposition for a computer, because they need lots and lots and lots of examples in order to learn what it looks like.
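One common framing of that see-it-once goal is similarity matching on learned embeddings: store a single labelled exemplar and compare new clips to it. The `embed()` function below is a hypothetical stand-in for a pretrained video encoder, not a real system, and the data and threshold are invented.

```python
# Sketch: one-shot recognition by embedding similarity. A skilled
# analyst sees one centre clearance and knows it again; here a model
# compares new clips to a single stored exemplar.
import numpy as np

def embed(clip: np.ndarray) -> np.ndarray:
    """Hypothetical pretrained video encoder: clip -> vector.
    A real system would use a learned network; this is a stub."""
    return clip.mean(axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
exemplar = embed(rng.normal(size=(32, 128)))   # the ONE labelled example

new_clip = rng.normal(size=(32, 128))          # an unseen clip
similarity = cosine(embed(new_clip), exemplar)
if similarity > 0.8:                            # threshold is arbitrary
    print("tag as centre clearance, add to highlights package")
else:
    print("no match")
```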
[00:24:16] Sam Robertson: Like a lot of things we talk about on this show, it's a really multidisciplinary, well, interdisciplinary, kind of problem, isn't it? Listening to you talk about the thousands or millions of examples required, and ensuring you get adequate heterogeneity in the context and also the types of samples, that's a very human-dependent problem right now, I'd imagine. You know, if you're looking at detecting, as you mentioned, a tennis forehand, you'd want someone who knows every single court surface that someone could ever play tennis on, and all the varying techniques and things like that, to make sure they're represented, and that is an arduous task.
[00:24:50] And then of course, coming back to the second part of your example, when you're talking about the ability for humans to still pattern recognise in ways that computers can't yet - I'm not an expert in that area, but is it as simple as we're better at extracting features? Is that part of the problem, something computers need to get better at, or at least at creating higher-level, higher-order constructs or models from those features? I mean, I think your tactical example is a great one there, and there's also probably an element of humans picking up things subconsciously as well; they're not sure that they're pattern recognising even when they are. It seems like both of those things still rely on a lot of human involvement, don't they?
[00:25:27] Stuart Morgan: Yeah, they do. It's interesting because, as you alluded to, humans are very good at recognising patterns. If you were a novice at football and you'd never seen a football game - you get flown in from the US and you've got no idea what this insane sport is about - then the odds of you being able to recall the features of a centre clearance with the kinds of features we talked about are pretty small, right? Round about the same odds as me recognising a recurring pattern in an American football game, which is a complete black box to me. But if you watch that sport often enough, then you build these perceptual schema, these understandings of the patterns that are relevant. So it's a level of skill acquisition, isn't it? Any new variation you see on that is not that far from something you've seen before. And so there's this idea of chunking, isn't there, where you chunk complicated ideas into more manageable neural bits, so that anything new that you see has some relationship to them, and you're able to recall it on the basis of other patterns that you've already seen. It's just like what they normally do, but this play was in a slightly different position.
[00:26:33] And that's the challenge I think, in the computer vision space, is getting computers to understand, to have all of this historical information as a starting point, and then to be able to take that, if we continue the perceptual chunking analogy, for computers to have been able to take historical information and learn something about that, so that they only need to see one or two examples of something new to recognise where it sits in the greater scheme of centre clearances. And that's still an open question.
[00:27:02] Sam Robertson: And that's the really nice thing about, I guess, most sports as opposed to, I think, "the wild" is what you referred to earlier, because of the constraints that are inherent in many sports: you've got a fixed field, you've got a fixed size, a fixed number of people, and those things aren't changing over time. So it does provide an advantage for AI to use the historical data. Whereas we know in some problems, unfortunately, that's not a luxury we have, because the world is changing more quickly than we have time to get up to speed with, and our models become redundant overnight. Fortunately, most of the time, that's not an issue for us in sport, is it?
[00:27:37] So I did want to talk to you about the ethical part, not so much about the ethical considerations, but just more the future focus, but I think ethics is part of that. Data ownership is another one. Consent, which probably falls under ethics. Shared infrastructure. All these different things. Are we prepared for this in sport, as a collective we? And what are the big things we're going to run into?
[00:27:57] Stuart Morgan: Yeah, I don't think that we're even remotely prepared for it. I think that the technology has evolved so quickly that we're still, after the event, coming to terms with what some of these technologies mean. And in no particular order, I can think of a bunch of things that are ethical considerations, and I throw them open to discussion with no preconceived idea about what's right or wrong or where we should be heading.
[00:28:22] But anytime an athlete performs, there's a huge amount of their own intellectual property embedded in that performance, and there's an understanding and an expectation that you perform in a public domain and that people are going to video your performance and they're going to see you and they're going to understand something about what you do, but we're reaching levels of sports analytics, and computer vision is part of that, as are sensors as we've said, but we're building much more detailed understandings of the way that athletes perform and so this whole battleground of sports analytics has the potential to end or shorten people's careers. If you're able to unlock the key to defeating a certain tennis player, then you have the capacity to shorten their career or end their career and diminish their capacity to earn an income for their family.
[00:29:15] Now, I'm not saying for a moment that we should be living in this utopian world, where people are allowed to compete without being scrutinised. It's a competitive world, but it's also an imbalanced world because not everybody has access to these kinds of technologies. It's the developed nations that have access to the computing resources and the expertise to do all these kinds of systems.
[00:29:35] So, I'm not for a moment positioning myself ethically anywhere on this spectrum, just raising these as some of the really interesting questions that are starting to evolve, that I'm not sure anybody's really thought about. And there's a broader question, of course - we're getting a little bit science-fiction here - about the ethics of artificial intelligence altogether, and the safety of artificial intelligence as we build more and more sophisticated computer vision systems and robotic systems.
[00:30:00] If anybody wants to be thoroughly terrified, they should look up Boston Dynamics on YouTube and take a look at some of the new robotic entities, these bipedal robots that can navigate their way through really tricky physical environments. They're about six foot eight and they weigh about 250 kilos and they can jump and they can shoot and they can chase, and it's really Terminator kind of stuff. And this is the natural landing point for evolution in computer vision and robotics and artificial intelligence. And so there's a whole range of ethical questions about what should we be, in fact, thinking about when we build computer vision systems. I think sport is somewhat separate to those kinds of considerations, but there are a lot of ethical considerations to take into account in artificial intelligence.
[00:30:48] You alluded to it earlier, actually, and it's a good point, that many AI algorithms, computer vision included, are black-box kinds of algorithms. The image comes in, and the predictions that you want - whether it's pose or position or action recognition, or any of those kinds of features - come out, and all that you have in the middle are these boxes of numerical weights between zero and one, thousands and thousands and thousands of them, and they're impossible for humans to really interpret.
[00:31:20] So what do you do, then, when decisions about selection for the Olympic Games, or decisions about selection for the next game, might be made by an artificial intelligence and nobody can explain to you how the AI came to that decision? It's not a decision tree where somebody says: okay, these are your positions in the last three games, you've barely touched the ball, you're not scoring, okay, you're on the bench. If an algorithm just says your stats went in and a yes-or-no answer came out - are you picked or not - and there's nothing in between that's interpretable by a human, do you as an athlete have a right to a better explanation of why that decision has been made about you? And so I think there's a whole range of these kinds of questions that we haven't really got to the teeth of yet, because the technology is so new, but I think we're going to have to grapple with a lot of these in the future, particularly as it relates to the impact that these decisions might have on individuals and their capacity to earn an income as professional athletes.
[00:32:17] Sam Robertson: Yeah, I'm glad you brought that example up, because I was thinking about the word bias throughout, and I think some people have an understanding, or at least a perception, that bias is either heavily reduced or removed when they're handing over these decisions to AI, and that's probably not a correct assumption a lot of the time. And again, the inability to go and dig into where the recommendation is coming from, particularly with, as you said, these black box approaches, is concerning in a lot of applications. Maybe not everything. Maybe it's fine not to know all the time, but certainly in selection for an Olympic Games, as you said, I think the athlete has a right to know - at least I think they do.
[00:32:53] But it's a very in-depth topic and probably beyond what we've got time for today. So thanks for joining us, Stuart, it's been a pleasure as always, and a very interesting topic - a lot of what we've said is sure to be redundant in five years' time.
[00:33:05] Stuart Morgan: I think so! It's been great to talk, thanks for having me.
Interview Two - Steve McCaig
[00:33:13] Sam Robertson: Our next guest on the show is Steve McCaig. Steve is the Athlete Health Consultant at the English Institute of Sport, where he provides athlete health plans and system-wide initiatives to a range of sports. Previously, he worked at the English Cricket Board as a senior physiotherapist, and more recently as the Head of Research and Innovation for Physiotherapy.
[00:33:31] Steve has published research on injury surveillance, workload monitoring, and musculoskeletal adaptations to throwing in cricketers, and is currently completing his PhD on throwing arm pain in elite adolescent players. He has three children aged five and under and has little time for much else these days, which is all the more reason why we're really lucky to have him provide his thoughts on the topic today. Steve, thanks for joining me on the show.
[00:33:54] Steve McCaig: Thanks, Sam. Thanks for the invitation.
[00:33:55] Sam Robertson: Now, I want to start off by talking a little about an area that I know you've spent a lot of time on in your work, particularly with the English Institute of Sport, and that relates to identifying some of the main reasons why sports actually track or monitor athletes.
[00:34:09] Now, I think it's fairly intuitive that practitioners - and that's probably any type of practitioner - want to collect information on athletes so that they learn more about the athlete, which of course informs the way in which they engage with them. But I often wonder: does athlete tracking and monitoring need to be a bit more pointed than just that in terms of its goals? For example, some common reasons I hear for undertaking monitoring are to gain insights into an athlete's readiness, or their wellness, or to assess their level of fatigue, but I think this raises another question with respect to how well we can actually measure or even differentiate those types of concepts. And then the next step, of course, is whether they're really understood by the key stakeholders. Is that something you've explored in some of your work at the EIS?
[00:34:51] Steve McCaig: Yeah, it is, Sam. The reason I've done a lot of work on athlete monitoring is that at the English Institute of Sport we have the Air app, an app that athletes can use to record the self-reported monitoring measures that the sports use. As part of my role, we help advise sports on using the Air app and how to get it in place, and when I talk to the sports there's a lot of discussion around what measures we should be taking and what we should be using, but less discussion around the why - the purposes and the processes. And so that's why we decided to do a review of monitoring within the high performance system - in terms of, you know, what our sports were doing, what our practitioners were doing - but also what the research tells us about monitoring. And so that's how I kind of got into it.
[00:35:41] You know, in terms of the purpose, if you ask practitioners why they monitor - well, it'll depend on who you ask, first of all - but most of them will say it's to prevent injuries and illness, and some will say optimising performance. And I think we've really got to ask ourselves: is monitoring capable of actually preventing injuries and illness, or is it just part of the picture? So that's what practitioners will say. If you look at the more theoretical reasons for why we monitor, originally it was about training: is our training program being effective, and is it producing the adaptations that we want? That was how monitoring really started. And then more recently we've probably become more interested in whether we can use monitoring to prevent injuries - and in some cases, people say predict injuries. And certainly in team sports, as you mentioned, there's a lot around this concept of readiness, you know, are they ready to perform? Those are some of the reasons that are given.
[00:36:33] I think my take on it is it should just be about informing conversations with coaches and athletes about their management. It's just that extra bit of information to help have better conversations about how we're going to look after you. That's how I think we should be monitoring.
[00:36:47] Sam Robertson: Listening to you speak, there are kind of three main areas that it sounds like you've found in some of your work: the performance angle, the injury angle, and then that kind of athlete response to training or practice. Without being specific about any sports or any different personnel, what have you found, just in broad terms? Are they three quite different models that you find sports are using? For example, if a sport is predominantly focused on monitoring for injury prevention versus performance, are you finding they're using quite different measures, or have quite a different approach to the way that they do their monitoring?
[00:37:20] Steve McCaig: I think what was most interesting on the journey that we took with this review is that when you look at the literature out there, most of it's been done in professional team sports, and what we saw was quite a number of Olympic sports just trying to replicate what had been done in professional team sports, without actually asking what's appropriate for them. A lot of the measures that are taken can be used for different purposes, but I think it's more about what's appropriate for our sport and for what we're trying to achieve, rather than just "this has been done elsewhere". I don't know if that answers your question, Sam, or covers some of the things you're talking about.
[00:37:56] Sam Robertson: Yeah, the way I interpret your answer is that some sports simply have higher injury rates full stop, and injury is probably more accepted in certain sports - certainly some of the team sports - than in others. So I think that probably speaks to culture in the sport a little bit, which is something I wanted to ask you about later on, so I'll save it for then. But I wanted to talk a little bit about the notion of data in monitoring, and I'm sure that the EIS, and different sports in the EIS, have various athlete monitoring systems.
[00:38:24] We see AMSs all over most organised, or semi-professional and professional, sports these days, but something I find quite interesting is that monitoring is not really new. It's something that's been done by coaches or practitioners for a long period of time, even if it's just been through their eyes. And obviously AMSs - athlete management systems or monitoring systems - have made it a little bit more structured, and certainly the data we're getting from technology has increased that structure as well. So, given that we're really still at the early stages - I know we've been monitoring with data for a little while, but we really are still at the early stages of technology in the overall scheme of things - a really interesting question is: how do we reconcile the data that we're getting from a piece of machinery, or from a technology, with that of the coach or the practitioner?
[00:39:11] And from my observations, it seems like technology is starting to measure certain aspects of an athlete's status - whatever that is, whether it's readiness or performance or recovery - probably more reliably and accurately than a human can in some cases, but in other aspects the reverse is definitely true. To be specific for a moment, I don't think technology measures the psychological component particularly well yet, versus, say, a very experienced coach. Can you talk a little bit about how you're reconciling those two different views of monitoring? I mean, you mentioned before that the data or the monitoring very much informs the conversation. Is it as simple as that, or is there a need and a place for both of them?
[00:39:49] Steve McCaig: I think there's a need and a place for both. And I think, again, it really depends on the context of the sport. So if you've got an A-to-B sport which is very physiologically driven - triathlon, swimming, rowing, those types of sports - you want to make sure that your training program is stimulating the adaptation that you're desiring, and so you monitor that. So monitoring, actually: you know, the coach has their training program, but has what's been delivered actually matched it? Have we got the physiological response that we want, and have we got the adaptations that we want? That's pretty critical in those sports, because the coach might have their coach's eye about what they're doing, but does that actually match up with what's happening?
[00:40:28] I think the other context is that one of the challenges everyone has with monitoring systems can be coach buy-in. If coaches are used to doing things a certain way, and all of a sudden they're being presented with a lot of data that may not be interpreted that well, or may conflict with their opinions, do they actually act on that information and change? Now, I think as technology becomes more commonplace and people get more used to using it, that's going to become less and less of an issue, but there are still coaches who say: this is my method, this is what I've used, this is what I've been successful with - and getting that buy-in can be quite challenging.
[00:41:02] Whereas if you were in another sport - let's say a target sport such as archery or shooting - well, the monitoring that you're going to do there is going to be very different. You're not necessarily looking at what their training load is and what their response looks like in terms of a physiological response. So what's your purpose for monitoring them? Is it actually that you just want to know whether they've been doing the volume of training needed to improve that skill? It's not physiologically based. And you might want to make sure that you're really looking at whether there are any signs that they could be at risk of injury and illness, because time lost to those injuries and illnesses is actually the biggest factor for them, rather than the training that they're doing.
[00:41:41] So I hope that kind of covers the differences in why you might use it. And again, the challenge is you've got this coaching model, and how does monitoring actually fit with that coaching model? That's going to depend so much on the sport, the coaches' backgrounds and the coaches' beliefs, but also on how we present that data to the coach in a way that fits with their approach.
[00:42:03] Sam Robertson: Well, I'm thinking a lot about culture as we go through this conversation, and the sport differences, particularly those sports that are really coach-led, which is still a lot of sports. But I'm interested in particular - and this is probably another research question in and of itself, and something we're going to have to grapple with over the next couple of decades - in what happens in these scenarios where we have two forms of monitoring measuring the same construct. Let's say it's fatigue, where we have a really nice suite of physiological data, and then the coach - and maybe even the athlete, for that matter - is giving a very different response to that. And that raises another question, which I won't get into now, about the athlete's self-awareness of their own body. This is a difficult question, I think, but how do we reconcile that, when we're getting completely different impressions from an athlete versus a coach versus the data?
[00:42:53] The thing that comes to mind for me, with my research hat on, is a longitudinal comparison and a validation process, but that's very tricky in the applied world, isn't it? So, are there other ways that you can reconcile that and move forward?
[00:43:05] Steve McCaig: I think it depends a lot upon how the multidisciplinary team works within that sport - or, I suppose, the interdisciplinary team, because "multidisciplinary team" just implies that we've got people from different disciplines working in the same sport; it doesn't mean they're necessarily working together. And I think it depends on how decisions are made within that sport. What's the forum? Do you have quite a didactic coach where it's, you know, my way or the highway? Or do you have a forum which is athlete-centred and coach-led, where everyone's allowed to input, and then we discuss what the best way to manage that athlete is, and we make that collaborative decision with the athlete at heart?
[00:43:40] I think it also depends on risk. So, what is the risk to that athlete if they ignore that information? The risk is, okay, do they get non-functional overreaching, overtraining syndrome, injured or ill? That's a risk. But also, if you're in an Olympic scenario, where you've got to get a certain volume of training in to perform, and we pull you out when we don't need to, that's a potential risk to performance. And I think there are lots of shades of gray there. Again, it does depend upon what the actual risk to the athlete is, and that's going to depend a lot on the athlete, the goal, the time in the cycle, and what that bit of information is telling us. And I think that's where, as I talked about earlier with the review, the processes that support your monitoring system are more important than the actual measure that you're taking, if you want it to have impact.
[00:44:32] Sam Robertson: So with that in mind, and coming back to the culture piece as well, and again without being specific, are there instances where you've felt like a sport is going over the top in terms of what it's covering? Or maybe the other way - not doing enough? Perhaps with a dominant practitioner having a large say on what it looks like, or indeed the athlete, for that matter. Have there been instances where you've had to intervene in your role, or at least work with stakeholders to intervene, to increase the information coming in, or indeed decrease it? And that's really changing a culture, I'd imagine, if you do have to do that.
[00:45:06] Steve McCaig: One of the tendencies with technology, when there are so many different measures you can take and so many different metrics, with new stuff coming out every day, is the temptation to just look at it all and feed it all back. And when you get really busy feedback sheets, or a PDF that you hand to a coach, if it's taking more than 30 seconds to interpret - and that's probably generous - and you have to work really hard to understand what it means, you're probably doing too much.
[00:45:37] And again, there are some coaches who won't look at that unless it's really obvious. The other question is, if you've got all these measures, well, what are you really hanging your hat on? What's the thing that you actually value, that is most important? I think you need to commit to what you think is the key measure and really track that closely over time, rather than just doing everything and then not being able to interpret it - and I've definitely seen that. GPS and the inertial measurement units are brilliant, but there are so many different metrics they can spit out. I know the football codes have probably gone through an evolution of, well, actually, what is most important for us? And I think that comes back down to that clarity of purpose: actually, what are we trying to achieve and what do we need to do?
[00:46:23] I think from an athlete self-reported measures point of view too, with these apps where you can use different question options, there's a tendency of: I want to ask about this, I want to ask about mood, I want to ask about sleep, I want to ask about this - and before you know it you've got 10 questions, which haven't necessarily been validated if they're ones you've developed yourself, and the athlete gets questionnaire fatigue, because they're like, I'm having to fill in this thing all the time, so they just put the same answers down. They don't get any feedback, so they don't value it. Those are things I've seen as well. So, okay, when you're asking an athlete for these daily self-reported measures, what are you actually trying to get from them? I think we're seeing the pendulum swing from people asking, well, what should these daily measures be? To some people asking whether they're even worthwhile at all.
[00:47:09] Some sports have got to a stage now where they're not asking for a score on these measures; they're just saying, do you have any concerns in these areas? And then that flags a conversation - they have a conversation with the athlete and they decide what to do, rather than trying to interpret, do I need to act on this score of six out of 10? What does that mean? And so I think we've definitely seen that shift: from, we can ask all these things, let's ask everything; to, oh, hang on, it's not really telling us what we think it is, in some cases let's not ask at all; to this kind of middle ground of, okay, is there any issue that you would like to talk to someone about?
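That shift from scores to concern flags is simple to picture in code. Here's a minimal sketch, with invented field names, of a check-in whose only job is to route a flag into a conversation rather than into numerical interpretation.

```python
# Sketch: a daily check-in built around concern flags rather than
# scores. A flag doesn't get interpreted numerically -- it simply
# routes the athlete into a conversation. Field names are invented.
from dataclasses import dataclass, field

@dataclass
class DailyCheckIn:
    athlete: str
    concerns: set[str] = field(default_factory=set)  # e.g. {"sleep"}

    def needs_follow_up(self) -> bool:
        return bool(self.concerns)

today = DailyCheckIn(athlete="A. Athlete", concerns={"sleep"})
if today.needs_follow_up():
    # No one has to decide what a "6 out of 10" means; the flag
    # just starts a conversation with the right practitioner.
    print(f"Flag raised by {today.athlete}: {today.concerns}")
```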
[00:47:45] Sam Robertson: To me, that is probably evidence of how important a role like yours is, and of having it in sports, or at least in organisations like the governing bodies or institutes of sport. But it also leads me to another question, which is: I wonder where the responsibility lies for ensuring that these things are valid and fit for purpose. As you mentioned, it's impossible to keep up with not only the types of measures or technology that are coming out, but also the volume of options that someone has. And for the typical practitioner, whether it's a physiotherapist or a sports scientist or a strength and conditioning coach, that's a very tricky job when they're already servicing athletes.
[00:48:23] So, is that something that you've looked at as well - how that model works? Is it through collaborations with third parties that are doing the validation, or is it something that you're trying to handle in-house? I know once upon a time, you know, that was a major selling point of the institutes of sport, but it's a big job in and of itself, particularly as there's a lot more technology around than there was 20 years ago.
[00:48:41] Steve McCaig: And getting bigger and bigger all the time. And they're driven by commercial imperatives as well, wanting to get things out there, and your athletes are being approached by companies to wear this thing or do this, and it's really quite challenging for individual practitioners to keep pace. In terms of the organisations themselves, you know, you'd like to think that they've gone through some validation process for their metrics, and I think that is happening, although published evidence of that being done by independent organisations is uncommon, shall we say? Although there's more coming.
[00:49:15] But I think the first thing for me with any measure you're taking is, is it reliable? And does it measure what it claims to measure? On that validity point, touching back to what you talked about earlier: if we take a countermovement jump and the various metrics you can get from a force platform that may indicate neuromuscular fatigue, we're then assuming that that is a measure of readiness. But readiness is a construct made up of lots of different things. A countermovement jump on a force platform is a valid measure of that jump, but that doesn't make it a valid measure of readiness for sport X, sport Y or sport Z. So even for a test like the countermovement jump on a force platform, for which there's so much reliability data out there and so much evidence that it's a valid measure of the jump, that doesn't necessarily make it valid for the construct you're interested in. I've probably gone way off topic there, but...
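As an aside, this is roughly what "a valid measure of that jump" looks like in practice: jump height derived from the force trace via the impulse-momentum relation. The Python sketch below uses a synthetic force trace and hypothetical numbers, purely for illustration; note that nothing in the calculation says anything about readiness:

import numpy as np

g = 9.81      # gravitational acceleration (m/s^2)
mass = 75.0   # hypothetical athlete body mass (kg)
fs = 1000     # force platform sampling rate (Hz)

# Synthetic vertical ground reaction force for a 0.3 s propulsion phase:
# body weight plus a half-sine push. A real trace would come from the platform.
t = np.arange(0, 0.3, 1 / fs)
force = mass * g + 900 * np.sin(np.pi * t / 0.3)

# Net impulse above body weight gives take-off velocity (impulse-momentum),
# and take-off velocity gives jump height.
net_impulse = np.sum(force - mass * g) / fs        # N*s
takeoff_velocity = net_impulse / mass              # m/s
jump_height = takeoff_velocity ** 2 / (2 * g)      # m

print(f"take-off velocity {takeoff_velocity:.2f} m/s, jump height {jump_height:.2f} m")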
[00:50:09] Sam Robertson: No, not at all, I'm really glad you brought that up, because I've had that conversation in the past, and I think it's hard enough to get certain people to understand the concept of validity, let alone the different types of validity and their different applications. That's a really good example you gave there.
[00:50:24] This is a question that's maybe a little bit off topic as well, but are these technologies regulated well enough? And whose job is it? Maybe that's the challenge in regulating, and obviously validating, a piece of technology: the results could vary depending on the application, right? So it's a tricky job to regulate. And if, for example, the EIS was going to put someone on technology validation - they may well have someone like that - it would take a number of full-time roles, I'd imagine, to get that job done well.
[00:50:48] Steve McCaig: Yeah, I mean, there are certainly some bits of tech which we have done some internal validation on, and again that's probably the ones which are most commonly used, but as you mentioned, there are new products coming out on a weekly basis. It's a full-time - well, more than a full-time - job for one person. And then again, is that a useful way of spending resources, just validating these different metrics? That's where I come back to first principles, because that bit of kit could be valid, it could be reliable, but if you're not really clear on your purpose for using it, or you don't have the processes in place to actually capture it in a time-efficient manner, analyse it, interpret it, contextualise it, feed it back to the relevant people and act on it, well, why are you doing it?
[00:51:31] I've spoken to friends in other sports where they've just got files and files of monitoring data sitting on a computer somewhere that no one's ever looked at, because the feeling is we have to be monitoring, and that's not really an effective use of time. So for me it's about really nailing why you're doing it, and whether you can do it in your environment, and then picking the measure - as opposed to, here's this new device, let's use it. No, let's really identify what we're trying to achieve and then figure out the best way to achieve it, because the technology can be quite seductive in its claims, and there's pressure to be seen to be using the most up-to-date thing.
[00:52:07] Sam Robertson: I would 100 percent agree with that. So, coming back to your countermovement jump example: let's say neuromuscular fatigue is deemed the important construct, you then use that particular test as a proxy, and you agree on that. Then, knowing the technology is going to change all the time, in a sense it matters, but it doesn't matter. You're planning for the possibility of a better provider in 12 or 24 months, but the constant is that you associate the countermovement jump with neuromuscular fatigue and you keep that running through your program. So, yeah, I think that's probably the best way that whole approach can be covered.
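One way to picture that "constant construct, changing provider" idea in software terms is an interface the monitoring program depends on, with vendors slotting in behind it. A minimal Python sketch - the class names, readings and the 8 percent threshold are all hypothetical:

from typing import Protocol

class JumpHeightSource(Protocol):
    def jump_height_cm(self, athlete_id: str) -> float: ...

class VendorAForcePlate:
    def jump_height_cm(self, athlete_id: str) -> float:
        return 31.5  # stand-in for a real device reading

class VendorBForcePlate:
    def jump_height_cm(self, athlete_id: str) -> float:
        return 31.2  # a better provider in 12-24 months slots in here

def neuromuscular_fatigue_flag(source: JumpHeightSource, athlete_id: str, baseline_cm: float) -> bool:
    # The program's logic depends only on the construct's proxy (jump height
    # versus baseline), not on which vendor supplied the measurement.
    return source.jump_height_cm(athlete_id) < 0.92 * baseline_cm  # >8% drop flags fatigue

print(neuromuscular_fatigue_flag(VendorAForcePlate(), "A01", baseline_cm=35.0))  # True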
[00:52:37] Steve McCaig: And on that point, Sam, that doesn't mean that once you've got your process in place, that's it. There's continual iterating and evolving and reviewing and pushing forwards, and I think that's the final part I omitted. Once we've got it up and running, we need to review it: is this being effective? Is there something new coming along? Get that feedback from the athletes and coaches around the whole process. And if a better bit of kit comes onto the market to help achieve that purpose, well, then you jump on it.
[00:53:06] Sam Robertson: Yeah, and that brings me to a question I wanted to ask you anyway - it's a nice segue - which we've touched on a little already: how do you know when you're monitoring too much, or maybe not enough? You mentioned that if a coach is taking 30 seconds to look at the output, it's too long, which I think we'd all agree with. But that doesn't really solve the problem if, down the track, the scenario you outlined does start to occur and we start to get a lot of good measures on a lot of good constructs or components of performance that we don't have now. We're probably negligent if we're not looking at that data in some way, shape or form.
[00:53:43] And so, to be specific, let's say we start to understand the psychological component of performance better with data, we get more granular, sophisticated measures of recovery, and some of the measures we can currently only get in the lab become a little easier to collect. All of a sudden we have a large suite again. Obviously you've got to take the information in - that's going to be a constant - and assuming we're not yet going to hand readiness decisions over to a computer, which may well be on the radar, how are we going to know? Is it simply going to be constrained by time and cost, or should there be some kind of philosophy behind it, I wonder? Or is it, again, a sport-by-sport problem?
[00:54:19] Steve McCaig: Let's talk about readiness, and I think: ready for what? Being ready for a game of AFL on Saturday is very different to being ready for a block of training in an A-to-B sport like rowing, triathlon or swimming, where they might be doing a really hard block and actually don't need to be a hundred percent ready - and I think that's what's actually relevant. Whereas closer to a major games or an Olympics, that's different; we want them to be optimally ready, or as close to optimally ready as possible. So I think it's about periodising when these metrics are most important in Olympic sports. In professional sports, where the next game is the most important one to win and coaches' careers and professional careers are on the line, readiness in that context is very different to an Olympic sport.
[00:55:05] Now, that doesn't mean that while they're doing that hard block you can't be gathering data to understand readiness, which might be used later on. And touching on where I may have come across a bit black and white - if you're not using that data, why are you collecting it? - that doesn't rule out saying, okay, we're not sure about this thing, let's track it over 12 months and see if it's valuable. I think there's a real role for that, because if you just go, well, we're not sure, so we're not going to do it, you miss out on that opportunity to learn. And certainly in an Olympic cycle, the time to do that is in the first one or two years of the cycle, where you learn as much as you can, and then going into your third and fourth years it's, okay, right, we're really clear on what we're going to do here, we've captured these learnings, we're going to move forward.
[00:55:52] There are opportunities to go broad and learn, but it's a matter of, okay, we're going to trial this for X amount of time, then ask what we learned and what we integrate moving forwards. As long as everyone's clear that that's why we're doing it - and in particular the athlete, because if an athlete sees all this stuff being done to them, but they're not getting any feedback and it's not changing what they do, their buy-in is going to drop. So it's about being really clear: we're collecting this, we're not sure yet if it's going to be useful, this is why we're doing it, and at the end of this period we're going to feed back to you what we've learned and what we need to do differently.
[00:56:29] Sam Robertson: And that's where I come back to that notion of covert monitoring - those monitoring measures that don't encumber the athlete per se, and which obviously present more of an opportunity than others to do that trial and error. And you talked about periodisation; I was just thinking it's almost an opportunity for monitoring periodisation as well, isn't it? Almost the same as the training - interesting in and of itself.
[00:56:52] Now, I know we've got to let you go, but I do want to talk a bit about the future. How will this space look in, let's put a time limit on it, 10 or 20 years from now? We know it's moving pretty quickly - I think the sports startup scene is a big reason for this. There are a lot of companies all over Europe and the US, and probably even here in Australia, doing a lot of work on monitoring, and some of them will be successful, I'm sure. But what's going to be the big change you see across sports, say in 10 or 20 years' time, in monitoring?
[00:57:23] Steve McCaig: Well, as we talked about earlier, there are going to be more options out there, more commercial organisations doing it, and, I suppose, more claims about what it can do. The end user - the practitioner, the athlete - is going to have to be more savvy about identifying what is useful for them and making choices. In terms of analysis, I think no one sport is ever going to have enough data points to really infer much, and we're already seeing some organisations anonymising data and collating it to see what insights can be gained from that.
[00:58:01] I think institutes of sport are going to have to look at, okay, are we comfortable sharing our data with this organisation? And how is that going to be used, when potentially our major competitors could be using that data as well? Are the insights we're going to gain worth the risk of sharing that competition IP? Because if you look at some of our smaller sports, which may only have six athletes on the program, it's going to take you years and years to learn much about them from a monitoring or performance point of view. Even in our bigger sports, where they might have 40 or 50 athletes on scholarship - I mean, you know statistics better than me, Sam - that's not a lot of athletes to gather insights on. So I think you're looking at how we can collaborate with other organisations to learn, but still protect our own IP and our own performance advantage. Those are going to be challenges for institutes to reconcile, and for professional sports too.
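For a rough sense of Steve's sample-size point, here is a small Python sketch comparing the 95% confidence interval around a correlation observed in 6 athletes versus 50, using the standard Fisher z transformation; the observed r = 0.5 is a hypothetical number:

import math

def correlation_ci(r, n, z_crit=1.96):
    # Fisher z transform; the standard error of z is 1 / sqrt(n - 3).
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

for n in (6, 50):
    lo, hi = correlation_ci(r=0.5, n=n)
    print(f"n = {n:2d}: observed r = 0.50, 95% CI ({lo:+.2f}, {hi:+.2f})")

# With n = 6 the interval spans roughly -0.52 to +0.93, consistent with
# anything from a moderate negative to a near-perfect positive relationship.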
[00:58:58] Sam Robertson: Yeah, absolutely, I would agree with that. I know there are pockets of work being done around the world on that topic, but we're a little bit late to the party as an industry, I think. Not that we're on our own there - it's caught a lot of industries off guard, and sport's definitely one of them. With that in mind, Steve McCaig, thank you so much for joining us on the show, it was a real pleasure.
[00:59:17] Steve McCaig: Yeah cheers, thanks a lot, Sam.
Final Thoughts
[00:59:25] Sam Robertson: And now, some final thoughts from me on today's question. As with so many aspects of sport, athlete tracking and monitoring continues to be somewhat of a runaway train, moving more quickly than stakeholders can keep up with.
[00:59:38] There's a clear need to pause and reflect on the actual purpose of the information we're collecting. Is the data going to help coaches and practitioners deliver the best possible service and duty of care to the athlete? How does it serve our bottom line? Sure, more data can make a high-performance program look leading-edge or sophisticated, but its intended use needs to be clearly articulated before it's rolled out. Further to that, its underlying validity should be established - old news, I know, but it seemingly still bears repeating. As the commercial battle heats up among companies offering new and improved forms of tracking and monitoring, so too will questions of quality, privacy and ownership.
[01:00:19] There's no doubting that new tools such as computer vision have the potential to describe athletes' actions in currently unrivalled detail. However, the inner workings - the intent behind why and when athletes behave the way they do - remain largely phenomena described or judged by manual, human means. It does make one wonder whether there is still a place for the humble conversation.
[01:00:43] So, with all of that in mind, how much is too much? I'm optimistic that sports science can be key to answering this question, both now and into the future. Whether it's conceptual models implemented to recognise saturation points in information collection, or checks on overall data quality, in this area as much as any, sports science needs to be leading the way. And of course, if all else fails, we could always ask the athlete.
[01:01:10] I'm Sam Robertson, and this has been One Track Mind.
Outro
[01:01:15] Lara Chan-Baker: One Track Mind is brought to you by Track and Victoria University. Our host is Professor Sam Robertson and our producer is Lara Chan-Baker - that's me!
[01:01:24] If you care about these issues as much as we do, please support us by subscribing, leaving a review on iTunes, and recommending the show to a friend. It only takes a minute, but it makes all the difference.
[01:01:36] If you want more where this came from, follow us on Twitter @trackvu, on Instagram @track.vu, or just head to trackvu.com. While you're there, why not sign up for our newsletter? It's a regular dose of sports science insights from our leading team of researchers, with links to further reading on each episode topic.
[01:01:56] Thank you so much for listening to One Track Mind. We will see you soon.