Sunday, August 30, 2020

Fulgaz App: Validating Model Prediction & Performance Results


After my self-inflicted Poor Man's Tour de France that ended on 22nd August, I took a week of recovery and plunged into the Fulgaz app's 3-week fundraising campaign called the French Tour. This "curated" Tour campaign features most of the celebrated climbs of the Tour de France in the high Alps, along with other famous circuits in and around France. With a real-time leaderboard and 381 virtual kilometres with 14,000 m of climbing over 21 stages, it's a challenging event to keep my mind occupied while the actual Tour de France plays out.

When riding the Tour in Fulgaz, you see beautiful HD or 4K video of the course taken by a volunteer rider with a high-resolution camera. The volunteer obviously rode the course at their own capacity, so the playback speed of the footage can be set to "reactive": it speeds up or slows down depending on how your speed compares with the recording's (for example, 1x, 0.9x, 0.8x or >1x).
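To make the mechanism concrete, here is one plausible way such a "reactive" multiplier could be computed. This is my guess at the concept only, not Fulgaz's actual algorithm, and the clamp bounds are invented:

```python
def playback_rate(rider_speed, recorded_speed, lo=0.5, hi=2.0):
    """A plausible 'reactive' playback multiplier (not Fulgaz's actual code):
    the ratio of your computed speed to the original rider's, clamped to a
    sane range so the video never freezes or becomes unwatchably fast."""
    if recorded_speed <= 0:
        return lo
    return max(lo, min(hi, rider_speed / recorded_speed))
```

So a rider moving at 90% of the recorded speed would see the footage at 0.9x, while someone far stronger than the volunteer would be capped at the upper bound.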

Being new to Fulgaz, I was quite impressed with the app's built-in features and sliders to "tune" nearly everything that has an appreciable impact on the ride - for example, system weight, rolling resistance, drag area, even wind speed and direction. The app also loads extremely fast, in about a second on my Windows 10 PC with 32 GB of RAM. What's more, you can download all the high-resolution videos to stave off any buffering troubles; a download takes about 15 minutes for a full-HD video on my modest internet connection.

All this was fascinating, given that a) I'm quite new to indoor cycling apps, and b) Zwift, another leading indoor cycling app of which I'm a paying customer, keeps many of these variables under tight secrecy, so effectively you have little clue what is driving the model.


Some time ago, I built myself a cycling performance model for personal use. It is based on the Martin et al. power model for cycling, which I like to use for personal and coaching-related estimation. I can build as many segments of a course as I need in the model and tweak variables to inspect how they change my performance. This is handy for climbing and TT predictions, and even drafting simulations.
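For readers unfamiliar with it, the core of a Martin-style model is a steady-state power balance: aerodynamic drag, rolling resistance and gravity, divided by drivetrain efficiency. A minimal sketch follows, with illustrative parameter values rather than my actual tuning:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def power_required(v, grade, mass=80.0, crr=0.003, cda=0.32,
                   rho=1.0, eta=0.976, v_wind=0.0):
    """Power (W) needed to hold ground speed v (m/s) on a given grade.

    grade is rise/run (0.08 for 8%); eta is drivetrain efficiency.
    All default values are illustrative assumptions, not Fulgaz's internals.
    """
    theta = math.atan(grade)
    v_air = v + v_wind                       # headwind positive
    f_aero = 0.5 * rho * cda * v_air ** 2    # aerodynamic drag force
    f_roll = crr * mass * G * math.cos(theta)  # rolling resistance
    f_grav = mass * G * math.sin(theta)        # gravity along the slope
    return (f_aero + f_roll + f_grav) * v / eta
```

The full published model also carries wheel-bearing friction and kinetic-energy terms; for steady climbing those are small, so I've omitted them in this sketch.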

For Stage 3 of the French Tour, registrants had to climb the Col du Galibier, roughly 1,200 m of vertical gain. I was interested in how my performance results in Fulgaz would compare with the model's predictions for the same power input. So I did the Galibier this morning, staying completely aerobic, sweating buckets, power held to steady perfection with a Computrainer erg controller and a second Powertap pedal (those curious how I used the Computrainer with Fulgaz can send me an email or comment). Once I had my results, I fed the same powers into the model, along with the same driving variables I had entered into the app. Results are below.
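Feeding powers into the model means inverting the power-balance equation for speed, since speed appears non-linearly. A sketch of how that can be done per segment, using bisection and assumed parameter values (not my actual inputs):

```python
import math

G = 9.81  # m/s^2

def power_required(v, grade, mass=80.0, crr=0.003, cda=0.32,
                   rho=1.0, eta=0.976):
    """Steady-state power demand (W) at ground speed v (m/s); illustrative defaults."""
    theta = math.atan(grade)
    drag = 0.5 * rho * cda * v ** 2
    roll = crr * mass * G * math.cos(theta)
    grav = mass * G * math.sin(theta)
    return (drag + roll + grav) * v / eta

def speed_for_power(p_target, grade, lo=0.1, hi=30.0, **kw):
    """Invert the power model for steady speed (m/s) by bisection;
    power demand rises monotonically with speed, so bisection is safe."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if power_required(mid, grade, **kw) < p_target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def segment_time(p_watts, grade, length_m, **kw):
    """Predicted time (s) to ride a segment of given length at constant power."""
    return length_m / speed_for_power(p_watts, grade, **kw)
```

Summing `segment_time` over each kilometre of the climb, with the ridden average power for that kilometre, gives the model's predicted total time to set against the app.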


Given the assumptions I used (stated in the graphic below), the app's performance results and the model predictions converged very well, which I'm pleased with. This gives me further trust in the app. Note how the positive and negative errors negate each other over time. Also note that the errors are generally within 5%, and the average error across the 17.9 km of segments I manually built is -0.26%.

(Click to view) "Virtual" performance in the Fulgaz app compared to model predictions for given power. The ride was done on a Computrainer using the Fulgaz app. Strava results:

I believe the errors are partly due to :

1) The chosen granularity of the course, which is one kilometre at a constant gradient. In reality, the road might step up or step down several times within a kilometre. However, for my purposes, this suffices.

2) I did not ride the km segments at constant power. In fact, I modulated it based on how I felt.

3) I've assumed a constant rolling resistance coefficient of 0.003 for every segment. If the Fulgaz app changes rolling resistance in real time based on the segment you're on, that could affect the speed slightly.
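The cancellation of positive and negative errors is easy to see numerically. With made-up per-segment errors (these are not my actual numbers), the signed mean can be tiny while the typical per-segment miss is much larger:

```python
# Hypothetical per-segment time errors (%) between app and model,
# invented purely to illustrate cancellation of signed errors.
errors = [2.1, -3.0, 1.4, -0.9, 4.2, -3.6, 0.5, -1.1]

mean_signed = sum(errors) / len(errors)               # errors cancel: near zero
mean_abs = sum(abs(e) for e in errors) / len(errors)  # typical per-segment miss
```

This is why a small average error (like my -0.26%) should be read alongside the per-segment spread: the former measures bias, the latter measures segment-level accuracy.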

Also note that I have not applied altitude power attenuation ("de-rate") in the model, because this is a virtual environment. However, we know from tests of both runners and cyclists that aerobic capacity drops at moderate to high altitudes, with the magnitude depending on acclimatisation levels and individual attributes. So in reality, actual times are very likely to be slower. How much slower is another conversation; I hope to tackle that in an upcoming post.
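Purely to illustrate the shape such a de-rate could take (the magnitude is the subject of that upcoming conversation), here is a toy linear attenuation. The 6%-per-1000 m figure and the 1,000 m onset are rough illustrative assumptions, not measured values:

```python
def derated_power(p_sea_level, altitude_m, loss_per_km=0.06, onset_m=1000.0):
    """Toy aerobic de-rate: assumed linear power loss above an onset altitude.

    Both loss_per_km and onset_m are ballpark assumptions for illustration;
    real attenuation varies with acclimatisation and the individual.
    """
    excess_km = max(0.0, altitude_m - onset_m) / 1000.0
    return p_sea_level * (1.0 - loss_per_km * excess_km)
```

Under these assumed numbers, a rider holding 250 W at sea level would be down to roughly 225 W near the Galibier's ~2,642 m summit - enough to change a climb time meaningfully.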


The close agreement between my model and the actual performance on Fulgaz makes the Fulgaz app a reliable training tool, in so far as it is used for constant, steady-speed climbing (I have yet to test it for solo TT efforts against the wind). It also validates the Martin model (which has probably been done several times before by several people). When I shared this article with Mike Clucas of Fulgaz, he essentially confirmed that my reverse engineering closely matches the inputs that drive their model, at least in curated events such as the French Tour.

This post speaks generally to the need for indoor cycling training apps to make transparent to the customer what drives their in-game physics. If the in-game physics is opaque, predictions based on widely used open-source models can differ vastly from in-app performance.

If the variables that impact in-game performance are not transparent, you can't effectively do the "what-if" predictions that almost all cyclists make in real life ("if I ride with x equipment and/or shed a few pounds, how would that affect my performance?"). This does not help those who take their training and racing seriously and like to plan ahead for an event.
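As one concrete what-if: on a steep climb, aero drag is small relative to gravity, so a stripped-down model (gravity plus rolling resistance only) already shows how climb time scales almost linearly with system weight. The power, weights and grade below are made-up example numbers:

```python
import math

G = 9.81  # m/s^2

def climb_time(p_watts, mass_kg, grade, length_m, crr=0.003, eta=0.976):
    """Steady-climb time (s), neglecting aero drag (tolerable at climbing speeds).

    With drag dropped, resistive force scales with mass, so predicted time
    scales linearly with system weight - the essence of the what-if.
    """
    theta = math.atan(grade)
    force = mass_kg * G * (math.sin(theta) + crr * math.cos(theta))
    v = p_watts * eta / force  # steady speed from the power balance
    return length_m / v

baseline = climb_time(250.0, 82.0, 0.08, 18000.0)  # current system weight
lighter  = climb_time(250.0, 79.0, 0.08, 18000.0)  # after "shedding a few pounds"
```

With a transparent model, answering "what does 3 kg buy me?" is two function calls; with an opaque one, it's guesswork.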

One might argue that indoor cycling apps are built like "games", hence the physics can deviate to an extent simply because it is a game. But there can be impacts. Depending on the magnitude of the deviation, a host of things can be affected, ranging from perceived exertion and fatigue to CP and W' dynamics, and most importantly the nutritional needs that an app-based performance requires. Either way, if you can't predict something with physics, it's unverifiable, unpredictable and, might I add, possibly unstable.

If, on the other hand, all indoor cycling apps used a verifiable model, one could effectively standardize, minimize or account for a major source of variability, while the rest of the differentiation could be in the graphics, software performance and other perks unique to each app.

I look forward to actually climbing the beautiful Galibier in reality, if I'm lucky enough to put together my coin collection and go to France. Huge thanks to everyone at Fulgaz for keeping me entertained during this tumultuous time.

