I believe I heard @Nate_Pearson talk about Normalized Power Personal Records on the podcast. Any update on this or where it may be on the list of features in development?
I would love to be able to track this aspect of my power as well - particularly as I’ve been pretty happy with my NP on many of my outdoor rides this season!
I’d like this too. It’s a much more CPU-intensive calculation, and that’s why it wasn’t done with the regular PRs at first.
Question: How short should we calculate NP for? Coggan says don’t show it for anything less than 20 minutes. I feel like people wouldn’t like that. I was thinking 5 minutes.
This makes sense to me too. Certainly fewer time intervals than average power.
I would actually be fine with starting NP at 20 minutes given Dr. Coggan’s view…but I also agree that many would want to start it earlier, so starting at 5 minutes seems reasonable.
Whatever is easier for you guys! My assumption is that fewer time intervals would be easier to manage, but I clearly have no idea of the back-end implications.
For record-chart purposes, I think the main thing that limits how short you can compute NP for is the fact that for short time intervals, the 30-second smoothing applied when computing NP starts pulling NP below average power. With some simple test cases, that effect seems to become small somewhere in the vicinity of 6 minutes.
One thing you might want to consider is, instead of showing a graph of NP records, show max(AvgP, NP). That lets you have it be well-behaved at all time scales. Should be easy enough to mock up internally once you have a NP-record graph. You could even come up with your own trademarked term for it.
rolling_average = 30 second rolling average
rolling_avg_powered = rolling_average^4
avg_powered_values = average of rolling_avg_powered
NP = avg_powered_values^0.25
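For anyone who wants to play with it, that pseudocode translates to a short runnable sketch. This is just my own illustration in plain Python, assuming one power sample per second:

```python
def normalized_power(power, window=30):
    """NP per the pseudocode above: 30s rolling average,
    raise to the 4th power, average, take the 4th root."""
    # 30-second rolling average (one sample per second assumed)
    rolling = [sum(power[i:i + window]) / window
               for i in range(len(power) - window + 1)]
    # fourth-power mean, then fourth root
    return (sum(r ** 4 for r in rolling) / len(rolling)) ** 0.25
```

For steady power NP equals average power; the 4th-power weighting only pulls it above average when the ride is variable.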
I think if you were to do a strict ‘compute NP for every applicable bin at every second’ it would be overly expensive. You would effectively be forced to re-compute the PB chart for every second of the workout, so all of a sudden computing bests for a 3 hour workout is 11,000x more expensive.
Looking at it, it seems like there should be some shortcuts that make it cheap by not running the whole algorithm at every point. But I would need to spend time actually implementing it or doing the math to show it’s correct. They don’t pay me enough for that.
I’m still thinking it’s just as expensive as making the PR chart for average power.
Finding the maximum k-second average of a function is O(N). The average-power PR chart for a single ride is that maximum k-second average for every value of k. I don’t know offhand if there’s a clever way of making that cheaper than O(N^2). Regardless, TrainerRoad has some algorithm, call it make_ride_pr_chart, that computes the maximum-averages PR chart R(k) from the ride power function P(t).
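The O(N) part is easy to see with prefix sums. A sketch (again my own illustration, nothing to do with TrainerRoad’s actual code):

```python
def max_k_second_avg(power, k):
    """Best k-second average of a ride in O(N) via prefix sums."""
    # prefix[i] = total power of the first i samples
    prefix = [0]
    for p in power:
        prefix.append(prefix[-1] + p)
    # every k-sample window total is one subtraction
    best = max(prefix[i + k] - prefix[i] for i in range(len(power) - k + 1))
    return best / k
```

Running this once per k is what gives the naive O(N^2) chart.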
Computing the NP PR chart is the same problem. It’s just applied to a different function, Q(t) = [ smooth_30_sec( P(t) ) ]^4. If you apply make_ride_pr_chart to Q(t) instead of P(t), you get the NP PR chart. (After you do x^0.25 on the result.)
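To make that concrete, here’s a hedged sketch of the substitution. The inner make_ride_pr_chart below is a naive O(N^2) stand-in for whatever TrainerRoad actually uses; the point is only that the same routine works on Q(t):

```python
def make_ride_pr_chart(values):
    """Stand-in PR routine: best k-sample average for every k (naive O(N^2))."""
    prefix = [0]
    for v in values:
        prefix.append(prefix[-1] + v)
    n = len(values)
    return [max(prefix[i + k] - prefix[i] for i in range(n - k + 1)) / k
            for k in range(1, n + 1)]

def np_pr_chart(power, window=30):
    """NP PR chart: run the average-power PR machinery on
    Q(t) = smooth_30_sec(P(t))^4, then take the 4th root of each record."""
    rolling = [sum(power[i:i + window]) / window
               for i in range(len(power) - window + 1)]
    q = [r ** 4 for r in rolling]
    return [r ** 0.25 for r in make_ride_pr_chart(q)]
```

Note the chart for Q is indexed in 30-second-smoothed samples, so entry k corresponds to roughly a (k+29)-second NP window.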