
Any Golfers Good at Math






Originally Posted by mdl

@zeg,

However, if you consider the same number of tournaments for each player (say best 4 for each player, instead of at least 4 but up to 8 if you participated in that many), then the weighted average and weighted sum should maintain rankings, no?

Assuming you want to allow members who don't have time to participate in all or almost all the tournaments to have a shot at player of the year, then obviously weighted average (the correct one, not weighted sum divided by unweighted tournament count, which is sort of a meaningless quantity) is the best measure.  I'm just wondering about the rankings question.


I'm going to take a stab at this... I'm going to agree with you if you are saying that weighted average and weighted sum are both legitimate methods (when used in the proper circumstances) but that they don't always produce the same result.  Here is an example:

Tournament   Weight   Player A   Player B
    10        1.0       DNP         80
     9        0.9       DNP         80
     8        0.8       100         80
     7        0.7       100         80
     6        0.6       DNP         80
     5        0.5       100         80
     4        0.4       100         80
     3        0.3       100         80
     2        0.2       100         80
     1        0.1       100         80

The weighted average for A is 100 and for B is 80 (making A the winner).  However, if you only take the 4 best weighted totals, B is ahead:

        Player A   Player B
           80         80
           70         72
           50         64
           40         56
Total     240        272

A still looks like the better player, as he beat B in all 7 events they played together. But had the Committee announced ahead of time that they were only going to take your best 4 weighted point scores and total them to determine the winner, B would be it, because A missed the 2 most valuable events.  I think it was outcomes like this that caused the committee to move away from total points to "average" points to determine the rankings.  The problem is that they currently use a weighted numerator and an unweighted denominator to compute the "average".  As you and others have said, this is not really an average at all.
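To make the arithmetic concrete, here is a minimal Python sketch of the three quantities being argued about, using the numbers from the example above (the function names are mine, not anything the committee uses):

```python
# Each player is a list of (weight, points) pairs; DNP events are simply absent.
player_a = [(0.8, 100), (0.7, 100), (0.5, 100), (0.4, 100),
            (0.3, 100), (0.2, 100), (0.1, 100)]
player_b = [(w / 10, 80) for w in range(10, 0, -1)]

def true_weighted_average(results):
    """Weighted points divided by the sum of the weights."""
    return sum(w * p for w, p in results) / sum(w for w, _ in results)

def pseudo_average(results):
    """Weighted points divided by the unweighted event count
    (the 'meaningless quantity' criticized in this thread)."""
    return sum(w * p for w, p in results) / len(results)

def best_n_weighted_sum(results, n=4):
    """Total of the n best weighted point scores."""
    return sum(sorted((w * p for w, p in results), reverse=True)[:n])

for name, r in [("A", player_a), ("B", player_b)]:
    print(name, true_weighted_average(r), pseudo_average(r),
          best_n_weighted_sum(r))
# A: 100.0, ~42.9, 240.0   (the true weighted average says A)
# B:  80.0,  44.0, 272.0   (the other two measures say B)
```

Note that the weighted-sum-over-event-count pseudo-average also favors B here (44.0 vs. ~42.9), for exactly the reason described above: A's DNPs in the two most heavily weighted events.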

Mizuno MP-52 5-PW, Cobra King Snake 4i
TaylorMade R11 driver, 3W & 5W, Vokey 52, 56 & 60 wedges
SeeMore putter




Originally Posted by mdl

@zeg,

Obviously using weighted sum instead of weighted average introduces an advantage to the player who plays in more tournaments, with the advantage increasing when player A plays in more later tournaments than player B.  So this isn't really measuring average performance in tournaments which you participated in, with increasing weight given to later tournaments which presumably had more difficult fields, or at least are considered more important.

However, if you consider the same number of tournaments for each player (say best 4 for each player, instead of at least 4 but up to 8 if you participated in that many), then the weighted average and weighted sum should maintain rankings, no?

Assuming you want to allow members who don't have time to participate in all or almost all the tournaments to have a shot at player of the year, then obviously weighted average (the correct one, not weighted sum divided by unweighted tournament count, which is sort of a meaningless quantity) is the best measure.  I'm just wondering about the rankings question.


Yes, the weighted sum does give an advantage to a player who plays in more events.  That may or may not be desirable, depending on what you want the rankings to reflect.  On your second point: as MEfree points out, the two methods don't necessarily agree.  That's not necessarily a problem, though.  I think they do maintain rankings in the sense that if two players played in the same events and A consistently beat B, A will come out ahead.  If they don't play in the same events, it's hard to define a unique ranking, and it's not the case that A's scores being higher than B's will ensure A outranks B in the weighted-sum case.

The real problem I see with the weighted average is this: if you get off to a hot start, say by winning the first four events, your best strategy for winning the award is to skip the rest of the season.  That leads me to think a weighted sum is preferable, but it still has the problem you allude to.  Assuming players don't game the system, I like the weighted average, but I think you need to couple it with a rule that you must play in the season-ending tournament to qualify, or something similar, just to avoid someone romping in the early rounds and then retiring.
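A quick sketch of that gaming incentive, with hypothetical numbers (the weighted-average definition is the same one used in the earlier sketch):

```python
def weighted_average(results):
    # Weighted points divided by the sum of the weights.
    return sum(w * p for w, p in results) / sum(w for w, _ in results)

hot_start = [(0.1, 100), (0.2, 100), (0.3, 100), (0.4, 100)]  # won the first four events
print(weighted_average(hot_start))                # 100.0: skip the rest, stay perfect
print(weighted_average(hot_start + [(0.5, 80)]))  # ~93.3: even a solid finish drags it down
```

Because the later events carry the largest weights, any result below your current average hurts more the later it comes, so sitting out dominates.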

In the bag:
FT-iQ 10° driver, FT 21° neutral 3H
T-Zoid Forged 15° 3W, MX-23 4-PW
Harmonized 52° GW, Tom Watson 56° SW, X-Forged Vintage 60° LW
White Hot XG #1 Putter, 33"



@zeg

What do you think the averaging calculation should be if it were based on a two-year rolling period and used to determine seeding and eligibility for various events, with the following weightings?  Should the divisor be the sum of the weights of the events you play in (a true weighted average) or simply the number of events you play in (like B wanted in my first example)?  Assume that the committee is smart enough to set a minimum divisor so a guy can't just go out and win the first event, skip the next 2 years, and stay #1.  (A short sketch reproducing these weights follows the table.)

Weeks ago   Weight
1 1.00
2 1.00
3 1.00
4 1.00
5 1.00
6 1.00
7 1.00
8 1.00
9 1.00
10 1.00
11 1.00
12 1.00
13 1.00
14 0.9891
15 0.9783
16 0.9674
17 0.9565
18 0.9457
19 0.9348
20 0.9239
21 0.9130
22 0.9022
23 0.8913
24 0.8804
25 0.8696
26 0.8587
27 0.8478
28 0.8370
29 0.8261
30 0.8152
31 0.8043
32 0.7935
33 0.7826
34 0.7717
35 0.7609
36 0.7500
37 0.7391
38 0.7283
39 0.7174
40 0.7065
41 0.6957
42 0.6848
43 0.6739
44 0.6630
45 0.6522
46 0.6413
47 0.6304
48 0.6196
49 0.6087
50 0.5978
51 0.5870
52 0.5761
53 0.5652
54 0.5543
55 0.5435
56 0.5326
57 0.5217
58 0.5109
59 0.5000
60 0.4891
61 0.4783
62 0.4674
63 0.4565
64 0.4457
65 0.4348
66 0.4239
67 0.4130
68 0.4022
69 0.3913
70 0.3804
71 0.3696
72 0.3587
73 0.3478
74 0.3370
75 0.3261
76 0.3152
77 0.3043
78 0.2935
79 0.2826
80 0.2717
81 0.2609
82 0.2500
83 0.2391
84 0.2283
85 0.2174
86 0.2065
87 0.1957
88 0.1848
89 0.1739
90 0.1630
91 0.1522
92 0.1413
93 0.1304
94 0.1196
95 0.1087
96 0.0978
97 0.0870
98 0.0761
99 0.0652
100 0.0543
101 0.0435
102 0.0326
103 0.0217
104 0.0109
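For what it's worth, the table follows a simple rule: full value for the 13 most recent weeks, then equal decrements of 1/92 per week down to 1/92 at week 104. A minimal sketch (function names are mine) that reproduces the schedule and states the two candidate divisors:

```python
def owgr_style_weight(weeks_ago):
    """Full weight for 13 weeks, then equal decrements of 1/92 per week."""
    return 1.0 if weeks_ago <= 13 else (105 - weeks_ago) / 92

# Spot-check against the table above (table values are rounded to 4 places).
assert abs(owgr_style_weight(14) - 0.9891) < 5e-5
assert abs(owgr_style_weight(59) - 0.5000) < 5e-5
assert abs(owgr_style_weight(104) - 0.0109) < 5e-5

# The two divisors in question, for results given as (weight, points) pairs:
def true_weighted_average(results):   # divisor = sum of the weights
    return sum(w * p for w, p in results) / sum(w for w, _ in results)

def per_event_average(results):       # divisor = number of events (B's preference)
    return sum(w * p for w, p in results) / len(results)
```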

Mizuno MP-52 5-PW, Cobra King Snake 4i
TaylorMade R11 driver, 3W & 5W, Vokey 52, 56 & 60 wedges
SeeMore putter



I think the "committee" could have eliminated the problem if they had informed the players that they wanted to add emphasis on the later rounds.  In this case the last round is 10 times more important to winning than the first round.  The weighted average is just a computational method to accomplish that goal of making later rounds more important to being club player of the year than are the earlier rounds.  I am not arguing fairness but the technique is common and that is why you can get into the LPGA "hall of fame" with fewer total  wins if some of your wins are majors.

Butch



To determine eligibility, I'd say you probably want the weighted average, so that someone who has simply played in many events doesn't get a big advantage.  For a club ranking, I think this one makes the most sense (though, to be honest, I'd be inclined to use a straight unweighted average to reflect the whole year equally; if some events are not competitive enough to be useful, I'd just drop those from the ranking).

I looked up the OWGR method and, somewhat surprisingly, they use the method that Player B is endorsing: accumulated points according to a decreasing scale (in fact, the one that you posted, as you probably know), averaged over the number of events played, with a minimum divisor of 40 and a maximum of the player's last 56 (soon to be 54) events.  I was a bit puzzled, but I think the explanation has two parts.  First, they are not measuring best performance over any specific period; rather, they are just trying to capture who is "best," so some mix of timescales heavily weighted toward recent performance is OK.  Second, most players play in similar numbers of events over similar time periods, so anomalies like the one at your club are unlikely.
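A minimal sketch of the divisor rule as described (the clamp bounds come from the quoted OWGR rules; the function names are mine):

```python
def owgr_divisor(events_played, min_divisor=40, max_divisor=56):
    """Divisor is the number of events played, clamped to [40, 56]."""
    return max(min_divisor, min(events_played, max_divisor))

def owgr_average(total_decayed_points, events_played):
    # Total age-decayed points divided by the clamped event count.
    return total_decayed_points / owgr_divisor(events_played)
```

So a player with only 25 events still divides by 40, while one with 70 events divides by 56.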

In the bag:
FT-iQ 10° driver, FT 21° neutral 3H
T-Zoid Forged 15° 3W, MX-23 4-PW
Harmonized 52° GW, Tom Watson 56° SW, X-Forged Vintage 60° LW
White Hot XG #1 Putter, 33"



I may not be able to golf worth a damn, but I'm decent at mathematics; I deal with this stuff on an almost daily basis at work.  Black Death is right on.  The definition does not have to explicitly specify weighted averages, because it's implied by the use of a weighting system.  Otherwise, people who play more events will be penalized relative to those who play fewer events, simply because they played more.  Divide by the weights, not the events.  Player A wins, which is intuitively obvious given his frequency of play and consistently higher per-event points through the entire season.

Driver:  Callaway Diablo Octane iMix 11.5*
Fairway: Cobra Baffler Rail F 3W & 7W
Irons:  Wilson Ci
Wedges:  Acer XB (52* & 56*)
Putter:  Cleveland Classic #10 with Winn Jumbo Pistol Grip





Originally Posted by zeg

I looked up the OWGR method and, somewhat surprisingly, they use the method that Player B is endorsing: accumulated points according to a decreasing scale (in fact, the one that you posted, as you probably know), averaged over the number of events played, with a minimum divisor of 40 and maximum of the players last 56 (soon to be 54) events.  I was a bit puzzled, but I think the explanation is two parts.  First, they are not measuring best performance over any specific period, rather are just trying to capture who is "best," so some mix of timescales heavily weighted toward recent performance is ok.  Second, most players play in similar numbers of events over similar time periods, so anomalies like that at your club are unlikely.

I agree that the Player B/OWGR method ranks most of the players close to the same position as the weighted-average method most of the time, because many of the players play similar schedules.  In fact, if all the players played the same schedule, then the ranking order would be the same with both methods, and the only anomaly would be that the Player B/OWGR method could produce an "average" that is lower than a player's lowest point total.  That is unlikely to happen in practice because no players are that consistent, but if you had a player who got 10 points in each event, his "average" using the B/OWGR method would likely fluctuate between 5 and 7, depending on the time of year and his exact schedule.
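Here's a hedged sketch of that anomaly, assuming a hypothetical perfectly consistent player who earns exactly 10 points in every event and plays one event every other week over the two-year window (the weight schedule is the one posted above):

```python
def owgr_style_weight(weeks_ago):
    # Full weight for 13 weeks, then equal decrements of 1/92 per week.
    return 1.0 if weeks_ago <= 13 else (105 - weeks_ago) / 92

event_weeks = range(1, 105, 2)   # 52 events, evenly spread over 104 weeks
decayed_total = sum(10 * owgr_style_weight(w) for w in event_weeks)
print(decayed_total / len(event_weeks))   # ~5.67: an "average" below his worst result (10)
```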

Originally Posted by Topper

I may not be able to golf worth a damn, but I'm decent at mathematics.  I deal with this stuff on an almost daily basis at work.  Black Death is right on.  The definition does not have to explicitly specify weighted averages because it's implied by the use of a weighting system.  Otherwise, people who play more events will be penalized relative to those who play fewer events....simply for the reason that they played more.  Divide by the weights and not the events.  Player A wins which is intuitively obvious given the frequency of play and consistently higher per-event points through the entire season.

Yes, but to clarify, the bias is not against playing a lot of recent events, only against having played a lot of older events (which all events eventually become).  The most telling thing about this is that a golfer's OWGR "average" is typically hurt by having played an event, even one he did well in or won, once that event gets close to the end of the two-year rolling period.

The OWGR's failure to use a true weighted average favored Westwood over Donald this past week (allowing Westwood to hold onto the "official" #1 even though Donald had the higher weighted average).  This coming week there is still a very slight bias against Donald (one he can overcome with a better finish than Westwood at the BMW PGA).  After that, the bias favors Donald for 5 weeks before shifting back to Westwood.  Players who play the exact same schedule will have no bias relative to each other; it is only when players play different weeks that the bias is created by the B/OWGR method.

Mizuno MP-52 5-PW, Cobra King Snake 4i
TaylorMade R11 driver, 3W & 5W, Vokey 52, 56 & 60 wedges
SeeMore putter



So, by these rules, it's better to play only the last 4 events and finish 3rd in each than it is to win 9 but skip one somewhere in the middle?  Seems like there's a better way to do it.



It is situations like these that make people change the rules of tournaments, seasons, etc. Clearly your league's method of working out 'player of the year' is severely flawed. Maybe they should ditch the weighting system and go with a straight average score.

Maybe they should go with a gross points score? The more you play (getting people out there every week), the better your chances are of winning.

There's no weighting in the FedEx Cup or in Orders of Merit anywhere in the world. Maybe this is why?



Originally Posted by zeg

I looked up the OWGR method and, somewhat surprisingly, they use the method that Player B is endorsing: accumulated points according to a decreasing scale (in fact, the one that you posted, as you probably know), averaged over the number of events played, with a minimum divisor of 40 and a maximum of the player's last 56 (soon to be 54) events.  I was a bit puzzled, but I think the explanation has two parts.  First, they are not measuring best performance over any specific period; rather, they are just trying to capture who is "best," so some mix of timescales heavily weighted toward recent performance is OK.  Second, most players play in similar numbers of events over similar time periods, so anomalies like the one at your club are unlikely.

What do you mean by "accumulated points according to a decreasing scale"? Each tournament has its own point value, calculated from a number of factors specific to that instance of the tournament, such as the strength of the field. They're not really using Player B's method, because they're not using *weights* for the scores; they're using an honest-to-goodness average. Here's the difference:

1) The idea behind weights is that there is a baseline against which all other instances are measured and scaled down. In this case, you scale down the worth of a tournament by giving the score from it a smaller influence on the final score. The key word is "influence": the final score is simply influenced less by this score than it would be by another score. The tournament *itself* is worth less than a higher-weighted tournament. Taken individually, the ratio of each tournament's points to its weight is the same, but when accumulated they don't all have the same influence on the final number.

2) In the OWGR, each tournament has equal *value*, but the availability of points differs. The tournaments are all equally important, but how many points you can score in one fluctuates based on factors specific to that tournament. Every finishing place in every tournament across the entire year has a point value assigned to it that can be compared against any other finishing position in any other tournament.

3) Player B's method takes weighted values in the numerator and then divides by a constant value in the denominator. For Player B's method to work, they would have to assign the tournament points independently, *without weighting*. They could choose the same numbers (10, 20, 30, ..., 100), but that would be coincidence; in fact, they'd be unlikely to choose such numbers, for the problems outlined in this thread. The OWGR gets around that problem by making the number of available points reflect factors about the tournament itself (which is completely different from the "importance" of the tournament). The key is that those are individually assigned numbers that have no relation to a base standard.

As soon as you start using a word like "weight" in a calculation, you can't mix weighted scores with absolute-value tournaments. You're taking numbers from different places, and they won't mix properly. Note that the statement in (2), that every finishing position's point value can be compared against any other finishing position in any other tournament, doesn't hold when using weights. Player B can add any numbers he wants and divide them any way he wants, yes, but those numbers have to mean something, and in the OWGR case they have a different meaning than in Player B's tournament. You can multiply any two numbers together in "F = ma", but if one isn't a mass and the other an acceleration, the result is meaningless. The same principle applies here: it's all about where the numbers *came* from. Weights have a certain concept associated with them; if you don't divide by the sum of the weights, then you really aren't using weights, and hence you aren't following the promised grading scale. That assumes they promised they'd use weights. Maybe they didn't say a thing about it, in which case they can do this, because they can claim they weren't using weights to begin with, even though it's obvious that's what they were trying to do.

"Golf is an entire game built around making something that is naturally easy - putting a ball into a hole - as difficult as possible." - Scott Adams

Mid-priced ball reviews: Top Flight Gamer v2 | Bridgestone e5 ('10) | Titleist NXT Tour ('10) | Taylormade Burner TP LDP | Taylormade TP Black | Taylormade Burner Tour | Srixon Q-Star ('12)




Originally Posted by B-Con

What do you mean by "accumulated points according to a decreasing scale"? Each tournament has its own point value, calculated from a number of factors specific to that instance of the tournament, such as the strength of the field.

I mean what I say.  From http://www.owgr.com/about_us/default.sps?iType=425

Quote:
The World Ranking Points for each player are accumulated over a two year “rolling” period with the points awarded for each event maintained for a 13-week period to place additional emphasis on recent performances – ranking points are then reduced in equal decrements for the remaining 91 weeks of the two year Ranking period.  Each player is then ranked according to his average points per tournament, which is determined by dividing his total number of points by the tournaments he has played over that two-year period. There is a minimum divisor of 40 tournaments over the two year ranking period and a maximum divisor of a player’s last 56 events (54 from June 26 2011).

Only the most recent 13 weeks' tournaments count for their "face value" points.  The others are scaled down by a weighting factor (which, I believe, follows the schedule that MEfree posted a couple of posts above).  They phrase it slightly differently, but this is exactly the method that B is pushing, for the special case that every tournament is worth 100 points (and with a different weighting schedule).  Thus, if you don't play for a while, your points drift steadily toward zero as the weights drop.
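To see the drift numerically, here is that decay applied to a single 100-point result as it ages (the weight function is my reconstruction of the schedule MEfree posted, so treat it as an approximation):

```python
def owgr_style_weight(weeks_ago):
    # Full weight for 13 weeks, then equal decrements of 1/92 per week.
    return 1.0 if weeks_ago <= 13 else (105 - weeks_ago) / 92

for weeks_ago in (1, 13, 14, 59, 104):
    print(weeks_ago, round(100 * owgr_style_weight(weeks_ago), 1))
# 1 -> 100.0, 13 -> 100.0, 14 -> 98.9, 59 -> 50.0, 104 -> 1.1
```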

These really *are* weights, independent from the values assigned to the tournaments.  They depend on how long ago a particular tournament occurred, not what tournament it was.  It's the "duck test": regardless of the vocabulary used, if it quacks like a duck, it's a duck.

So they are taking a straight average of down-weighted scores.  I think it's an acceptable method for their purpose (although there are certainly anomalies; sorry MEfree, I haven't had a chance to read through your PMs yet, I haven't overlooked them), but I don't think it works well for the player of the year award.

In the bag:
FT-iQ 10° driver, FT 21° neutral 3H
T-Zoid Forged 15° 3W, MX-23 4-PW
Harmonized 52° GW, Tom Watson 56° SW, X-Forged Vintage 60° LW
White Hot XG #1 Putter, 33"


