
OWGR Biased Against PGA Tour Players


iacas


Quote:

Originally Posted by Nosevi

I read the PDF. I remain unconvinced :)

So all of that was to say that you don't believe what a 2012 study on the OWGR concludes?

Yep, because it's nonsense :)

In another thread I was told I was not properly reading this study; if I did, I'd see how true the title of this particular thread is. It was off topic in the other thread (the Ryder Cup discussion), so I posted my response here.

I read it properly - I'm not convinced because the whole premise of the thing is utterly flawed. It's based on a barking-mad, US-based ranking system that no one takes any notice of (outside the US possibly; not sure if those in the US give it any credence) and another ranking system that ignores a basic facet of the game when it comes to scoring ability vs round difficulty. It's clearly written with an agenda - you don't post your findings in an introduction unless you're setting out to make a given point.

Besides, I'm waiting for a new set of clubs to arrive - I've got way too much time on my hands :)

Pete Iveson


So if I understand you correctly, this study (and its conclusion) is partly based on a system that, in the current situation, puts Furyk (with no wins) above McIlroy (impressive year), and an important part of the study is the fact that scores on the PGA Tour are lower than on the ET (however, players that play on both tours also score lower on the PGA Tour than on the ET)?

~Jorrit


The author (or authors) uses the Golfweek rankings to 'prove' his theory and his own ranking system. They look back over the last year at the performance of players to rank them, and yes, they rank Furyk ahead of Rory. The author also says that the difficulty of a course or tournament round is not taken into consideration - just the relative scores of one player to another to 'estimate' their skill differential. But as it can be shown that, on average, the pros who play both the ET and the PGA Tour score higher on the ET, it follows that the rounds they play there are in some way more difficult. My point is that, this being the case, you would expect a slightly bigger score differential in more difficult conditions between the most talented and less talented golfers, but the author makes no allowance for this. As the scoring average in the pro ranks is so tightly packed (one shot can cover 100 ranking places on a single tour), when you multiply this across all the tours, being half a shot off on his 'skill rating' equates to far more than his claimed bias against PGA Tour players in the OWGR.

Pete Iveson


Cool, thanks for explaining. I tried to read the PDF as well, but it's hard to understand such a document fully, what with the language barrier and all. When it comes to statistics, you can't just say 'facts are facts'; you always have to look at who provides them, with what purpose, and - most importantly - how they can be interpreted.

~Jorrit


Cool, thanks for explaining. I tried to read the PDF as well, but it's hard to understand such a document fully, what with the language barrier and all. When it comes to statistics, you can't just say 'facts are facts'; you always have to look at who provides them, with what purpose, and - most importantly - how they can be interpreted.

The 'purpose' is my take on it. But (IMO) when a US 'academic' working at a US institution writes a paper, announcing his conclusion in the introduction, using data entirely provided by a US organisation that is the very party you are 'proving' should benefit from a change (the PGA Tour), and backing it up with data from another US organisation whose data is blatantly and obviously ridiculous (Golfweek), I'd argue there's an agenda. He didn't wake up one day and think "I wonder if the OWGR system helps PGA Tour players?" and get a shock when he 'proved' the opposite.

Pete Iveson


The 'purpose' is my take on it. But (IMO) when a US 'academic' working at a US institution writes a paper, announcing his conclusion in the introduction, using data entirely provided by a US organisation that is the very party you are 'proving' should benefit from a change (the PGA Tour), and backing it up with data from another US organisation whose data is blatantly and obviously ridiculous (Golfweek), I'd argue there's an agenda. He didn't wake up one day and think "I wonder if the OWGR system helps PGA Tour players?" and get a shock when he 'proved' the opposite.

So, this article that you are disproving didn't actually provoke any changes in the OWGR, correct? So you wrote a long dissertation to disprove an article that has no real influence? Why don't you just send a letter to the people who wrote the article, instead of posting it on here where it has absolutely no use?



So, this article that you are disproving didn't actually provoke any changes in the OWGR, correct? So you wrote a long dissertation to disprove an article that has no real influence? Why don't you just send a letter to the people who wrote the article, instead of posting it on here where it has absolutely no use?

Discussing a golf subject on a golf forum is no use? I find it very interesting. In the Ryder Cup topic a discussion started about whether the ET is overrated in points compared to the PGA Tour, and this article came up as an argument to back that up. By request of one of the moderators it's continued in this topic. I think it can be an interesting debate, and if one uses this article as proof, it's perfectly fine for others to question that article/study. I'm very curious about serious responses from others and whether @Nosevi is making any wrong assumptions according to them. Interesting discussion.

~Jorrit


Quote:

Originally Posted by Nosevi

The 'purpose' is my take on it. But (IMO) when a US 'academic' working at a US institution writes a paper, announcing his conclusion in the introduction, using data entirely provided by a US organisation that is the very party you are 'proving' should benefit from a change (the PGA Tour), and backing it up with data from another US organisation whose data is blatantly and obviously ridiculous (Golfweek), I'd argue there's an agenda. He didn't wake up one day and think "I wonder if the OWGR system helps PGA Tour players?" and get a shock when he 'proved' the opposite.

So, this article that you are disproving didn't actually provoke any changes in the OWGR, correct? So you wrote a long dissertation to disprove an article that has no real influence? Why don't you just send a letter to the people who wrote the article, instead of posting it on here where it has absolutely no use?

No it didn't provoke any changes, possibly for the reasons I've given - it doesn't 'stack up'. I would have replied to Erik on the thread where he said it 'proved' that PGA players are better than their OWGR shows, but was asked not to. A thread about precisely this study seemed a relatively sensible place to post an alternative view of it. If you're not interested, why not exercise your right not to read it? Seems a bit of a waste of time to read it (if you have) only to say I shouldn't have written it. And if you didn't read it because you're not interested, what do you care that I took the time to research it and write on it?

Pete Iveson


  • Administrator
I would have replied to Erik on the thread where he said it 'proved' that PGA players are better than their OWGR shows, but was asked not to.

I'll reply more later, and caution you guys against using ONE example to make your case (I'm quite sure examples in the opposite extreme can be found), but I'm pretty sure I never used the word "prove" or "proof" like this. In fact, the words "prove" and "proof" don't appear in that thread in the last four pages.

I tend to detest intellectual dishonesty.

Erik J. Barzeski —  I knock a ball. It goes in a gopher hole. 🏌🏼‍♂️


Quote:

Originally Posted by Nosevi

I would have replied to Erik on the thread where he said it 'proved' that PGA players are better than their OWGR shows, but was asked not to.

I'll reply more later, and caution you guys against using ONE example to make your case (I'm quite sure examples in the opposite extreme can be found), but I'm pretty sure I never used the word "prove" or "proof" like this. In fact, the words "prove" and "proof" don't appear in that thread in the last four pages.

I tend to detest intellectual dishonesty.

No, that's true I suppose. You stated as fact that the European Tour is over-rated in terms of OWGR (in comparison to the PGA Tour) and used this paper to demonstrate the fact. I wrote "proved" in inverted commas as you didn't use that word, you merely stated it was a fact, not an opinion. Saying that someone used a study to "prove their point/conclusion" isn't intellectual dishonesty, it's a commonly used phrase to describe what you did. If you had said it was your "opinion" that the European Tour players were too high on the OWGR I would have said you used the study to "support" your view. At least that's what I've always been taught when writing studies myself :)

Pete Iveson


No it didn't provoke any changes, possibly for the reasons I've given - it doesn't 'stack up'. I would have replied to Erik on the thread where he said it 'proved' that PGA players are better than their OWGR shows, but was asked not to. A thread about precisely this study seemed a relatively sensible place to post an alternative view of it.

If you're not interested, why not exercise your right not to read it? Seems a bit of a waste of time to read it (if you have) only to say I shouldn't have written it. And if you didn't read it because you're not interested, what do you care that I took the time to research it and write on it?

Trust me, I wish I had saved the time and not read it, or responded, or responded again here, lol (I guess I'm wasting my Sunday away until football starts). I read it because I thought it sounded interesting; then when I finished, I realized that neither "side" of this "discussion" has any real impact on golf, playing golf, the OWGR, or anything else. That's all I'm saying.



Quote:

Originally Posted by Nosevi

No it didn't provoke any changes, possibly for the reasons I've given - it doesn't 'stack up'. I would have replied to Erik on the thread where he said it 'proved' that PGA players are better than their OWGR shows, but was asked not to. A thread about precisely this study seemed a relatively sensible place to post an alternative view of it.

If you're not interested, why not exercise your right not to read it? Seems a bit of a waste of time to read it (if you have) only to say I shouldn't have written it. And if you didn't read it because you're not interested, what do you care that I took the time to research it and write on it?

Trust me, I wish I had saved the time and not read it, or responded, or responded again here, lol (I guess I'm wasting my Sunday away until football starts). I read it because I thought it sounded interesting; then when I finished, I realized that neither "side" of this "discussion" has any real impact on golf, playing golf, the OWGR, or anything else. That's all I'm saying.

Ok mate, fair point :)

Like I said, new custom fit golf clubs were due to be delivered Friday, now coming Monday (Ping i25, project x 6.5 shafts, Tour Gorge wedges, G30 driver and woods with PXV Tour 6.5 shafts)..... Anyway, I had time on my hands.

Pete Iveson


I'll reply more later, and caution you guys against using ONE example to make your case (I'm quite sure examples in the opposite extreme can be found),

Very quickly, Erik: I'm sure you'll give your opinion, and I'm sure it's valid and you're more than entitled to it... but that's what it is - an 'opinion'. It's not a fact that the European Tour players are higher than they 'should' be on the OWGR. It's not a fact because of what the OWGR system is. Broadie 'complains' (in inverted commas) that the points awarded aren't linear. They're not supposed to be linear; they're supposed to award more points for success. It's supposed to award more points for Majors because of the pressure involved - winning can and does change a guy's career. In a similar way, it awards points in more minor competitions where the competition is not as strong as on some tours - you can only beat the competition you have in front of you. It's certainly not supposed to be some form of 'corrected stroke average' calculator or a detached measure of a golfer's skill at scoring but not winning.

Regarding using one example to make a case that the Golfweek rankings are wrong: I'm not. I'm using one case (and a blindingly obvious one at that) to show that the way in which they calculate who is the best (or has performed the best) is utterly invalid. When you write any formula (or computer program) it's obviously not uncommon to test it. To do that, you'll throw in something you know the answer to, or something you can test in an alternative way, and make sure the formula or program spits out the 'correct' answer.

In terms of golf, you might plug in McIlroy's and Furyk's figures and make sure it comes out with Rory on top. Why? Because Rory has the lower stroke average, is better in strokes gained, is better in wins, top-10 finishes, performance in the Majors, performance in WGC tournaments... in fact, in every measurable way it's blindingly obvious that Rory is, and has been for the past year, above Jim.

Plug those two in and the Golfweek Sagarin formula (whatever it is) says Furyk is ahead of McIlroy in terms of performance - and ahead quite comfortably. At that point any sane person would go back to the drawing board.

I'm not using one example to make a case. I'm showing that the Golfweek Sagarin rankings are invalid because when you plug in a known 'quantity' (McIlroy and Furyk) they get the answer wrong. The only defence of the formula they use is that it got it correct and Jim Furyk has, in fact, outperformed Rory McIlroy over the past year... but that'd be a tough point to prove.
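That sanity-check idea - feed a formula an input whose answer you already know and verify the ordering - can be sketched as below. Everything here (the `rank_by_average` stand-in and the averages) is illustrative; it is not the actual Golfweek formula:

```python
# A generic sanity check for any ranking function: feed it a case whose
# answer is already known and verify the ordering comes out right.
def sanity_check(rank_fn, better_player, worse_player):
    ranking = rank_fn([worse_player, better_player])
    return ranking.index(better_player) < ranking.index(worse_player)

# Toy stand-in for a ranking formula: lower scoring average ranks higher.
def rank_by_average(players):
    return sorted(players, key=lambda p: p["avg"])

mcilroy = {"name": "McIlroy", "avg": 68.8}  # illustrative figures
furyk = {"name": "Furyk", "avg": 69.6}

print(sanity_check(rank_by_average, mcilroy, furyk))  # True: passes the check
```

If a real ranking formula fails a check like this on a case as lopsided as McIlroy vs. Furyk, that's the "back to the drawing board" moment described above.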

Like I said, we're each entitled to our opinion and I look forward to hearing yours :)

Pete Iveson


  • Administrator
Let's start with a more recent post and work our way through this.
No, that's true I suppose. You stated as fact that the European Tour is over-rated in terms of OWGR (in comparison to the PGA Tour) and used this paper to demonstrate the fact.

I did say that. I said the paper demonstrates this. At no point did I call it a "fact," which is now the second time you've been intellectually dishonest in two posts.

I wrote "proved" in inverted commas as you didn't use that word, you merely stated it was a fact, not an opinion.

I do not believe that to be true. I went back to pages 39-42 and looked at every post. I searched for the word "fact" and found myself using it only twice:

Scoring data was for all the Tours. It was supplied by the PGA Tour, because they have access to and can supply scores as part of the federation of professional tours. Unless you wish to claim that they altered the scores of the European Tour, Asian Tour, Australasia Tour, Sunshine Tour, Japan Tour, Nationwide Tour and Challenge Tours… the fact that he got the tournament results from the PGA Tour means nothing. There's no bias in the PGA Tour saying "you want tournament results for the past decade from a bunch of tours? Okay. Here you go. Here's a bunch of data."

"while taking into account a ranking system he is 'proving' is invalid in his calculations and coming to the grand conclussion that, "a ranking system where points are determined by a committee, rather than objective analysis, could easily lead to the biases described in this paper." When the ranking points are no longer, in fact, determined by a committee, they are determined by a set formula."

- False/Misleading - The formula is determined by the committee. That's all that says. There's been no real change to the way the OWGR is calculated since 2012.

One of those is actually YOUR use of the word, I simply quoted in quotation marks instead of in a quote block. The other is a true fact: that he got scoring data from the PGA Tour for all the other Tours.

The only time I believe I used the word "opinion" was to say that your "opinion is subject to fault." You're biased, and perhaps more so than I am - I knew who Ross Fisher was, but you didn't know who a PGA Tour winner this year was. You wish to believe that the European Tour is not given inflated OWGR points, so you seek out things that you believe back you up.

You may allege the same of me, but I don't care who is given more points. I don't pay much attention to the OWGR. I don't care if the U.S. takes up a bunch of the top spots. I just like to see good golf. I'm cool with Rory, or Lee, or even Luke being atop the OWGR list. I was fine with Graeme sneaking into the top 50 by playing at Tiger's event, and thus winning a U.S. Open. All good.

Saying that someone used a study to "prove their point/conclusion" isn't intellectual dishonesty, it's a commonly used phrase to describe what you did.

I didn't do that, though. I simply asked you to read and respond to the study. That's something which, at that point, you'd completely failed to do, as @turtleback pointed out to you.

If you had said it was your "opinion" that the European Tour players were too high on the OWGR I would have said you used the study to "support" your view. At least that's what I've always been taught when writing studies myself :)

I never said it was a fact. Again, you don't seem to have read what was written.

Here's a search for "fact" in posts by me. Some of my posts come up because… I quoted you. Some are about different topics entirely (including me saying "that's not a fact, it's speculation"): http://thesandtrap.com/newsearch?advanced=1&action=disp&search=fact&titleonly=0&byuser=iacas&output=posts&replycompare=gt&numupdates=&sdate=7d&newer=1&sort=lastupdate&order=descending&Search=SEARCH

And seriously, you like to use the word "fact" quite a bit. Perhaps your love of the word is partly to blame for why you think I said it?

My hope is that, from here on out, you will actually read what's written and not put words in my mouth and then quote them back to me as if they were things I said.

Having been 'accused' in another thread of not taking Mr Broadie's work on this subject seriously, or of not reading it properly, I'd like to say exactly what I think of it. Although before, I had only scan-read it (and was less than convinced), I've since looked at it pretty closely, as I was told all I had to do was read it and it 'proves' that PGA Tour players are disadvantaged in terms of OWGR in comparison to, say, guys on the European Tour.

Again, you were not told that or anything like that. I have no horse in the race, I simply like to understand things. If Europe was under-represented based on their level of play, I'd trumpet that. In fact, I'm suspicious of things that back my claims, because I'm aware of how readily someone looks for things which support their opinions, and try not to fall prey to that manner of thinking. Blame the scientific background in me - I'm a "question everything" kind of guy. But I also think I am the type of guy who knows when and how to accept something as being more accurate than my opinions.

Leaving aside my belief that this is not good science (basing or backing up your conclusion even partly on a statistical analysis which you do not understand

It's mildly disingenuous to say he doesn't understand it. He likely would understand it if given the chance to explore it, but because it's proprietary, he doesn't have that opportunity. He certainly understands the basics - it's based on wins and losses (and ties). So if A > B, and B > C, then A > C.

Let's look at the current rankings as of 29 September 2014. At number 1 in the Golfweek Sagarin rankings is Jim Furyk. At number 4 is Rory McIlroy.

Now Broadie quotes that the Sagarin system "is based on a mathematical formula that uses a player's won-lost-tied record against other players when they play on the same course on the same day, and the stroke differential between those players, then links all players to one another based on common opponents. The ratings give an indication of who is playing well over the past 52 weeks."

You don't seem to understand that you cannot look at one player versus another player and draw your conclusion. Again, there are plenty of dots from which to choose. You found one dot of the 200 listed that contradicts "common sense" (and even the W-L-T numbers, which I'd agree seem off in this case).

In other words, you've found an exception. Congratulations. But again the charts demonstrate that I could likely find more exceptions than you can.

No ratings system is going to be 100% accurate. After all, they're trying to rate "player skill level" essentially, and over quite a long period of time. You seem to forget that Rory struggled for much of this year, while Jim Furyk has had a weirdly good year (albeit without a victory). Would I put them at #1 and #4? No. Again, their W-L-T numbers don't make sense to me either.

But you're not seeing the forest for the trees (or in this case, tree).

The OWGR is a goofy formula too. There have been several instances where players are better off skipping an event to move up in the rankings than playing and even finishing in the top ten. It's not without faults. It presents some goofy results occasionally as well.

I'd suggest their algorithms are a tad off. The Golfweek Sagarin rankings that put Furyk at number one in the world and McIlroy at number 4 are not taken seriously by anyone outside the States, yet Broadie uses them to back up his own method of determining the true rank or skill of the players.

Do you not see how disingenuous it is to take one example - which could easily be an exception - to make a sweeping generalization about 199 other players?

Again, even just using the Sagarin, there are more U.S. players ABOVE the line and more European Tour players BELOW the line, so I could find more exceptions than you. The fact that there are "more exceptions" is what leads Broadie to the conclusion he reaches: on average, European Tour players have inflated rankings.

Finding one case where this doesn't seem to be true is not going to change anyone's mind, not if they know the first thing or two about statistics.

Stephen Gallacher is 119th in Sagarin (rating 70.65, Sked 71.42). He's 18-90-2, 162-331-12, and 338-534-24 versus the top 10/50/100. So, he has a losing record (338-534 W-L) and about a .387 winning percentage against the Top 100… which makes his 119th ranking seem to fit.

The OWGR has him at 34th. That's a difference of 85 spots.
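As a quick arithmetic check on the record quoted above (a sketch; the ".387" figure appears to come from wins over wins-plus-losses, ignoring ties):

```python
# Stephen Gallacher's quoted W-L-T vs. the top 100: 338-534-24.
wins, losses, ties = 338, 534, 24

# Winning percentage as wins / (wins + losses), ignoring ties, which
# matches the "about .387" figure quoted in the post.
pct = wins / (wins + losses)
print(f"{pct:.3f}")  # 0.388
```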

Never mind the fact that… Rory primarily played the PGA Tour this year! I'm not sure Broadie's study would even have listed him as a primarily Euro player in 2014.

I'd suggest their algorithms are a tad off. The Golfweek Sagarin rankings that put Furyk at number one in the world and McIlroy at number 4 are not taken seriously by anyone outside the States, yet Broadie uses them to back up his own method of determining the true rank or skill of the players. I'd suggest any system is flawed if it can't work out that the guy who's won the most (and done it with convincingly the lowest stroke average, most strokes gained, most top-10 finishes, etc.) is probably the number one player right now. In fact, correct me if I'm wrong, but Furyk hasn't actually won a tournament this year. Rory has won 4 times: 2 are Majors, 1 a WGC, and 1 our own PGA in the UK. I'd say that any system that (currently) puts Furyk above McIlroy is flawed.

You sure do spend a lot of energy to argue over one of the little data points out of the 200 on the graph.

Guess what? I'll give you one data point. I agree. Those are goofy. Something's weird there.


But again, there are a lot more data points on the "wrong" side of the graph that support the idea that Euro OWGR is inflated than to support your claim.

Looking at Broadie's own SBSE (Score-Based Skill Estimate): the one thing he is totally spot on about is its name - it's an estimate. In fact, if you count up the number of times he uses the word 'estimates' in it, you'll give up :)

Broadie says that his formula uses an "estimate of the mean score of player i on a 'neutral' course."

But then adds:

"Although we do not take specific information about course setup and weather conditions into account in estimating....."

This is where the formula falls down (or at least 'should' fall down in any kind of statistical analysis).

You do realize that the PGA Tour's scoring average is adjusted too, right? It's to stop someone from playing all the easy courses and claiming the Vardon Trophy while Tiger or Rory or whomever play the majors and the tough courses.

As I understand it, the SBSE (which is used internationally in thousands of applications) simply says that if Player A shoots 71 today, Player B shoots 72, and the field average is 73, then Player A is a +2 and Player B is a +1. If Player B then plays another event with Player C and shoots better than Player C, we can conclude that A > B > C. Because players intermingle so frequently, and play so many rounds, this often isn't just a few data points. It's thousands, even tens of thousands, per player, in a complex web of relationships. And you can have situations where, in one event, A > B > C > D > E > F > G, but in another event where C doesn't play, the scores come out as E > B > G > F > A or something. Oftentimes that's simply round to round; thus, again, it's fairly complex.

It's also my understanding that weather and course setup and stuff don't matter because they're only really comparing players to other players who played that course the same day. The PDF makes note of multi-course tournaments, for example, and says you can't compare players playing at Pebble Beach to players playing at Spyglass, even though they're technically part of the same "tournament." He's simply saying, I believe, that you cannot factor in an afternoon increase in wind. Those things probably tend to even themselves out.

Unless you're fluent in R or at least know how to use Mathematica or whatever other apps statisticians use, I'm going to assume that crunching this kind of system in your head is beyond your ability levels. I'll leave it to the guy whose job it is to do this. And, unlike you, I'm not going to assume that he's biased simply because he works for a U.S.-based university.

In fact, since you have such problems with SBSE and Sagarin, what other performance measures can you think of? To be clear, I want to see this for the top 200 or so players in the world, not the top five or ten. I want to see a clear pattern of, as you've stated, the PGA Tour being the one with inflated rankings. Please, enlighten us.


Because thus far you've only said, essentially, that Sagarin and SBSE are junk because they goofed up on Furyk and McIlroy (which I agree is goofy).

The SBSE uses players who play against both those on (for example) the European Tour and those on the PGA Tour to compare players on each tour who do not play against each other. So if Player 1 is a 'star player' from Europe who plays both tours, then Player 2 (a player on just the PGA Tour) is compared to Player 3 (a player on just the European Tour) based on their scoring performance relative to Player 1. And scoring performance is important. Also note that we're not talking about star PGA players coming and playing 'standard' tour events on the European Tour; it only really happens the other way around.

Looking at the Scoring average of players this year that play both the PGA and European tours:

Player - PGA scoring average/European scoring average

Rory - 68.836/69.43

Sergio - 68.959/70.02

Justin Rose - 69.632/70.11

Graeme McDowell - 69.651/70.47

Charl Schwartzel - 69.680/70.17

Paul Casey - 70.198/70.57

Further down

Luke Donald - 70.745/71.47

Lee Westwood - 70.747/71.03

Louis Oosthuizen - 70.864/71.26

Exception to the 'rule': Henrik Stenson - 70.177/69.87
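Tallying the averages listed above (a quick sketch using only the figures quoted in this post, Stenson's exception included):

```python
# (PGA Tour average, European Tour average) for players who played both
# tours, as quoted above.
averages = {
    "McIlroy": (68.836, 69.43),
    "Garcia": (68.959, 70.02),
    "Rose": (69.632, 70.11),
    "McDowell": (69.651, 70.47),
    "Schwartzel": (69.680, 70.17),
    "Casey": (70.198, 70.57),
    "Donald": (70.745, 71.47),
    "Westwood": (70.747, 71.03),
    "Oosthuizen": (70.864, 71.26),
    "Stenson": (70.177, 69.87),  # the noted exception
}

# How many strokes higher each player averaged on the ET than on the PGA Tour.
gaps = [et - pga for pga, et in averages.values()]
avg_gap = sum(gaps) / len(gaps)
print(round(avg_gap, 2))  # 0.49 strokes higher on the ET, on average
```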

I'll address this more below, but you don't seem to understand that on the PGA Tour it's the adjusted or weighted stroke average. Rory did not actually average 68.836 on the PGA Tour. Look at the last column: it says "total adjustment" right there.

Besides, I think you've completely misinterpreted this. It's not just the scoring average. It's the scoring average against the other players on that course on that day.

If A and B have scoring averages of 73.5 and 74.2 in tournaments where they play the same courses on the same days, and B and C have scoring averages of 70.3 and 71.6 when they play the same courses on the same days, then A > B > C, even though their "raw" scoring averages would indicate that the order might be different.

If I understand it correctly, again, it's your score on a particular course, on a particular day, against anyone else playing that course on that day.
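The chained comparison in the A/B/C example above works out like this (illustrative arithmetic only):

```python
# Head-to-head averages on shared courses/days, from the example above.
a_avg, b_avg_vs_a = 73.5, 74.2   # A vs. B: A is 0.7 strokes better
b_avg_vs_c, c_avg = 70.3, 71.6   # B vs. C: B is 1.3 strokes better

a_minus_b = a_avg - b_avg_vs_a       # -0.7
b_minus_c = b_avg_vs_c - c_avg       # -1.3
a_minus_c = a_minus_b + b_minus_c    # inferred: A ~2 strokes better than C
print(round(a_minus_c, 1))  # -2.0
```

Note that the raw averages alone (73.5 for A vs. 71.6 for C) would suggest the opposite order, which is exactly why the same-course, same-day comparison matters.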

Almost all the players that play both tours have a higher stroke average on the European tour than on the PGA tour.

Completely beside the point, and again, I dare say you don't understand the complexity of SBSE. I believe each player has thousands or tens of thousands of connections, relationships, etc. in an SBSE-type system. Not just W-L-T, but margins of victory/loss, with the "wins" and "losses" being scores that day, on that course. You seem to think it's just a ranking of overall (or overall adjusted) scoring average.

Broadie disregards this fact.

I think it's more accurate to say that you do not understand how SBSE works than what Broadie - a guy who has a Ph.D. in this - "disregards" or regards.

For whatever reason (and I'm not being judgemental, I'm just stating fact) those players that play both tours score higher on the European Tour - either the courses or weather makes it harder to play the game of golf.

Which as I understand things is completely irrelevant in SBSE.

Again, if A > B in one event, and B > C in a totally separate event, it's reasonable to conclude that A > C, particularly when you expand those connections out into the tens of thousands (I'm making it simple talking about one event, but I could just as easily say that if A > B in 40 rounds they play against each other, and B > C in 32 rounds they play against each other, then you can conclude that A > C and feel pretty good about the solidity of such a conclusion).

My point is this - the golfers being used to compare guys from different tours are the guys that play both and they are up at the top of the rankings (however you calculate them). It's harder to score on the tournaments that the 'just European players' play in than it is to score in the tournaments that the 'just PGA players' play in. If it's harder to score, then you would expect the differential to be greater between those on the European Tour only compared to the 'both tour' top guys, than it is between the 'PGA Tour only' and the 'both tour guys'. That's what happens as the course (or weather) gets harder - the scoring differential increases between those at the top and those with a bit less skill.

That's a bit of a new wrinkle, but it's also an assumption. I don't imagine that you have any actual proof that scoring - relative to ability levels, of course, as the European Tour attracts lesser-skilled players in general than the PGA Tour - shows a wider spread on the European Tour than we see on the PGA Tour. I'm sure, like you, I could find a single example to demonstrate this, but I realize how silly that would be in a case wherein we are talking about nearly 100 events and hundreds or even thousands of players.

Furthermore, again, I believe SBSE is simply used to rank players, and because of all those webs that connect people playing the same course on the same day, it's all relative to everyone else. You know six degrees of Kevin Bacon? Nobody in the system is probably more than one degree away from anyone else, because so many players (a "statistically significant" number) play events on both Tours. This has a normalizing effect on the data, I imagine.

Not for nothing (but not for much at all more than that), but I would say scoring differences on the PGA Tour tend to be higher relative to the skill levels of the players on it . Yes, when Rory McIlroy squares off against Nacho Elvira (real person), he's probably going to beat him soundly. But that's why Nacho is playing the European Tour… he's not good enough to play the PGA Tour. Of course we'd expect to see a wider scoring gap there. To attribute it solely to "weather" and/or course difficulty is odd. For all you know, Nacho shoots BETTER (relative to a top player) on the European Tour than he would on the PGA Tour.

This is entirely ignored in Broadie's statistical model and it assumes a linear score differential indicates a linear skill differential regardless of the difficulty of the round. Golf isn't like that. Broadie is either ignoring this fact or he's not a golfer. Possibly both.

Now you're just being silly. Mark has won his club championship several times, is a 4-handicap, and a few other things.

http://www.pgatour.com/news/2013/10/23/Dorman-Strokes-Gained-Putting.html

"Although the author, Mark Broadie, is a professor at Columbia University Business School, he also is 4-handicap amateur golfer, former club champion at Pelham Country Club and part of a team of three MIT researchers and PGA TOUR stats experts that designed and implemented the “strokes gained-putting” stat in 2011."

His MIT thing likely trumps your statistical background, too…

"The OWGR method is based on tournament finishing positions, while the SBSE and Sagarin methods are based on 18-hole scores. The difference between OWGR points for finishing first versus second is much larger than the difference in points for finishing nineteenth versus twentieth. That is, OWGR points are awarded using a nonlinear scale."

Professional golf is about winning tournaments (or placing as high up in them as possible).

That is a complete straw man type thing. Nobody is talking about what "professional golf is about." If that were the case, a rankings list would be about 50 names long. Rickie Fowler wouldn't be on the list at all (unless you go back a few years). C'mon, man, you're not even trying.

He's pointing out that winning and finishing at the top inflates your rankings. That's true on ALL Tours. But a guy who shoots 68 four times is not 5x better than a guy who shoots 68 twice and 69/70 in the same tournament to finish T6. The SBSE/Sagarin attempt to rank SKILL while OWGR, he's saying, rewards finishing higher non-linearly.

Getting the job done, holding it together down the stretch, winning the tournament SHOULD be worth more.

This isn't even on topic, nor is it worth the time to address. I agree that winning matters. But it's neither here nor there when discussing overall ranking. Since you like single examples, look at Rickie Fowler, who is a top ten player in the world in OWGR despite not winning, but playing well in a few majors.

Never mind that winning on the European Tour is likely significantly easier than winning on the PGA Tour (and winning on the Sunshine Tour is even easier - heck, even Ross Fisher can do it!).

That's why the OWGR system awards more points for success.

Shooting a lower score for someone is a measure of "success" too. You don't seem to have understood the part about how the points are awarded in a "non-linear" fashion. Someone who finishes top ten a bunch of times may be a better player than someone who wins a tournament, misses some cuts, and finishes toward the bottom of those he doesn't miss the cut in, but the OWGR may rank him above the consistent player who never manages to win.

The Sagarin method is seriously flawed.... unless you think that Jim Furyk should be above Rory based on wins vs losses when Rory has both more wins overall and a higher percentage of wins. I like Jim Furyk a lot, in fact he's one of my favourite golfers, but he's currently not the World number one by any measure anyone can come up with other than Golfweek who won't say how they came to that conclusion.

That's almost so silly I wasn't sure I'd respond to it. If you want to trash an entire system because of ONE data point, be my guest. But it makes your entire position in this discussion highly suspect.

Again, I could counter with TENS of examples that speak to the OPPOSITE. Examples like Stephen Gallacher, who is 85 places higher in OWGR than Sagarin.

The SBSE system is also flawed (although perhaps to a lesser degree) as it assumes a linear score differential indicates a linear skill differential as course/scoring difficulty increases. It doesn't in golf and never has. As the game gets tougher (maybe due to the weather our guys play in more than the courses) the scoring of the top guys goes up a bit, the scoring of the guys with a little less talent goes up more.

You don't have any factual basis for saying that, nor are you weighting it against the fact that PGA Tour players are better overall than European Tour players. Perhaps, for all you know, the "slope" (U.S. course handicapping type measure) is lower on European Tour courses, but the course rating is higher, and European Tour players who miss the cut would have missed by even larger margins if they were playing against PGA Tour players.

The conclusion Broadie comes to is flawed as it is based entirely on these 2 systems.

Just so we're clear: you're reaching this conclusion about Broadie's conclusion based on what seems to be a weak understanding of the two systems he used and a single Furyk/McIlroy example?

Now by all means disagree with me. Perhaps you think Furyk has had more success or deserves a higher 'power rating' than McIlroy given their golf over the last 52 weeks.

Your reliance on this single data point is laughable at this point.

Or perhaps you think the fact that practically all the players that play both tours find it harder to score in the tournaments that are only on the European Tour than they do in the tournaments that are only on the PGA Tour is a statistical anomaly, a fluke.

I've had about enough of that. Since you like to rely on this scoring differential, I'm going to take a similar approach as you and look at one player: Rory McIlroy.

What European Tour events did he play that contributed to his scoring average? Let's see here…

The Alfred Dunhill Links, which he just played and which was not part of your scoring average: -16, T2.

He also played in the Scottish and Irish Opens, where he shot -7 and +1 respectively.

He played, quite a while ago, in the Dubai Desert Classic and the HSBC, where he shot -12 and -13.

He played the BMW PGA, shooting -14.

All of the other events in which he played are co-sanctioned events, so his scores are applied to both his European Tour and PGA Tour scoring averages: The PGA Championship, WGC-Bridgestone, British Open, US Open, Masters, and WGC-Cadillac.

His scoring average on the European Tour events I listed first: about -10 per tournament.

His scoring in the co-sanctioned events on the Euro Tour site, all of which (except the British Open, which he won) are in the U.S.: -37 in 6 events, or about -6 per event.

He scored better on the European Tour-only events. The co-sanctioned events helped raise his average. The gap only widens if you take the British Open and his -17 finish out of the "PGA Tour" side and stick it with the European Tour events.

How about Sergio Garcia?

-3 PGA, -13 WGC-Bridgestone, +7 U.S. Open, +5 Masters, +4 WGC-Cadillac = even par in 5 events.

-15 British Open, -16 BMW, +4 Spanish Open, -16 Qatar Masters, -6 Abu Dhabi, -18 Nedbank = -67 for 6 events.

Again, the co-sanctioned events dramatically raise his European Tour scoring average.

Justin Rose was third on your list.

PGA -8, WGC-Bridgestone -9, U.S. Open +3, Masters +1, WGC-Cadillac +6 = -7 in 5 events.

British Open -5, Scottish Open -16, BMW -4, Nedbank -12 = -37 in four events.

Again, the co-sanctioned events dramatically raise his European Tour scoring average.

Not to mention, again, that the PGA Tour scoring average is adjusted or weighted based on the scoring average of those in the field. If he shoots even par 72 but the scoring average is 74, his scoring average is adjusted downward from 72 to 70.1 or something for the round. AFAICT this does not occur on the European Tour stats.
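A field adjustment of that sort can be sketched in a few lines. This is a simplified illustration of the idea, not the PGA Tour's exact formula (which, as I understand it, also weights against a season-long baseline):

```python
# Simplified sketch of a field-adjusted score - NOT the PGA Tour's exact
# formula. The idea: shift a raw score by how much harder the field found
# the course that day than a fixed baseline.

def adjusted_score(raw_score, field_average, baseline=72.0):
    """A 72 shot against a field averaging 74 counts like roughly a 70."""
    return raw_score - (field_average - baseline)

print(adjusted_score(72, 74.0))  # → 70.0
print(adjusted_score(72, 72.0))  # → 72.0 (no adjustment on a normal day)
```

The point being: a raw-score comparison across tours ignores this adjustment entirely, which is why quoting raw European Tour averages against adjusted PGA Tour averages is apples to oranges.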

But I think Mr Broadie had an agenda - to 'prove' that the guys on the PGA Tour are lower down the OWGR than they should be. He says it at the beginning of his paper as an 'attention grabber' - it's blindingly obvious who his target audience is.

What? It's sad that you reached the "let's just make shit up" part of your argument so quickly.

He ignores several fairly obvious facts in his analysis (like Rors is a tiny bit better at golf than Jim at the moment no matter what Golfweek says....) and the 2 models he uses to prove the fact that the players on the PGA Tour should be higher on the OWGR are flawed.

a) The article was written in 2012. He didn't talk about Jim Furyk and Rory McIlroy. Only you seem hung up on them.

b) What models should he have used?

The beginning of his paper is the abstract. The point is to summarize the paper. It says, among other things:

"In this paper, we investigate whether the OWGR system is biased for or against any of the tours, and if so, by how much. To investigate any potential bias, we compare the OWGR system with two unbiased methods for estimating golfer skill and performance. The first is a score-based skill estimation (SBSE) method, which uses scoring data to estimate golfer skill, taking into account the relative difficulty of the course in each tournament round. The second is the Sagarin method, which uses win-lose-tie and scoring differential results for golfers playing in the same tournaments, to rank golfers. Neither the score-based skill method nor the Sagarin method use tour information in calculating player ranks, and therefore neither method is biased for or against any tour."

I read it properly - I'm not convinced because the whole pretense of the thing is utterly flawed.

I disagree.

It's based on a barking mad US based ranking system that no one takes any notice of (outside the US possibly, not sure if those in the US give it any credence) and another ranking system that ignores a basic facet of the game when it comes to scoring ability vs round difficulty. It's clearly written with an agenda - you don't post your findings in an introduction unless you're setting out to make a given point.

Have you ever read a scientific paper? It's the abstract, dude.

It's "clearly written with an agenda?" Get real.

The author (or authors) uses the Golfweek rankings to 'prove' his theory and his own ranking system.

There is no "theory" to prove or disprove. Guess what? If the results of his statistics showed that the European Tour players weren't given enough points, he'd have published that. You don't seem to realize how big of a deal it is to make the allegations you're making in a mathematical/scientific community.

The author also says that the difficulty of a course or tournament round is not taken into consideration, just the relative scores of one player to another to 'estimate' their skill differential. But as it can be shown that, on average, the pros that play both the ET and PGA Tour score higher on the ET

Please show that. And then, if you're able to show that, removing the "adjustments" the PGA Tour makes for their scoring average (i.e. again, so that people who play nothing but the John Deere and the Bob Hope aren't winning Vardon trophies over major championship players, etc.), please demonstrate how that actually matters, since you're comparing players who play the same course on the same day against other players who play the same course on the same day to come up with a RANKING system.

Is your brain capable of making those tens of thousands of connecting lines? I doubt it.

When it comes to statistics, you can't just say 'facts are facts', but always have to look at who provides them, with what purpose, and most importantly how they can be interpreted.

I don't think anyone's done that. I certainly don't believe I have. The statistics back the idea that Euro Tour players are over-rated. That's all I'm saying, and really that's just repeating the conclusion the study reaches.

The 'purpose' is my take on it. But (IMO) when a US 'academic' working at a US institution writes a paper, announcing his conclusion in the introduction, using data entirely provided by a US organisation who is the party you are 'proving' should benefit from a change (in the form of the PGA Tour) and proves it with data from another US organisation whose data is blatantly and obviously ridiculous (in the form of Golfweek), I'd argue there's an agenda.

I have no words to describe how ignorant you're behaving about this.

Why the **** is "academic" in quotes? What the hell does his nationality have to do with anything? Holy heck - this is a scientific paper . He wasn't paid by the PGA Tour to write this paper.

Guess what? In academia, scholars are expected to PUBLISH . That's what he did here. He helped create "Strokes Gained" stats. He published a book "Every Shot Counts." You didn't even think he was a golfer, despite being a better golfer than you and a club champion.

You don't seem to understand how SBSE works. You rely on a single example to "disprove" Sagarin. You allege that he gave away his conclusion in the introduction, despite it clearly being labeled the ABSTRACT.

https://en.wikipedia.org/wiki/Abstract_(summary)

"An abstract is a brief summary of a research article, thesis, review, conference proceeding or any in-depth analysis of a particular subject or discipline."

No it didn't provoke any changes, possibly for the reasons I've given - it doesn't 'stack up'.

Highly unlikely for the reasons you've given. Speculation used to pat yourself on the back?

I would have replied to Erik on the thread where he said it 'proved' that PGA players are better than their OWGR shows, but was asked not to.

I see now that the intellectual dishonesty you displayed here - which you later repeated in saying I'd used the word "fact" - was not a one-off or an isolated example. You've been intellectually dishonest about many things here. You've made assumptions out the wazoo, you've offered as proof a SINGLE example, you've failed to demonstrate an understanding of even the fact that the PGA Tour uses an adjusted scoring average, and more.

Very quickly, Erik I'm sure you'll give your opinion, and I'm sure it's valid and you're more than entitled to it....... but that's what it is - an 'opinion'.

You're the one who has tried to put "fact" and "proof" words in my mouth.

Stats are not an opinion. They can demonstrate things, and I'm not calling them "facts" either, except that it's a "fact" that Rory's adjusted scoring average on the PGA Tour in 2014 is 68-point-whatever, but some of this is opinion, and some of it is not.

It's not a fact that the European Tour players are higher than they 'should' be on the OWGR. It's not a fact because of what the OWGR system is.

Nobody ****ing said it was. The only person who has said anything like that has been you in trying to put those words into my mouth.

Broadie 'complains' (in inverted commas) that the points awarded aren't linear.

****ing stop. He didn't "complain". He's writing a scientific/statistical/mathematical paper. He POINTS OUT the FACT that OWGR points are not awarded linearly. They are not.

He does not make value judgments about whether they "should" or not. He does not make value judgments about what the "point" of tournament golf is. He simply points out that if someone shoots four 70s, someone else shoots four 72s, and someone else shoots four 74s, those players are roughly evenly spaced, and the guy who shot 74 is not 100x or 50x worse than the guy who shot four 70s.
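To make that non-linearity concrete, here's a toy comparison. The point values below are invented for illustration; they are not the OWGR's actual table:

```python
# Illustrative only: the point values here are made up, not the OWGR's
# actual table. Three players with evenly spaced scores (four 70s, four
# 72s, four 74s) receive very unevenly spaced finish-position points.

totals = {"P1": 70 * 4, "P2": 72 * 4, "P3": 74 * 4}

# Hypothetical nonlinear points for 1st, 2nd, 3rd place.
points = {"P1": 45, "P2": 27, "P3": 17}

stroke_gap_12 = totals["P2"] - totals["P1"]  # 8 strokes
stroke_gap_23 = totals["P3"] - totals["P2"]  # 8 strokes - same skill spacing
points_gap_12 = points["P1"] - points["P2"]  # 18 points
points_gap_23 = points["P2"] - points["P3"]  # 10 points - not the same
```

Equal stroke gaps, unequal point gaps: that is all "non-linear scale" means in the paper. No value judgment about whether winning *should* pay more.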

Regarding using one example to make a case that the Golfweek rankings are wrong, I'm not. I'm using one case (and a blindingly obvious one at that) to prove the way in which they calculate who is the best (or has performed the best) is utterly invalid.

Let me rephrase what you just wrote:

Regarding using one example to make a case that the rankings are wrong, I'm not. I'm using one case to prove that the way the rankings are done is wrong.

Heck, just substitute the word "wrong" where you wrote "invalid" and you can see that you just said effectively opposite things. The paper is not about determining the ranking of the SINGLE best player, it's about determining the rankings of the top 200 players.

If it was about determining the single best player, I'd agree with you that the Sagarin is screwy. You're using one exception to disprove the entire thing. It's pure folly.

You get to have the last word. Go for it. I'm pretty comfortable with the fact that I've said just about all I can say in response to your argument to this point. Undoubtedly you may find a thing or two at which to pick in this post, but I'm good with that, and don't think it will diminish the overall post very much if at all. I'm also relatively confident that anything you have to say will be based, as seems to be your penchant, in what you "believe" to be true, will ignore the obvious generalizations and focus on only specific weird examples (which the OWGR has as well), will make blatantly rude and obnoxious allegations against a mathematician, and/or will be full of assumptions like "I bet that four-handicap club champion doesn't even play golf" (paraphrased, of course).
Overall, I must say, I'm completely disappointed by your post. As a scientist myself, I like to be shown how I can learn more, grow more, and increase my understanding of things. Scientists do this by being shown that they're wrong. If they're "right" then they learn nothing. It's when a scientist learns that he's wrong - and WHY - that he gets excited. Being wrong is an instant opportunity to learn and grow. I am disappointed because I felt like you might be able to provide that for me, but you've come up remarkably short. Bummer.

I'm truly and completely done discussing this with you now.


Here's the weird thing… I don't even care about the OWGR or Sagarin or SBSE. I don't care if the Euros are mildly, grossly, positively, or negatively affected by the OWGR rankings. Except for a few situations, like a player sitting out a tournament rather than finishing T7 so that they can remain world #1, I think the OWGR is fine, particularly for those inside of about the top 50. Like every other ranking system, it has its quirks.

So why the long response? First, I type quickly. Secondly, I hate hate hate shitty arguments, and you, @Nosevi , have provided little else (there's my "opinion"). Finally, I was a bit bored today. Today happened to be a lazy Sunday. I'm gonna go fast forward through the Steelers game now (they're probably losing, the dummies), and get on with my life. I'm done thinking about this.

If I had known your argument was so poorly constructed, relied on so many misunderstandings and assumptions, and so on, I probably would have spent the last hour doing something else with my life. But I started to respond, and by the time I realized it was a waste of time, I'd already spent enough time and gotten too deep into it that scrapping it all and going with my original post seemed like a bigger waste of time than just finishing it out. Had I gone with my original response, you'd have simply read what appears below.

Ha ha ha ha ha ha ha ha ha.

Have a good day.


Erik J. Barzeski —  I knock a ball. It goes in a gopher hole. 🏌🏼‍♂️
Director of Instruction Golf Evolution • Owner, The Sand Trap .com • AuthorLowest Score Wins
Golf Digest "Best Young Teachers in America" 2016-17 & "Best in State" 2017-20 • WNY Section PGA Teacher of the Year 2019 :edel: :true_linkswear:

Check Out: New Topics | TST Blog | Golf Terms | Instructional Content | Analyzr | LSW | Instructional Droplets


It's too bad 'everybody' is saying they are done with the discussion, since I find it quite interesting to read it all. Even though I can't understand everything, it seemed like some solid points were made, but then how Erik explains them is also very interesting stuff to read. Btw, not sure if relevant, but I can imagine a Euro player who only occasionally plays in the States scores less well when he plays there (time difference, travel, other kinds of courses), and also the other way around (a US PGA player all of a sudden in Europe for one week to play the KLM Open; can't imagine that will go well).

~Jorrit


I'm not using one example to make a case. I'm showing that the Golfweek Sagarin rankings are invalid because when you plug in a known 'quantity' (McIlroy and Furyk) it gets the answer wrong. The only defence of the formula they use is that it got it correct and Jim Furyk has, in fact, outperformed Rory McIlroy over the past year........ but that'd be a tough point to prove.

Like I said, we're each entitled to our opinion and I look forward to hearing yours :)

Your first sentence is like saying that a single coin flip that comes up heads somehow invalidates the FACT (and this is an actual fact, not your nonsense) that the probability of a coin flip coming up tails is 1/2. In statistics, no single data point can tell you ANYTHING about the validity of the data, the validity of the methodology, or pretty much anything else. Statistics is about aggregates of data, not individual data points.
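The coin-flip point is easy to demonstrate with a quick simulation (a throwaway sketch, nothing more):

```python
import random

# Quick demonstration: one flip tells you nothing about the underlying
# probability; the aggregate of many flips converges on it.
random.seed(42)  # seeded so the run is repeatable

first_flip = random.random() < 0.5  # a single data point: heads or tails
flips = [random.random() < 0.5 for _ in range(100_000)]
heads_rate = sum(flips) / len(flips)

# The single flip is 0% or 100% heads; 100,000 flips sit very close to 50%.
```

The single "wrong-looking" data point (one heads, or one Furyk-over-McIlroy ranking) tells you nothing about whether the underlying method is sound.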

Which brings us to your second point.  Yes we are all entitled to our opinion.  But based on your utter inability to grasp the most basic concepts of statistics, the scientific method, or the rigors of academic publishing, MY opinion is that your opinion on this matter is valueless as you have not shown any ability to interact with the facts or the arguments presented in any kind of intellectually appropriate way.  And in saying it this way I am being far more courteous to you than you have been in the charges of bias and intellectual dishonesty that you directed at the author of the study.


But then again, what the hell do I know?

Rich - in name only



:) What an interesting definition of 'forum' you guys have here. I did have the last say, Erik. And despite it being relatively polite you deleted it. As I say, interesting way to run what you laughingly call a forum. And if you scientists can't see that a formula that puts one golfer above another when that golfer has outperformed the first in every single conceivable way is not a single data point, it's a formula that simply isn't fit for purpose, I don't know what to tell you. As a golfer I want to see the top guy ranked first. Maybe scientists are happy if the ranking system gets it hopelessly wrong a given percentage of the time. And stop using the phrase "intellectual dishonesty" - it's not only irritating, it smacks of what academics do to try to make themselves look clever. It annoyed me when I was at Oxford University getting a degree with honours and it's pretty irritating now. Yes, I have studied a bit myself. And (not that it lends much to the discussion) if Broadie is a 4 handicapper over there, no he's not a better golfer than me. How a 4 handicapper could win a Club Championship is beyond me; at my club we have numerous guys playing off plus handicaps. I was practising with one of our girls who plays off plus 2 on Friday. Maybe his country club is full of academics rather than golfers.... Nice place you have here. Enjoy it :)

Pete Iveson

