Another idea is to make your own shaft flex board. Maltby sells one for $180.
I once made one for free using a scrap of plywood and some 2", 1/4" dowel material, and hung it on my garage wall.
I marked known club flexes on the board as reference points, using a consistent weight. The board also gave me other info besides flex. It was crude looking, but it did the job.
It mattered little what the manufacturer's rating on the shaft was, since ratings usually differ from company to company. Whatever my flex board showed, compared against the known flexes I had marked, was the actual flex.
Maltby probably has a picture of his flex board on his website, and other sites probably have pictures available too. With a little ingenuity, a person can study a picture and build their own. It's pretty easy.
I thought about this for a while, and then had an epiphany last night. Without a near-perfect test, the answer is almost certainly no. It's a fairly simple statistical calculation based on Bayes' Theorem. The end result is that you'd stop far more sober drivers from driving than drunk ones.
I'm going to plug in numbers, but since I'm (likely correctly!) assuming drunk driving is a rare event, the numbers don't really matter that much. I'm also going to assume the test is extremely accurate.
Let's say that in 1 out of 10,000 car trips, the driver is too drunk to legally drive. Given how many car trips there are in a day, this probably overstates the true rate by a factor of 100, if not more. Let's assume that when the driver is drunk, the test is positive 99.9% of the time, and that when the driver is sober, the test is negative 99.9% of the time (in other words, a sober driver gets a false positive only 0.1% of the time).
We can use this to plug in probabilities for each event.
Probability that a driver is drunk: .0001
Probability that a driver is sober: .9999
Probability that a drunk driver gets a positive test: .999
Probability that a drunk driver gets a negative test: .001
Probability that a sober driver gets a positive test: .001
Probability that a sober driver gets a negative test: .999
Bayes' Theorem applies here. It says:
The probability that someone is a drunk driver given a positive test equals the probability that a drunk driver gets a positive test times the probability of a drunk driver, divided by the following: the probability that a drunk driver gets a positive test times the probability of a drunk driver, plus the probability that a sober driver gets a positive test times the probability of a sober driver.
In mathematical terms (DD = drunk driver; SD = sober driver; + = positive test):
P(DD | +) = (P(+ | DD)*P(DD)) / (P(+ | DD)*P(DD) + P(+ | SD)*P(SD))
Plug in the numbers:
P(DD | +) = ((.999)*(.0001))/((.999)*(.0001)+(.001)*(.9999))
P(DD | +) = .0908
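The plug-in above can be sketched in a few lines of Python (the function name is just illustrative):

```python
def p_drunk_given_positive(p_drunk, p_pos_given_drunk, p_pos_given_sober):
    """Bayes' Theorem: P(DD | +) = P(+|DD)*P(DD) / (P(+|DD)*P(DD) + P(+|SD)*P(SD))."""
    p_sober = 1 - p_drunk
    numerator = p_pos_given_drunk * p_drunk
    denominator = numerator + p_pos_given_sober * p_sober
    return numerator / denominator

# The numbers from above: 1/10,000 drunk trips, 99.9% test accuracy both ways.
print(round(p_drunk_given_positive(0.0001, 0.999, 0.001), 4))  # 0.0908
```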
In other words, the probability of a drunk driver given a positive test is only about 9%. Meaning that out of 100 people who test positive under this test, 91 of them would actually be sober.
Because the test is imperfect and drunk driving is rare, it's going to impact more sober drivers than drunk drivers. Even if the test is 99.99% accurate with a false positive rate of 0.01%, the probability of a drunk driver given a positive test is only 50%. Note that I'm assuming that 1/10,000 car trips is one by a drunk driver. If you assume 1/100,000 car trips are by a drunk driver, the probability of a drunk driver given a positive test drops to about 1%.
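You can check both of those variations with the same formula; a quick sketch (the helper name is just illustrative):

```python
def p_drunk_given_positive(p_drunk, p_pos_given_drunk, p_pos_given_sober):
    # Bayes' Theorem: P(DD | +)
    num = p_pos_given_drunk * p_drunk
    return num / (num + p_pos_given_sober * (1 - p_drunk))

# 99.99% accurate test, 0.01% false positives, same 1/10,000 base rate:
print(round(p_drunk_given_positive(0.0001, 0.9999, 0.0001), 4))  # 0.5

# Original 99.9% test, but only 1/100,000 trips drunk:
print(round(p_drunk_given_positive(0.00001, 0.999, 0.001), 4))  # 0.0099, about 1%
```

Making the test 100x more accurate only buys you a coin flip, because the base rate is still tiny; rarity of the event dominates the result.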
(You can also use this calculation to find out the odds that a drunk driver will have a negative test, but I have other stuff to do now...)
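For what it's worth, that negative-test case is the same formula with the negative-test probabilities swapped in; a sketch, again with an illustrative name:

```python
def p_drunk_given_negative(p_drunk, p_neg_given_drunk, p_neg_given_sober):
    # Bayes' Theorem: P(DD | -)
    num = p_neg_given_drunk * p_drunk
    return num / (num + p_neg_given_sober * (1 - p_drunk))

# 1/10,000 drunk trips, 0.1% false negatives, 99.9% true negatives:
p = p_drunk_given_negative(0.0001, 0.001, 0.999)
print(f"{p:.1e}")  # about 1e-07: a negative test almost guarantees a sober driver
```

So the test fails in the opposite direction too, but harmlessly: missed drunk drivers are vanishingly rare, while false accusations are the dominant outcome.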
So, without a nearly perfect test, it's a bad idea for the entire population.
If drunk driving were more frequent, the test would make more sense. That's why it's reasonable to apply it to someone who is already known to be likely to drive drunk, and why the current policy probably makes sense.