We never dreamed that what we thought was a relatively straightforward story about three publications that review fitness equipment would raise the kind of immediate, passionate and direct response it did.
But sometimes you never know when you hit a nerve, and hit a nerve we did with our Feb. 17 story titled “Taking a look at fitness equipment magazine reviewers.” In it, we simply presented what we were told about how the reviews are done at Consumer Reports, Runner’s World and Prevention magazines — to satiate readers’ curiosity — and then balanced that with comments — both positive and negative — from several manufacturers that had their equipment reviewed. We then commented ourselves about gear reviews and their power.
Considering the time subscribers put into responding and the articulate nature of their responses, we want to share several excerpts with SNEWS subscribers. Of course, some manufacturers chose to remain anonymous for fear of reprisal, real or not. In light of the reaction, we also went back to both Runner’s World and Consumer Reports for additional comment. Prevention magazine didn’t seem to elicit much response, partly because it really doesn’t rate equipment, just writes up selected pieces. (For details about how each said it performs reviews, see our Feb. 17 story.)
Consumer Reports — Known for thoroughness, but not above complaints
Retail owner Jerry Greenspan of Fitness Equipment Experts in Ohio — a man who has gone beyond the call of duty with degrees in biomechanics and who is also a physical therapist — won’t let anybody rest on their laurels. He wrote to SNEWS about Consumer Reports:
“First of all, Consumer Reports has no experts, to the best of my knowledge, in biomechanical engineering, the necessary academic background to evaluate how exercise equipment loads the joints of the body during exercise. Second, they receive one sample of each product and base their conclusions on the test results of this one product (I can assure you that is not a statistically valid number of units to test)….
“Lastly, it specifically states in the magazine that it accepts donations. Given the tremendous financial impact of their ratings, do you really think that Consumer Reports is beyond reproach? Please.”
One manufacturer said it did have more confidence in the way CR does reviews, compared to other consumer publications, but “I’d really like some visibility into its metrics.”
John Galeotafiore, director of testing at CR for recreation, power and auto equipment, responded that the magazine has two engineers on staff, one with a degree in ergonomics, the other who has been a professor of exercise science. But they don’t stop at staff members.
“We can’t have experts in everything,” he told us. “If we don’t, we go to the outside and bring them in.”
In recent years, the magazine has also raised its standards somewhat from reviews of yore that nearly exclusively looked at low-end units sold at discount chains. Now, readers find name and specialty brands worth a few thousand dollars in the reviews. And he said they try to keep some consistency in the price range reviewed so the units are a little closer in quality. But, he said, “Price is not a factor in scoring,” since testers don’t know prices during months of use.
Regarding the claim that testing one unit is not statistically valid, Galeotafiore said if there seems to be some kind of durability or mechanical issue, testers will go out and buy another unit to make sure it’s not a one-time incident. But, he said, why should they test a dozen or more when that’s not what happens in the real world, where consumers buy just one unit, not a sampling.
Consumer Reports is working on another treadmill review for a late spring or early summer issue, he said, since fitness equipment reviews are in demand by its readers and treadmills remain hot-selling items.
Runner’s World — Hardest hit by claims of shifting criteria, apples-to-oranges
A magazine read by devout runners of all levels as well as wanna-be runners, Rodale Press-owned Runner’s World undertakes its treadmill review every three years. But, whoa, did it stir up a hornet’s nest.
SportsArt America is so troubled by the inconsistencies it found in the RW review process that it has even posted a three-page letter on its website (www.sportsartamerica.com) from its president, Terry Brown, called “Facts about 2002 Runner’s World Review.”
For the manufacturers involved, the biggest gripe was about the shifting sands in the magazine’s requests for equipment. Initially, they were told to stick to the $2,000 to $3,000 range, and to submit only home treadmills, but some companies later found $5,000 commercial-grade units toeing the line next to the less-expensive home units.
“The Runner’s World process — if you can even call it that — is flawed,” one manufacturer told SNEWS.
Several companies told SNEWS that they didn’t know how many miles were put on their treadmills since nearly all are purchased, at deep discounts, by RW staffers and testers — even ones that are ranked low.
Retailer and biomechanist Greenspan comes down even harder here, calling the magazine’s protocol “scientifically invalid” in his letter to SNEWS:
“First, having runners rate treadmills is like having driving salesmen rate cars. They will rate the cars they currently drive highest, simply because it is what they are used to. Are they qualified to evaluate the engineering components of the machines? Of course not. They’re not engineers with years of experience in the industry. And while some people like the feel of soft, cushy decks, these decks have the potential to cause injuries to the ankles, knees and hips due to the increased medial/lateral and anterior/posterior movements of the runner’s feet due to lack of stability….
“Additionally, the treadmill study was not a double-blind, cross-balanced study as is required for scientific validity…. If you don’t have the knowledge or resources to do the research correctly, you just shouldn’t do the research.”
Dave Sellers, new products editor at RW, said the magazine’s review has never intended to imply winners and losers, but rather has been an attempt to present a range that would serve the needs of different runners. He said the magazine clearly tells readers the exact limitations of the “human” approach — that testers don’t and can’t test durability, only look at a limited number of models and emphasize “runnability.” He also added in an email response:
“The treadmill solicitation and selection is certainly the toughest issue we face in our reviews. The manufacturers naturally want to submit a treadmill that they think will be highly ranked, and they tend to believe that the more expensive treadmills will receive the highest rankings. Our results show that this isn’t necessarily the case. For example, the highest priced units ended up 4th and 6th in the overall ratings. And the least expensive treadmill scored well in the middle of the ratings.
“On the other hand, our readers also want us to review treadmills at lower price points. This is probably the most frequent complaint we hear from readers — why didn’t you look at several less-expensive units? And, naturally, we want to review treadmills of sufficient quality that they are likely to give good long-term service.
“We try to balance these pressures by having a dialogue with treadmill manufacturers to help them select an appropriate unit to submit to us. We point out, for example, that our testing is a simple, straightforward human-factors test. We put serious runners on the treadmills and ask them how they like the ‘runnability’ of the treadmill on several key runner variables — cushioning, stability, etc.
“In discussions with treadmill reps, we point out that higher price often means more electronic and programmable features. We don’t ask our runner-testers to rate these features. Most of our runner-readers use only the manual mode when they run on treadmills, and this is the predominant mode we evaluate in our testing.
“Also, many readers will be weighing price, as well as performance. So, a treadmill that costs less but earned an adequate performance review (cushioning and stability, for example) would be attractive to readers. As a result, a number of manufacturers chose to submit units below their top-of-the-line, so we get a range of treadmills, which is both good and bad. Good, because it allows us to present a range of quality treadmills to readers in our infrequent reviews. Bad, because it allows unhappy manufacturers to say we compared apples and oranges.
“We may change this approach in the future. We’re always re-evaluating how we do things, same as everyone else. But, for now, we think it serves us and our readers reasonably well.”
To the question about why RW even bothers with black-and-white rankings with something that can be a very personal reaction (when it doesn’t do that with other products, including shoes), Sellers wrote:
“Why do ratings at all? We don’t rate running shoes, after all, and rarely rate other products that we review. It’s a good question, and one we’ll continue to evaluate. For now, we believe that running shoe evaluations are absolutely unique to the person whose foot and body are in the shoe, but that treadmills can be evaluated by runners whose observations have carry-over value to other runners.”
Can any review make everybody happy? Likely not, it seems.
“It just irritates me that so many people are so swayed by all these inaccurate reviews, no matter how well intentioned they are,” Greenspan said.
And, Scott Logan, marketing director of SportsArt America, added, “There really are no great reviews out there. They all either are incomplete or have their own agendas or allow themselves to be influenced by manufacturers.”
SNEWS View: Wow, where do we start? We never dreamed that within an hour of posting a story we’d get such responses. And these only scratch the surface.
It seems Consumer Reports does one of the best jobs around, not only putting a piece of equipment through its paces for a few months, but also analyzing it from human feedback as well as lab data. Note we say “one of the best jobs” — none will ever be perfect or satisfy the masses. Nor is it truly realistic to ask a publication to review numerous pieces of the same equipment for months on end.
Runner’s World reviews, however, have different issues, and some of it is he-said-she-said. Manufacturers say they were asked for one type/price of unit, and they obliged based on perceived “rules;” the magazine says it had a “discussion” with each manufacturer and didn’t set exact criteria. Either way, the review ended up with an apples-to-oranges comparison. We think a publication should sit down and set exact criteria before requesting product, then stick to it, and not be swayed along the way. Plus, we hear some manufacturers flew out top execs and marketing people to give the magazine’s representative a spiel about why their product was so good. We say, either all companies do, or they don’t. As much as someone tries not to be influenced by flackery, he or she is. That’s simply reality.
And, we have to ask, why bother with the ranking system on treads when the magazine doesn’t do it with, say, shoes? The shoe reviews simply write up different models, specifying for whom each is good and why, then let readers sort through the information to pick ones that might suit them. Great stuff. A couple of awards (Best Buys and Editor’s Choice) finish off those reviews.
But the magazine says that won’t work for treadmills, since it believes opinions from one runner about a tread carry over to another, but not so with shoes. We say, that’s bogus. They can’t tell us that runners don’t compare their shoe experiences, and don’t actually go out and buy shoes based on what a friend or colleague says — be that right or wrong. Forget for a moment that treads cost 20 or 50 times as much as a pair of shoes. The reality is the same — they are a very personal item, with one user preferring a softer ride (like a cushioned shoe) and another wanting a firmer surface (stability shoe), while another wants schmancy features (fancy lacing systems, overlays, zippers, or colors and designs). And don’t think for a second that everybody, no matter how big or tall, short or small, will like or fit the same treadmill. A Honda or a Lexus may be a great car, but is everybody going to fit in it, like it or buy it? No.
Should publications stop doing reviews? We don’t think so. Readers love them, and we recognize that. But testers need to be very experienced at the testing process to properly give a rating. And criteria must be firmly set and stuck to. Averaging a really low rating with a really high rating doesn’t give a fair picture — no one in this case thought it was just “good,” so why say that? We say, tell it like it is, describe the process exactly (no hyperbole, please), and then decide if a number ranking is fair. If it’s not, give information about models, features, mechanics and what to look for since that itself can be some of the best service a publication can provide its readers.