Most product comparisons tell you jack. I propose a better way.
Most technology evaluations you read in trade magazines, or those provided by manufacturers, suffer from a common and basic flaw: they compare technologies with other similar, competitive products and rarely, if ever, tell the reader how good the product really is. The evaluations list features only in light of competitors' features. In my opinion, they perpetuate mediocrity.
I think there is a better way to look at technology products. The method I've been developing for the last four years is as close to a scientific way of performing these evaluations as I have found. It has another important difference: it tells the security executive (or the manufacturer) how successful the product will be at solving the end-user customer's problems, rather than telling the customer what he or she will have to settle for.
I'll say it more plainly. By taking a hundred or so criteria drawn from end-user requirements and preferences, I can score how closely any product comes to the customer's hopes and expectations.
Here is an example of the scoring. This product (which I will not name here) was recently awarded very high marks in a (more or less) independent product comparison with other big-name access control products. It is considered one of the best products you can buy in its category. However, as you can see from the scores, it still has room for improvement if it is to stand a chance of meeting the customer's actual needs.
Category                          Rating
Architecture and Integration       2.7
Reliability and Scalability        3.5
Configuration and Flexibility      1.9
Administration and Reporting       1.8
Overall Rating                     2.5
The scoring range runs from 1 through 5: a 1 represents poor or absent support or quality; a 5 indicates satisfying, broad, flexible capabilities.
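A quick sanity check on the scorecard above, as a minimal sketch. I am assuming here, for illustration only, that the overall rating is the unweighted mean of the four category ratings; the real roll-up may weight the categories unequally.

```python
# Category ratings from the example scorecard above.
ratings = {
    "Architecture and Integration": 2.7,
    "Reliability and Scalability": 3.5,
    "Configuration and Flexibility": 1.9,
    "Administration and Reporting": 1.8,
}

# Assumption: the overall rating is the unweighted mean of the
# category ratings (the actual method may weight them differently).
overall = sum(ratings.values()) / len(ratings)
print(f"Overall Rating: {overall:.1f}")  # Overall Rating: 2.5
```

The mean works out to about 2.475, which rounds to the published 2.5, so an unweighted average is at least consistent with these numbers.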
Each category has several sub-categories consisting of several criteria. Each sub-category and criterion is weighted according to its relative importance to the customer. Therefore, the amount of database administrator time required to set up the system may be weighted more or less heavily than, say, the range of third-party databases supported by the product, depending on what customers prefer. Similarly, the usefulness and intuitiveness of the graphical user interface or the online help tools will be weighted more heavily than the product’s support of a command line interface.
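To make the weighting concrete, here is a minimal sketch of how a weighted roll-up of criteria into a sub-category score can work. The criterion names, weights, and scores below are hypothetical, invented for illustration; they are not taken from my actual criteria list.

```python
# Hypothetical weighted roll-up: each criterion gets a raw 1-5 score
# and a weight reflecting its relative importance to the customer.
# All names, weights, and scores here are illustrative only.
criteria = [
    # (criterion, weight, score)
    ("DBA setup time required",         0.5, 2.0),
    ("Third-party databases supported", 0.2, 4.0),
    ("GUI intuitiveness",               0.2, 3.0),
    ("Command-line interface support",  0.1, 1.0),
]

# Weighted average: heavily weighted criteria pull the sub-category
# score toward their individual scores.
total_weight = sum(w for _, w, _ in criteria)
subcategory_score = sum(w * s for _, w, s in criteria) / total_weight
print(f"Sub-category score: {subcategory_score:.2f}")
```

Note how the heavily weighted "DBA setup time" criterion, with its low score of 2.0, drags the sub-category down despite decent marks elsewhere; that is exactly the effect customer-driven weighting is meant to produce.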
When my evaluation is complete, the CSO or product manager will see in the detailed report every major way the technology meets, exceeds, or fails to address dozens of important requirements and preferences of the end-user customers.
So rather than crying and carrying on about which gizmo has more features than the next guy's, let's focus on solving the problem and meeting the end-user security executive's needs. Let me know if you'd like more information on measuring the true value of technologies. I'm happy to chat. firstname.lastname@example.org