Measuring performance is hard. Unfortunately it's easy to benchmark the wrong things, or to benchmark tiny parts of your program rather than the whole; that's the danger of individual benchmarks that don't measure the time taken to perform a real-world action. Some really useful tools I've found for this are:
- Benchmarking, as in this article - most useful for comparing the impact of a change on an isolated bit of code (see the benchmark sketch after this list).
- Measure the absolute impact, not percentage increases; e.g. a 100% increase in response time from 0.0001ms to 0.0002ms is unlikely to matter.
- Profiling - check the call graph to find out which functions are called most often and take the longest, particularly in hot paths, then work on those (see the profiling sketch below).
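As a minimal sketch of the first approach, here's what an isolated benchmark might look like in Go, using the standard `testing` package. The `parseRecord` function is a hypothetical stand-in for whatever bit of code you're changing:

```go
package parser

import (
	"strings"
	"testing"
)

// parseRecord is a hypothetical stand-in for the isolated piece of
// code being compared before and after a change.
func parseRecord(line string) []string {
	return strings.Split(line, ",")
}

// BenchmarkParseRecord runs parseRecord b.N times so the testing
// package can report a stable ns/op figure. Compare the output of
// `go test -bench=. -benchmem` before and after the change.
func BenchmarkParseRecord(b *testing.B) {
	line := "id,name,email,created_at"
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		parseRecord(line)
	}
}
```

This kind of benchmark is exactly the narrow tool described above: great for measuring one function in isolation, but it tells you nothing about whether that function matters in a real-world action.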
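For the profiling side, here's a minimal sketch using Go's built-in `runtime/pprof` to capture a CPU profile you can view as a call graph. The `work` function is hypothetical; replace it with your program's real entry point:

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
)

// work is a hypothetical hot path standing in for the real program.
func work() {
	total := 0
	for i := 0; i < 1e8; i++ {
		total += i
	}
	_ = total
}

func main() {
	f, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Record where CPU time is spent while work runs.
	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	work()
}
```

Running `go tool pprof -top cpu.prof` then lists the functions taking the most CPU time, and `go tool pprof -web cpu.prof` renders the call graph so you can spot the hot paths worth working on.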
Here's a collection of stories on profiling.