
Thursday, February 21, 2008

Good Rules Of Thumb For Benchmarking And Tuning

Benchmarking Goals

  • Start out with clearly defined performance goals. Woolly, ill-conceived or even non-existent targets defeat the purpose of any benchmarking exercise.

  • Performance, throughput and scalability goals should be linked to the business aims of the software.

Testing Environment and Load Simulation

  • Always use data which is representative of what the application's data will look like in production.

  • Scale your data in order to understand how performance will be affected as the database grows, if database growth is applicable (a rough row-multiplication sketch follows this list).

  • Use a realistic time distribution for when SQL statements will hit the database, i.e. ask yourself at what sort of frequency the SQL statements are going to hit the database and how this frequency is distributed over the time window in which the application will be in use.

  • Use a realistic distribution of the SQL statements that will be used against the database.

  • Ensure that when benchmarking and tuning your application, your application is the only thing using your server, network and I/O infrastructure. Activity outside of your application will skew your results. SANs are brilliant for skewing your results through caching: some SANs queue all I/O until the write portion of the cache has been written to disk once it becomes full, read-aheads can kick in, etc.

  • When using load simulation tools be aware of 'think' time; if this is too low you can in fact be simulating activity which is an order of magnitude greater than what you are trying to achieve (see the single-session sketch following this list).

  • Leverage Oracle features that allow your environment to be easily reset and your tests to be easily reproduced, such as Flashback Database; an example using a guaranteed restore point follows this list.
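
On the data scaling point, rows of a representative table can be multiplied up before re-running a benchmark. This is only a rough sketch: the ORDERS table and the ORDERS_SCALED copy are hypothetical names, and in a real schema the key columns would need remapping to avoid constraint violations.

    -- Multiply a representative table tenfold (table names are hypothetical);
    -- key columns would need remapping to keep constraints happy.
    INSERT /*+ APPEND */ INTO orders_scaled
    SELECT o.*
    FROM   orders o,
           (SELECT level AS copy_no FROM dual CONNECT BY level <= 10);
    COMMIT;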
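
On the 'think' time point, the sketch below simulates a single user session in PL/SQL, assuming EXECUTE on DBMS_LOCK has been granted; the workload query and the ORDERS table are hypothetical. With a response time of r seconds and a think time of t seconds, each simulated session submits roughly 1 / (r + t) statements per second, so reducing t from 5 seconds to near zero can easily multiply the arrival rate by an order of magnitude or more.

    -- One simulated user: issue a workload statement, then 'think'.
    DECLARE
      l_dummy      NUMBER;
      l_think_time NUMBER := 5;  -- seconds of think time between statements
    BEGIN
      FOR i IN 1 .. 100 LOOP
        SELECT COUNT(*) INTO l_dummy FROM orders;  -- hypothetical workload query
        DBMS_LOCK.SLEEP(l_think_time);             -- simulate user think time
      END LOOP;
    END;
    /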
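
The Flashback Database point can be put into practice with a guaranteed restore point, assuming the database is in ARCHIVELOG mode with a flash recovery area configured; the restore point name is arbitrary.

    -- Before the benchmark run (as SYSDBA):
    CREATE RESTORE POINT before_benchmark GUARANTEE FLASHBACK DATABASE;

    -- After the run, rewind the database to its starting state:
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    FLASHBACK DATABASE TO RESTORE POINT before_benchmark;
    ALTER DATABASE OPEN RESETLOGS;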


Metrics Capture

  • Use meaningful metrics for capturing performance. The ultimate goal of your exercise is to fulfil certain processing requirements under certain conditions in order to meet certain business goals, therefore you should be able to express some of your findings in terms of figures that are meaningful to the business users of the application.

  • Hit ratios are flawed indicators of performance; there is some PL/SQL on Connor McDonald's http://www.oracledba.co.uk/ web site which illustrates how the buffer cache hit ratio can be set to whatever value you like, which supports this point. Jonathan Lewis has made a presentation on the subject of hit ratios, in which he explains the flaws of traditional hit ratios and puts forward the "Fan Hit Ratio" as a superior performance metric; refer to http://www.jlcomp.demon.co.uk/hit_ratios.html. The crux of hit ratios being flawed is that they lose meaning: latency, to name but one thing, is something that hit ratios do not capture. The traditional calculation is sketched at the end of this section.

  • Techniques for capturing metrics should not incur overheads so large that they skew your results.
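
For reference, this is the sort of calculation the traditional buffer cache hit ratio boils down to, computed here from v$sysstat. The single figure it produces says nothing about latency, nor about whether the cache hits represent useful work, which is the point being made above.

    -- Traditional buffer cache hit ratio from v$sysstat.
    SELECT ROUND((1 - phy.value / (db.value + con.value)) * 100, 2)
             AS buffer_cache_hit_ratio
    FROM   v$sysstat phy,
           v$sysstat db,
           v$sysstat con
    WHERE  phy.name = 'physical reads'
    AND    db.name  = 'db block gets'
    AND    con.name = 'consistent gets';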


Tuning

  • Tune What You Know, this gets reiterated at every UKOUG conference during various performance and tuning related presentations, and rightly so. Only tune after you have hard statistics showing where your time is going; do not blindly apply optimisation techniques without first knowing where the time is being expended (the top-SQL query at the end of this list is one quick way of seeing this).

  • Make one change to your environment at a time when tuning.

  • When tuning, make changes which correspond to the scope of the problem, e.g. when tuning a query look for changes which are within the scope of the query (object statistics, indexes, etc.) rather than initialisation parameter changes. There are obviously parameters that affect optimisation; however, when troubleshooting, your first port of call should always be the places which are local to the problem.

  • When gathering performance metrics and using Oracle 10g, ensure that database time is one of the metrics you collect; a query against the time model is sketched at the end of this list.

  • If testing turnaround times for your application are considerable, consider (if possible) breaking out certain parts of the application in order to test the effect of certain changes independently before performing the full blown test using the real software.

  • Try to understand which parts of the architecture / infrastructure will throttle the performance of your software.

  • Optimise "single stream" performance as much as possible before scaling out.

  • Once performance goals have been met, document baselines of key performance indicators (document a flight envelope) to provide a tool for production performance troubleshooting; pinning an AWR baseline, as sketched at the end of this list, is one way of doing this.
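
As a starting point for "Tune What You Know", the shared pool can be queried for the statements where the time is actually being spent. This is only a sketch: the timing columns are in microseconds and the figures are cumulative since each statement was loaded.

    -- Top ten SQL statements by elapsed time (10g).
    SELECT *
    FROM  (SELECT sql_id,
                  executions,
                  ROUND(elapsed_time / 1e6, 1) AS elapsed_secs,
                  ROUND(cpu_time / 1e6, 1)     AS cpu_secs,
                  buffer_gets,
                  disk_reads
           FROM   v$sql
           ORDER  BY elapsed_time DESC)
    WHERE  rownum <= 10;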
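
Database time itself is exposed by the 10g time model: 'DB time' is the total time spent in database calls and 'DB CPU' its CPU component, both in microseconds.

    -- System-wide database time from the 10g time model.
    SELECT stat_name,
           value
    FROM   v$sys_time_model
    WHERE  stat_name IN ('DB time', 'DB CPU');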
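
One way of documenting a flight envelope is to pin the AWR snapshots that bracket a run which met its goals. The snapshot ids below are placeholders, and use of AWR assumes the appropriate Diagnostics Pack licence.

    -- Preserve the AWR snapshots covering the sign-off benchmark run.
    BEGIN
      DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
        start_snap_id => 100,   -- placeholder snapshot ids
        end_snap_id   => 110,
        baseline_name => 'benchmark_signoff');
    END;
    /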
