For the past two weeks we have been talking about script analysis: first, how to collect the data from your Cerner Millennium® system, and then how to create a spreadsheet to analyze the scripts that have a high impact each time they run. This week we are going to find the scripts that have a high impact on your system because of the number of times they run. At first glance, you may wonder why you can’t just sort by the EXECUTIONS column to find the ones with the highest impact. Any script does cost more the more often it runs, but we must weigh all of the metrics that affect script performance to determine each script’s overall impact.
So, picking up from where we left off last week, open your analysis spreadsheet (the .CSV file that was generated by the dm2_sqlarea_analyze command) and follow these steps:
1. Insert a column next to the Cost_Per_Execution column we created last week and enter High_Executions into the first row of this new column.
2. In the second row of the High_Executions column, enter a formula that matches this equation: (BUFFER_GETS * DISK_READS + BUFFER_GETS * CPU_TIME + DISK_READS * CPU_TIME) * EXECUTIONS
Note: Your equation will look something like this:
=(J2 * K2 + J2 * P2 + K2 * P2) * M2
If at your site you feel that one of the metrics is more costly, you can modify the equation to reflect that; the sketch that follows these steps shows one way to weight a term.
3. Copy this formula down to every remaining row in the column that has corresponding analysis data.
4. You may need to remove and reapply the filter for the spreadsheet. In Excel 2007, go to the Data tab of the ribbon and click the Filter button. After you have turned on the filter feature, you should see a button with a down arrow in the first row of every column.
5. Click the down arrow in the High_Executions column and then click Sort Largest to Smallest.
You should now have a spreadsheet where the scripts with the highest impact and a large number of executions are at the top. If you have any questions about the High_Executions equation, please post your comment at the end of the blog.
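If you would rather do this step outside of Excel, the short Python sketch below computes and sorts the same High_Executions score with pandas. This is a minimal sketch, assuming your .CSV uses the column names from this series (BUFFER_GETS, DISK_READS, CPU_TIME, EXECUTIONS); the file name sqlarea_analysis.csv is a placeholder for your own dm2_sqlarea_analyze output.

import pandas as pd

# Load the analysis file produced by dm2_sqlarea_analyze.
# "sqlarea_analysis.csv" is a placeholder file name.
df = pd.read_csv("sqlarea_analysis.csv")

# Pairwise products of the three cost metrics, scaled by execution count,
# mirroring the spreadsheet formula in step 2.
df["High_Executions"] = (
    df["BUFFER_GETS"] * df["DISK_READS"]
    + df["BUFFER_GETS"] * df["CPU_TIME"]
    + df["DISK_READS"] * df["CPU_TIME"]
) * df["EXECUTIONS"]

# To weight a metric you consider more costly at your site, multiply its
# terms by a factor, e.g. use 2 * df["DISK_READS"] in each product above.

# Equivalent of "Sort Largest to Smallest" on the new column.
df = df.sort_values("High_Executions", ascending=False)
print(df.head(10))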
So what do you do now that you’ve identified a script that has a high impact? Once again, it depends on why the score is high: Does it have a high number of BUFFER_GETS, DISK_READS, CPU_TIME, or EXECUTIONS, or some combination of these, that is an order of magnitude above other scripts? For BUFFER_GETS, DISK_READS, and CPU_TIME, the same procedure we covered last week applies. For EXECUTIONS that appear to be an order of magnitude above other scripts, however, you will want to investigate the workflow associated with the application or applications that call the script. Your goal is to identify whether the workflow can be modified so that the script isn’t called so frequently.
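To make “an order of magnitude above other scripts” concrete, here is one illustrative heuristic in the same sketch style, reusing the df loaded above. The tenfold-over-the-median cutoff is my rule of thumb, not part of the article, and the SQL_TEXT column is assumed to identify the script; adjust both to match your own data.

# Flag scripts whose EXECUTIONS are roughly an order of magnitude above
# the rest, using ten times the median as the (assumed) cutoff.
high_exec = df[df["EXECUTIONS"] >= 10 * df["EXECUTIONS"].median()]

# SQL_TEXT is assumed here; use whatever column identifies the script
# in your dm2_sqlarea_analyze output.
print(high_exec[["SQL_TEXT", "EXECUTIONS", "High_Executions"]].to_string(index=False))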
Prognosis: Analysis of data from Millennium can identify, and help you address, scripts that have a high impact due to the number of times they run.
Next week: A look at the formula we used to calculate the baseline cost of scripts.