Oracle Tables Expose Hidden Issues

In working with a couple of clients, I discovered a large number of Millennium® Oracle tables used for logging or errors (like a message log): more than 90 in the 2007.18 and later schema. I find this design perplexing. If you wanted to put all of the error and logging information in the database, why create the message logs and the .err and .out files? On a practical level, it means that analyzing application or stability issues requires users not only to gather data from the message logs but also to run a count against these 90+ Oracle tables to determine which ones have a lot of activity, and then run a retrieval to pull the information from those tables. No wonder many don't bother looking for issues proactively; the task is simply too daunting.
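
If you do not have the full list in front of you, the Oracle data dictionary offers a rough starting point. The sketch below assumes the standard Millennium schema owner (shown here as V500; substitute your site's owner) and a LOG/ERR naming convention, so it may not catch every table:

    SELECT table_name
      FROM all_tables
     WHERE owner = 'V500'   -- assumed schema owner; adjust for your site
       AND (table_name LIKE '%LOG%' OR table_name LIKE '%ERR%')
     ORDER BY table_name;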

Despite the magnitude of the task, I believe it's important to do all you can to keep Millennium running as error-free as possible. One way to accomplish this goal is to monitor error messages and logs and resolve the issues they show you. Last month I wrote about how to find and correct message logs that wrap quickly (“What Is Measured Improves”). Today I'll list the queries you can run to see whether any of these 90+ Oracle tables are active on your system, for instance, whether they hold more than 10 rows of information, or whatever threshold your team sets. To see the list, click here.
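
As a minimal sketch of that threshold check, the PL/SQL block below counts the rows in a few of the tables and reports only those above the threshold. The table list and the threshold of 10 are placeholders; substitute the full list and whatever number your team agrees on:

    SET SERVEROUTPUT ON
    DECLARE
      TYPE t_names IS TABLE OF VARCHAR2(30);
      -- Placeholder list; extend with the full set of 90+ logging tables.
      v_tables    t_names := t_names('CHART_SERV_LOG', 'DCP_ERROR_LOG', 'ORM_ERROR_LOG');
      v_count     NUMBER;
      c_threshold CONSTANT NUMBER := 10;  -- your team's agreed threshold
    BEGIN
      FOR i IN 1 .. v_tables.COUNT LOOP
        EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' || v_tables(i) INTO v_count;
        IF v_count > c_threshold THEN
          DBMS_OUTPUT.PUT_LINE(v_tables(i) || ': ' || v_count || ' rows');
        END IF;
      END LOOP;
    END;
    /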

This analysis is related to the previous blog on message logs and uses a similar methodology for reviewing the Oracle tables: you want to view activity in the tables over time. Run the queries I have listed and gather the row counts, then run the queries again a day, a week, or a month later and see which tables have grown.
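
To make that comparison less tedious, you can persist each run's counts in a small work table of your own and let a query do the diffing. This is a sketch, not a Cerner-supplied tool; the table and column names are hypothetical, and it assumes you take at most one sample per day:

    CREATE TABLE log_table_counts (
      table_name  VARCHAR2(30),
      row_count   NUMBER,
      sampled_dt  DATE DEFAULT TRUNC(SYSDATE)  -- one sample per day
    );

    -- After each counting run, record the results, for example:
    INSERT INTO log_table_counts (table_name, row_count)
      SELECT 'DCP_ERROR_LOG', COUNT(*) FROM dcp_error_log;
    COMMIT;

    -- Later, compare the newest sample against the oldest:
    SELECT curr.table_name,
           prev.row_count AS earlier_count,
           curr.row_count AS latest_count,
           curr.row_count - prev.row_count AS growth
      FROM log_table_counts curr
      JOIN log_table_counts prev
        ON prev.table_name = curr.table_name
     WHERE curr.sampled_dt = (SELECT MAX(sampled_dt) FROM log_table_counts)
       AND prev.sampled_dt = (SELECT MIN(sampled_dt) FROM log_table_counts)
     ORDER BY growth DESC;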

Because the tables that matter depend on the Millennium solutions configured at your site, I cannot provide specific guidance on what to look for and how to address an issue. I can, however, provide a little insight into a few of the tables:

  • CPMPROCESS_ERROR is important to anyone still on 2001.x or earlier code. This is the Oracle table that the CPM Process server and any clones of CPM Process use when they cannot complete a transaction.
  • For the CQM_MDRESULTS_LOG, CQM_MICRESULTS_LOG and CQM_OENTXLOG_QUE tables, you want to make certain that all of the rows are only as old as your purge criterion. For instance, if your purge criterion is seven days, there should not be any records in these tables more than eight days old (see the sketch after this list). If there are, you need to purge them manually and log a service request (SR) with Cerner to get a purge script that works correctly.
  • CHART_SERV_LOG is important to monitor to verify that the chart servers are not sending too much logging information to the database.
  • The DCP_ERROR_LOG and ORM_ERROR_LOG tables are for DCP and order errors. These errors often appear in the message logs as well, but the tables consolidate the messages from all of the application nodes into a single queryable location.
  • If you are a Millennium Profit client, the CSI_LOG and CSI_ERROR_LOG tables are important to monitor for interfaced charges and their errors.
  • The GL_TRANS_LOG table for general ledger events and the PFT_*_LOG tables for general Profit issues should be monitored, and any issues rectified as quickly as possible, to minimize cash-flow and reimbursement problems.
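
Here is the purge-age check mentioned in the CQM item above, again as a sketch. The date column (updt_dt_tm) is an assumption based on the common Millennium column convention, so verify it against your data model, and adjust the eight-day cutoff to match your own purge criterion:

    -- With a seven-day purge criterion, anything older than eight days is stale.
    SELECT 'CQM_MDRESULTS_LOG' AS table_name, COUNT(*) AS stale_rows
      FROM cqm_mdresults_log WHERE updt_dt_tm < SYSDATE - 8
    UNION ALL
    SELECT 'CQM_MICRESULTS_LOG', COUNT(*)
      FROM cqm_micresults_log WHERE updt_dt_tm < SYSDATE - 8
    UNION ALL
    SELECT 'CQM_OENTXLOG_QUE', COUNT(*)
      FROM cqm_oentxlog_que WHERE updt_dt_tm < SYSDATE - 8;

Any nonzero stale_rows count is a candidate for the manual purge and the SR described above.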

As I recommended in the previous blog, your staff will need to log SRs with Cerner to figure out how to correct your errors and then create and execute an action plan.

Prognosis: Maintaining a list of Oracle tables and tracking their rate of change or growth can be time-consuming, but the work you do up front to catch issues early will head off problems for clinicians and support staff down the road.