Updated: Sep 21, 2018
Use one of the following programs to determine if the Shared Service Proxies (SSPs) in your Millennium system are queuing.
Panther’s Shared Service Queue Depths Control
The Shared Service Queue Depths control in Softek Panther shows all SSP queues on a node in a domain, allowing you to easily see where you have backlogs in SSP queues. To view these across your entire environment, you can create a new desktop and drag each node's Shared Service Queue Depths control into a panel in the Desktop control. Clicking the Maximum column header sorts queue depths from greatest to least. Any service with a queue depth greater than 0 has a backlog.
Panther also provides several sensors that can be configured to send email or pager notifications when user-defined queue depths are exceeded or when no forward progress is made on messages in a queue. These sensors eliminate the need for someone to constantly watch the SSP queues and allow your IT staff to respond to these issues more quickly.
Millennium's mon_ss.exe Utility
The following steps tell you how to use Millennium's mon_ss.exe utility to find out if you are queuing and include an example of the output from a client.
Start a DOS or CMD prompt on the Citrix session or fat client.
From the winintel directory, type mon_ss <node name> and log in when prompted.
Of the six columns displayed, the "cur" (current queue depth) and "max" (maximum queue depth) columns are the ones we are concerned with. A max value greater than 0 indicates that the queue has had a backlog at some point. You also want to track how long cur stays above 0; you can do this by watching the seconds counter displayed at the top right of the screen.
The “.” in the “queues” column indicates how many were queued at a prior point in time.
The “#” in the “queues” column indicates how many are currently queued.
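As an illustration of reading these columns, the short script below flags any service whose max depth has exceeded 0. The sample lines, column order, and field spacing are assumptions for this sketch only; they are not actual mon_ss output.

```python
# Sketch: flag backlogged SSP queues from mon_ss-style columns.
# SAMPLE below is invented for illustration; real mon_ss output
# has six columns and may be formatted differently.
SAMPLE = """\
service        cur  max  queues
cpm_script       0    0
srvrpc           2    5  ..##
chart_server     0    3  ..
"""

def flag_backlogs(text):
    """Return (service, cur, max) for every service whose max depth exceeded 0."""
    backlogs = []
    for line in text.splitlines()[1:]:        # skip the header row
        parts = line.split()
        service, cur, mx = parts[0], int(parts[1]), int(parts[2])
        if mx > 0:                            # max > 0 means a backlog has occurred
            backlogs.append((service, cur, mx))
    return backlogs

print(flag_backlogs(SAMPLE))
```

In this invented sample, srvrpc is still queuing (cur > 0, shown by "#"), while chart_server queued earlier but has since drained (cur back to 0, only "." remains).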
Millennium's mon_ss.exe tool can view only one node at a time. Sites with two or more Application nodes must run a separate DOS window for each node.
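Because mon_ss watches a single node per window, a small wrapper can build one invocation per Application node. The node names below are hypothetical, and the script only prints the commands rather than launching mon_ss, since the utility exists only on a Millennium client.

```python
# Hypothetical Application node names; substitute your own.
NODES = ["appnode1", "appnode2", "appnode3"]

def mon_ss_commands(nodes):
    """Build one 'mon_ss <node>' command per node (mon_ss views one node at a time)."""
    return ["mon_ss {}".format(node) for node in nodes]

for cmd in mon_ss_commands(NODES):
    # In practice you would run each command in its own CMD window,
    # for example via 'start cmd /k <command>' on Windows.
    print(cmd)
```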