Installing HP Diagnostics Probe for ASP.NET Applications

Another very useful tip shared by Odorico:

Installation:

  • Just install the provided file, “HP .NET Probe.msi”, which is typically located under the “Diagnostics_Probes” folder (it can also be launched from a command prompt, as sketched after this list);
  • Enter the name of the server where the Diagnostics Server is installed;
  • Keep the default values for the other settings.
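
If you prefer to launch the installer from a command prompt, the standard Windows Installer invocation below does the same thing; the path is only an assumption about where the MSI was copied:

rem Launch the probe installer through Windows Installer (path is an example)
msiexec /i "C:\Install\HP .NET Probe.msi"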

Remember to reset IIS so that the changes (enabling/disabling the probe) take effect:
Go to ’Start > Run’ and type “iisreset”

Attempting stop...
Internet services successfully stopped
Attempting start...
Internet services successfully restarted

You will be able to enable and disable the probe on the application server. By default, after installation, the probe is enabled.

If the application you intend to monitor is not detected by the probe, or if new applications are deployed on the server, you can use Rescan ASP.NET Applications. A new application point will be created for each new application on the server.

Configuration:

The probe is installed in the folder C:\MercuryDiagnostics\.NET Probe

The configuration and application point settings can be found in the etc folder (C:\MercuryDiagnostics\.NET Probe\etc)

The file C:\MercuryDiagnostics\.NET Probe\etc\probe_config.xml contains the main configuration. You can enable more detailed monitoring by editing this file.

The main changes you should make are:

  • Heap monitoring: add the attribute monitorheap="true" to the “process enablealldomains” element;
  • IIS and memory information monitoring: add the Iis.points and Lwmd.points file points.

Example file:

<?xml version="1.0" encoding="utf-8" ?>
<probeconfig>
  <id probeid="" probegroup="Default" />
  <credentials username="" password="" />
  <profiler authenticate="">
    <authentication username="" password="" />
  </profiler>
  <diagnosticsserver url="http://localhost:2006/commander" />
  <mediator host="localhost" port="2612" metricport="2006" />
  <webserver start="35000" end="35100" />
  <modes pro="true" />
  <instrumentation>
    <logging level="" />
  </instrumentation>
  <lwmd enabled="true" sample="1m" autobaseline="1h" growth="10" size="10" />
  <process enablealldomains="true" name="ASP.NET" monitorheap="true">
    <logging level="" />
    <points file="ASP.NET.points" />
    <points file="ADO.points" />
    <points file="Iis.points" />
    <points file="Lwmd.points" />
    <appdomain enabled="false" name="1923244809/Default">
      <points file="1923244809-Default.points" />
    </appdomain>
  </process>
</probeconfig>

If you need to increase the monitoring level for specific classes and/or methods, you just need to configure the <application_instance>.points file:

;[1923244809/Default]
;Uncomment and modify this section to capture calls to your business logic
;class                   = !Namespace_Qualified_Class_Name_Of_Your_Code.*
;method                  = !.*

;signature               =
;scope                   =
;ignoreScope             =
;layer                   = 1923244809/Default

The class field contains the namespace-qualified class name(s) and the method field contains the specific methods to monitor. CAUTION: avoid monitoring all methods (.*); that is very intrusive, and the probe tends to impact server performance.
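
As an illustration, a filled-in section could look like the sketch below; the class and method patterns are hypothetical and should point at your own business logic:

[1923244809/Default]
; Hypothetical example: monitor only the order-submission methods of MyCompany.Shop
class                   = !MyCompany.Shop.Orders.*
method                  = !Submit.*
layer                   = 1923244809/Default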

Profiler:

When you open the Profiler (the installer shows the port number the service will run on), you will see its main screen.

You can also find out the probe’s application port for each IIS application by checking the probe_config.xml file.

<webserver start="35000" end="35200" />

The webserver start and end attributes define the port range used by the probe: the first application point runs on port 35000, the second on port 35001, and so on.

In our example we have only one application installed in IIS, so we just access the URL:

http://localhost:35000/profiler

Analysis:

Heap Tab

The Heap tab shows the application’s heap. In this example the application has a memory leak: the Gen 2 and Large Object heaps grow very fast and do not release a meaningful portion of memory. Even after the test finished, a significant portion of memory remained allocated.

A ranking of the classes that consume the most memory is shown, for instance String, Byte[], and collections such as Dictionaries.

Collections Tab

The Collections tab shows how collections grow during the test. For the example application, there is a significant increase in collections allocated in CommunityServer.Controls.TagCloud.get_DataSource().

How can I save and share the reports?

You can save a monitoring snapshot by clicking the Snapshot button. An XML file is created containing all the information collected up to that point.

NOTE: Snapshots can create huge XML files, so a good practice is to compress the file before sharing it!
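
For example, on a server without a separate zip utility, the built-in makecab tool can do the compression; the file names below are just placeholders:

rem Compress the snapshot before sharing it (file names are examples)
makecab ProbeSnapshot.xml ProbeSnapshot.cab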

IMPORTANT: The probe is intrusive; it collects a lot of metrics from the application. So don’t forget to disable the probe after you have finished your tests. You will need to reset IIS once again.

Microsoft LogParser

Ask yourself this question: what if everything could be queried with SQL? Microsoft’s LogParser does just that. It lets you slice and dice a variety of log file types using a common SQL-like syntax. It’s an incredibly powerful concept, and the LogParser implementation doesn’t disappoint. This architecture diagram from the LogParser documentation explains it better than I could:

[LogParser architecture diagram from the documentation]

The excellent forensic IIS log exploration with LogParser article is a good starting point for sample LogParser IIS log queries. Note that I am summarizing just the SQL clauses; I typically output to the console, so the actual, complete command line would be:

logparser "(sql clause)" -rtp:-1

Top 10 items retrieved:

SELECT TOP 10 cs-uri-stem as Url, COUNT(cs-uri-stem) AS Hits
FROM ex*.log
GROUP BY cs-uri-stem
ORDER BY Hits DESC

Top 10 slowest items:

SELECT TOP 10 cs-uri-stem AS Url, MIN(time-taken) as [Min],
AVG(time-taken) AS [Avg], max(time-taken) AS [Max],
count(time-taken) AS Hits
FROM ex*.log
WHERE time-taken < 120000
GROUP BY Url
ORDER BY [Avg] DESC

All Unique Urls retrieved:

SELECT DISTINCT TO_LOWERCASE(cs-uri-stem) AS Url, Count(*) AS Hits
FROM ex*.log
WHERE sc-status=200
GROUP BY Url
ORDER BY Url

HTTP errors per hour:

SELECT date, QUANTIZE(time, 3600) AS Hour,
sc-status AS Status, COUNT(*) AS Errors
FROM ex*.log
WHERE (sc-status >= 400)
GROUP BY date, hour, sc-status
HAVING (Errors > 25)
ORDER BY Errors DESC

HTTP errors ordered by Url and Status:

SELECT cs-uri-stem AS Url, sc-status AS Status, COUNT(*) AS Errors
FROM ex*.log
WHERE (sc-status >= 400)
GROUP BY Url, Status
ORDER BY Errors DESC

Win32 error codes by total and page:

SELECT cs-uri-stem AS Url,
WIN32_ERROR_DESCRIPTION(sc-win32-status) AS Error, Count(*) AS Total
FROM ex*.log
WHERE (sc-win32-status > 0)
GROUP BY Url, Error
ORDER BY Total DESC

HTTP methods (GET, POST, etc) used per Url:

SELECT cs-uri-stem AS Url, cs-method AS Method,
Count(*) AS Total
FROM ex*.log
WHERE (sc-status < 400 or sc-status >= 500)
GROUP BY Url, Method
ORDER BY Url, Method

Bytes sent from the server:

SELECT cs-uri-stem AS Url, Count(*) AS Hits,
AVG(sc-bytes) AS Avg, Max(sc-bytes) AS Max,
Min(sc-bytes) AS Min, Sum(sc-bytes) AS TotalBytes
FROM ex*.log
GROUP BY cs-uri-stem
HAVING (Hits > 100) ORDER BY [Avg] DESC

Bytes sent from the client:

SELECT cs-uri-stem AS Url, Count(*) AS Hits,
AVG(cs-bytes) AS Avg, Max(cs-bytes) AS Max,
Min(cs-bytes) AS Min, Sum(cs-bytes) AS TotalBytes
FROM ex*.log
GROUP BY Url
HAVING (Hits > 100)
ORDER BY [Avg] DESC

Via codinghorror.com