We have developed a .NET web application that uses SQL Server as a backend, and we would now like to provide a monitoring dashboard app for the tech support team. The idea is that this monitoring app shows a global picture of the "health" of the web servers hosting the application and of the database servers holding the data. This "health" measure should reflect the workload of each machine and would be a single number (between 0 and 100, let's say) computed from a set of inputs that I need to determine.
For the web servers, I imagine that HTTP requests per unit of time should be considered, and perhaps bandwidth consumed as well.
For the database servers, I reckon that transactions per unit of time, and maybe locks or some other indicator of database concurrency, should be used.
In addition, some other generic inputs, such as CPU load, memory usage and disk queue length should also be taken into account.
All these factors would be weighted as necessary to obtain the final "health" figure for each server (a minimal sketch of what I mean is below).
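To make that concrete, here is a minimal C# sketch of the kind of computation I have in mind. Everything in it is a placeholder, not a final design: the idea is just to normalize each input to 0..1 against some notion of capacity, take a weighted average, and invert it so that 100 means fully healthy.

```csharp
using System;
using System.Collections.Generic;

static class Health
{
    // Combine normalized loads (0 = idle, 1 = at/over capacity) into a 0-100
    // health score, where 100 means completely healthy (no load).
    public static double Compute(IDictionary<string, double> loads,
                                 IDictionary<string, double> weights)
    {
        double weightedLoad = 0.0, totalWeight = 0.0;
        foreach (var w in weights)
        {
            // Clamp each input so one overloaded counter cannot push the
            // score below zero.
            weightedLoad += w.Value * Math.Min(1.0, loads[w.Key]);
            totalWeight  += w.Value;
        }
        return 100.0 * (1.0 - weightedLoad / totalWeight);
    }
}
```

For example, CPU at 65% of capacity (weight 2) combined with requests/sec at 40% of an assumed maximum (weight 1) would give a health score of about 43:

```csharp
var loads   = new Dictionary<string, double> { ["cpu"] = 0.65, ["requests"] = 0.40 };
var weights = new Dictionary<string, double> { ["cpu"] = 2.0,  ["requests"] = 1.0 };
double health = Health.Compute(loads, weights);  // ≈ 43.3
```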
Edit: The idea is that the "health" measure gives the technician a global view of a server's workload. If a server shows low "health", the technician can drill down into the details of the machine to see which specific inputs are causing the low "health".
My questions are:
- Do you think this "health" measure makes sense?
- I am thinking of using performance counters to capture the input data (see the sketch after this list). Is this the best option?
- Can you suggest appropriate inputs for the web servers (IIS 7) and the database servers (SQL Server 2008)?
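For context on the performance counter question, this is roughly how I would sample them from the monitoring app: a sketch using System.Diagnostics.PerformanceCounter against the standard Windows categories. The IIS- and SQL Server-specific counter names, and whether counters are the right mechanism at all, are exactly what I am asking about.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class CounterSample
{
    static void Main()
    {
        // Generic machine-level counters (standard Windows category/counter
        // names); a PerformanceCounter overload also takes a machine name,
        // which is how I would read the remote servers.
        var cpu  = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        var mem  = new PerformanceCounter("Memory", "Available MBytes");
        var disk = new PerformanceCounter("PhysicalDisk", "Avg. Disk Queue Length", "_Total");

        // Rate-based counters return 0 on the first call, so sample twice.
        cpu.NextValue();
        Thread.Sleep(1000);

        Console.WriteLine($"CPU: {cpu.NextValue():F1} %");
        Console.WriteLine($"Available memory: {mem.NextValue():F0} MB");
        Console.WriteLine($"Disk queue length: {disk.NextValue():F2}");
    }
}
```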
Thanks.