How to plan for capacity? #87
-
How should we plan the capacity required for this software? How much space is needed initially, and how can I estimate how much will be needed ongoing after install? Do the scans run synchronously or asynchronously per SQL instance and per database? Will this run multi-threaded? How much memory should we expect it to use?
Replies: 1 comment
-
Hi,

The storage requirements for the DBA Dash repository database will vary from environment to environment depending on how many instances you monitor, which collection options you have enabled, your data retention settings, and a variety of other factors.

In the GUI, if you go to Options\Data Retention you can see the retention options for each table and how much space each is currently using. If you use the filter drop-down to show all tables, you can see the storage for every table, even the ones without data retention settings.

Performance data is collected every 1 min by default. In most cases it is also aggregated into 60 min periods, which makes it more efficient to trend over larger date ranges and cheaper to store over the long term. So you could choose to clear the 1 min data more aggressively to save space while still keeping the associated _60MIN table for long-term analysis. Old data is cleared out via an efficient system of partition switching and truncation.

Note: Schedule frequency can be changed using the DBA Dash Service Config tool if required. This can impact storage requirements. I'd recommend sticking with the default values, but the option is there if you want to change the frequency of collection.

If you are monitoring the instance where the repository database is stored, you can also use DBA Dash to track its allocated/used space over time: select the Checks node in the tree, go to the "DB Space" tab, and click the History link for the database. It will be on a trajectory of growth until the data retention kicks in. If you have slow query capture enabled, the storage for this can be significant depending on your server workload, the settings used, and the data retention.

DBA Dash uses Quartz for scheduling, which is multi-threaded. There is an option to control the number of threads using the ServiceThreads setting in the ServiceConfig.json file. Performance data is collected every 1 min.
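For example, the thread count could be capped with a fragment like the following in ServiceConfig.json. This is a hedged sketch: only the ServiceThreads key is taken from the reply above, the value shown is illustrative, and the other settings that exist in a real config file are omitted here.

```json
{
  "ServiceThreads": 5
}
```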
An event fires every 1 min, triggering the collection for each server. Collections occur simultaneously across monitored servers, subject to the thread limit. On an individual server, collections that share the same schedule are executed serially. There are currently 14 collections that fire every 1 min by default; they execute one at a time rather than simultaneously, which reduces the load on your monitored servers.

In my lab environment, DBADashService.exe is using ~60 MB of memory to monitor 10 SQL instances, and CPU usage is mostly idle. If you monitor a large number of SQL instances, memory usage is likely to go up. The CPU load will also increase with more instances to monitor, particularly if the thread count is increased.

I would recommend running the DBA Dash service on a separate server from your production SQL instance. Also, run the DBA Dash repository database on a SQL instance separate from any critical production SQL instances that you are monitoring. This keeps the overhead as small as possible. That said, I wouldn't expect the performance impact to be very great if you need to run everything on the same box.

Hope this helps,
David
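As a rough way to reason about repository sizing, you can multiply an observed daily growth per monitored instance by the instance count and the retention window: size grows roughly linearly until retention starts pruning the oldest partitions. A minimal sketch of that arithmetic, where the per-instance daily growth figure is a placeholder you would measure from your own Options\Data Retention screen (not a DBA Dash default):

```python
def estimate_repository_gb(instances, daily_mb_per_instance, retention_days):
    """Back-of-envelope steady-state repository size.

    Growth is roughly linear until the oldest partitions are switched
    out, so steady-state size ~= daily growth * retention window.
    """
    total_mb = instances * daily_mb_per_instance * retention_days
    return total_mb / 1024.0  # convert MB to GB

# Example: 10 instances, 50 MB/day each (placeholder figure, measure
# your own), with 90 days of data retained.
size_gb = estimate_repository_gb(10, 50, 90)
print(round(size_gb, 1))  # ~43.9 GB at steady state
```

The useful part is the shape of the calculation, not the numbers: once you know your real per-table daily growth, the same multiplication tells you how much extra space a longer retention setting will cost.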