
improved performance #3394

Merged: 6 commits, Jan 10, 2024

Conversation

@HenrikStaunPoulsen (Contributor)

Improved performance from 11+ hours to less than 3 hours on my client's database, by writing data to two #temp tables and reporting off those.
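As a rough sketch of the pattern (simplified, not the PR's exact code; the real proc builds this as dynamic SQL per database, and the column picks here are illustrative): snapshot each DMV once into a #temp table, then build the report from joins on the snapshots. The two temp tables were initially named #h and #os, which comes up in review below.

    /* Sketch: snapshot each DMV once per database into a temp table... */
    SELECT  ps.object_id,
            ps.index_id,
            ps.partition_number,
            ps.row_count,
            ps.reserved_page_count * 8. / 1024. AS reserved_MB
    INTO    #h
    FROM    sys.dm_db_partition_stats AS ps;

    SELECT  os.object_id,
            os.index_id,
            os.partition_number,
            os.range_scan_count,
            os.leaf_insert_count
    INTO    #os
    FROM    sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS os;

    /* ...then report off the snapshots instead of re-querying the DMVs. */
    SELECT  h.object_id, h.index_id, h.partition_number,
            h.row_count, h.reserved_MB,
            os.range_scan_count, os.leaf_insert_count
    FROM    #h AS h
    LEFT JOIN #os AS os
           ON  os.object_id = h.object_id
           AND os.index_id = h.index_id
           AND os.partition_number = h.partition_number;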

@HenrikStaunPoulsen (Contributor Author) left a comment

fixed hardcoded database name

@pydss commented Nov 21, 2023 via email

@@ -1435,40 +1442,100 @@ BEGIN TRY

--NOTE: If you want to use the newer syntax for 2012+, you'll have to change 2147483647 to 11 on line ~819
--This change was made because on a table with lots of partitions, the OUTER APPLY was crazy slow.
SET @dsql = N'SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
DROP TABLE if exists #h
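For context, the shape being replaced looks roughly like this (a simplified sketch; the real code is dynamic SQL built per database): the operational-stats DMV is applied once per partition row, which is what crawls on tables with many partitions.

    /* Simplified sketch of the old, slow shape: the DMV function is
       invoked once for every partition row that flows into it. */
    SELECT  ps.object_id, ps.index_id, ps.partition_number,
            os.range_scan_count
    FROM    sys.dm_db_partition_stats AS ps
    OUTER APPLY sys.dm_db_index_operational_stats
                    (DB_ID(), ps.object_id, ps.index_id, ps.partition_number) AS os;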
Member left a comment

I hate to say this, but what is the #h table named for? Can we use a more descriptive name? I'm thinking of the people who have to come in behind you (cough, me, cough) and read and interpret this over time to maintain it. Same with the #os table. Thanks!

@HenrikStaunPoulsen (Contributor Author) left a comment

#h for Henrik?
#os for Other Stuff?
There are 2 things that are hard in computing: cache invalidation, naming things, and off-by-one errors.
I'll think of better names, and find a smallish (15 TB) database to test the result set before and after.

@BrentOzar (Member) left a comment

Can we use more descriptive names for #h and #os? I haven't tested it yet to make sure it returns the right results, but conceptually I like what you're doing here; I just need to test it after I know what those tables are for. Thanks!

@pydss commented Dec 21, 2023 via email

@BrentOzar (Member)

@pydss you set yourself to watch the entire GitHub repo. You'll need to un-watch the repo - none of us can do that for you. Go to http://firstresponderkit.org and click Unwatch at the top of the screen.

Named the new tables #dm_db_partition_stats_etc and #dm_db_index_operational_stats

Tested on a smaller db (27 TB), where the new version runs in 3 seconds and the old one in 11 seconds. Same output.

I'll see if I can get a result set from the old version on the larger database, now that there are a couple of holidays. Previously I gave up after 11 hours.
My new version ran in 3 hours.
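For the curious, the shape of the partition-stats snapshot can be inferred from the INSERT that appears in the error log further down; the data types here are my guesses, not the committed DDL.

    /* Inferred shape of the partition-stats snapshot; the column list matches
       the INSERT in the log below, the data types are assumptions. */
    CREATE TABLE #dm_db_partition_stats_etc
    (
        database_id              INT,
        object_id                INT,
        sname                    NVARCHAR(128),   /* schema name */
        index_id                 INT,
        partition_number         INT,
        partition_id             BIGINT,
        row_count                BIGINT,
        reserved_MB              NUMERIC(29, 2),
        reserved_LOB_MB          NUMERIC(29, 2),
        reserved_row_overflow_MB NUMERIC(29, 2),
        lock_escalation_desc     NVARCHAR(60),
        data_compression_desc    NVARCHAR(60)
    );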
@RichBenner (Contributor) commented Dec 28, 2023

For reference, this fails on a client server I'm currently working on. It's a 2016 server, which might be the issue:

Microsoft SQL Server 2016 (SP2-CU17-GDR) (KB5014351) - 13.0.5893.48 (X64) May 24 2022 02:58:22 Copyright (c) Microsoft Corporation Enterprise Edition: Core-based Licensing (64-bit) on Windows Server 2016 Datacenter 10.0 <X64> (Build 14393: ) (Hypervisor)

Error message:

Starting run. sp_BlitzIndex(TM) v8.18 - December 22, 2023
Create temp tables.
Adding UQ index on #IndexSanity (database_id, object_id, index_id)
Number of databases to examine: 7
Checking partition counts to exclude databases with over 100 partitions
Starting loop through databases



DatabaseName
starting at 2023-12-28 10:55:38
Inserting data into #IndexColumns for clustered indexes and heaps
Inserting data into #IndexColumns for nonclustered indexes
Inserting data into #IndexSanity
Checking partition count
Preferring non-2012 syntax with LEFT JOIN to sys.dm_db_index_operational_stats
Inserting data into #IndexPartitionSanity
Failure populating temp tables.
Last @dsql: 
    DECLARE @d VARCHAR(19) = CONVERT(VARCHAR(19), GETDATE(), 121)
    RAISERROR (N'start getting data into #dm_db_partition_stats_etc at (null)',0,1, @d) WITH NOWAIT;
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    INSERT INTO #dm_db_partition_stats_etc
    (
        database_id, object_id, sname, index_id, partition_number, partition_id,
        row_count, reserved_MB, reserved_LOB_MB, reserved_row_overflow_MB,
        lock_escalation_desc, data_compression_desc
    )
    SELECT  7 AS database_id,
            ps.object_id,
            s.name AS sname,
            ps.index_id,
            ps.partition_number,
            ps.partition_id,
            ps.row_count,
            ps.reserved_page_count * 8. / 1024. AS reserved_MB,
            ps.lob_reserved_page_count * 8. / 1024. AS reserved_LOB_MB,
            ps.row_overflow_reserved_page_count * 8. / 1024. AS reserved_row_overflow_MB,
            le.lock_escalation_desc,
            par.data_compression_desc
    FROM    [DatabaseName].sys.dm_db_partition_stats AS ps
            JOIN [DatabaseName].sys.partitions AS par ON ps.partition_id = par.partition_id
            JOIN [DatabaseName].sys.objects AS so ON ps.object_id = so.object_id
                 AND so.is_ms_shipped = 0 /*Exclude objects shipped by Microsoft*/
                 AND so.type <> 'TF' /*Exclude table valued functions*/
            JOIN [DatabaseName].sys.schemas AS s ON s.schema_id = so.schema_id
            OUTER APPLY (SELECT st.lock_escalation_desc
                         FROM [DatabaseName].sys.tables st
                         WHERE st.object_id = ps.object_id
            ...
Msg 50000, Level 16, State 1, Procedure sp_BlitzIndex, Line 2439 [Batch Start Line 0]
DatabaseName database failed to process. Invalid column name 'version_generated_inrow'.

Completion time: 2023-12-28T07:55:38.4454559-08:00

@BrentOzar (Member)

@RichBenner ah yeah, good catch. It looks like in this pull request, @HenrikStaunPoulsen is grabbing ALL of the columns from dm_db_index_operational_stats, rather than just the ones we need, and those newer columns aren't available in all versions.

I've got some time this morning, so lemme see if I can change that in his pull request. I think it's possible to edit somebody else's pull request, I just haven't done it recently.
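The shape of the fix, sketched under assumptions (this is the idea, not necessarily the literal commit): declare the snapshot table with only the columns every supported version has, and name those columns explicitly instead of selecting everything from the DMV, so newer columns such as version_generated_inrow (absent on 2016, as the error above shows) are never referenced.

    /* Sketch of the fix (assumed shape): explicit column lists mean
       version-specific DMV columns are never touched. */
    CREATE TABLE #dm_db_index_operational_stats
    (
        database_id       INT,
        object_id         INT,
        index_id          INT,
        partition_number  INT,
        range_scan_count  BIGINT,
        leaf_insert_count BIGINT
    );

    INSERT INTO #dm_db_index_operational_stats
            (database_id, object_id, index_id, partition_number,
             range_scan_count, leaf_insert_count)
    SELECT  os.database_id, os.object_id, os.index_id, os.partition_number,
            os.range_scan_count, os.leaf_insert_count
    FROM    sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS os;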

Removed columns we don't need from prior versions.
@HenrikStaunPoulsen (Contributor Author)

@RichBenner: good catch. I should have thought about that. Sorry.

@RichBenner (Contributor) commented Jan 3, 2024

Confirmed this now works on the 2016 instance I'm testing it on. As an aside, the current version of sp_BlitzIndex has been running for almost 6 days on that server and has yet to complete; this new version completed in just under 5 minutes and output 96k rows of information into the output table.

This is using the @mode = 4, @GetAllDatabases = 1 parameters
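In other words, a call along these lines (the output-table names here are illustrative, not from the report above):

    EXEC dbo.sp_BlitzIndex
         @Mode = 4,
         @GetAllDatabases = 1,
         @OutputDatabaseName = 'DBAtools',        /* illustrative */
         @OutputSchemaName   = 'dbo',
         @OutputTableName    = 'BlitzIndex_Mode4';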

@BrentOzar (Member) left a comment

Thanks for the updates! Looks good, works on my machine, merging into the dev branch, will be in the next release with credit to you in the release notes.

@BrentOzar (Member)

@HenrikStaunPoulsen you'll get a chuckle out of this: the query hint min_grant_percent was added in a later update to SQL Server 2012 & 2014. It wasn't in RTM, so sp_BlitzIndex is failing on older versions.
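A minimal sketch of the kind of version guard that avoids this (my assumption about the approach, not necessarily the shipped fix): only append the hint when the build can parse it.

    /* Sketch: gate MIN_GRANT_PERCENT on the engine version. 2016+
       (major version 13) always has the hint; 2012/2014 only gained it
       in later servicing updates, so a real check would also inspect
       the build number. */
    DECLARE @dsql  NVARCHAR(MAX) = N'SELECT name FROM sys.objects';
    DECLARE @major INT =
        CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR(30)), 4) AS INT);

    IF @major >= 13
        SET @dsql += N' OPTION (MIN_GRANT_PERCENT = 1)';

    EXEC sys.sp_executesql @dsql;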
