Set value for effective cache size and use PR #41 for ome.postgresql #424

Open
wants to merge 3 commits into master

Conversation

@khaledk2 (Contributor) commented Jun 3, 2024

This PR sets a value for the effective_cache_size attribute (75% of the physical machine's memory) and uses ome/ansible-role-postgresql#41.
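For illustration, 75% of the host's physical memory can be derived from /proc/meminfo along these lines (a rough sketch of the arithmetic only, not the actual implementation in ome/ansible-role-postgresql#41; for the host shown below it yields roughly 23880MB):

# MemTotal is reported in kB; print 75% of it in MB
awk '/MemTotal/ {printf "%dMB\n", $2 * 0.75 / 1024}' /proc/meminfo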

@khaledk2 (Contributor, Author) commented Jun 3, 2024

I think the tests have failed because of the requests version 2.32.0 bug. I have modified the workflow to install "requests < 2.32.0"
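For reference, the pin amounts to constraining the package at install time, e.g. (sketch only; the exact workflow change is in this PR's commits):

pip install "requests<2.32.0"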

@sbesson self-requested a review June 3, 2024 20:51
@sbesson (Member) commented Jun 4, 2024

Deployed as test122b for testing. Note that the configuration file contains multiple duplicated settings:

[sbesson@test122b-database ~]$ sudo tail -n 15 /var/lib/pgsql/16/data/postgresql.conf
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------

# Add settings for extensions here

listen_addresses = '*'

shared_buffers = 8002MB


listen_addresses = '*'

effective_cache_size = 23880MB
shared_buffers = 7960MB

@khaledk2 (Contributor, Author) commented

I have updated the Postgres Ansible role (ome/ansible-role-postgresql#41 (comment)) so it accounts for this situation and the added variables are no longer duplicated at the end of the configuration file.
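One way to verify this after redeploying is to check that each managed setting appears only once in the generated file (a hypothetical spot check, not part of the role itself):

# expect a count of 1 per managed setting
sudo grep -c '^effective_cache_size' /var/lib/pgsql/16/data/postgresql.conf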

@sbesson (Member) commented Jul 3, 2024

Proposing to deploy this as part of test123 /cc @jburel

@sbesson (Member) commented Jul 10, 2024

Note this has been deployed on test123 and is currently being evaluated for inclusion, quite possibly in prod124. As discussed on Monday, the proposed testing plan is for @khaledk2 to run the search engine indexer and compare the total time, using a previous release (prod121 or prod122) as the baseline.

Below is the diff between the PostgreSQL configuration on prod122 and on test123

sbesson@Sebastiens-MacBook-Pro-3 Downloads % diff postgresql.conf.prod122 postgresql.conf.test122 
0a1
> #Ansible managed
822a824
> 
825,827c827,828
< shared_buffers = 8002MB
< 
< max_connections = 150
---
> effective_cache_size = 23880MB
> shared_buffers = 7960MB

@khaledk2 (Contributor, Author) commented

The effective_cache_size value on prod122 is 24GB

sudo cat /var/lib/pgsql/16/data/postgresql.conf | grep effective_cache_size
effective_cache_size = 24GB
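The value applied at runtime can also be confirmed directly from the server, e.g. (an equivalent check, not taken from this PR):

sudo -u postgres psql -c 'SHOW effective_cache_size;'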

We changed this value manually on prod121 after upgrading to PostgreSQL 16.
ome/ansible-role-postgresql#41 (comment)

I could not find the container that was used to index the data on idr-prod122.

So, to get the required comparison, I ran the indexer on idr-testing123 with effective_cache_size=23880MB. The indexing time was:
7 hours and 31 minutes

I then set effective_cache_size to its default value, i.e. 4GB, restarted the database server and ran the indexer again. The indexing time was:
12 hours and 51 minutes
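For context, applying the default for the comparison run could be done along these lines (a sketch; the exact commands used on idr-testing123 are not recorded in this thread):

sudo -u postgres psql -c "ALTER SYSTEM SET effective_cache_size = '4GB';"
sudo systemctl restart postgresql-16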

@sbesson (Member) left a comment

Thanks for the report. Assuming there are no other performance measurements expected by the IDR, I think the next steps would be to review, merge & release the upstream role and update this PR.

As a possible follow-up, the IDR PostgreSQL configuration file could be cleaned up for better readability and maintenance by commenting out the configuration lines that are either identical to the defaults or overridden at the bottom of the file, such as shared_buffers and effective_cache_size.

@sbesson (Member) commented Aug 15, 2024

FYI I have cherry-picked 77b1653 and pushed it to the HEAD of origin/master. While we are waiting for this work to be completed, this should prevent regressions when running the deployment playbooks against the production environment and ensure the PostgreSQL database has a properly set effective_cache_size across the board.
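The operation described is roughly equivalent to (a sketch of the git commands):

git checkout master
git cherry-pick 77b1653
git push origin master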
