
Fixes #21072 - build apipie cache after plugins #592

Merged
2 commits merged into theforeman:master on Dec 14, 2017

Conversation

mbacovsky
Member

Installation of the plugins can influence the API
and the apipie cache needs to be refreshed after that.

@ekohl
Member

ekohl commented Sep 22, 2017

Doesn't this mean the rake task is executed every single time? I think that breaks idempotency. Not sure what the best solution here is, but maybe add the notify in foreman::plugin on the package resource?

@iNecas
Member

iNecas commented Sep 22, 2017

@ekohl I thought Puppet does the de-duplication, so I would hope it would run the rake only once, after all the dependencies were run.

@mbacovsky
Member Author

@ekohl, my intention was to declare the apipie:cache:index rake task in init.pp with refreshonly => true, which I believe executes the task only when it gets a notification from some other resource. Then I bound the notify on the original db:seed and also on the plugin package resource.

Did the implementation deviate from my intent? It seemed to work during my testing.
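
A minimal Puppet sketch of the wiring described above, assuming the refreshonly behaviour lives inside the foreman::rake define; the resource titles and the plugin package name are illustrative, not the exact module code:

foreman::rake { 'apipie:cache:index':
  timeout => 0,
}

# db:seed can change the API, so it notifies the cache rebuild; the
# refreshonly rake exec does nothing unless it receives such a notification.
Foreman::Rake['db:seed'] ~> Foreman::Rake['apipie:cache:index']

# A plugin package can also change the API, so its package resource notifies
# the cache rebuild as well (hypothetical plugin package name).
package { 'tfm-rubygem-foreman_example_plugin':
  ensure => installed,
  notify => Foreman::Rake['apipie:cache:index'],
}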

@ekohl
Member

ekohl commented Sep 22, 2017

@iNecas oh, you might be right since foreman::rake has a refreshonly => true. That said, I do think you need to chain it so it happens before the service is started. Another effect is that if a user notifies the foreman class it'll now regenerate the cache. Not sure if that's a problem though.

@ekohl
Member

ekohl commented Sep 22, 2017

Did the implementation deviate from my intent? It seemed to work during my testing.

No, I don't think so. I haven't fully reviewed the patch, but I do think you should ensure it runs before the service class.

@ares
Member

ares commented Sep 25, 2017

Could we also change apipie:cache:index to apipie:cache? While this helps hammer, people get a 404 after clicking on API endpoints in /apidoc/...

@mbacovsky
Member Author

@ares is there an issue for this? In general the plugins should generate the non-index parts of the cache at build time and ship them with the package, so there should be no need to rebuild the whole cache. Theoretically I can imagine resources such as host, which are extended by multiple plugins, where the pre-built approach will probably fail, but I'd like to see a real case to check whether we can improve the process.

@ares
Member

ares commented Sep 25, 2017

@mbacovsky please see https://bugzilla.redhat.com/show_bug.cgi?id=1479900 and https://bugzilla.redhat.com/show_bug.cgi?id=1480563

If you could point me to an example where this happens for another plugin, I can check the openscap and virt-who plugins; they might be missing this in the spec.

@iNecas
Member

iNecas commented Sep 25, 2017

Two bugs in one patch: I'm sold :)

@ekohl
Member

ekohl commented Oct 2, 2017

If you can also fix the tests then I think this is a good improvement.

@mbacovsky
Member Author

@ekohl the tests were fixed. I removed the breaking test, which was checking that the cache is not built if the database is not managed, because I think there is no relation between the two.
I checked in my logs that the cache build was called only once during the installation.
I tried to address the chaining with the service but ended up creating circular dependencies. Any idea how to fix that? My attempt was:

diff --git a/manifests/init.pp b/manifests/init.pp
index 9397f3f..c023bd2 100644
--- a/manifests/init.pp
+++ b/manifests/init.pp
@@ -314,6 +314,7 @@ class foreman (
   foreman::rake { 'apipie:cache:index':
     timeout => 0,
   }
+  ~> class { '::foreman::service': }
 
   Class['foreman::repo']
   ~> class { '::foreman::install': }
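
One way to express that ordering without declaring foreman::service a second time is a chain between resource references, assuming both resources are declared elsewhere; this is only a sketch, and whether it actually breaks the cycle depends on the rest of the dependency graph:

# References do not re-declare the class, so no duplicate declaration occurs.
Foreman::Rake['apipie:cache:index'] ~> Class['foreman::service']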

@ares I'd like to avoid re-building the whole cache. It takes a long time, which was the reason we moved as much of it as possible to package build time. I checked the bug in the openscap plugin and found out it is missing the cache build step done by the -a switch to the %foreman_precompile_plugin macro [1]. The same applies to the virt-who_configure plugin. For the two bugs it should be enough to add the -a switch to the spec files. I've updated the bugs and will send patches soon.

[1] https://github.com/theforeman/foreman-packaging/blob/rpm/develop/rubygem-foreman_openscap/rubygem-foreman_openscap.spec#L77

@ekohl
Member

ekohl commented Oct 15, 2017

@ekohl the tests were fixed. I removed the breaking test, which was checking that the cache is not built if the database is not managed, because I think there is no relation between the two.

Actually, there was. The flag was there to ensure the rake task was not executed; this was done for users who have a cluster of webservers and only want to execute rake tasks on one. On the other hand, I think rebuilding the apipie cache everywhere does make sense since it operates on local files.
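
A sketch of that cluster setup, assuming a boolean parameter along the lines of $db_manage (the actual parameter name is an assumption): database-changing rake tasks run only on the node that manages the shared DB, while the apipie cache only writes local files and could run everywhere:

if $db_manage {
  # Only the node that manages the shared database runs schema/seed tasks.
  foreman::rake { 'db:migrate': }
  ~> foreman::rake { 'db:seed': }
}

# The apipie cache index only touches local files, so it can be rebuilt on
# every webserver node.
foreman::rake { 'apipie:cache:index':
  timeout => 0,
}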

@mbacovsky
Member Author

@ekohl that means all the webservers in the cluster use a single DB? Even in that case I don't see the relation. The apipie:cache rake task has to be executed after db:migrate, otherwise rake would fail to load the Rails env. Besides that there is no need for the DB, as the apipie:cache task only builds the static API documentation stored in the API controllers and saves it into /usr/share/foreman/public, where Apache serves static content from. So no managed DB actually means we can run the task anytime. Do I get it right?

@ekohl
Member

ekohl commented Oct 15, 2017

@mbacovsky I know @timogoebel runs multiple webservers using a shared DB. There web01 runs the crons and management tasks while web02 and up don't run those. You're correct that apipie:cache operates on local files, so it should likely be executed on every node, which is what I tried to say with my last sentence. Tomorrow I'll have a fresh look.

@timogoebel can you confirm what I just said and how this would affect your setup?

@ares
Member

ares commented Oct 16, 2017

Thanks @mbacovsky, makes sense with the cache.

@timogoebel
Member

@ekohl: Thanks for asking. I don't think this will lead to any problems. However, I don't see why this is a problem in the first place. The apipie cache is also built after every plugin install by the RPMs; not sure about the deb packages.
So I think this now happens twice (well, actually the number of plugins + 1)?
I'm not against this because building the cache a bazillion times does not break anything, it just makes everything painfully slow.

@mbacovsky
Member Author

@timogoebel that is a good point. I have it on my TODO list to check whether we can reduce the cache rebuild to one per RPM transaction.
Just for clarification, this patch is not adding a new rebuild; it just moves the existing one to a better place: from before the plugins are installed to after.

@ekohl
Member

ekohl left a comment


The only issue I can see is that if someone notifies Class[foreman] it'll now rebuild the apipie cache where previously it didn't. Not sure how big an issue this is in practice. In puppet-katello we now notify Class[foreman::service] so that shouldn't be an issue.
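
For illustration, the difference comes down to which class a resource in another module notifies; the file resource below is hypothetical:

# Notifying only the service class restarts Foreman without re-triggering the
# refreshonly apipie cache rake task; notifying Class['foreman'] now would
# also refresh it.
file { '/etc/foreman/plugins/example.yaml':
  ensure => file,
  notify => Class['foreman::service'],
}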

@mmoll mind having a look as well?

@iNecas
Member

iNecas commented Dec 13, 2017

Seems like this one could be good to go?

@ekohl
Member

ekohl commented Dec 14, 2017

Could still use a review from @mmoll but I can't see any problem.

@mmoll mmoll merged commit e16aa60 into theforeman:master Dec 14, 2017
@mmoll
Contributor

mmoll commented Dec 14, 2017

I think this is fine, merged, thanks @mbacovsky!

@ekohl ekohl added the Bug label Jan 25, 2018