Dfs refresh referrals #171
Conversation
…cache() to prevent memory leaking jborean93#144 jborean93#136
…ther than capture PathNotCovered. For unknown reasons (maybe the Windows Server version), requests with a different desired_access (mode: r/w) for a DFS share by the original path don't cause PathNotCovered, which triggers _resolve_dfs. So if a path is recognized as a DFS share, resolve it forcibly.
Codecov Report

```
@@            Coverage Diff             @@
##           master     #171      +/-   ##
==========================================
- Coverage   99.09%   95.74%    -3.35%
==========================================
  Files          23       23
  Lines        5063     5077      +14
==========================================
- Hits         5017     4861     -156
- Misses         46      216     +170
==========================================
```
Nice picking up on your original problem, I'll have to play around with it to see if I can replicate it, as it's certainly not something I would expect. I think we might need a different approach in the PR, as keeping the referrals around resulted in the memory bloat reported in #136. Maybe we should keep the existing cache and the expiry checks but have another simple list that contains paths known to be DFS paths as part of the check. That way it still knows to do the DFS referral request even after an expiry, but the referral responses aren't stored indefinitely.
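To make the suggestion concrete, here is a minimal sketch of that idea; the class and method names (`DFSCache`, `store`, `lookup`, `is_dfs`) are hypothetical, not part of smbclient's actual API:

```python
import time


class DFSCache:
    """Sketch: an expiring referral cache plus a lightweight set of
    paths known to be DFS. Expired referrals are dropped (avoiding the
    memory bloat of #136), but the DFS marker is kept so the client
    still knows to issue a fresh referral request for that path."""

    def __init__(self):
        self._referrals = {}           # path -> (referral, expiry timestamp)
        self._known_dfs_paths = set()  # paths that ever returned a referral

    def store(self, path, referral, ttl):
        # Cache the referral with an expiry and remember the path is DFS.
        self._referrals[path] = (referral, time.time() + ttl)
        self._known_dfs_paths.add(path)

    def lookup(self, path):
        # Return a live referral, or None if missing or expired.
        entry = self._referrals.get(path)
        if entry is None:
            return None
        referral, expiry = entry
        if time.time() >= expiry:
            # Expired: drop the referral but keep the DFS marker, so the
            # caller knows to request a new referral rather than assume
            # the path is not DFS.
            del self._referrals[path]
            return None
        return referral

    def is_dfs(self, path):
        return path in self._known_dfs_paths
```

The point of the design is that `is_dfs` stays cheap and tiny (a set of strings) while the potentially large referral responses are free to expire.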
I thought about this, but in that case we would need to duplicate the lookup_referral code to find out whether the requested path is part of DFS, and I haven't found a simple solution.
I've taken some time to look at this further and, while this might alleviate the problem temporarily for you, the issue still remains where the first request is going to error with It is due to this problem that I see the proper solution as ensuring you've resolved the proper DFS servers for your domain, rather than relying on the rudimentary check that As mentioned in #85 (comment) and further down that thread, in these cases you should set the following at the start of your script to ensure the domain referrals are looked up properly:

```python
import smbclient

smbclient.ClientConfig(domain_controller='dc.hostname.realm')
```

You can add
When an actual request is seen to connect to one of these now-known domain names, it knows that this is a DFS path and can do the lookup pre-emptively, avoiding this problem altogether. The reason you don't see this problem when running natively on Windows is that Windows already knows the domain controller to contact and the domain information. On Linux, and in this pure Python implementation, it does not have that luxury, so it relies on what the caller has provided or on the error codes returned. In this case it gets back a I apologise for getting back to you so late on the PR, but unfortunately this is a workaround that keeps expired caches around, causing a leak over time, for something that can be solved by setting
Fix #170 - STATUS_ACCESS_DENIED when writing by a DFS path before reading.
Remember that a path is DFS once it has been discovered, and then, when its referral expires, only refresh it.
Use a persistent _referral_cache dict, so when a referral expires it is just refreshed (instead of removed).
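The refresh-in-place behaviour described above can be sketched roughly as follows; this is an illustration under stated assumptions, not the PR's actual code, and the names (`ReferralCache`, `get_live`, the `refresh` callback) are hypothetical:

```python
import time


class ReferralCache(dict):
    """Sketch of the PR's approach: a persistent dict keyed by UNC path.
    Expired entries are refreshed in place rather than deleted, so a
    path that was once resolved as DFS is never forgotten."""

    def get_live(self, path, refresh):
        # `refresh` is a caller-supplied callback that re-requests the
        # referral from the server and returns (referral, ttl_seconds).
        entry = self.get(path)
        if entry is not None and entry["expires"] > time.time():
            return entry["referral"]
        if entry is not None:
            # Expired: refresh in place instead of removing, so the path
            # remains recognized as a DFS path on subsequent requests.
            referral, ttl = refresh(path)
            self[path] = {"referral": referral, "expires": time.time() + ttl}
            return referral
        # Unknown path: first-time DFS discovery is handled elsewhere
        # (e.g. via the PathNotCovered error path).
        return None
```

The trade-off versus a delete-on-expiry cache is that entries persist for the lifetime of the process, which is why the maintainer raised the memory concern from #136 above.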