It would be better if PyRDP used an intercepted file's hash instead of a random name when saving it to disk. We already detect duplicate files using hashes and store the original file name in the mapping.json file, but using hashes directly on the filesystem would make it easier for external tools to detect duplicates (e.g., if you're backing up intercepted files), because they wouldn't have to parse the mapping file.