Releases: jawah/charset_normalizer
Version 2.0.6
Version 2.0.5
Changes:
- Internal: 🎨 The project now complies with flake8, mypy, isort, and black to ensure better overall quality #81
- Internal: 🎨 The MANIFEST.in was not exhaustive #78
- Improvement: ✨ Backward compatibility with v1.x was improved; the old staticmethods are restored #82
- Removal: 🔥 The project no longer raises a warning on tiny content given for detection; it is simply logged as a warning instead #92
- Improvement: ✨ The Unicode detection is slightly improved, see #93
- Bugfix: 🐛 In some rare cases, the chunk extractor could cut in the middle of a multi-byte character and mislead the mess detection #95
- Bugfix: 🐛 Some rare 'space' characters could trip up the UnprintablePlugin/mess detection #96
- Improvement: 🎨 Add syntax sugar `__bool__` to the `CharsetMatches` list-container, see #91

This release pushes the detection coverage further, to 97%!
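The `__bool__` sugar lets callers test a `CharsetMatches` container directly for emptiness; a minimal sketch (the payload below is illustrative):

```python
from charset_normalizer import from_bytes

# from_bytes returns a CharsetMatches list-container of candidate matches.
results = from_bytes("Bonjour, ceci est un petit test.".encode("utf_8"))

# With __bool__, an empty container is falsy, so this replaces the more
# verbose `if results.best() is not None:` check.
if results:
    print(results.best().encoding)
```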
Version 2.0.4
Changes:
- Improvement: ❇️ Adjust the MD (mess detection) to lower the sensitivity, thus improving the global detection reliability (#69 #76)
- Improvement: ❇️ Allow fallback on a specified encoding, if any (#71)
- Bugfix: 🐛 The CLI no longer raises an unexpected exception when no encoding has been found (#70)
- Bugfix: 🐛 Fix accessing the 'alphabets' property when the payload contains surrogate characters (#68)
- Bugfix: 🐛 ✏️ The logger could mislead (with explain=True) about detected languages and the impact of one MBCS match (in #72)
- Bugfix: 🐛 Sub-match factoring could be wrong in rare edge cases (in #72)
- Bugfix: 🐛 Multiple files given to the CLI were ignored (after the first path) when publishing results to STDOUT (in #72)
- Internal: 🎨 Fix line endings from CRLF to LF for certain files (#67)
Version 2.0.3
Changes:
- Improvement: ✨ Part of the detection mechanism has been made less sensitive, resulting in more accurate detection results, especially for ASCII. #63 Fixes #62
- Improvement: ✨ In line with the community's wishes, the detection now falls back on ASCII or UTF-8 as a last resort. #64 Completes #62
Be assured that this project is ready to listen to any concerns you may have. I know the vast majority did not expect requests to switch from Chardet to Charset-Normalizer. I am determined to make this change worth it; only together can we achieve great things. Do not hesitate to leave feedback or a bug report, I will answer them all!
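The last-resort fallback means a well-formed payload should come back with an encoding rather than an empty result; a minimal sketch (the payload is illustrative):

```python
from charset_normalizer import from_bytes

# A plain ASCII payload: with the less sensitive detection it is
# identified directly, and per #64 the detector would fall back on
# ascii/utf_8 as a last resort rather than returning nothing.
best = from_bytes(b"This is a plain ASCII sentence.").best()
if best is not None:
    print(best.encoding)
```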
Version 2.0.2
Version 2.0.1
Minor bug-fix release.
Changes:
- Bugfix: 🐛 Make it work where there isn't a filesystem available, dropping the assets `frequencies.json` #54 #55 (original report by @sethmlarson)
- Improvement: ✨ You may now use aliases in the `cp_isolation` and `cp_exclusion` arguments #47
- Bugfix: 🐛 Using `explain=False` permanently disabled the verbose output in the current runtime #47
- Bugfix: 🐛 One log entry (language target preemptive) was not shown in the logs when using `explain=True` #47
- Bugfix: 🐛 Fix undesired exception (ValueError) on getitem of a `CharsetMatches` instance #52
- Improvement: 🔧 Public function `normalize` default argument values were not aligned with `from_bytes` #53
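The `cp_isolation`/`cp_exclusion` arguments of `from_bytes` restrict or exclude codecs during probing; a minimal sketch of the alias support (the payload and alias choices are illustrative):

```python
from charset_normalizer import from_bytes

payload = "Déjà vu, encore une fois.".encode("cp1252")

# cp_isolation limits probing to the listed codecs; per #47 common
# aliases (e.g. 'windows-1252' for 'cp1252') are accepted as well.
results = from_bytes(payload, cp_isolation=["windows-1252", "latin-1"])

best = results.best()
if best is not None:
    print(best.encoding)
```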
Version 2.0.0
This package is reaching its two years of existence, now is a good time for a nice refresh.
Changes: See PR #45
- Performance: ⚡ 4x to 5x faster than the previous 1.4.0 release.
- Performance: ⚡ At least 2x faster than Chardet.
- Performance: ⚡ Emphasis has been placed on UTF-8 detection; it should perform near-instantaneously.
- Improvement: 🔙 The backward compatibility with Chardet has been greatly improved. The legacy `detect` function returns an identical charset name whenever possible.
- Improvement: ❇️ The detection mechanism has been slightly improved; Turkish content is now detected correctly (most of the time)
- Code: 🎨 The program has been rewritten to improve readability and maintainability. (+Using static typing)
- Tests: ✔️ New workflows are now in place to verify the following aspects: `Performance`, `Backward-Compatibility with Chardet`, and `Detection Coverage`, in addition to the current tests. (+CodeQL)
- Dependency: ➖ This package no longer requires anything when used with Python 3.5 (dropped `cached_property`)
- Docs: ✏️ Performance claims, the guide to contributing, and the issue template have been updated.
- Improvement: ❇️ Add `--version` argument to the CLI
- Bugfix: 🐛 The CLI output used the relative path of the file(s). It should be absolute.
- Deprecation: 🔴 Methods `coherence_non_latin`, `w_counter`, and `chaos_secondary_pass` of the class `CharsetMatch` are now deprecated and scheduled for removal in v3.0
- Improvement: ❇️ If no language was detected in the content, try to infer it using the encoding name/alphabets used.
- Removal: 🔥 Removed support for these languages: Catalan, Esperanto, Kazakh, Basque, Volapük, Azeri, Galician, Nynorsk, Macedonian, and Serbocroatian.
- Improvement: ❇️ `utf_7` detection has been reinstated.
- Removal: 🔥 The exception hook on UnicodeDecodeError has been removed.
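The improved Chardet backward compatibility is exposed through the legacy `detect` helper, which mirrors `chardet.detect`; a minimal sketch (the payload is illustrative):

```python
from charset_normalizer import detect

# Drop-in replacement for chardet.detect(): returns a dict with the
# same keys ('encoding', 'language', 'confidence'), and whenever
# possible the charset name matches what Chardet would have reported.
report = detect("Привет, мир!".encode("utf-8"))
print(report["encoding"], report["confidence"])
```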
After much consideration, this release won't drop Python 3.5 in v2.
Version 1.4.1
Changes:
- Improvement: 🎨 Logger configuration/usage no longer conflict with others #44
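Because the package logs through the standard `logging` module (see 1.4.0 below), applications can tune its verbosity without handler conflicts; a sketch assuming the library logger is named after the package, `charset_normalizer`:

```python
import logging

# Configure the application's own logging as usual...
logging.basicConfig(level=logging.INFO)

# ...then raise the threshold for the library's logger independently,
# so detection internals stay quiet unless explicitly requested.
logging.getLogger("charset_normalizer").setLevel(logging.WARNING)
```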
Version 1.4.0
Changes:
Thanks to @potiuk for his tests and ideas, which helped us improve the quality of this project.
- Dependency: ➖ Using the standard `logging` module instead of the package `loguru`.
- Dependency: ➖ Dropping the `nose` test framework in favor of the maintained `pytest`.
- Dependency: ➖ Chose not to use the `dragonmapper` package to help with gibberish Chinese/CJK text.
- Dependency: 🔧 ➖ Require `cached_property` only for Python 3.5 due to a constraint; dropped for every other interpreter version.
- Bugfix: 🐛 The BOM marker in a `CharsetNormalizerMatch` instance could be `False` in rare cases even if obviously present, due to the sub-match factoring process.
- Improvement: 🎇 Return ASCII if the given sequences fit, given reasonable confidence.
- Performance: ⚡ Huge improvement on the largest payloads.
- Change: 🔥 Stop supporting UTF-7 that does not contain a SIG. (Contributions are welcome to improve that point)
- Feature: 🎇 The CLI now produces JSON-consumable output.
- Dependency: ➖ Dropping PrettyTable; replaced with pure JSON output.
- Bugfix: 🐛 Not searching properly for the BOM when trying the utf32/16 parent codec.
- Other: ⚡ Improving the package's final size by compressing `frequencies.json`.
This project no longer requires anything, except on Python 3.5 (see above). Python 3.5 is still supported even though it has passed EOL.
Version 2.x will require Python 3.6+
Version 1.3.9
Changes:
- Bugfix: 🐛 In some very rare cases, you may end up getting encode/decode errors due to a bad bytes payload #40