Releases · ghostery/adblocker
v0.3.2
- Fix style injection and cosmetic filtering logic #67
- All cosmetic filters now use a single background action (instead of two)
- No unloading is needed in the content script anymore
- Simplified and optimized the implementation of CosmeticBucket
- Internalized the version of serialized engine for auto-invalidation on update
- Fix cosmetic matching (tokenization bug) #65
- Optimize serialization and properly handle unicode in filters #61
v0.3.1
- Fix fuzzy matching by allowing tokens of any size #61
- Add support for CSP (Content Security Policy) filters #60 (an example of the syntax follows this list)
- Add hard-coded circumvention logic (+ IL defuser) #59
- Simplify 'example' extension
- Add circumvention module and entry-point in cosmetics injection
- Clean-up cjs and esm bundles
- Remove obsolete logic to override user-agent in content-script
- Simplify travis config (using new pre* hooks)
- Consolidate 'fetch' module (with metadata about lists)
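
For reference, CSP filters follow the syntax popularized by uBlock Origin; the rule below is an illustrative example (the domain is made up) of a filter instructing the engine to apply a Content-Security-Policy directive on matching pages:

```
||example.com^$csp=script-src 'self'
```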
v0.3.0
This release contains massive optimizations as well as a few bug fixes and build improvements:
- Distribute both un-bundled cjs and es6 source #54
- Produce a commonjs build artifact #53
- Update build instructions in README.md #52
- Remove dist folder from source tree #50
- Cosmetics: fix rule matching when hostname is empty #49
- Optimizations #46
- Requests can now use `type` as a string or a number (e.g.: `script` or `2`).
  ```js
  // Both are equivalent
  new Request({ type: 2, url, sourceUrl })
  new Request({ type: 'script', url, sourceUrl })
  ```
- [BREAKING] format of serialized engine has been changed to store less data
- [BREAKING] The `id` attribute of filters has been removed, use `getId()` instead (please note that the `id` is not stored internally anymore, but generated every time `getId()` is called).
  ```js
  // Bad
  filter.id

  // Good
  filter.getId()
  ```
- [BREAKING] Values returned by `getId()` will differ from values stored in the `id` attribute for identical filters (the algorithm is now different and will do less work).
- [BREAKING] Domains specified in the `$domain=` option are now stored hashed instead of as strings, and can only be retrieved in their original form if the `debug` flag is used in `FiltersEngine`.
- [BREAKING] `fastTokenizer` will now only consider tokens longer than 1 character (an illustrative sketch follows this list).
- [BREAKING] `fastTokenizer` will now only tokenize up to 2048 characters from URLs.
- [BREAKING] Hashes produced by `fastHash` and `fastHashBetween` will not match what was produced by the same functions before this change (the seed and hashing algorithm were slightly changed for speed).
- [BREAKING] Un-initialized attributes of filter instances (`CosmeticFilter` and `NetworkFilter`) will have value `undefined` instead of `null` or an empty string like before. It is recommended to use accessors (e.g.: `filter.getHostname()` instead of `filter.hostname`) to access internal attributes, as they will always return consistent types and fall back to meaningful defaults.
  ```js
  // Bad
  filter.redirect
  filter.filter
  filter.hostname

  // Good
  filter.getRedirect()
  filter.getFilter()
  filter.getHostname()
  ```
- [BREAKING] A new `Request` abstraction supersedes `IRequest` and `IRawRequest`. This new class offers a more consistent experience to work with requests.
  ```js
  new Request({ url })
  new Request({ url, sourceUrl })
  new Request({ url, sourceUrl, type: 'string' })
  new Request({ url, hostname, domain, type: 'string' })
  ```
- [BREAKING] Remove support for the `hosts` format (e.g.: `127.0.0.1 domain`), since server blocklists can also be exported in hostname-anchored format (e.g.: `||domain^$third-party`). This simplifies the parsing logic.
- [BREAKING] Remove the following unused legacy request types: `fromFetch`, `fromDTD`, `fromXLST`, `fromBeacon`, `fromCSP`.
- [BREAKING] `cpt` (Content Policy Type of requests) is now called `type`, to match the terminology of the WebRequest API.
  ```js
  // Bad
  request.cpt
  new Request({ cpt })

  // Good
  request.type
  new Request({ type })
  ```
- Optimized and simplified implementation of `parseJSResource` (~4 times faster).
- Optimized matching of some kinds of filters to prevent any string copy (reduced the number of calls to `slice`, `substr` and `substring`).
- Optimized bucket ordering by moving matching filters towards the beginning of the array. This results in generic filters being tried first.
- Optimized some classes of filters sharing the same pattern and options, with different domains. They are now fused into a single filter. For example, the following filters:
  ```
  |https://$script,domain=downloadpirate.com
  |https://$script,domain=dwindly.io
  |https://$script,domain=intoupload.net
  |https://$script,domain=linkshrink.net
  |https://$script,domain=movpod.in
  |https://$script,domain=povw1deo.com|povwideo.net|powvideo.net
  |https://$script,domain=sendit.cloud
  |https://$script,domain=sfiles.org|suprafiles.me|suprafiles.net|suprafiles.org
  |https://$script,domain=streamplay.to
  |https://$script,domain=userscloud.com
  |https://$script,domain=yourporn.sexy
  ```
  will be optimized into:
  ```
  |https://$script,domain=dwindly.io|movpod.in|...|yourporn.sexy
  ```
- `tokenize` will now allow `%` as part of tokens for filters.
- `CosmeticFilter` now supports the new `+js()` syntax to inject scripts.
- `NetworkFilter`'s `getTokens()` method will now return more tokens in some cases. For example, if only one domain is specified in the `$domain=` option, then it can be used as a token (before, we would only use the pattern part of each filter to extract tokens).
- In case a `NetworkFilter` has no token available (e.g.: `$image,domain=ads.com`), then it can be indexed using the domains specified in the `$domain=` option, if any.
- Filters of the form `*pattern` (regex) are now optimized into `pattern` (plain).
- Filters of the form `|http://`, `|https://` or `|http*://` are now optimized using the newly introduced `http` and `https` options. The `Request` instances will now say if they are `http` or `https`, and this saves string comparisons while matching.
- Fixed a bug where javascript resources were serialized twice.
- Serialization can now be performed even after `engine` has been optimized.
- Addition of a `serialize` method on the `FiltersEngine` class.
- Reverse Index is now created using only one `Map` instead of two.
- `optDomains` and `optNotDomains` are now stored in a compact typed array instead of a `Set`, and a binary search is used for lookups (see the sketch after this list).
- Prevent filters from being checked twice for requests by remembering which request last checked a given bucket in the reverse index (i.e.: `magic` field).
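
To illustrate the new `fastTokenizer` constraints described above, here is a minimal sketch of a tokenizer that skips one-character tokens and only looks at the first 2048 characters of a URL. This is not the library's actual implementation, and the `hash` parameter is a stand-in for whatever hashing function is used:

```ts
// Illustrative sketch only: the real fastTokenizer and fastHash differ.
function sketchTokenize(url: string, hash: (token: string) => number): number[] {
  const MAX_LENGTH = 2048; // only tokenize up to 2048 characters from URLs
  const end = Math.min(url.length, MAX_LENGTH);
  const tokens: number[] = [];
  let start = 0;

  for (let i = 0; i <= end; i += 1) {
    // Tokens are runs of alphanumeric characters (plus '%')
    const isTokenChar = i < end && /[a-zA-Z0-9%]/.test(url[i]);
    if (!isTokenChar) {
      // Only consider tokens longer than 1 character
      if (i - start > 1) {
        tokens.push(hash(url.slice(start, i)));
      }
      start = i + 1;
    }
  }

  return tokens;
}
```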
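The compact `optDomains`/`optNotDomains` representation mentioned above can be pictured as follows. This is a hedged sketch which assumes domains are stored as sorted 32-bit hashes; the function name is made up for illustration:

```ts
// Illustrative sketch: keep domain hashes sorted in a typed array and use
// binary search for membership tests instead of a Set of strings.
function hasDomainHash(sortedHashes: Uint32Array, hash: number): boolean {
  let low = 0;
  let high = sortedHashes.length - 1;

  while (low <= high) {
    const mid = (low + high) >>> 1;
    const value = sortedHashes[mid];
    if (value === hash) {
      return true;
    } else if (value < hash) {
      low = mid + 1;
    } else {
      high = mid - 1;
    }
  }

  return false;
}
```

Compared with a `Set` of strings, a single contiguous buffer is cheaper to serialize and lighter on memory, at the cost of a logarithmic lookup.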
v0.2.1
- Allow disabling the mutation observer #43
- Fix reverse engine bucket selection #40
  - The selection of a bucket for each filter in ReverseIndex would needlessly fall back to the default bucket in some cases (even if a better bucket was available).
  - Some plain patterns would not be indexed properly if we expect their last token to be a potential partial match. For example, `||foo.com/bar` would not always match `foo.com/barbaz` if `bar` (the last token of the filter) was selected as "best token". The work-around is to ignore the last token of plain patterns (to be safe).
- Allow filters without specific resource type to match any cpt #42
v0.2.0
This release introduces a few major changes:
- [BREAKING] Build artifacts will now be located in the `dist` folder.
- [BREAKING] The return value from `Engine.match` includes the original filter matching the request, instead of a pretty-printed version (a usage sketch follows this list).
- Addition of a benchmark to keep track of performance and memory consumption of the adblocker.
- Make use of `tldts` instead of `tld.js` for URL parsing (more efficient and in TypeScript).
- Fix a few bugs:
  - Parsing of options from network filters
  - NetworkFilter's optimizer would break in some cases
  - Cosmetic filters' tokenization did not behave correctly in case of `~` or `+` combinators, or if styles were specified (e.g.: `[foo=bar]`)
  - Matching of filters of type hostname anchor would not handle some corner cases
- Build system was simplified: fewer rollup plugins and stricter TypeScript configuration.
- Lots of new tests were added: network filter matching, engine, etc.
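
As a usage sketch for the `Engine.match` change above: the exact shape of the return value should be checked against the code, and the `filter` field name here is an assumption based on the description, not a confirmed API:

```ts
// Hedged sketch: the `filter` field name is an assumption based on the
// description above; check the actual return type of Engine.match.
declare const engine: { match(request: unknown): { match: boolean; filter?: unknown } };
declare const request: unknown;

const { match, filter } = engine.match(request);
if (match && filter !== undefined) {
  // `filter` is the original filter instance which matched the request,
  // not a pretty-printed string as in previous releases.
  console.log(filter);
}
```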
v0.1.13
v0.1.12
v0.1.11