# Static filter syntax
uBlock Origin (uBO) supports most of the EasyList filter syntax. You can refer to existing filter syntax documentation from Adblock Plus (ABP) and AdGuard (AG).
While uBO does not support some specific cases, it further extends the EasyList filter syntax, part of which is shared with AG's extended syntax. The most surprising cases are documented here.
Starting with 1.46.1b15, you can use regex-based values as the target domain for static extended filters; see more below.
- Not supported
- Pre-parsing directives
- Extended syntax
## Not supported

### `document` for entire-page exceptions

It is not supported. The `document` option used with an exception filter would amount to disabling uBO on the page. The `document` option in static exception filters exists for the sake of "acceptable ads" support, which uBO does not support.

The reason it is not supported is to ensure that users explicitly disable uBO themselves if they wish (through the Trusted sites feature), rather than having some external filter list decide for them.

Note: it still works to negate strict blocking when explicitly enabled by a blocking filter's `document` option.
### `genericblock`

It is not supported.

This option is used with an exception filter to disable generic network filters on target pages. Generic, in this case, means network filters without a `domain=` filter option. Filters such as `||example.com^` are still considered generic.

This option is not supported because it would cause large numbers of filters to be silently disabled on the sites where it applies. For instance, when used for a specific site, the `genericblock` option would cause all the filters in hosts files to be disabled, including those from the malware lists. EasyPrivacy and other anti-tracking lists also contain countless so-called "generic" filters, and as a consequence, these would also end up being disabled.
### `elemhide`

Supported starting with uBO 1.23.0, also aliased as `ehide`. Before 1.23.0 it was translated internally to `generichide`; `elemhide` was only available as the "No cosmetic filtering" switch.

Keep in mind that `generichide` is a cosmetic filtering-related option, and using it has no negative consequence concerning privacy, since cosmetic filtering has no privacy value.
## Pre-parsing directives

uBO 1.16.0 and above supports pre-parsing directives. Pre-parsing directives are prefixed with `!#`, which means older versions of uBO and other blockers will see them as comments and discard them. Pre-parsing directives execute before a list's content is parsed and influence the final content of the filter list.

### `!#include`

The `!#include` directive allows importing another filter list in place of where the directive appears. The purpose is to allow filter list maintainers to create filters specific to uBO while keeping their list compatible with other blockers. Other blockers will ignore the `!#include` directive because it will be seen as a comment and thus discarded. uBO will attempt to load the resource found at `[file name]` (the sub-list) and load its content into the current list.
The sub-list must be in the same directory as the main one. It is not allowed to load a sub-list located outside where the current one resides.
Correct usage:
!#include ublock-filters.txt
!#include ublock/filters.txt
Incorrect usage:
!#include https://github.com/uBlockOrigin/uAssets/blob/master/filters/filters.txt
!#include ../filters.txt
Related discussion and live example of usefulness:
### `!#if`

The `!#if` directive allows filter list maintainers to create areas in a filter list that get parsed only if certain conditions are met (or not met). For example, use this to create filters specific to a particular browser.
For example, to compile a block of filters only if uBO is running as a Firefox add-on:
!#if env_firefox
...
!#endif
Another example is to compile a block of filters only if uBO is not running as a Firefox add-on (you can negate using `!`):
!#if !env_firefox
...
!#endif
Support for preprocessor directives is the result of discussion with AG developers. See https://github.com/AdguardTeam/AdguardBrowserExtension/issues/917.
After 1.50.1b9, uBO is fully compatible with the `!#if` directives found throughout AdGuard's filter lists.
uBO supports only the following tokens; anything else gets ignored:
Token | Value | Version |
---|---|---|
`ext_abp` | false | 1.29.3b7 |
`ext_ublock` | true | |
`ext_ubol` | true on uBlock Origin Lite | 1.44.3b12 |
`ext_devbuild` | true on the development build | 1.48.1b1 |
`env_chromium` | true on all Chromium-based browsers | |
`env_edge` | true on Edge (legacy) | |
`env_firefox` | true on Firefox | |
`env_mobile` | true on mobile devices | |
`env_safari` | true on Safari (legacy, up to 12 / macOS Mojave) | |
`env_mv3` | true when uBOL is assembled, false otherwise | 1.44.5b15 |
`false` | false | 1.22.0 |
`cap_html_filtering` | true when the browser supports HTML filtering | |
`cap_user_stylesheet` | true on Firefox and Chromium 66+ (supports style injection by `tabs.insertCSS`) | |
`adguard` | false | 1.29.0 |
`adguard_app_android` | false | 1.29.3b7 |
`adguard_app_ios` | false | 1.29.3b7 |
`adguard_app_mac` | false | 1.29.3b7 |
`adguard_app_windows` | false | 1.29.0 |
`adguard_ext_android_cb` | false | 1.29.3b7 |
`adguard_ext_chromium` | true on Chromium-based browsers | 1.28.1b6 |
`adguard_ext_edge` | true on Edge (legacy) | 1.29.0 |
`adguard_ext_firefox` | true on Firefox | 1.29.0 |
`adguard_ext_opera` | true on Chromium | 1.29.0 |
`adguard_ext_safari` | false | 1.29.3b7 |
Starting from 1.22.0, you can use the `!#if false` directive to disable a large block of your filters without having to remove them:
!#if false
...
!#endif
Before this version, you could use a negated `ext_ublock`, since this token always equals true in uBO.

Starting from 1.50.1b9, you can use the `!#else` directive:
!#if cap_html_filtering
example.com##^script:has-text(fakeAd)
!#else
example.com##+js(rmnt, script, fakeAd)
!#endif
## Extended syntax

uBO extends the ABP filter syntax. The following network filter options are supported:
- _ (aka "noop")
- * (aka "all URLs")
- $1p ($first-party)
- $3p ($third-party)
- $all (all network-based types + $popup + $document + $inline-font + $inline-script)
- $badfilter
- $css ($stylesheet)
- $cname
- $denyallow
- $document
- $domain ($from)
- $elemhide ($ehide)
- $font
- $frame ($subdocument)
- $genericblock (not supported)
- $generichide ($ghide)
- $header
- $image
- $important
- $inline-script
- $inline-font
- $ipaddress
- $match-case
- $media
- $method
- $object
- $other
- $permissions
- $ping
- $popunder
- $popup
- $script
- $specifichide ($shide)
- $strict1p
- $strict3p
- $to
- $webrtc (use example.com##+js(nowebrtc) instead)
- $websocket
- $xhr ($xmlhttprequest)
- $csp
- $empty ($redirect=empty)
- $mp4 ($redirect=noopmp4-1s)
- $redirect
- $redirect-rule
- $removeparam
- $replace (only from a trusted-source origin)
- $uritransform (only from a trusted-source origin)
- $urlskip (only from a trusted-source origin)
### HOSTS files

uBO can also parse HOSTS file-like resources. From uBO's point of view, all hostname entries from a HOSTS file resource are syntactically equivalent to a filter of the form `||hostname^`.

However, this creates an ambiguity with the ABP filter syntax, which is pattern-based. For example, consider the following filter entry:

example.com

ABP filter syntax dictates that this gets interpreted as "block network requests whose URL contains `example.com` at any position".

However, in uBO, the interpretation will be "block network requests to the site `example.com` and all of its subdomains", which is the equivalent of `||example.com^`. Note that this includes blocking the main document itself; see "Strict blocking" and the `document` option.

So in uBO, any pattern that reads as a valid hostname will be assumed to be equivalent to a filter of the form `||example.com^`. If you ever want such a filter parsed according to ABP's interpretation, add a wildcard at the end:

example.com*

If the filter is a filename, it is best to prepend a slash to ensure it is not parsed as a hostname:

/example.js
Related:
### `_` (aka "noop")

Just a placeholder.

Implemented to resolve an ambiguity: `$removeparam` filters with regular-expression parameters could be detected as plain regular-expression filters because of their leading and trailing slashes:

/ad-$removeparam=/^foo=bar\d$/,_

Starting from 1.50.1b11, multiple instances of the `_` option are supported in a single filter, which can also be used to improve readability:

||example.com$_,removeparam=/^ss\\$/,_,image
||example.com$replace=/bad/good/,___,~third-party
### `*` (aka "all URLs")

The wildcard character `*` is used to apply a filter to all URLs. Not recommended unless you further narrow the filter using filter options. Examples:

- `*$third-party`: block all 3rd-party network requests.
- `*$script,domain=example.com`: block all network requests to fetch script resources at `example.com`.
Usually, it is far more convenient to use dynamic filtering rules instead of generic static filters.
### `1p`

Equivalent to the `first-party` uBO option, which in turn is the negated `third-party` option (`~third-party`).

The filter will match requests to the currently visited domain.
### `3p`

Equivalent to the `third-party` option.

The filter will match requests to domains other than the currently visited one.
### `all`

New in 1.20.0.

The `all` option is equivalent to specifying all network-based types + `popup`, `document`, `inline-font` and `inline-script`.

Example:

||bet365.com^$all

The above will block all network requests, block all popups and prevent inline fonts/scripts from `bet365.com`. The EasyList-compatible syntax does not allow this when using only `||bet365.com^`.
### `badfilter`

Used to disable an existing filter. Occasionally, disabling a blocking filter is better than creating an exception filter. For example's sake, let's say an absent-minded filter list maintainer added the following filter to their list:

*$image

Now all images from everywhere are blocked on your side. An exception filter (`@@*$image`) is not a good solution because it would also cause images that should legitimately be blocked to no longer be blocked. In such a case, the `badfilter` option is best:

*$image,badfilter

It will cause the `*$image` filter to be discarded. Appending the `badfilter` option to any instance of a static network filter will prevent the loading of that filter.
After 1.19.0, any filter which fulfills ALL the following conditions:

- is of the form `|https://`, `|http://` or `*`; and
- does have a `domain=` option; and
- does not have a negated domain in its `domain=` option; and
- does not have a `csp` option; and
- does not have a `redirect=` option
will be processed in a special way:

- The `domain=` option will be decomposed to create as many distinct filters as there are values in the `domain=` option.
- It now becomes possible to `badfilter` only one of the distinct filters without having to `badfilter` them all.
- The logger will always report these special filters with only a single hostname in the `domain=` option.
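For illustration, a hypothetical pair of filters (the domains are placeholders). The first filter fulfills the conditions above and is decomposed internally into one instance per `domain=` value; the second filter then neutralizes only the `example.org` instance, while the `example.com` instance keeps working:

*$script,domain=example.com|example.org
*$script,domain=example.org,badfilter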
### `css`

Equivalent to the `stylesheet` option. For convenience.
### `cname`

New in 1.26.0.

When used in an exception filter, it will bypass the blocking of CNAME-uncloaked requests for the current (specified) document.

Network requests resulting from resolving a canonical name are subject to filtering. Creating exception filters using the `cname` option can bypass this filtering.

Example:

@@*$cname

The filter above tells the network filtering engine to accept network requests which fulfill all the following conditions:

- the network request is blocked
- the network request is that of an unaliased hostname

Filter list authors are discouraged from using exception filters of the `cname` type unless there is no other practical solution and the maintenance burden becomes the more significant issue. These exception filters should be as narrow as possible, for example by applying only to a specific domain.
### `denyallow`

New in 1.26.0.

The purpose of `denyallow` is to bring default-deny/allow-exceptionally ability into the static network filtering arsenal.

Example:

*$3p,script,denyallow=x.com|y.com,domain=a.com|b.com

The above filter tells the network filtering engine that when the context is `a.com` or `b.com`, it must block all 3rd-party scripts except those from `x.com` and `y.com`.

Note that the `domain=` option is required!

Essentially, the `denyallow` option makes it easier to implement default-deny/allow-exceptionally in static filter lists. Before, this had to be done with unwieldy regular expressions[1] or through a mix of broadly blocking and exception filters[2].

"Entity" wildcard matching is not supported.

[1] https://hg.adblockplus.org/ruadlist/rev/f362910bc9a0
[2] Typically filters whose patterns are of the form `|http*://`

See also: `to`
### `document`

Alias: `doc`

It is a type option (like `image` or `script`) that specifies the main frame (a.k.a. the root document) of a web page. This option is automatically enabled in filters specifying only the host part of the URL (see the "HOSTS files" section), causing web pages that match the filter to be subjected to "strict blocking".

See also: `all`
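For illustration (hypothetical domain), a filter such as the following blocks the page itself when navigating to `example.com` or any of its subdomains:

||example.com^$document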
### `domain`

Alias: `from`

Restricts the filter to being applied only on the specified domain(s).

Use the `|` symbol to join multiple domains. Preceding a domain name with `~` will prevent the filter from being applied on that domain.

Starting with 1.28.0, support for "entity" matching has been added. You can now use `filter$domain=google.*` to apply a filter to pages on all top-level domains of the specified domain.

Examples:

||doubleclick.net^$script,domain=auto-motor-und-sport.de
||adnxs.com^$domain=bz-berlin.de|metal-hammer.de|musikexpress.de|rollingstone.de|stylebook.de
/adsign.$domain=~adsign.no

Starting with 1.46.1b17, support for regex-based values has been added. Example of usage:

@@*$ghide,domain=/img[a-z]{3,5}\.buzz/

Related discussion: uBlockOrigin/uBlock-issues#2234

Regex-based domain values can be negated just like plain or entity-based values:

*$domain=~/regex.../
### `elemhide`

Alias: `ehide`

Before uBO 1.23.0, this was translated internally to `generichide`.

When used in an exception filter, this will turn off all cosmetic filtering on matching pages.
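For illustration (hypothetical domain), the following exception filter disables all cosmetic filtering on `example.com`:

@@||example.com^$elemhide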
### `first-party`

Equivalent to the `~third-party` option. Provided strictly for convenience (0.9.9.0).
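A hypothetical example: block scripts from `example.com` only when they are loaded as first-party, i.e. on `example.com` pages themselves:

||example.com^$script,first-party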
### `frame`

Equivalent to the `subdocument` option. For convenience.
### `from`

New in 1.46.1b0.

It is just an alias for the `domain=` option. The logger will render `domain=` network filters using the `from=` version.

See: `domain`
### `generichide`

Alias: `ghide`

When used in an exception filter, it will turn off generic cosmetic filtering on matching pages.

Generic cosmetic filters are hiding filters that apply to all pages, e.g. `##.ad-class`.
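For illustration (hypothetical domain), the following exception filter disables generic cosmetic filters on `example.com` while leaving site-specific cosmetic filters in effect:

@@||example.com^$ghide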
### `header`

New in 1.32.0. As of 1.52.3b16 it is enabled by default.[1]

Ability to filter network responses according to whether a specific response header is present and whether or not it matches a distinct value.

For example:

*$script,header=via:1.1 google

The above filter blocks network requests of type `script` which have a response HTTP header named `via` whose value matches the string `1.1 google` literally.

The header value can be set to a regex literal by bracing it with the usual forward slashes, `/.../`:

*$script,header=via:/1\.1\s+google/

The header value can be prepended with `~` to reverse the comparison:

*$script,header=via:~1.1 google

The header value is optional and may be left out to test only for the presence of a specific header:

*$script,header=via

Generic exception filters can be used to disable specific blocking `header=` filters, i.e. `@@*$script,header` will override the blocking `header=` filters given in the examples above.

Important: Filter authors must use as many narrowing filter options as possible when using the `header=` option, and use the `header=` option only when other filter options are insufficient.
A potential use case is to block Google Tag Manager scripts proxied as first-party through a subdomain of the visited website:

*$1p,strict3p,script,header=via:1.1 google

This matches where the connection:

- is weakly 1st-party to the context;
- is not strictly 1st-party to the context;
- is of type `script`;
- has a response HTTP header named `via` whose value matches `1.1 google`.
Block requests whose responses have the `Set-Cookie` header with any value:

||example.com^$header=set-cookie

Unblock requests whose responses have the `Set-Cookie` header with a value matching the `foo, bar$` regular expression:

@@||example.com^$header=set-cookie:/foo\, bar\$/

To remove response headers, see: Response header filtering.
### `important`

The filter option `important` means "ignore all exception filters" (those prefixed with `@@`). It allows you to block specific network requests with 100% certainty.

It applies only to network blocking filters.

Example: `||google-analytics.com^$important,third-party` will block all network requests to `google-analytics.com`, disregarding any existing network exception filters.
### `inline-script`

Disables inline script tags in the main page via CSP: `||example.com^$inline-script`.

See also: `csp`

### `inline-font`

Disables inline font tags in the main page via CSP: `||example.com^$inline-font`.
### `ipaddress`

New in 1.60.0.

The purpose is to block according to the IP address of a network request.

Firefox-based browsers: full support. Chromium-based browsers: only when the IP address is used directly in the URL in lieu of a hostname.

The value assigned to `ipaddress` can be ...

- ... a plain string, which must match a given IP address exactly, e.g. `ipaddress=192.168.1.1` to match exactly the IP address `192.168.1.1`
- ... a plain string followed by a wildcard, to match IP addresses starting with the pattern, e.g. `ipaddress=192.168.*` to match IP addresses starting with `192.168.`
- ... a regex, which will be matched against the IP address, e.g. `ipaddress=/^192\.168\.1\.\d{1,2}$/` to match IP addresses between `192.168.1.0` and `192.168.1.99`
- ... `lan` to match IP addresses reserved for private networks
- ... `loopback` to match IP addresses reserved for loopback
Examples:
*$script,ipaddress=93.184.215.14
||xyz/|$xhr,3p,method=head,ipaddress=/^139\.45\.19[5-7]\./
*$all,ipaddress=::,domain=~0.0.0.0|~127.0.0.1|~[::1]|~[::]|~local|~localhost
*$ipaddress=93.184.*
*$method=post,ipaddress=lan
Cached resources do not have a valid IP address and thus cannot match the `ipaddress` option.
Technical notes:

First commit in 1.59.1b15. Related commits: 1.59.1b17, 1.59.1b19, 1.59.1rc1, 1.59.1rc4*.

Lan/loopback values are supported (since 1.59.1b17), related issue: "Possibility of Blocking Requests to localhost and Reserved IP Addresses by websockets?".

The browser-provided `0.0.0.0` IP address will be ignored when DNS is proxied (since 1.59.1rc1), related issue: "Some rules may break websites that use socks proxy in Firefox".

The CNAME-uncloaking code has been rewritten to account for the `ipaddress` option (since 1.59.1b19), related issue: "Add AdGuard's $network support on Firefox". This commit makes the DNS resolution code better suited for filtering on both CNAME and IP address. The change allows early availability of the IP address, so that the `ipaddress` option can be matched at onBeforeRequest time. As a result, it is now possible to block a root document using the `ipaddress` option -- so long as an IP address can be extracted before the first onBeforeRequest() call.
### `match-case`

New in 1.31.1b8.

It is only for regular-expression-based filters. Using it with any other kind of filter will cause uBO to discard the filter.

It instructs uBO's filtering engine to perform a case-sensitive match.
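A hypothetical illustration: block image requests whose URL contains the string `BannerAd` with exactly that capitalization:

/BannerAd/$image,match-case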
### `method`

New in 1.46.1b0.

Related issue: uBlockOrigin/uBlock-issues#2117.

Ability to filter network requests according to their HTTP method.

This option accepts a list of `|`-separated lowercase method names. Negated method names are allowed.
These are valid methods:

- `connect`
- `delete`
- `get`
- `head`
- `options`
- `patch`
- `post`
- `put`
As per DNR's own documentation:
Example:
||google.com^$method=post|get
||example.com^$method=~get
The logger shows the method used for every network request. It's possible to filter the logger output for the most common methods: `get`, `head`, `post`.
### `permissions`

New in 1.50.1b16.
Permissions Policy provides mechanisms to explicitly declare what functionality can and cannot be used on a website. It is similar to Content Security Policy but controls features instead of security behavior.
Examples of what you can do with Permissions Policy:
- Change the default behavior of autoplay on mobile and third-party videos.
- Restrict a site from using sensitive devices like the camera, microphone, or speakers.
- Allow iframes to use the Fullscreen API.
Related discussion:
Reference:
- https://adguard.com/kb/general/ad-filtering/create-own-filters/#permissions-modifier
- https://docs.w3cub.com/http/headers/feature-policy#directives
- https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Permissions-Policy
Example:
||example.com^$permissions=browsing-topics=()
Difference with AdGuard's syntax: use `|` to separate permissions policy directives instead of an escaped comma (`\,`) -- uBO will replace instances of `|` with `,`:

*$permissions=oversized-images=()|unsized-media=()

However, it's best not to combine permissions policy directives, so as not to break exception filters for either one of them.

When no type (e.g. `$doc`) is given, uBO will internally use `$document,subdocument` when the `permissions` option is used (same as with `csp`).
### `ping`

Blocks requests sent by the `ping` attribute on links and by `Navigator.sendBeacon()`.
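For illustration (hypothetical domain), the following blocks beacon/hyperlink-auditing requests to `example.com`:

||example.com^$ping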
To block "popunders" windows/tabs where the original page redirects to an advertisement and the desired content loads in the newly created one. To be used in the same manner as the popup
filter option, except that it will block popunders.
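A minimal hypothetical example, blocking popunders triggered by `example.com`:

||example.com^$popunder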
### `specifichide`

Alias: `shide`

New in uBO 1.23.0.

When used in an exception filter, it will turn off specific cosmetic filtering on matching pages.

Specific cosmetic filters are those which apply only to pages of the domains specified in the filter, e.g. `example.com##.ad-class`.
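For illustration (hypothetical domain), the following exception filter disables site-specific cosmetic filters on `example.com` while keeping generic ones:

@@||example.com^$shide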
### `strict1p`

New in 1.32.0.

Strict first-party requests.

The classic option `1p` matches partyness "weakly": a network request qualifies as 1st-party to its context if both the context and the request share the same base domain.

The `strict1p` option checks for strict partyness: a network request qualifies as 1st-party only if the context and the request share the same hostname.

For example:
Context | Request | `1p` | `strict1p` |
---|---|---|---|
`www.example.org` | `www.example.org` | yes | yes |
`www.example.org` | `subdomain.example.org` | yes | no |
`www.example.org` | `www.example.com` | no | no |
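A hypothetical illustration: block `xhr` requests to `example.com` only when the request hostname is exactly the page's hostname:

||example.com^$xhr,strict1p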
### `strict3p`

New in 1.32.0.

Strict third-party requests.

The classic option `3p` matches partyness "weakly": a network request qualifies as 3rd-party to its context only if the context and the request have different base domains.

The `strict3p` option checks for strict partyness: a network request qualifies as 3rd-party as soon as the context and the request hostnames differ.

For example:
Context | Request | `3p` | `strict3p` |
---|---|---|---|
`www.example.org` | `www.example.org` | no | no |
`www.example.org` | `subdomain.example.org` | no | yes |
`www.example.org` | `www.example.com` | yes | yes |
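A hypothetical illustration: block scripts that are weakly first-party (same base domain as the page) yet strictly third-party (served from a different hostname, e.g. another subdomain):

*$script,1p,strict3p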
### `to`

New in 1.46.1b0.

Related issue: uBlockOrigin/uBlock-issues#2412.

The main motivation for this option is to give uBO's static network filtering engine an equivalent of DNR's `requestDomains` and `excludedRequestDomains`.

`to=` is a superset of `denyallow=`, with support for entity-based syntax and also negated hostnames.

For now `denyallow=` won't be deprecated; it still does not support entity-based syntax, and negated domains are not allowed with it.
Examples:
||it^$3p,to=~example.it
*$script,from=beforeitsnews.com,to=google.*|gstatic.com
Starting with 1.46.1b17, support for regex-based values has been added. Example of usage:

*$script,to=/img[a-z]{3,5}\.buzz/

Related discussion: uBlockOrigin/uBlock-issues#2234

Regex-based domain values can be negated just like plain or entity-based values:

*$to=~/regex.../
### `xhr`

Equivalent to the `xmlhttprequest` option. For convenience.
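A minimal hypothetical example, blocking third-party XMLHttpRequest/fetch requests to `example.com`:

||example.com^$xhr,3p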
### `csp`

This option will inject an additional `Content-Security-Policy` header into the HTTP network response of the requested web page. This makes the Content Security Policy stricter, as designed by the specification. It is applied to document requests only.

This special filter does not block matching resources; it only applies the HTTP header to pages matching it. It cannot be mixed with options specifying resource types such as `image`, `script` or `frame` (`subdocument`). It can still be used with the `1p` (`first-party`), `3p` (`third-party`) or `domain` options.
Because of how `csp` filters are implemented, they allow for some interesting applications. For example, you can block scripts only in a specific path on the page:

||example.com/subpage/*$csp=script-src 'none'

And even block them everywhere except on the main page (note the end anchor):

||example.com/*$csp=script-src 'none'
@@||example.com^|$csp=script-src 'none'
An exception filter for a specific `csp` blocking filter must have the same `csp` option content as the blocking filter. However, an exception filter with an empty `csp` option will disable all `csp` injections for the matching page:

@@||example.com^$csp

The `csp` option syntax is unusual compared to other filters, and is recommended only for advanced users. It works in "allowlist" mode, allowing data to be downloaded only from addresses explicitly specified in the option. Moreover, uBO adds its own second CSP header which, per the specification, is merged with the server's into one final policy enforcing the strictest rules from both. For example, if the policy sent by the server allows `a.com` and `b.com` and your filter adds `c.com`, no request will be allowed and you may break the web page.

Refer to the "Content Security Policy (CSP) Quick Reference Guide" or the MDN documentation for further syntax help.
See also: `denyallow`

### `empty`

Deprecated, avoid using this option. See the deprecation notice from AdGuard.

New in 1.22.0.

Redirects the request to an empty response.

The filter option `empty` is converted internally to `redirect=empty`.
### `mp4`

Deprecated, avoid using this option. See the deprecation notice from AdGuard.

New in 1.22.0.

The `mp4` filter option is converted internally to `redirect=noopmp4-1s`, and the `media` type is assumed.
### `redirect`

The `redirect` option means "block and redirect": it causes two filters to be created internally, a block filter and a redirect directive (`redirect-rule`).

A redirect directive causes a blocked network request to be redirected to a local, neutered version of the resource. The neutered resource must be referenced with a resource token. You can use empty redirect resources and URL-specific sanitized redirect resources (surrogates). At runtime, filters with unresolvable resource tokens are discarded.

You can use `redirect=` filters with other static filter options. They can be excluded using `@@`, they can be `badfilter`-ed, and their priority can be increased with the `important` option.
Since multiple redirect directives can apply to a single network request, this introduces the concept of redirect priority.

By default, redirect directives have an implicit priority of `0`. Filter authors can declare an explicit priority by appending `:[integer]` (negative values are also supported) to the `redirect=` option token. For example:

||example.com/*.js$1p,script,redirect=noopjs:100

The priority dictates which redirect token out of many will ultimately be used. Cases of multiple `redirect=` directives applying to a single blocked network request are unlikely. All of these directives are reported in the logger; the effective one is stated as the last entry before the redirection entry. Use an explicit redirect priority only when a case of redirect ambiguity needs solving.

To disable a redirection, you can use an exception filter for the redirect directive (example for the filter above):

@@||example.com/*.js$1p,script,redirect-rule=noopjs

The filter above does not affect blocking filters, just matching redirect directives. You can broadly disable all redirect directives as follows:

@@||example.com/*.js$1p,script,redirect-rule
Before 1.32.0:

Starting with 1.31.0, the `redirect=` option is no longer afflicted by the static network filtering syntax quirks listed below:

- Must specify a resource type.
- The special, reserved token `none` must be used to disable specific redirect filters.
- Negated domains in the `domain=` option are not supported because of syntax ambiguity -- #310.
- Redirections applied to all destinations (starting with `*`) cannot be narrowed by the `first-party` or `~third-party` option -- #3590.
- Disable redirection by specifying `none` as the redirect. (Broken in 1.31.0, fixed in 1.31.3b4.)
- Filters with unresolvable resource tokens at runtime will cause redirection to fail. (Changed in 1.31.1b8.)
Available since 1.4.0.

### `redirect-rule`

Allows creating standalone redirect directives, without an implicit blocking filter.

For example, consider the following filter:

||example.com/ads.js$script,redirect=noop.js

The above filter will result in a block filter `||example.com/ads.js$script` and a matching redirect directive. Now consider the following filter:

||example.com/ads.js$script,redirect-rule=noop.js

The above filter will only cause a redirect directive to be created, not a block filter. Standalone redirect directives are helpful when blocking a resource is optional, but you still want it to be redirected should it ever become blocked by whatever means -- through a separate block filter, a dynamic filtering rule, etc.
Available since 1.22.0.
### `removeparam`

New in 1.32.0.

Removes query parameters from the URL of network requests -- see also AG's `removeparam` documentation. For historical reasons, `queryprune` is an alias of `removeparam` (avoid using `queryprune`, as it is deprecated and support will eventually be removed).
`removeparam` is a modifier option (like `csp`): it does not cause a network request to be blocked, but rather to be modified before being emitted.

`removeparam` can be assigned a value. This value determines which exact parameter will be removed from the query string:

*$removeparam=utm_source

The above filter tells uBO to remove the query parameter `utm_source` when present in a URL.

The value assigned to `removeparam` can be a literal regular expression, in which case uBO will remove query parameters matching the regular expression:

*$removeparam=/^utm_/

The above filter will remove all query parameters whose name starts with `utm_`, regardless of their value. When using a literal regular expression, it is tested against each query parameter name-value pair assembled into a single string of the form `name=value`.

If no value is assigned, all query parameters will be removed from matching URLs:

||example.org^$removeparam
Poorly crafted `removeparam` filters can have harmful effects on performance; filter authors need to understand how to create optimal filters.

Cosmetically added parameters cannot be removed with `removeparam` (see related comment: 760#issuecomment-724703650 and invalid issues: #1704, #1767, #1951, #2498).
See also: Filter Performance
### `replace`

New in 1.53.1b3.

Can only be used by filter lists from a trusted source.

See https://adguard.com/kb/general/ad-filtering/create-own-filters/#replace-modifier

[Documentation to be completed]
### `uritransform`

New in 1.52.3b12 as `urltransform`. Renamed to `uritransform` in 1.54.1b8.

Can only be used by filter lists from a trusted source.

Transforms the path/query/hash portion of a URL.

See https://adguard.com/kb/general/ad-filtering/create-own-filters/#urltransform-modifier

[Documentation to be completed]
### `urlskip`

New in 1.60.0.

Can only be used by filter lists from a trusted source.

The main purpose is to bypass URLs designed to track whether a user visited a specific URL, typically used in click-tracking links.

The `urlskip=` option ...

- ... is valid only when used in a trusted filter list
- ... is enforced only on top documents
- ... is enforced on both blocked and non-blocked documents
- ... is a modifier, i.e. it cannot be used along with other modifier options in a single filter

The syntax is `urlskip=[steps]`, where steps is a space-separated list of extraction directives detailing what actions to perform on the current URL.
Valid directives:

- `?name`: the value of the parameter named `name` will be extracted and replace the current URL as the new URL
- `+https`: the protocol of the current URL will be forced to `https:`. This directive will succeed only if the protocol of the current URL is either absent or matches `http:` or `https:`.

The final computed URL must be a valid URL as per the URL API, otherwise the filter will be ignored.
||example.com/path/to/tracker$urlskip=?url

The above filter will cause navigation to `https://example.com/path/to/tracker?url=https://example.org/` to automatically skip navigation to `example.com` and navigate directly to `https://example.org/`.

It is possible to recursively extract URL parameters by using more than one directive, for example:

||example.com/path/to/tracker$urlskip=?url ?to

The above filter will cause navigation to `https://example.com/path/to/tracker?url=https%3A%2F%2Fexample.org%2Fpath%2Fto%2Ftracker%3Fto%3Dhttps%253A%252F%252Fgit.luolix.top%252F` to automatically skip navigation to `example.com` and `example.org` and navigate directly to `https://github.com/`.

Note: No skip will occur if not all extraction directives can be fulfilled in the URL, example URL: `https://example.com/path/to/tracker?url=https%3A%2F%2Fexample.org%2Fpath%2Fto%2Ftracker`.

More extraction capabilities may be added in the future (for example base64-decoding or regex extraction, as in `=?url base64`), so a separator is needed for the sake of extending the syntax; a space is a good choice since it is never meant to appear in a URL.
Related issues:

- Add queryjump to redirect url
- Implement `$queryjump` for static network filter
- [Enhancement] Add option to automatically visit embedded URLs w/o tracker
Technical notes
First commit in 1.59.1b22.
### Static extended filters

Static extended filters take either of these forms:

[hostname(s)]##[expression]
[hostname(s)]#@#[expression]

The most common static extended filters are cosmetic filters, also known as "element hiding filters" in ABP.

All static extended filters can apply to a specific entity. For example:

google.*###tads.c
An entity is defined as follows: a formal domain name with the Public Suffix part replaced by a wildcard.

For example, `google.*` will apply to all similar Google domain names: `google.com`, `google.com.br`, `google.ca`, `google.co.uk`, etc. Another example: `facebook.*` will apply to all similar Facebook domain names: `facebook.com`, `facebook.net`.

Since the base domain name is used to derive the name of the "entity", `google.evil.biz` would not match `google.*`.
Starting with 1.46.1b15, you can use regex-based values as the target domain (hostname) for static extended filters; this works for the base hostname, and also in filter options such as `domain=`, `to=` and `from=`. Examples of usage:
Solves: regex-fied domain:

- `/img[a-z]{3,5}\.buzz/##+js(nowoif)` -- matches (for example) `imgabcd.buzz`
- `@@*$ghide,domain=/img[a-z]{3,4}\.buzz/` -- matches (for example) `imgabcd.buzz` in the `domain=` filter option
- `*$frame,from=plainlight.com,to=~/youtube/` -- excludes domains containing the word "youtube" in the `to=` filter option
Solves: Add support for domain double wildcarding in hiding rules (would be a huge gamechanger for Nitter):

- `/^nitter\.[^.]+\.[^.]+$/##.timeline-item:has-text(owned)` -- matches (for example) `nitter.abc.com`, but not `nitter.com` and not `nitter.abc.xyz.com`
- `/^nitter(?:\.[^.]+){1,2}$/##.timeline-item:has-text(owned)` -- matches (for example) `nitter.com` and `nitter.abc.com`, but not `nitter.abc.xyz.com`
- `/^example\.org$/##h1` -- matches only `example.org`, without subdomains
- `/^www\.example\.org$/##h1` -- matches only `www.example.org`, without subdomains and without `example.org`
- `/^(?:www\.)?example\.org$/##h1` -- matches only `example.org` and `www.example.org`, without subdomains
- `/^example\.org$/,somesite.org,somesite2.*##h1` -- regex-based values can be combined with plain names and entities
- `org,~/^example\.org$/##h1` -- regex-based values can be excluded (negated): matches `org` with all subdomains, without `example.org`, but still matches subdomains of `example.org` (for example `www.example.org`)
Use sparingly, when no other solution is practical from a maintenance point of view -- keeping in mind that uBO has to iterate through all the regex-based values, unlike plain hostname or entity-based values which are mere lookups.
### Specific-generic cosmetic filters

New in 1.25.0.

Related issue: uBlockOrigin/uBlock-issues#803.

Preceding a typical generic cosmetic filter with a literal `*` turns it into a specific-generic cosmetic filter that is unconditionally injected into all web pages:

*##.selector

By contrast, a typical generic cosmetic filter is only injected when uBO's DOM surveyor finds at least one matching element in a web page:

##.selector

The specific-generic form will also be disabled when a web page is subject to a `generichide` exception filter, since the filter is essentially generic. The only difference from the usual generic form is that the filter is injected unconditionally instead of through the DOM surveyor.

Specific-generic cosmetic filters will NOT be discarded when the "Ignore generic cosmetic filters" option is checked in the "Filter lists" pane, since this option primarily serves to disable the DOM surveyor.
Supported procedural cosmetic filter operators: `:has(...)`, `:has-text(...)`, `:matches-attr(...)`, `:matches-css(...)`, `:matches-css-before(...)`, `:matches-css-after(...)`, `:matches-media(...)`, `:matches-path(...)`, `:min-text-length(n)`, `:not(...)`, `:others(...)`, `:upward(...)`, `:watch-attr(...)`, `:xpath(...)`.
### `:remove()`

By default, the implicit purpose of cosmetic filters is to hide unwanted DOM elements. However, it may be helpful to restyle a specific element, or to remove it from the DOM tree entirely.

- Description: action operator; instructs uBO to remove matching elements from the DOM tree instead of just hiding them.
- Chainable: No, an action operator can only be applied at the end of the root chain.
- Subject: can be a plain CSS selector or a procedural cosmetic filter.
- Example:

gorhill.github.io###pcf #a18 .fail:remove()

New in uBO 1.26.0. Fixes #2252.
Since `:remove()` is an "action" operator, it must only be used as a trailing operator (just like the `:style()` operator).

AG's cosmetic filter syntax `{ remove: true; }` is converted internally to uBO's `:remove()` operator.

To remove elements from a document before it is parsed by the browser, see: HTML filters.
### `:style(arg)`

- Description: action operator; applies the specified style to the selected elements in the DOM tree.
- Chainable: No, an action operator can only be applied at the end of the root chain.
- Subject: can be a plain CSS selector or, after 1.29.3b10, a procedural cosmetic filter. Before, only native plain CSS selectors were supported. See #382.
- Arg: one or more CSS property declarations, separated by the standard `;`. Some characters, strings, and values are forbidden; see below for a list.
- Examples:

example.com##h1:style(background-color: blue !important)
motobanda.pl###mvideo:style(z-index: 1 !important)

After 1.29.3b10, procedural selectors are also supported.

Related issue: Support cosmetic filters with explicit style properties and example where it is useful.
It has the same syntax as plain cosmetic filters (it must be a valid CSS selector), except that the `:style(...)` suffix is appended at the end. The content in the parentheses must be one or more CSS property declarations (separated by the standard `;`). It is not allowed to use:

- property values with `url(...)`,
- property values with `image-set(...)`,
- comments (`/*`, `*/`),
- backslashes (`\`-escaped values),
- a sequence of two forward slashes (`//`), even when separated by whitespace.

Such `style`-based cosmetic filters will be discarded.

As with the other new cosmetic filtering selectors, `:style` can be used only in specific cosmetic filters: a hostname or entity must be specified for the filter.
uBO can transparently convert and use the AG CSS injection rules. This essentially means you can use AG's syntax in uBO if you prefer.
Styling filters frequently get used to foil anti-blocker mechanisms on web pages. To benefit from this, you may want to enable AG's filter lists on the 3rd-party filters pane.
### `:remove-attr(arg)`, `:remove-class(arg)`

- Description: action operators; instruct uBO to remove attribute(s) or class(es) from DOM tree nodes instead of just hiding them.
- Chainable: No, an action operator can only be applied at the end of the root chain.
- Subject: can be a plain CSS selector or a procedural cosmetic filter.
- Arg: a plain string to match exactly, or a regex literal. Wrap the arg in quotes if the parser has trouble parsing it; this can occur when using special characters.
- Examples:

userscloud.com##.btn-icon-stacked[onclick]:remove-attr(onclick)
magesy.*,majesy.*##[oncontextmenu]:remove-attr(oncontextmenu)
zerodot1.gitlab.io##selector:remove-attr(/oncontextmenu|onselectstart|ondragstart/)
zerodot1.gitlab.io##selector:remove-attr(/^on[a-z]+/)
danskebank.fi##html[cookie-consent-banner-open]:remove-class(cookie-consent-banner-open)
New in uBO 1.45.3b13.
These two new pseudo selectors are action operators, and thus can only be used at the end of a selector. They both take as argument a string or regex literal.
For `:remove-class()`, when the argument matches a class name, that class name is removed.

For `:remove-attr()`, when the argument matches an attribute name, that attribute is removed.

These operators are meant to replace `+js(remove-attr, ...)` and `+js(remove-class, ...)`, which are from now on candidates for deprecation at some point in the future.

See also: `:watch-attr()`, useful for cases where the targeted attributes are added without DOM layout changes.
### HTML filters

Supported by uBO 1.15.0+ on Firefox 57+.

READ VERY CAREFULLY: HTML filtering acts on the response data, before the browser parses it. Do not use the browser inspector from the developer tools to create HTML filters. You must use `view-source:[URL of page]` instead to look at the response data and find the relevant information to create working HTML filters.

The purpose of HTML filters is to remove elements from a document before it is parsed by the browser.

The syntax is similar to that of cosmetic filters, except that you must prefix your selector (CSS or procedural) with the character `^`:

example.com##^.badstuff
example.com##^script:has-text(7c9e3a5d51cdacfc)

These HTML filters cause the elements matching the selectors to be removed from the streamed response data, such that the browser will never know of their existence once it parses the modified response data. This makes it a powerful tool in uBO's arsenal.

HTML filtering will work only on pages with a character encoding compatible with UTF-8, ISO-8859-1, Windows-1250, Windows-1251 or Windows-1252 (detailed mapping).

Starting with 1.48.5b4, you can use negated hostnames in HTML filters. Example:

google.com,~translate.google.com##^script:has-text(consentCookiePayload)
Also see: `remove-node-text`

Historical notes:

- With the introduction of HTML filtering, `script:contains(...)` is deprecated and internally converted into an equivalent `##^script:has-text(...)` HTML filter. The result is essentially the same: to prevent the execution of specific inline script tags in the main HTML document. See "Inline script tag filtering" for further documentation.
- Support for chaining procedural operators with native CSS selector syntax (i.e. `a:has(b) + c`) was added in 1.20.1b3. Only procedural operators which make sense in the context of HTML filtering are supported.
### Response header filtering

New in uBO 1.35.0.

The syntax to remove a response header is a special case of HTML filtering, where the response headers are targeted rather than the response body:

example.com##^responseheader(header-name)

`header-name` must be in lowercase. It is the name of the header to remove.

The removal of response headers can only be applied to document resources, i.e. main frames and sub-frames.

Only a limited set of headers can be targeted for removal:

- `location`
- `refresh`
- `report-to`
- `set-cookie`

This limitation ensures that uBO never lowers the security profile of web pages -- for instance, we wouldn't want to remove `content-security-policy`.
Given that the header removal occurs at onHeadersReceived time, this new ability works for all browsers.

The motivation for this new filtering ability is an instance of a website using a `refresh` header to redirect a visitor to an undesirable destination after a few seconds.

To filter network responses according to whether a specific response header is present and whether or not it matches a distinct value, see: `header`.
### Scriptlet injection

example.com##+js(...)

This allows the injection of specific JavaScript code into pages. The `...` part is a token identifying a JavaScript resource from the resource library. Keep in mind the resource library is under the control of the uBO project; only JavaScript code vouched for by uBO can be injected into web pages through a valid resource token.

Some scriptlets support additional parameters, specified after the scriptlet name and separated by commas. Commas inside these parameters must be escaped. Before 1.22.0 this was possible only in regex literals (`/foo\x2Cbar\u002Cbaz/`); now a backslash character is sufficient (`foo\,bar`).

Generic `+js` filters are ignored: these filters must be specific, i.e. they must apply to specific hostnames, e.g. `example.com##+js(nobab)` will inject `bab-defuser` into pages on the `example.com` domain.
Starting with 1.22.0, a new exception syntax has been added, allowing one to wholly disable scriptlet injection for a given site without having to create exceptions for all matching scriptlet injection filters.

The following exception filter will cause scriptlet injection to be wholly disabled for `example.com`:

example.com#@#+js()

Or to disable scriptlet injection everywhere:

#@#+js()

The following form is meaningless and ignored:

example.com##+js()