The Problem
As of the time of writing, the Alphacoders websites are the only ones that block the downloader almost every time. I know that the User-Agent is often the deciding factor. The reason wallpaper-dl sets a User-Agent at all is that some websites (e.g. Wallhaven) outright refuse to connect or respond normally to agent-less clients.
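For anyone who wants to verify the agent-less behavior, cURL can send a request with the User-Agent header removed entirely (the result naturally depends on the site's current rules):

```console
# Passing "User-Agent:" with nothing after the colon tells cURL to drop the
# header completely rather than send its default "curl/x.y.z" string.
$ curl -sI -H "User-Agent:" "https://wallhaven.cc/"
```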
Experimenting
When trying to download an image from Wallpaper Abyss, I get this error message:
```console
$ wallpaper-dl https://wall.alphacoders.com/big.php?i=1362746
Fetching https://wall.alphacoders.com/big.php?i=1362746 FAILED HTTP status client error (403 Forbidden) for url (https://wall.alphacoders.com/big.php?i=1362746)
```
Looking at the response with cURL, the block is obviously coming from Cloudflare's side:
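One way to reproduce that check (the exact headers may vary over time): fetch only the response headers; a Cloudflare-served block typically answers with a 403 status alongside a `server: cloudflare` and a `cf-ray` header.

```console
# HEAD request: print only the response headers, no body.
$ curl -sI "https://wall.alphacoders.com/big.php?i=1362746"
```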
It also does not matter here whether the User-Agent is set to wallpaper-dl's own string or omitted entirely, since neither comes from a normal browser. But watch what happens when I give it my browser's User-Agent:
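The equivalent cURL experiment, with an example desktop Firefox string (the exact string below is only an illustration; any current browser User-Agent should behave the same). With the browser string set, the request goes through instead of being rejected with a 403:

```console
# Same request, but identifying as a regular desktop browser; prints the
# final HTTP status code instead of the response body.
$ curl -s -o /dev/null -w "%{http_code}\n" \
    -A "Mozilla/5.0 (X11; Linux x86_64; rv:124.0) Gecko/20100101 Firefox/124.0" \
    "https://wall.alphacoders.com/big.php?i=1362746"
```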
The solution
I think the config just needs a small field where users can set a custom User-Agent to bypass this anti-bot detection. It apparently does not work every time, but it could still be useful as a temporary fix for someone.
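As a sketch of what that could look like, assuming the settings live in a TOML config file (the field name `user_agent` is a suggestion, not existing wallpaper-dl syntax):

```toml
# Hypothetical optional field: when set, the downloader would send this
# string as its User-Agent header instead of the built-in default.
user_agent = "Mozilla/5.0 (X11; Linux x86_64; rv:124.0) Gecko/20100101 Firefox/124.0"
```

Leaving the field unset would keep today's behavior, so nothing would change for existing users.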