[YouTube] Implement optional YouTube server-imposed throttling bypass #28859
Comments
I've been experiencing this issue as well when getting my Watch Later videos using youtube-dl. I noticed my scripts were taking an entire day to run instead of minutes. I did a bunch of test runs and found that if I downloaded items rapidly at full speed I would start to get throttled by YouTube. I tried a variety of settings and finally found that randomizing the time between shows and capping the speed kept me from being throttled for the last month:
--sleep-interval 120 --max-sleep-interval 300 --limit-rate 4M
Until today... I started getting some videos throttled again, but at least unlike last time it was only a couple of them and not all of them. I've started expanding my minimum sleep interval to 200 to see if that helps. |
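For reference, a possible way to assemble those settings into a single Watch Later run (this is a reconstruction, not the commenter's exact command: the --cookies file, exported from a logged-in browser session, is an assumption, since the Watch Later playlist is only visible to an authenticated account):

youtube-dl --cookies cookies.txt --sleep-interval 200 --max-sleep-interval 300 --limit-rate 4M "https://www.youtube.com/playlist?list=WL"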
I only used to see this about once in 10 attempts, but now every video I try to download with youtube-dl is throttled to about 80 KB/s. Changing my IP address doesn't have any effect. It looks to me as though throttling was an experimental feature, and when the experiment was a success, YouTube deployed it to all of their content servers. There is hope, though: streaming in mpv doesn't work any more, but the same video will still stream just fine in-browser, proving there is a way of bypassing the throttling! youtube-dl just needs to pretend a little harder to be a real web browser. |
But literally any video I try this with, I see this throttling lol. I see the same throttling when I'm only extracting audio from a video, as well. |
Interesting.
So, to be clear, you are doing this download with mech, and not youtube-dl? |
This suggests that youtube has managed to specifically fingerprint youtube-dl usage patterns. Let me try mech to verify. |
I just tried mech. (I just made an AUR package for it :D) I observe no throttling. |
That is a good idea, @89z. I made an issue at mech for further measures. See https://github.com/89z/mech/issues/8. |
After more experimentation, and some very helpful hints from
The user agent is unimportant. I've written a short Python demo of the throttling bypass:

#!/usr/bin/env python
from requests import Session
import json


def api_key(response):
    from bs4 import BeautifulSoup
    import re
    soup = BeautifulSoup(response, 'html.parser')
    key = None
    for script_tag in soup.find_all('script'):
        script = script_tag.string
        if script is not None:
            match = re.search(r'"INNERTUBE_API_KEY":"([^"]+)"', script)
            if match is not None:
                key = match.group(1)
                break
    assert key is not None
    return key


id = 'yiw6_JakZFc'
session = Session()
session.headers = {
    # This is to demonstrate how little the user agent matters, and also for
    # fun.
    'User-Agent': 'Fuck you, Google!',
}
# Hit the /watch endpoint, but we actually only want an API key lol.
response = session.get(
    'https://www.youtube.com/watch',
    params={'v': id},
).content.decode()
key = api_key(response)
# OK, now use the API key to get the actual streaming data.
post_data = {
    'context': {
        'client': {
            'clientName': 'ANDROID',
            'clientVersion': '16.05',
        },
    },
    'videoId': id,
}
data = json.loads(session.post(
    'https://www.youtube.com/youtubei/v1/player',
    params={'key': key},
    data=json.dumps(post_data),
).content)
for f in data['streamingData']['adaptiveFormats']:
    if 'height' in f and f['height'] == 720:
        print(f['url'] + '&range=0-10000000')
        break

The URL that this program prints will download the first 10 megs of a 720p video without throttling. |
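Extending the demo above, the full stream can be fetched the same way by walking the range parameter in 10 MB steps until the server returns less than a full chunk. This is only a sketch on top of that code; the helper name, chunk size, and output path are illustrative and not from the original comment:

# Sketch: fetch an unthrottled format URL (f['url'] from the demo above,
# without the '&range=...' suffix) in 10 MB ranged chunks.
from requests import Session

CHUNK = 10_000_000


def download_ranged(session, url, path):
    start = 0
    with open(path, 'wb') as out:
        while True:
            # Request a bounded range instead of one huge response; the
            # bypass relies on these ranged requests staying unthrottled.
            end = start + CHUNK - 1
            chunk = session.get(url, params={'range': '%d-%d' % (start, end)}).content
            out.write(chunk)
            if len(chunk) < CHUNK:
                break  # short read: end of the stream
            start = end + 1

# Usage, assuming 'url' came from the demo above:
# download_ranged(Session(), url, 'video_720p.mp4')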
I implemented the fix in my youtube client. |
In the past I always got full download speed via youtube-dl. I guess a good method to bypass throttling is to not download anonymously but as a logged-in Google/YouTube user.
I confirm that
|
Hello, did you solve the problem? And how did you do that?
The following merged PRs fix this issue:
To benefit from these you have to install from the git master until the useless sob maintainer makes a new release. |
I'm dying🤣 |
Legend has it he's the worst GitHub maintainer since that Gamer191 guy, who tried to create a repository whose sole purpose was to host a useless yt-dlp git diff.
Checklist
Description
First of all, thanks for your amazing software 👍🏻
Recently the following issue has been occurring increasingly frequently: in some cases when downloading from YouTube (especially YT Music), the server throttles the download speed to 58-60 KiB/s. (My internet connection, however, is stable and has a constant 4-6 MiB/s of bandwidth.) This happens in a pattern that I cannot describe more precisely right now. However, it seems mostly random but tied to the download URL from the extracted info_dict, since --http-chunk-size doesn't solve the issue, while interrupting the download completely and immediately restarting (so the video is extracted again) solves it most of the time.
If nobody else can come up with another solution, I would recommend implementing a new CLI flag that checks the download speed against a threshold and interrupts the download and restarts the extraction completely if the speed falls under the threshold (for some amount of time?).
I would be incredibly grateful for a workaround! ❤️ I don't feel comfortable enough in Python to implement one myself and open a pull request... 😉
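For what it's worth, here is a rough sketch of the behaviour being proposed, implemented outside youtube-dl via its Python embedding API rather than as a new CLI flag. The threshold, the minimum-bytes floor, the retry count, and the names are invented example values, and it assumes an exception raised from a progress hook aborts the current download so the outer loop can re-run extraction:

#!/usr/bin/env python
# Rough sketch of the proposed "restart when throttled" behaviour, using
# youtube-dl's Python API. All constants and names here are illustrative.
import youtube_dl

THRESHOLD = 500 * 1024   # bytes/second below which we assume throttling
MIN_BYTES = 1024 * 1024  # ignore the speed until some data has arrived
ATTEMPTS = 5


class Throttled(Exception):
    pass


def hook(d):
    # youtube-dl calls progress hooks with a dict containing 'status',
    # 'speed', 'downloaded_bytes', etc. Raising here aborts the download.
    if (d.get('status') == 'downloading'
            and d.get('speed') is not None
            and d.get('downloaded_bytes', 0) > MIN_BYTES
            and d['speed'] < THRESHOLD):
        raise Throttled('download speed below threshold')


def download(url):
    for attempt in range(ATTEMPTS):
        try:
            # Each call re-runs extraction, which usually hands back a fresh
            # (hopefully unthrottled) download URL; the partial file resumes.
            with youtube_dl.YoutubeDL({'progress_hooks': [hook]}) as ydl:
                ydl.download([url])
            return
        except Throttled:
            print('throttled, restarting (attempt %d)' % (attempt + 1))
    raise SystemExit('still throttled after %d attempts' % ATTEMPTS)


if __name__ == '__main__':
    download('https://www.youtube.com/watch?v=yiw6_JakZFc')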