Implement a maximum number of requests per minute to prevent hitting rate limits and triggering ReCaptchaExceptions. This slows down the RecordingDownloader significantly and can be adjusted if needed. A request is retried once when facing a ReCaptchaException.
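A minimal sketch of the idea, assuming a simple sliding-window limiter; the class name, the limit value and the retry helper below are illustrative, not the actual RecordingDownloader code:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Callable;

import org.schabi.newpipe.extractor.exceptions.ReCaptchaException;

// Illustrative sliding-window limiter; names and values are hypothetical.
final class RateLimitedRequester {
    private static final int MAX_REQUESTS_PER_MINUTE = 30; // example value, adjustable
    private static final long WINDOW_MILLIS = 60_000;

    private final Deque<Long> requestTimes = new ArrayDeque<>();

    private synchronized void awaitSlot() throws InterruptedException {
        final long now = System.currentTimeMillis();
        // Drop timestamps that left the one-minute window.
        while (!requestTimes.isEmpty() && now - requestTimes.peekFirst() >= WINDOW_MILLIS) {
            requestTimes.pollFirst();
        }
        if (requestTimes.size() >= MAX_REQUESTS_PER_MINUTE) {
            // Sleep until the oldest request falls out of the window.
            final long waitMillis = WINDOW_MILLIS - (now - requestTimes.peekFirst());
            if (waitMillis > 0) {
                Thread.sleep(waitMillis);
            }
            requestTimes.pollFirst();
        }
        requestTimes.addLast(System.currentTimeMillis());
    }

    // Execute a request, retrying it once when a ReCaptchaException is thrown.
    <T> T execute(final Callable<T> request) throws Exception {
        awaitSlot();
        try {
            return request.call();
        } catch (final ReCaptchaException e) {
            awaitSlot();
            return request.call();
        }
    }
}
```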
Extractor.pageFetched -> Extractor.isPageFetched
Stream.equalStats(Stream) is renamed to Stream.areStatsEqual(Stream)
Stream.getBitrate() is renamed to Stream.getBitRate()
This commit adds two new regular expressions to parse the n parameter function.
It also improves the existing regular expressions by using the constant
representing one or multiple characters instead of adding that token manually
in each regex, applying it everywhere and not only to function names.
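A hedged sketch of the approach, with hypothetical constant and pattern names and simplified expressions; these are not the regexes actually used for the n parameter function:

```java
import java.util.regex.Pattern;

// Hypothetical names and simplified patterns, for illustration only.
final class NParameterRegexSketch {
    // Token for "one or multiple characters" (roughly a JavaScript identifier),
    // kept in a constant and reused in every pattern instead of being repeated
    // by hand, and not only where a function name is matched.
    private static final String MULTIPLE_CHARS = "[a-zA-Z0-9_$]+";

    private static final Pattern FUNCTION_NAME = Pattern.compile(
            "\\bvar\\s+(" + MULTIPLE_CHARS + ")\\s*=\\s*function\\(");
    private static final Pattern ARRAY_ACCESS = Pattern.compile(
            "(" + MULTIPLE_CHARS + ")\\[(\\d+)\\]");
}
```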
It isn't required and hasn't been used by the extractor since commit
5a6da5f43e, as the wrong page ID is used as
visitor data (the VerifiedStatus value as a string).
The pattern to detect the channel ID was faulty: for example, the ID detected for "https://kolektiva.media/video-channels/documentary_channel" was "a/video-channels", which is wrong. A new pattern is added to distinguish between URLs and potential IDs, because IDs must not start with a "/" while IDs inside a URL must.
Fixes TeamNewPipe/NewPipe#11369
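A rough sketch of the distinction, assuming two patterns along the lines described above; the names and exact expressions are illustrative, not copied from the PeerTube link handler factory:

```java
import java.util.regex.Pattern;

// Illustrative only: a plain ID must not start with "/", while the ID segment
// inside a URL is always preceded by one.
final class PeertubeChannelIdPatterns {
    // Matches IDs passed directly, e.g. "video-channels/documentary_channel".
    private static final Pattern ID_PATTERN = Pattern.compile(
            "^(accounts|a|video-channels|c)/([^/?&#]+)");

    // Matches the ID inside a full URL, e.g.
    // "https://kolektiva.media/video-channels/documentary_channel".
    private static final Pattern ID_URL_PATTERN = Pattern.compile(
            "/((accounts|a|video-channels|c)/[^/?&#]+)");
}
```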
As there are multiple show UI elements which share a lot of common data, a base
implementation, an abstract class named YoutubeBaseShowInfoItemExtractor, has
been created to handle common cases.
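A rough sketch of what such a base class can look like; whether show items map to PlaylistInfoItemExtractor, and the field and JSON key names below, are assumptions for illustration only:

```java
import com.grack.nanojson.JsonObject;
import org.schabi.newpipe.extractor.exceptions.ParsingException;
import org.schabi.newpipe.extractor.playlist.PlaylistInfoItemExtractor;

import static org.schabi.newpipe.extractor.services.youtube.YoutubeParsingHelper.getTextFromObject;

// Illustrative shape only, not the actual YoutubeBaseShowInfoItemExtractor.
abstract class BaseShowInfoItemExtractorSketch implements PlaylistInfoItemExtractor {
    protected final JsonObject showRenderer;

    protected BaseShowInfoItemExtractorSketch(final JsonObject showRenderer) {
        this.showRenderer = showRenderer;
    }

    // Common data shared by all show UI elements is handled once here;
    // element-specific fields stay in the concrete subclasses.
    @Override
    public String getName() throws ParsingException {
        return getTextFromObject(showRenderer.getObject("title"));
    }
}
```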
Also throw an exception when we cannot get the verified status of a channel in
YoutubeChannelExtractor due to a missing channelHeader, if the channel has no
channelAgeGateRenderer.
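A sketch of the stricter behaviour, assuming the header is held in an Optional; the types and the way the badge would be read are placeholders, not the YoutubeChannelExtractor code:

```java
import java.util.Optional;

import org.schabi.newpipe.extractor.exceptions.ParsingException;

// Placeholder types and logic; only the throw-on-missing-header behaviour matters here.
final class VerifiedStatusSketch {
    static boolean isVerified(final Optional<Object> channelHeader,
                              final Object channelAgeGateRenderer) throws ParsingException {
        if (channelAgeGateRenderer != null) {
            // Assumed default when an age gate renderer is present.
            return false;
        }
        // No age gate renderer and no header: the status cannot be determined,
        // so fail loudly instead of silently returning a default.
        return channelHeader
                .map(header -> /* read the verified badge from the header here */ Boolean.FALSE)
                .orElseThrow(() -> new ParsingException("Could not get channel header"));
    }
}
```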
The runs object was computed twice in the getTextFromObject and getUrlFromObject
methods, leading to unneeded search costs. This is now avoided by storing the
array in a local variable in each method.
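A simplified illustration of the caching pattern (not the actual getTextFromObject implementation): the "runs" array is looked up once and kept in a local variable instead of being fetched from the JSON object twice:

```java
import com.grack.nanojson.JsonArray;
import com.grack.nanojson.JsonObject;

// Simplified sketch, not the real implementation.
final class RunsLookupSketch {
    static String getText(final JsonObject textObject) {
        // Previously textObject.getArray("runs") was evaluated both for the
        // empty check and for the iteration; now the array is stored once.
        final JsonArray runs = textObject.getArray("runs");
        if (runs == null || runs.isEmpty()) {
            return textObject.getString("simpleText");
        }
        final StringBuilder text = new StringBuilder();
        for (final Object run : runs) {
            text.append(((JsonObject) run).getString("text"));
        }
        return text.toString();
    }
}
```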
This parameter used to throttle the bandwidth of streaming URLs that have it
when the correct value was not provided. That is no longer the case: such
streaming URLs now return an HTTP 403 response code instead.
This commit fixes extraction of the name of the function decoding the n parameter
for HTML5 clients' streaming URLs for YouTube base JavaScript player 3400486c.
Two new regexes have been added to the existing ones. All regexes and what they
extract have been documented.
- Fix typo in folder name of DescriptionTestPewdiepie test;
- Consistently use DownloaderTestImpl as the downloader implementation for the
  UnlistedTest and CCLicensed tests.
These changes work around an anti-bot token whose requirement is A/B tested on
the WEB client. In this test, streaming URLs of this client start returning
HTTP 403 errors after some time if the token is not provided.
The changes also make it possible not to fetch the JavaScript player for
non-age-restricted videos, reducing data usage.
The TVHTML5 embed client is now only fetched in the case of age-restricted
videos.
The methods forceFetchAndroidClient and forceFetchIosClient of
YoutubeStreamExtractor have been removed, as they are no longer needed.
These changes also break the extraction of appropriate error states for private
and deleted videos and invalid video IDs.
Previously, if no URL was provided because the track was a preorder or an
unpaid track without its own public page, we would concatenate the artist URL
and null into a nonsensical string.
This invalid URL was also used to try to fetch a thumbnail URL.
The downstream behavior of the NewPipe client is only marginally
improved, as it now no longer throws visible exceptions due to the
missing thumbnails, but still gives an unfriendly error upon clicking
the item that has a `null` URL.
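A minimal sketch of the guard, with hypothetical names; the real Bandcamp extractor code differs:

```java
import javax.annotation.Nullable;

// Hypothetical helper illustrating the fix: return null instead of building
// "<artistUrl>null" when the track has no public page of its own.
final class TrackUrlSketch {
    @Nullable
    static String buildTrackUrl(final String artistUrl, @Nullable final String titleLink) {
        if (titleLink == null) {
            // Preorder or unpaid track without its own page: no URL at all is
            // better than a nonsensical one.
            return null;
        }
        return artistUrl + titleLink;
    }
}
```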