diff --git a/CONTRIBUTORS b/CONTRIBUTORS
index a9a055742..9b8207b28 100644
--- a/CONTRIBUTORS
+++ b/CONTRIBUTORS
@@ -695,3 +695,15 @@ KBelmin
 kesor
 MellowKyler
 Wesley107772
+a13ssandr0
+ChocoLZS
+doe1080
+hugovdev
+jshumphrey
+julionc
+manavchaudhary1
+powergold1
+Sakura286
+SamDecrock
+stratus-ss
+subrat-lima
diff --git a/Changelog.md b/Changelog.md
index 2648b9fe2..41a2da744 100644
--- a/Changelog.md
+++ b/Changelog.md
@@ -4,6 +4,64 @@ # Changelog
 # To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
 -->

+### 2024.11.18
+
+#### Important changes
+- **Login with OAuth is no longer supported for YouTube**
+Due to a change made by the site, yt-dlp is no longer able to support OAuth login for YouTube. [Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)
+
+#### Core changes
+- [Catch broken Cryptodome installations](https://github.com/yt-dlp/yt-dlp/commit/b83ca24eb72e1e558b0185bd73975586c0bc0546) ([#11486](https://github.com/yt-dlp/yt-dlp/issues/11486)) by [seproDev](https://github.com/seproDev)
+- **utils**
+    - [Fix `join_nonempty`, add `**kwargs` to `unpack`](https://github.com/yt-dlp/yt-dlp/commit/39d79c9b9cf23411d935910685c40aa1a2fdb409) ([#11559](https://github.com/yt-dlp/yt-dlp/issues/11559)) by [Grub4K](https://github.com/Grub4K)
+    - `subs_list_to_dict`: [Add `lang` default parameter](https://github.com/yt-dlp/yt-dlp/commit/c014fbcddcb4c8f79d914ac5bb526758b540ea33) ([#11508](https://github.com/yt-dlp/yt-dlp/issues/11508)) by [Grub4K](https://github.com/Grub4K)
+
+#### Extractor changes
+- [Allow `ext` override for thumbnails](https://github.com/yt-dlp/yt-dlp/commit/eb64ae7d5def6df2aba74fb703e7f168fb299865) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
+- **adobepass**: [Fix provider requests](https://github.com/yt-dlp/yt-dlp/commit/85fdc66b6e01d19a94b4f39b58e3c0cf23600902) ([#11472](https://github.com/yt-dlp/yt-dlp/issues/11472)) by [bashonly](https://github.com/bashonly)
+- **archive.org**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/f2a4983df7a64c4e93b56f79dbd16a781bd90206) ([#11527](https://github.com/yt-dlp/yt-dlp/issues/11527)) by [jshumphrey](https://github.com/jshumphrey)
+- **bandlab**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/6365e92589e4bc17b8fffb0125a716d144ad2137) ([#11535](https://github.com/yt-dlp/yt-dlp/issues/11535)) by [seproDev](https://github.com/seproDev)
+- **chaturbate**
+    - [Extract from API and support impersonation](https://github.com/yt-dlp/yt-dlp/commit/720b3dc453c342bc2e8df7dbc0acaab4479de46c) ([#11555](https://github.com/yt-dlp/yt-dlp/issues/11555)) by [powergold1](https://github.com/powergold1) (With fixes in [7cecd29](https://github.com/yt-dlp/yt-dlp/commit/7cecd299e4a5ef1f0f044b2fedc26f17e41f15e3) by [seproDev](https://github.com/seproDev))
+    - [Support alternate domains](https://github.com/yt-dlp/yt-dlp/commit/a9f85670d03ab993dc589f21a9ffffcad61392d5) ([#10595](https://github.com/yt-dlp/yt-dlp/issues/10595)) by [manavchaudhary1](https://github.com/manavchaudhary1)
+- **cloudflarestream**: [Avoid extraction via videodelivery.net](https://github.com/yt-dlp/yt-dlp/commit/2db8c2e7d57a1784b06057c48e3e91023720d195) ([#11478](https://github.com/yt-dlp/yt-dlp/issues/11478)) by [hugovdev](https://github.com/hugovdev)
+- **ctvnews**
+    - [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/f351440f1dc5b3dfbfc5737b037a869d946056fe) 
([#11534](https://github.com/yt-dlp/yt-dlp/issues/11534)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey) + - [Fix playlist ID extraction](https://github.com/yt-dlp/yt-dlp/commit/f9d98509a898737c12977b2e2117277bada2c196) ([#8892](https://github.com/yt-dlp/yt-dlp/issues/8892)) by [qbnu](https://github.com/qbnu) +- **digitalconcerthall**: [Support login with access/refresh tokens](https://github.com/yt-dlp/yt-dlp/commit/f7257588bdff5f0b0452635a66b253a783c97357) ([#11571](https://github.com/yt-dlp/yt-dlp/issues/11571)) by [bashonly](https://github.com/bashonly) +- **facebook**: [Fix formats extraction](https://github.com/yt-dlp/yt-dlp/commit/bacc31b05a04181b63100c481565256b14813a5e) ([#11513](https://github.com/yt-dlp/yt-dlp/issues/11513)) by [bashonly](https://github.com/bashonly) +- **gamedevtv**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8) ([#11368](https://github.com/yt-dlp/yt-dlp/issues/11368)) by [bashonly](https://github.com/bashonly), [stratus-ss](https://github.com/stratus-ss) +- **goplay**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/6b43a8d84b881d769b480ba6e20ec691e9d1b92d) ([#11466](https://github.com/yt-dlp/yt-dlp/issues/11466)) by [bashonly](https://github.com/bashonly), [SamDecrock](https://github.com/SamDecrock) +- **kenh14**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/eb15fd5a32d8b35ef515f7a3d1158c03025648ff) ([#3996](https://github.com/yt-dlp/yt-dlp/issues/3996)) by [krichbanana](https://github.com/krichbanana), [pzhlkj6612](https://github.com/pzhlkj6612) +- **litv**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/e079ffbda66de150c0a9ebef05e89f61bb4d5f76) ([#11071](https://github.com/yt-dlp/yt-dlp/issues/11071)) by [jiru](https://github.com/jiru) +- **mixchmovie**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/0ec9bfed4d4a52bfb4f8733da1acf0aeeae21e6b) ([#10897](https://github.com/yt-dlp/yt-dlp/issues/10897)) by [Sakura286](https://github.com/Sakura286) +- **patreon**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/1d253b0a27110d174c40faf8fb1c999d099e0cde) ([#11530](https://github.com/yt-dlp/yt-dlp/issues/11530)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey) +- **pialive**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/d867f99622ef7fba690b08da56c39d739b822bb7) ([#10811](https://github.com/yt-dlp/yt-dlp/issues/10811)) by [ChocoLZS](https://github.com/ChocoLZS) +- **radioradicale**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/70c55cb08f780eab687e881ef42bb5c6007d290b) ([#5607](https://github.com/yt-dlp/yt-dlp/issues/5607)) by [a13ssandr0](https://github.com/a13ssandr0), [pzhlkj6612](https://github.com/pzhlkj6612) +- **reddit**: [Improve error handling](https://github.com/yt-dlp/yt-dlp/commit/7ea2787920cccc6b8ea30791993d114fbd564434) ([#11573](https://github.com/yt-dlp/yt-dlp/issues/11573)) by [bashonly](https://github.com/bashonly) +- **redgifsuser**: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/d215fba7edb69d4fa665f43663756fd260b1489f) ([#11531](https://github.com/yt-dlp/yt-dlp/issues/11531)) by [jshumphrey](https://github.com/jshumphrey) +- **rutube**: [Rework extractors](https://github.com/yt-dlp/yt-dlp/commit/e398217aae19bb25f91797bfbe8a3243698d7f45) ([#11480](https://github.com/yt-dlp/yt-dlp/issues/11480)) by [seproDev](https://github.com/seproDev) +- **sonylivseries**: [Add `sort_order` 
extractor-arg](https://github.com/yt-dlp/yt-dlp/commit/2009cb27e17014787bf63eaa2ada51293d54f22a) ([#11569](https://github.com/yt-dlp/yt-dlp/issues/11569)) by [bashonly](https://github.com/bashonly) +- **soop**: [Fix thumbnail extraction](https://github.com/yt-dlp/yt-dlp/commit/c699bafc5038b59c9afe8c2e69175fb66424c832) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly) +- **spankbang**: [Support browser impersonation](https://github.com/yt-dlp/yt-dlp/commit/8388ec256f7753b02488788e3cfa771f6e1db247) ([#11542](https://github.com/yt-dlp/yt-dlp/issues/11542)) by [jshumphrey](https://github.com/jshumphrey) +- **spreaker** + - [Support episode pages and access keys](https://github.com/yt-dlp/yt-dlp/commit/c39016f66df76d14284c705736ca73db8055d8de) ([#11489](https://github.com/yt-dlp/yt-dlp/issues/11489)) by [julionc](https://github.com/julionc) + - [Support podcast and feed pages](https://github.com/yt-dlp/yt-dlp/commit/c6737310619022248f5d0fd13872073cac168453) ([#10968](https://github.com/yt-dlp/yt-dlp/issues/10968)) by [subrat-lima](https://github.com/subrat-lima) +- **youtube** + - [Player client maintenance](https://github.com/yt-dlp/yt-dlp/commit/637d62a3a9fc723d68632c1af25c30acdadeeb85) ([#11528](https://github.com/yt-dlp/yt-dlp/issues/11528)) by [bashonly](https://github.com/bashonly), [seproDev](https://github.com/seproDev) + - [Remove broken OAuth support](https://github.com/yt-dlp/yt-dlp/commit/52c0ffe40ad6e8404d93296f575007b05b04c686) ([#11558](https://github.com/yt-dlp/yt-dlp/issues/11558)) by [bashonly](https://github.com/bashonly) + - tab: [Fix podcasts tab extraction](https://github.com/yt-dlp/yt-dlp/commit/37cd7660eaff397c551ee18d80507702342b0c2b) ([#11567](https://github.com/yt-dlp/yt-dlp/issues/11567)) by [seproDev](https://github.com/seproDev) + +#### Misc. 
changes
+- **build**
+    - [Bump PyInstaller version pin to `>=6.11.1`](https://github.com/yt-dlp/yt-dlp/commit/f9c8deb4e5887ff5150e911ac0452e645f988044) ([#11507](https://github.com/yt-dlp/yt-dlp/issues/11507)) by [bashonly](https://github.com/bashonly)
+    - [Enable attestations for trusted publishing](https://github.com/yt-dlp/yt-dlp/commit/f13df591d4d7ca8e2f31b35c9c91e69ba9e9b013) ([#11420](https://github.com/yt-dlp/yt-dlp/issues/11420)) by [bashonly](https://github.com/bashonly)
+    - [Pin `websockets` version to >=13.0,<14](https://github.com/yt-dlp/yt-dlp/commit/240a7d43c8a67ffb86d44dc276805aa43c358dcc) ([#11488](https://github.com/yt-dlp/yt-dlp/issues/11488)) by [bashonly](https://github.com/bashonly)
+- **cleanup**
+    - [Deprecate more compat functions](https://github.com/yt-dlp/yt-dlp/commit/f95a92b3d0169a784ee15a138fbe09d82b2754a1) ([#11439](https://github.com/yt-dlp/yt-dlp/issues/11439)) by [seproDev](https://github.com/seproDev)
+    - [Remove dead extractors](https://github.com/yt-dlp/yt-dlp/commit/10fc719bc7f1eef469389c5219102266ef411f29) ([#11566](https://github.com/yt-dlp/yt-dlp/issues/11566)) by [doe1080](https://github.com/doe1080)
+    - Miscellaneous: [da252d9](https://github.com/yt-dlp/yt-dlp/commit/da252d9d322af3e2178ac5eae324809502a0a862) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K), [seproDev](https://github.com/seproDev)
+
 ### 2024.11.04

 #### Important changes
diff --git a/README.md b/README.md
index 09096218e..dd3a3189b 100644
--- a/README.md
+++ b/README.md
@@ -342,8 +342,9 @@ ## General Options:
                                     extractor plugins; postprocessor plugins
                                     can only be loaded from the default plugin
                                     directories
-    --flat-playlist                 Do not extract the videos of a playlist,
-                                    only list them
+    --flat-playlist                 Do not extract a playlist's URL result
+                                    entries; some entry metadata may be missing
+                                    and downloading may be bypassed
     --no-flat-playlist              Fully extract the videos of a playlist
                                     (default)
     --live-from-start               Download livestreams from the start.
@@ -1866,8 +1867,8 @@ #### orfon (orf:on)
 #### bilibili
 * `prefer_multi_flv`: Prefer extracting flv formats over mp4 for older videos that still provide legacy formats

-#### digitalconcerthall
-* `prefer_combined_hls`: Prefer extracting combined/pre-merged video and audio HLS formats. This will exclude 4K/HEVC video and lossless/FLAC audio formats, which are only available as split video/audio HLS formats
+#### sonylivseries
+* `sort_order`: Episode sort order for series extraction - one of `asc` (ascending, oldest first) or `desc` (descending, newest first). Default is `asc`

 **Note**: These options may be changed/removed in the future without concern for backward compatibility

diff --git a/devscripts/changelog_override.json b/devscripts/changelog_override.json
index 08ea9666e..906e5cf72 100644
--- a/devscripts/changelog_override.json
+++ b/devscripts/changelog_override.json
@@ -234,5 +234,10 @@
         "when": "57212a5f97ce367590aaa5c3e9a135eead8f81f7",
         "short": "[ie/vimeo] Fix API retries (#11351)",
         "authors": ["bashonly"]
+    },
+    {
+        "action": "add",
+        "when": "52c0ffe40ad6e8404d93296f575007b05b04c686",
+        "short": "[priority] **Login with OAuth is no longer supported for YouTube**\nDue to a change made by the site, yt-dlp is no longer able to support OAuth login for YouTube. 
[Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)" } ] diff --git a/supportedsites.md b/supportedsites.md index fc79e4ae6..916735e08 100644 --- a/supportedsites.md +++ b/supportedsites.md @@ -129,6 +129,8 @@ # Supported sites - **Bandcamp:album** - **Bandcamp:user** - **Bandcamp:weekly** + - **Bandlab** + - **BandlabPlaylist** - **BannedVideo** - **bbc**: [*bbc*](## "netrc machine") BBC - **bbc.co.uk**: [*bbc*](## "netrc machine") BBC iPlayer @@ -484,6 +486,7 @@ # Supported sites - **Gab** - **GabTV** - **Gaia**: [*gaia*](## "netrc machine") + - **GameDevTVDashboard**: [*gamedevtv*](## "netrc machine") - **GameJolt** - **GameJoltCommunity** - **GameJoltGame** @@ -651,6 +654,8 @@ # Supported sites - **Karaoketv** - **Katsomo**: (**Currently broken**) - **KelbyOne**: (**Currently broken**) + - **Kenh14Playlist** + - **Kenh14Video** - **Ketnet** - **khanacademy** - **khanacademy:unit** @@ -784,10 +789,6 @@ # Supported sites - **MicrosoftLearnSession** - **MicrosoftMedius** - **microsoftstream**: Microsoft Stream - - **mildom**: Record ongoing live by specific user in Mildom - - **mildom:clip**: Clip in Mildom - - **mildom:​user:vod**: Download all VODs from specific user in Mildom - - **mildom:vod**: VOD in Mildom - **minds** - **minds:channel** - **minds:group** @@ -798,6 +799,7 @@ # Supported sites - **MiTele**: mitele.es - **mixch** - **mixch:archive** + - **mixch:movie** - **mixcloud** - **mixcloud:playlist** - **mixcloud:user** @@ -1060,8 +1062,8 @@ # Supported sites - **PhilharmonieDeParis**: Philharmonie de Paris - **phoenix.de** - **Photobucket** + - **PiaLive** - **Piapro**: [*piapro*](## "netrc machine") - - **PIAULIZAPortal**: ulizaportal.jp - PIA LIVE STREAM - **Picarto** - **PicartoVod** - **Piksel** @@ -1088,8 +1090,6 @@ # Supported sites - **PodbayFMChannel** - **Podchaser** - **podomatic**: (**Currently broken**) - - **Pokemon** - - **PokemonWatch** - **PokerGo**: [*pokergo*](## "netrc machine") - **PokerGoCollection**: [*pokergo*](## "netrc machine") - **PolsatGo** @@ -1160,6 +1160,7 @@ # Supported sites - **RadioJavan**: (**Currently broken**) - **radiokapital** - **radiokapital:show** + - **RadioRadicale** - **RadioZetPodcast** - **radlive** - **radlive:channel** @@ -1367,9 +1368,7 @@ # Supported sites - **spotify**: Spotify episodes (**Currently broken**) - **spotify:show**: Spotify shows (**Currently broken**) - **Spreaker** - - **SpreakerPage** - **SpreakerShow** - - **SpreakerShowPage** - **SpringboardPlatform** - **Sprout** - **SproutVideo** @@ -1570,6 +1569,8 @@ # Supported sites - **UFCTV**: [*ufctv*](## "netrc machine") - **ukcolumn**: (**Currently broken**) - **UKTVPlay** + - **UlizaPlayer** + - **UlizaPortal**: ulizaportal.jp - **umg:de**: Universal Music Deutschland (**Currently broken**) - **Unistra** - **Unity**: (**Currently broken**) @@ -1587,8 +1588,6 @@ # Supported sites - **Varzesh3**: (**Currently broken**) - **Vbox7** - **Veo** - - **Veoh** - - **veoh:user** - **Vesti**: Вести.Ru (**Currently broken**) - **Vevo** - **VevoPlaylist** diff --git a/yt_dlp/extractor/_extractors.py b/yt_dlp/extractor/_extractors.py index cb402103d..cf8257318 100644 --- a/yt_dlp/extractor/_extractors.py +++ b/yt_dlp/extractor/_extractors.py @@ -1139,12 +1139,6 @@ MicrosoftMediusIE, ) from .microsoftstream import MicrosoftStreamIE -from .mildom import ( - MildomClipIE, - MildomIE, - MildomUserVodIE, - MildomVodIE, -) from .minds import ( MindsChannelIE, MindsGroupIE, @@ -1527,8 +1521,8 @@ from .philharmoniedeparis import PhilharmonieDeParisIE 
 from .phoenix import PhoenixIE
 from .photobucket import PhotobucketIE
+from .pialive import PiaLiveIE
 from .piapro import PiaproIE
-from .piaulizaportal import PIAULIZAPortalIE
 from .picarto import (
     PicartoIE,
     PicartoVodIE,
@@ -1564,10 +1558,6 @@
 )
 from .podchaser import PodchaserIE
 from .podomatic import PodomaticIE
-from .pokemon import (
-    PokemonIE,
-    PokemonWatchIE,
-)
 from .pokergo import (
     PokerGoCollectionIE,
     PokerGoIE,
@@ -2261,6 +2251,10 @@
 )
 from .ukcolumn import UkColumnIE
 from .uktvplay import UKTVPlayIE
+from .uliza import (
+    UlizaPlayerIE,
+    UlizaPortalIE,
+)
 from .umg import UMGDeIE
 from .unistra import UnistraIE
 from .unity import UnityIE
@@ -2289,10 +2283,6 @@
 from .varzesh3 import Varzesh3IE
 from .vbox7 import Vbox7IE
 from .veo import VeoIE
-from .veoh import (
-    VeohIE,
-    VeohUserIE,
-)
 from .vesti import VestiIE
 from .vevo import (
     VevoIE,
diff --git a/yt_dlp/extractor/bandlab.py b/yt_dlp/extractor/bandlab.py
index e48d5d3f7..64aa2ba70 100644
--- a/yt_dlp/extractor/bandlab.py
+++ b/yt_dlp/extractor/bandlab.py
@@ -1,4 +1,3 @@
-
 from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
diff --git a/yt_dlp/extractor/common.py b/yt_dlp/extractor/common.py
index 2aa40a77a..28a3adf93 100644
--- a/yt_dlp/extractor/common.py
+++ b/yt_dlp/extractor/common.py
@@ -3767,7 +3767,7 @@ def _merge_subtitles(cls, *dicts, target=None):
         """ Merge subtitle dictionaries, language by language. """
         if target is None:
             target = {}
-        for d in dicts:
+        for d in filter(None, dicts):
             for lang, subs in d.items():
                 target[lang] = cls._merge_subtitle_items(target.get(lang, []), subs)
         return target
diff --git a/yt_dlp/extractor/ctvnews.py b/yt_dlp/extractor/ctvnews.py
index ebed9eb2d..6d33f85e4 100644
--- a/yt_dlp/extractor/ctvnews.py
+++ b/yt_dlp/extractor/ctvnews.py
@@ -1,14 +1,27 @@
+import json
 import re
+import urllib.parse

 from .common import InfoExtractor
-from ..utils import orderedSet
+from .ninecninemedia import NineCNineMediaIE
+from ..utils import extract_attributes, orderedSet
+from ..utils.traversal import find_element, traverse_obj


 class CTVNewsIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:.+?\.)?ctvnews\.ca/(?:video\?(?:clip|playlist|bin)Id=|.*?)(?P<id>[0-9.]+)'
+    _BASE_REGEX = r'https?://(?:[^.]+\.)?ctvnews\.ca/'
+    _VIDEO_ID_RE = r'(?P<id>\d{5,})'
+    _PLAYLIST_ID_RE = r'(?P<id>\d\.\d{5,})'
+    _VALID_URL = [
+        rf'{_BASE_REGEX}video/c{_VIDEO_ID_RE}',
+        rf'{_BASE_REGEX}video(?:-gallery)?/?\?clipId={_VIDEO_ID_RE}',
+        rf'{_BASE_REGEX}video/?\?(?:playlist|bin)Id={_PLAYLIST_ID_RE}',
+        rf'{_BASE_REGEX}(?!video/)[^?#]*?{_PLAYLIST_ID_RE}/?(?:$|[?#])',
+        rf'{_BASE_REGEX}(?!video/)[^?#]+\?binId={_PLAYLIST_ID_RE}',
+    ]
     _TESTS = [{
         'url': 'http://www.ctvnews.ca/video?clipId=901995',
-        'md5': '9b8624ba66351a23e0b6e1391971f9af',
+        'md5': 'b608f466c7fa24b9666c6439d766ab7e',
         'info_dict': {
             'id': '901995',
             'ext': 'flv',
@@ -16,6 +29,33 @@ class CTVNewsIE(InfoExtractor):
             'description': 'md5:958dd3b4f5bbbf0ed4d045c790d89285',
             'timestamp': 1467286284,
             'upload_date': '20160630',
+            'categories': [],
+            'season_number': 0,
+            'season': 'Season 0',
+            'tags': [],
+            'series': 'CTV News National | Archive | Stories 2',
+            'season_id': '57981',
+            'thumbnail': r're:https?://.*\.jpg$',
+            'duration': 764.631,
+        },
+    }, {
+        'url': 'https://barrie.ctvnews.ca/video/c3030933-here_s-what_s-making-news-for-nov--15?binId=1272429',
+        'md5': '8b8c2b33c5c1803e3c26bc74ff8694d5',
+        'info_dict': {
+            'id': '3030933',
+            'ext': 'flv',
+            'title': 'Here’s what’s making news for Nov. 
15', + 'description': 'Here are the top stories we’re working on for CTV News at 11 for Nov. 15', + 'thumbnail': 'http://images2.9c9media.com/image_asset/2021_2_22_a602e68e-1514-410e-a67a-e1f7cccbacab_png_2000x1125.jpg', + 'season_id': '58104', + 'season_number': 0, + 'tags': [], + 'season': 'Season 0', + 'categories': [], + 'series': 'CTV News Barrie', + 'upload_date': '20241116', + 'duration': 42.943, + 'timestamp': 1731722452, }, }, { 'url': 'http://www.ctvnews.ca/video?playlistId=1.2966224', @@ -31,6 +71,72 @@ class CTVNewsIE(InfoExtractor): 'id': '1.2876780', }, 'playlist_mincount': 100, + }, { + 'url': 'https://www.ctvnews.ca/it-s-been-23-years-since-toronto-called-in-the-army-after-a-major-snowstorm-1.5736957', + 'info_dict': + { + 'id': '1.5736957', + }, + 'playlist_mincount': 6, + }, { + 'url': 'https://www.ctvnews.ca/business/respondents-to-bank-of-canada-questionnaire-largely-oppose-creating-a-digital-loonie-1.6665797', + 'md5': '24bc4b88cdc17d8c3fc01dfc228ab72c', + 'info_dict': { + 'id': '2695026', + 'ext': 'flv', + 'season_id': '89852', + 'series': 'From CTV News Channel', + 'description': 'md5:796a985a23cacc7e1e2fafefd94afd0a', + 'season': '2023', + 'title': 'Bank of Canada asks public about digital currency', + 'categories': [], + 'tags': [], + 'upload_date': '20230526', + 'season_number': 2023, + 'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg', + 'timestamp': 1685105157, + 'duration': 253.553, + }, + }, { + 'url': 'https://stox.ctvnews.ca/video-gallery?clipId=582589', + 'md5': '135cc592df607d29dddc931f1b756ae2', + 'info_dict': { + 'id': '582589', + 'ext': 'flv', + 'categories': [], + 'timestamp': 1427906183, + 'season_number': 0, + 'duration': 125.559, + 'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg', + 'series': 'CTV News Stox', + 'description': 'CTV original footage of the rise and fall of the Berlin Wall.', + 'title': 'Berlin Wall', + 'season_id': '63817', + 'season': 'Season 0', + 'tags': [], + 'upload_date': '20150401', + }, + }, { + 'url': 'https://ottawa.ctvnews.ca/features/regional-contact/regional-contact-archive?binId=1.1164587#3023759', + 'md5': 'a14c0603557decc6531260791c23cc5e', + 'info_dict': { + 'id': '3023759', + 'ext': 'flv', + 'season_number': 2024, + 'timestamp': 1731798000, + 'season': '2024', + 'episode': 'Episode 125', + 'description': 'CTV News Ottawa at Six', + 'duration': 2712.076, + 'episode_number': 125, + 'upload_date': '20241116', + 'title': 'CTV News Ottawa at Six for Saturday, November 16, 2024', + 'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg', + 'categories': [], + 'tags': [], + 'series': 'CTV News Ottawa at Six', + 'season_id': '92667', + }, }, { 'url': 'http://www.ctvnews.ca/1.810401', 'only_matching': True, @@ -42,29 +148,35 @@ class CTVNewsIE(InfoExtractor): 'only_matching': True, }] + def _ninecninemedia_url_result(self, clip_id): + return self.url_result(f'9c9media:ctvnews_web:{clip_id}', NineCNineMediaIE, clip_id) + def _real_extract(self, url): page_id = self._match_id(url) - def ninecninemedia_url_result(clip_id): - return { - '_type': 'url_transparent', - 'id': clip_id, - 'url': f'9c9media:ctvnews_web:{clip_id}', - 'ie_key': 'NineCNineMedia', - } + if mobj := re.fullmatch(self._VIDEO_ID_RE, urllib.parse.urlparse(url).fragment): + page_id = mobj.group('id') - if page_id.isdigit(): - return 
ninecninemedia_url_result(page_id)
-        else:
-            webpage = self._download_webpage(f'http://www.ctvnews.ca/{page_id}', page_id, query={
-                'ot': 'example.AjaxPageLayout.ot',
-                'maxItemsPerPage': 1000000,
-            })
-            entries = [ninecninemedia_url_result(clip_id) for clip_id in orderedSet(
-                re.findall(r'clip\.id\s*=\s*(\d+);', webpage))]
-            if not entries:
-                webpage = self._download_webpage(url, page_id)
-                if 'getAuthStates("' in webpage:
-                    entries = [ninecninemedia_url_result(clip_id) for clip_id in
-                               self._search_regex(r'getAuthStates\("([\d+,]+)"', webpage, 'clip ids').split(',')]
-            return self.playlist_result(entries, page_id)
+        if re.fullmatch(self._VIDEO_ID_RE, page_id):
+            return self._ninecninemedia_url_result(page_id)
+
+        webpage = self._download_webpage(f'https://www.ctvnews.ca/{page_id}', page_id, query={
+            'ot': 'example.AjaxPageLayout.ot',
+            'maxItemsPerPage': 1000000,
+        })
+        entries = [self._ninecninemedia_url_result(clip_id)
+                   for clip_id in orderedSet(re.findall(r'clip\.id\s*=\s*(\d+);', webpage))]
+        if not entries:
+            webpage = self._download_webpage(url, page_id)
+            if 'getAuthStates("' in webpage:
+                entries = [self._ninecninemedia_url_result(clip_id) for clip_id in
+                           self._search_regex(r'getAuthStates\("([\d+,]+)"', webpage, 'clip ids').split(',')]
+            else:
+                entries = [
+                    self._ninecninemedia_url_result(clip_id) for clip_id in
+                    traverse_obj(webpage, (
+                        {find_element(tag='jasper-player-container', html=True)},
+                        {extract_attributes}, 'axis-ids', {json.loads}, ..., 'axisId', {str}))
+                ]
+
+        return self.playlist_result(entries, page_id)
diff --git a/yt_dlp/extractor/digitalconcerthall.py b/yt_dlp/extractor/digitalconcerthall.py
index edb6fa9c0..4c4fe470d 100644
--- a/yt_dlp/extractor/digitalconcerthall.py
+++ b/yt_dlp/extractor/digitalconcerthall.py
@@ -1,7 +1,10 @@
+import time
+
 from .common import InfoExtractor
 from ..networking.exceptions import HTTPError
 from ..utils import (
     ExtractorError,
+    jwt_decode_hs256,
     parse_codecs,
     try_get,
     url_or_none,
@@ -13,9 +16,6 @@ class DigitalConcertHallIE(InfoExtractor):
     IE_DESC = 'DigitalConcertHall extractor'
     _VALID_URL = r'https?://(?:www\.)?digitalconcerthall\.com/(?P<language>[a-z]+)/(?P<type>film|concert|work)/(?P<id>[0-9]+)-?(?P<part>[0-9]+)?'
- _OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token' - _USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15' - _ACCESS_TOKEN = None _NETRC_MACHINE = 'digitalconcerthall' _TESTS = [{ 'note': 'Playlist with only one video', @@ -69,59 +69,157 @@ class DigitalConcertHallIE(InfoExtractor): 'params': {'skip_download': 'm3u8'}, 'playlist_count': 1, }] + _LOGIN_HINT = ('Use --username token --password ACCESS_TOKEN where ACCESS_TOKEN ' + 'is the "access_token_production" from your browser local storage') + _REFRESH_HINT = 'or else use a "refresh_token" with --username refresh --password REFRESH_TOKEN' + _OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token' + _CLIENT_ID = 'dch.webapp' + _CLIENT_SECRET = '2ySLN+2Fwb' + _USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15' + _OAUTH_HEADERS = { + 'Accept': 'application/json', + 'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8', + 'Origin': 'https://www.digitalconcerthall.com', + 'Referer': 'https://www.digitalconcerthall.com/', + 'User-Agent': _USER_AGENT, + } + _access_token = None + _access_token_expiry = 0 + _refresh_token = None - def _perform_login(self, username, password): - login_token = self._download_json( - self._OAUTH_URL, - None, 'Obtaining token', errnote='Unable to obtain token', data=urlencode_postdata({ + @property + def _access_token_is_expired(self): + return self._access_token_expiry - 30 <= int(time.time()) + + def _set_access_token(self, value): + self._access_token = value + self._access_token_expiry = traverse_obj(value, ({jwt_decode_hs256}, 'exp', {int})) or 0 + + def _cache_tokens(self, /): + self.cache.store(self._NETRC_MACHINE, 'tokens', { + 'access_token': self._access_token, + 'refresh_token': self._refresh_token, + }) + + def _fetch_new_tokens(self, invalidate=False): + if invalidate: + self.report_warning('Access token has been invalidated') + self._set_access_token(None) + + if not self._access_token_is_expired: + return + + if not self._refresh_token: + self._set_access_token(None) + self._cache_tokens() + raise ExtractorError( + 'Access token has expired or been invalidated. 
' + 'Get a new "access_token_production" value from your browser ' + f'and try again, {self._REFRESH_HINT}', expected=True) + + # If we only have a refresh token, we need a temporary "initial token" for the refresh flow + bearer_token = self._access_token or self._download_json( + self._OAUTH_URL, None, 'Obtaining initial token', 'Unable to obtain initial token', + data=urlencode_postdata({ 'affiliate': 'none', 'grant_type': 'device', 'device_vendor': 'unknown', - # device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio - 'device_model': 'unknown' if self._configuration_arg('prefer_combined_hls') else 'Safari', - 'app_id': 'dch.webapp', + # device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio, + # but this is no longer effective since actual login is not possible anymore + 'device_model': 'unknown', + 'app_id': self._CLIENT_ID, 'app_distributor': 'berlinphil', - 'app_version': '1.84.0', - 'client_secret': '2ySLN+2Fwb', - }), headers={ - 'Accept': 'application/json', - 'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8', - 'User-Agent': self._USER_AGENT, - })['access_token'] + 'app_version': '1.95.0', + 'client_secret': self._CLIENT_SECRET, + }), headers=self._OAUTH_HEADERS)['access_token'] + try: - login_response = self._download_json( - self._OAUTH_URL, - None, note='Logging in', errnote='Unable to login', data=urlencode_postdata({ - 'grant_type': 'password', - 'username': username, - 'password': password, + response = self._download_json( + self._OAUTH_URL, None, 'Refreshing token', 'Unable to refresh token', + data=urlencode_postdata({ + 'grant_type': 'refresh_token', + 'refresh_token': self._refresh_token, + 'client_id': self._CLIENT_ID, + 'client_secret': self._CLIENT_SECRET, }), headers={ - 'Accept': 'application/json', - 'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8', - 'Referer': 'https://www.digitalconcerthall.com', - 'Authorization': f'Bearer {login_token}', - 'User-Agent': self._USER_AGENT, + **self._OAUTH_HEADERS, + 'Authorization': f'Bearer {bearer_token}', }) - except ExtractorError as error: - if isinstance(error.cause, HTTPError) and error.cause.status == 401: - raise ExtractorError('Invalid username or password', expected=True) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + self._set_access_token(None) + self._refresh_token = None + self._cache_tokens() + raise ExtractorError('Your tokens have been invalidated', expected=True) raise - self._ACCESS_TOKEN = login_response['access_token'] + + self._set_access_token(response['access_token']) + if refresh_token := traverse_obj(response, ('refresh_token', {str})): + self.write_debug('New refresh token granted') + self._refresh_token = refresh_token + self._cache_tokens() + + def _perform_login(self, username, password): + self.report_login() + + if username == 'refresh': + self._refresh_token = password + self._fetch_new_tokens() + + if username == 'token': + if not traverse_obj(password, {jwt_decode_hs256}): + raise ExtractorError( + f'The access token passed to yt-dlp is not valid. {self._LOGIN_HINT}', expected=True) + self._set_access_token(password) + self._cache_tokens() + + if username in ('refresh', 'token'): + if self.get_param('cachedir') is not False: + token_type = 'access' if username == 'token' else 'refresh' + self.to_screen(f'Your {token_type} token has been cached to disk. 
To use the cached ' + 'token next time, pass --username cache along with any password') + return + + if username != 'cache': + raise ExtractorError( + 'Login with username and password is no longer supported ' + f'for this site. {self._LOGIN_HINT}, {self._REFRESH_HINT}', expected=True) + + # Try cached access_token + cached_tokens = self.cache.load(self._NETRC_MACHINE, 'tokens', default={}) + self._set_access_token(cached_tokens.get('access_token')) + self._refresh_token = cached_tokens.get('refresh_token') + if not self._access_token_is_expired: + return + + # Try cached refresh_token + self._fetch_new_tokens(invalidate=True) def _real_initialize(self): - if not self._ACCESS_TOKEN: - self.raise_login_required(method='password') + if not self._access_token: + self.raise_login_required( + 'All content on this site is only available for registered users. ' + f'{self._LOGIN_HINT}, {self._REFRESH_HINT}', method=None) def _entries(self, items, language, type_, **kwargs): for item in items: video_id = item['id'] - stream_info = self._download_json( - self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={ - 'Accept': 'application/json', - 'Authorization': f'Bearer {self._ACCESS_TOKEN}', - 'Accept-Language': language, - 'User-Agent': self._USER_AGENT, - }) + + for should_retry in (True, False): + self._fetch_new_tokens(invalidate=not should_retry) + try: + stream_info = self._download_json( + self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={ + 'Accept': 'application/json', + 'Authorization': f'Bearer {self._access_token}', + 'Accept-Language': language, + 'User-Agent': self._USER_AGENT, + }) + break + except ExtractorError as error: + if should_retry and isinstance(error.cause, HTTPError) and error.cause.status == 401: + continue + raise formats = [] for m3u8_url in traverse_obj(stream_info, ('channel', ..., 'stream', ..., 'url', {url_or_none})): @@ -157,7 +255,6 @@ def _real_extract(self, url): 'Accept': 'application/json', 'Accept-Language': language, 'User-Agent': self._USER_AGENT, - 'Authorization': f'Bearer {self._ACCESS_TOKEN}', }) videos = [vid_info] if type_ == 'film' else traverse_obj(vid_info, ('_embedded', ..., ...)) diff --git a/yt_dlp/extractor/facebook.py b/yt_dlp/extractor/facebook.py index 91e2f3489..c07efcd58 100644 --- a/yt_dlp/extractor/facebook.py +++ b/yt_dlp/extractor/facebook.py @@ -569,7 +569,7 @@ def extract_dash_manifest(vid_data, formats, mpd_url=None): if dash_manifest: formats.extend(self._parse_mpd_formats( compat_etree_fromstring(urllib.parse.unquote_plus(dash_manifest)), - mpd_url=url_or_none(video.get('dash_manifest_url')) or mpd_url)) + mpd_url=url_or_none(vid_data.get('dash_manifest_url')) or mpd_url)) def process_formats(info): # Downloads with browser's User-Agent are rate limited. 
Working around
diff --git a/yt_dlp/extractor/litv.py b/yt_dlp/extractor/litv.py
index 93f926a9f..df9d141de 100644
--- a/yt_dlp/extractor/litv.py
+++ b/yt_dlp/extractor/litv.py
@@ -1,30 +1,32 @@
 import json
+import uuid

 from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
     int_or_none,
+    join_nonempty,
     smuggle_url,
     traverse_obj,
     try_call,
     unsmuggle_url,
+    urljoin,
 )


 class LiTVIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?litv\.tv/(?:vod|promo)/[^/]+/(?:content\.do)?\?.*?\b(?:content_)?id=(?P<id>[^&]+)'
-
-    _URL_TEMPLATE = 'https://www.litv.tv/vod/%s/content.do?content_id=%s'
-
+    _VALID_URL = r'https?://(?:www\.)?litv\.tv/(?:[^/?#]+/watch/|vod/[^/?#]+/content\.do\?content_id=)(?P<id>[\w-]+)'
+    _URL_TEMPLATE = 'https://www.litv.tv/%s/watch/%s'
+    _GEO_COUNTRIES = ['TW']
     _TESTS = [{
-        'url': 'https://www.litv.tv/vod/drama/content.do?brc_id=root&id=VOD00041610&isUHEnabled=true&autoPlay=1',
+        'url': 'https://www.litv.tv/drama/watch/VOD00041610',
         'info_dict': {
             'id': 'VOD00041606',
             'title': '花千骨',
         },
         'playlist_count': 51,  # 50 episodes + 1 trailer
     }, {
-        'url': 'https://www.litv.tv/vod/drama/content.do?brc_id=root&id=VOD00041610&isUHEnabled=true&autoPlay=1',
+        'url': 'https://www.litv.tv/drama/watch/VOD00041610',
         'md5': 'b90ff1e9f1d8f5cfcd0a44c3e2b34c7a',
         'info_dict': {
             'id': 'VOD00041610',
@@ -32,16 +34,15 @@ class LiTVIE(InfoExtractor):
             'title': '花千骨第1集',
             'thumbnail': r're:https?://.*\.jpg$',
             'description': '《花千骨》陸劇線上看。十六年前,平靜的村莊內,一名女嬰隨異相出生,途徑此地的蜀山掌門清虛道長算出此女命運非同一般,她體內散發的異香易招惹妖魔。一念慈悲下,他在村莊周邊設下結界阻擋妖魔入侵,讓其年滿十六後去蜀山,並賜名花千骨。',
-            'categories': ['奇幻', '愛情', '中國', '仙俠'],
+            'categories': ['奇幻', '愛情', '仙俠', '古裝'],
             'episode': 'Episode 1',
             'episode_number': 1,
         },
         'params': {
             'noplaylist': True,
         },
-        'skip': 'Georestricted to Taiwan',
     }, {
-        'url': 'https://www.litv.tv/promo/miyuezhuan/?content_id=VOD00044841&',
+        'url': 'https://www.litv.tv/drama/watch/VOD00044841',
         'md5': '88322ea132f848d6e3e18b32a832b918',
         'info_dict': {
             'id': 'VOD00044841',
@@ -55,94 +56,62 @@ class LiTVIE(InfoExtractor):
     def _extract_playlist(self, playlist_data, content_type):
         all_episodes = [
             self.url_result(smuggle_url(
-                self._URL_TEMPLATE % (content_type, episode['contentId']),
+                self._URL_TEMPLATE % (content_type, episode['content_id']),
                 {'force_noplaylist': True}))  # To prevent infinite recursion
-            for episode in traverse_obj(playlist_data, ('seasons', ..., 'episode', lambda _, v: v['contentId']))]
+            for episode in traverse_obj(playlist_data, ('seasons', ..., 'episodes', lambda _, v: v['content_id']))]

-        return self.playlist_result(all_episodes, playlist_data['contentId'], playlist_data.get('title'))
+        return self.playlist_result(all_episodes, playlist_data['content_id'], playlist_data.get('title'))

     def _real_extract(self, url):
         url, smuggled_data = unsmuggle_url(url, {})
-
         video_id = self._match_id(url)
-
         webpage = self._download_webpage(url, video_id)
+        vod_data = self._search_nextjs_data(webpage, video_id)['props']['pageProps']

-        if self._search_regex(
-                r'(?i)<meta\s[^>]*http-equiv="refresh"\s[^>]*content="[0-9]+;\s*url=https://www\.litv\.tv/"',
-                webpage, 'meta refresh redirect', default=False, group=0):
-            raise ExtractorError('No such content found', expected=True)
+        program_info = traverse_obj(vod_data, ('programInformation', {dict})) or {}
+        playlist_data = traverse_obj(vod_data, ('seriesTree'))
+        if playlist_data and self._yes_playlist(program_info.get('series_id'), video_id, smuggled_data):
+            return self._extract_playlist(playlist_data, program_info.get('content_type'))

-        program_info = 
self._parse_json(self._search_regex( - r'var\s+programInfo\s*=\s*([^;]+)', webpage, 'VOD data', default='{}'), - video_id) + asset_id = traverse_obj(program_info, ('assets', 0, 'asset_id', {str})) + if asset_id: # This is a VOD + media_type = 'vod' + else: # This is a live stream + asset_id = program_info['content_id'] + media_type = program_info['content_type'] + puid = try_call(lambda: self._get_cookies('https://www.litv.tv/')['PUID'].value) + if puid: + endpoint = 'get-urls' + else: + puid = str(uuid.uuid4()) + endpoint = 'get-urls-no-auth' + video_data = self._download_json( + f'https://www.litv.tv/api/{endpoint}', video_id, + data=json.dumps({'AssetId': asset_id, 'MediaType': media_type, 'puid': puid}).encode(), + headers={'Content-Type': 'application/json'}) - # In browsers `getProgramInfo` request is always issued. Usually this - # endpoint gives the same result as the data embedded in the webpage. - # If, for some reason, there are no embedded data, we do an extra request. - if 'assetId' not in program_info: - program_info = self._download_json( - 'https://www.litv.tv/vod/ajax/getProgramInfo', video_id, - query={'contentId': video_id}, - headers={'Accept': 'application/json'}) - - series_id = program_info['seriesId'] - if self._yes_playlist(series_id, video_id, smuggled_data): - playlist_data = self._download_json( - 'https://www.litv.tv/vod/ajax/getSeriesTree', video_id, - query={'seriesId': series_id}, headers={'Accept': 'application/json'}) - return self._extract_playlist(playlist_data, program_info['contentType']) - - video_data = self._parse_json(self._search_regex( - r'uiHlsUrl\s*=\s*testBackendData\(([^;]+)\);', - webpage, 'video data', default='{}'), video_id) - if not video_data: - payload = {'assetId': program_info['assetId']} - puid = try_call(lambda: self._get_cookies('https://www.litv.tv/')['PUID'].value) - if puid: - payload.update({ - 'type': 'auth', - 'puid': puid, - }) - endpoint = 'getUrl' - else: - payload.update({ - 'watchDevices': program_info['watchDevices'], - 'contentType': program_info['contentType'], - }) - endpoint = 'getMainUrlNoAuth' - video_data = self._download_json( - f'https://www.litv.tv/vod/ajax/{endpoint}', video_id, - data=json.dumps(payload).encode(), - headers={'Content-Type': 'application/json'}) - - if not video_data.get('fullpath'): - error_msg = video_data.get('errorMessage') - if error_msg == 'vod.error.outsideregionerror': + if error := traverse_obj(video_data, ('error', {dict})): + error_msg = traverse_obj(error, ('message', {str})) + if error_msg and 'OutsideRegionError' in error_msg: self.raise_geo_restricted('This video is available in Taiwan only') - if error_msg: + elif error_msg: raise ExtractorError(f'{self.IE_NAME} said: {error_msg}', expected=True) - raise ExtractorError(f'Unexpected result from {self.IE_NAME}') + raise ExtractorError(f'Unexpected error from {self.IE_NAME}') formats = self._extract_m3u8_formats( - video_data['fullpath'], video_id, ext='mp4', - entry_protocol='m3u8_native', m3u8_id='hls') + video_data['result']['AssetURLs'][0], video_id, ext='mp4', m3u8_id='hls') for a_format in formats: # LiTV HLS segments doesn't like compressions a_format.setdefault('http_headers', {})['Accept-Encoding'] = 'identity' - title = program_info['title'] + program_info.get('secondaryMark', '') - description = program_info.get('description') - thumbnail = program_info.get('imageFile') - categories = [item['name'] for item in program_info.get('category', [])] - episode = int_or_none(program_info.get('episode')) - return { 'id': 
video_id, 'formats': formats, - 'title': title, - 'description': description, - 'thumbnail': thumbnail, - 'categories': categories, - 'episode_number': episode, + 'title': join_nonempty('title', 'secondary_mark', delim='', from_dict=program_info), + **traverse_obj(program_info, { + 'description': ('description', {str}), + 'thumbnail': ('picture', {urljoin('https://p-cdnstatic.svc.litv.tv/')}), + 'categories': ('genres', ..., 'name', {str}), + 'episode_number': ('episode', {int_or_none}), + }), } diff --git a/yt_dlp/extractor/mildom.py b/yt_dlp/extractor/mildom.py deleted file mode 100644 index 88a2b9e89..000000000 --- a/yt_dlp/extractor/mildom.py +++ /dev/null @@ -1,291 +0,0 @@ -import functools -import json -import uuid - -from .common import InfoExtractor -from ..utils import ( - ExtractorError, - OnDemandPagedList, - determine_ext, - dict_get, - float_or_none, - traverse_obj, -) - - -class MildomBaseIE(InfoExtractor): - _GUEST_ID = None - - def _call_api(self, url, video_id, query=None, note='Downloading JSON metadata', body=None): - if not self._GUEST_ID: - self._GUEST_ID = f'pc-gp-{uuid.uuid4()}' - - content = self._download_json( - url, video_id, note=note, data=json.dumps(body).encode() if body else None, - headers={'Content-Type': 'application/json'} if body else {}, - query={ - '__guest_id': self._GUEST_ID, - '__platform': 'web', - **(query or {}), - }) - - if content['code'] != 0: - raise ExtractorError( - f'Mildom says: {content["message"]} (code {content["code"]})', - expected=True) - return content['body'] - - -class MildomIE(MildomBaseIE): - IE_NAME = 'mildom' - IE_DESC = 'Record ongoing live by specific user in Mildom' - _VALID_URL = r'https?://(?:(?:www|m)\.)mildom\.com/(?P\d+)' - - def _real_extract(self, url): - video_id = self._match_id(url) - webpage = self._download_webpage(f'https://www.mildom.com/{video_id}', video_id) - - enterstudio = self._call_api( - 'https://cloudac.mildom.com/nonolive/gappserv/live/enterstudio', video_id, - note='Downloading live metadata', query={'user_id': video_id}) - result_video_id = enterstudio.get('log_id', video_id) - - servers = self._call_api( - 'https://cloudac.mildom.com/nonolive/gappserv/live/liveserver', result_video_id, - note='Downloading live server list', query={ - 'user_id': video_id, - 'live_server_type': 'hls', - }) - - playback_token = self._call_api( - 'https://cloudac.mildom.com/nonolive/gappserv/live/token', result_video_id, - note='Obtaining live playback token', body={'host_id': video_id, 'type': 'hls'}) - playback_token = traverse_obj(playback_token, ('data', ..., 'token'), get_all=False) - if not playback_token: - raise ExtractorError('Failed to obtain live playback token') - - formats = self._extract_m3u8_formats( - f'{servers["stream_server"]}/{video_id}_master.m3u8?{playback_token}', - result_video_id, 'mp4', headers={ - 'Referer': 'https://www.mildom.com/', - 'Origin': 'https://www.mildom.com', - }) - - for fmt in formats: - fmt.setdefault('http_headers', {})['Referer'] = 'https://www.mildom.com/' - - return { - 'id': result_video_id, - 'title': self._html_search_meta('twitter:description', webpage, default=None) or traverse_obj(enterstudio, 'anchor_intro'), - 'description': traverse_obj(enterstudio, 'intro', 'live_intro', expected_type=str), - 'timestamp': float_or_none(enterstudio.get('live_start_ms'), scale=1000), - 'uploader': self._html_search_meta('twitter:title', webpage, default=None) or traverse_obj(enterstudio, 'loginname'), - 'uploader_id': video_id, - 'formats': formats, - 'is_live': True, - } - - 
-class MildomVodIE(MildomBaseIE): - IE_NAME = 'mildom:vod' - IE_DESC = 'VOD in Mildom' - _VALID_URL = r'https?://(?:(?:www|m)\.)mildom\.com/playback/(?P\d+)/(?P(?P=user_id)-[a-zA-Z0-9]+-?[0-9]*)' - _TESTS = [{ - 'url': 'https://www.mildom.com/playback/10882672/10882672-1597662269', - 'info_dict': { - 'id': '10882672-1597662269', - 'ext': 'mp4', - 'title': '始めてのミルダム配信じゃぃ!', - 'thumbnail': r're:^https?://.*\.(png|jpg)$', - 'upload_date': '20200817', - 'duration': 4138.37, - 'description': 'ゲームをしたくて!', - 'timestamp': 1597662269.0, - 'uploader_id': '10882672', - 'uploader': 'kson組長(けいそん)', - }, - }, { - 'url': 'https://www.mildom.com/playback/10882672/10882672-1597758589870-477', - 'info_dict': { - 'id': '10882672-1597758589870-477', - 'ext': 'mp4', - 'title': '【kson】感染メイズ!麻酔銃で無双する', - 'thumbnail': r're:^https?://.*\.(png|jpg)$', - 'timestamp': 1597759093.0, - 'uploader': 'kson組長(けいそん)', - 'duration': 4302.58, - 'uploader_id': '10882672', - 'description': 'このステージ絶対乗り越えたい', - 'upload_date': '20200818', - }, - }, { - 'url': 'https://www.mildom.com/playback/10882672/10882672-buha9td2lrn97fk2jme0', - 'info_dict': { - 'id': '10882672-buha9td2lrn97fk2jme0', - 'ext': 'mp4', - 'title': '【kson組長】CART RACER!!!', - 'thumbnail': r're:^https?://.*\.(png|jpg)$', - 'uploader_id': '10882672', - 'uploader': 'kson組長(けいそん)', - 'upload_date': '20201104', - 'timestamp': 1604494797.0, - 'duration': 4657.25, - 'description': 'WTF', - }, - }] - - def _real_extract(self, url): - user_id, video_id = self._match_valid_url(url).group('user_id', 'id') - webpage = self._download_webpage(f'https://www.mildom.com/playback/{user_id}/{video_id}', video_id) - - autoplay = self._call_api( - 'https://cloudac.mildom.com/nonolive/videocontent/playback/getPlaybackDetail', video_id, - note='Downloading playback metadata', query={ - 'v_id': video_id, - })['playback'] - - formats = [{ - 'url': autoplay['audio_url'], - 'format_id': 'audio', - 'protocol': 'm3u8_native', - 'vcodec': 'none', - 'acodec': 'aac', - 'ext': 'm4a', - }] - for fmt in autoplay['video_link']: - formats.append({ - 'format_id': 'video-{}'.format(fmt['name']), - 'url': fmt['url'], - 'protocol': 'm3u8_native', - 'width': fmt['level'] * autoplay['video_width'] // autoplay['video_height'], - 'height': fmt['level'], - 'vcodec': 'h264', - 'acodec': 'aac', - 'ext': 'mp4', - }) - - return { - 'id': video_id, - 'title': self._html_search_meta(('og:description', 'description'), webpage, default=None) or autoplay.get('title'), - 'description': traverse_obj(autoplay, 'video_intro'), - 'timestamp': float_or_none(autoplay.get('publish_time'), scale=1000), - 'duration': float_or_none(autoplay.get('video_length'), scale=1000), - 'thumbnail': dict_get(autoplay, ('upload_pic', 'video_pic')), - 'uploader': traverse_obj(autoplay, ('author_info', 'login_name')), - 'uploader_id': user_id, - 'formats': formats, - } - - -class MildomClipIE(MildomBaseIE): - IE_NAME = 'mildom:clip' - IE_DESC = 'Clip in Mildom' - _VALID_URL = r'https?://(?:(?:www|m)\.)mildom\.com/clip/(?P(?P\d+)-[a-zA-Z0-9]+)' - _TESTS = [{ - 'url': 'https://www.mildom.com/clip/10042245-63921673e7b147ebb0806d42b5ba5ce9', - 'info_dict': { - 'id': '10042245-63921673e7b147ebb0806d42b5ba5ce9', - 'title': '全然違ったよ', - 'timestamp': 1619181890, - 'duration': 59, - 'thumbnail': r're:https?://.+', - 'uploader': 'ざきんぽ', - 'uploader_id': '10042245', - }, - }, { - 'url': 'https://www.mildom.com/clip/10111524-ebf4036e5aa8411c99fb3a1ae0902864', - 'info_dict': { - 'id': '10111524-ebf4036e5aa8411c99fb3a1ae0902864', - 'title': 'かっこいい', - 
'timestamp': 1621094003, - 'duration': 59, - 'thumbnail': r're:https?://.+', - 'uploader': '(ルーキー', - 'uploader_id': '10111524', - }, - }, { - 'url': 'https://www.mildom.com/clip/10660174-2c539e6e277c4aaeb4b1fbe8d22cb902', - 'info_dict': { - 'id': '10660174-2c539e6e277c4aaeb4b1fbe8d22cb902', - 'title': 'あ', - 'timestamp': 1614769431, - 'duration': 31, - 'thumbnail': r're:https?://.+', - 'uploader': 'ドルゴルスレンギーン=ダグワドルジ', - 'uploader_id': '10660174', - }, - }] - - def _real_extract(self, url): - user_id, video_id = self._match_valid_url(url).group('user_id', 'id') - webpage = self._download_webpage(f'https://www.mildom.com/clip/{video_id}', video_id) - - clip_detail = self._call_api( - 'https://cloudac-cf-jp.mildom.com/nonolive/videocontent/clip/detail', video_id, - note='Downloading playback metadata', query={ - 'clip_id': video_id, - }) - - return { - 'id': video_id, - 'title': self._html_search_meta( - ('og:description', 'description'), webpage, default=None) or clip_detail.get('title'), - 'timestamp': float_or_none(clip_detail.get('create_time')), - 'duration': float_or_none(clip_detail.get('length')), - 'thumbnail': clip_detail.get('cover'), - 'uploader': traverse_obj(clip_detail, ('user_info', 'loginname')), - 'uploader_id': user_id, - - 'url': clip_detail['url'], - 'ext': determine_ext(clip_detail.get('url'), 'mp4'), - } - - -class MildomUserVodIE(MildomBaseIE): - IE_NAME = 'mildom:user:vod' - IE_DESC = 'Download all VODs from specific user in Mildom' - _VALID_URL = r'https?://(?:(?:www|m)\.)mildom\.com/profile/(?P\d+)' - _TESTS = [{ - 'url': 'https://www.mildom.com/profile/10093333', - 'info_dict': { - 'id': '10093333', - 'title': 'Uploads from ねこばたけ', - }, - 'playlist_mincount': 732, - }, { - 'url': 'https://www.mildom.com/profile/10882672', - 'info_dict': { - 'id': '10882672', - 'title': 'Uploads from kson組長(けいそん)', - }, - 'playlist_mincount': 201, - }] - - def _fetch_page(self, user_id, page): - page += 1 - reply = self._call_api( - 'https://cloudac.mildom.com/nonolive/videocontent/profile/playbackList', - user_id, note=f'Downloading page {page}', query={ - 'user_id': user_id, - 'page': page, - 'limit': '30', - }) - if not reply: - return - for x in reply: - v_id = x.get('v_id') - if not v_id: - continue - yield self.url_result(f'https://www.mildom.com/playback/{user_id}/{v_id}') - - def _real_extract(self, url): - user_id = self._match_id(url) - self.to_screen(f'This will download all VODs belonging to user. 
To download ongoing live video, use "https://www.mildom.com/{user_id}" instead')
-
-        profile = self._call_api(
-            'https://cloudac.mildom.com/nonolive/gappserv/user/profileV2', user_id,
-            query={'user_id': user_id}, note='Downloading user profile')['user_info']
-
-        return self.playlist_result(
-            OnDemandPagedList(functools.partial(self._fetch_page, user_id), 30),
-            user_id, f'Uploads from {profile["loginname"]}')
diff --git a/yt_dlp/extractor/pialive.py b/yt_dlp/extractor/pialive.py
new file mode 100644
index 000000000..7469135c1
--- /dev/null
+++ b/yt_dlp/extractor/pialive.py
@@ -0,0 +1,122 @@
+from .common import InfoExtractor
+from ..utils import (
+    ExtractorError,
+    clean_html,
+    extract_attributes,
+    get_element_by_class,
+    get_element_html_by_class,
+    multipart_encode,
+    str_or_none,
+    unified_timestamp,
+    url_or_none,
+)
+from ..utils.traversal import traverse_obj
+
+
+class PiaLiveIE(InfoExtractor):
+    _VALID_URL = r'https?://player\.pia-live\.jp/stream/(?P<id>[\w-]+)'
+    _PLAYER_ROOT_URL = 'https://player.pia-live.jp/'
+    _PIA_LIVE_API_URL = 'https://api.pia-live.jp'
+    _API_KEY = 'kfds)FKFps-dms9e'
+    _TESTS = [{
+        'url': 'https://player.pia-live.jp/stream/4JagFBEIM14s_hK9aXHKf3k3F3bY5eoHFQxu68TC6krUDqGOwN4d61dCWQYOd6CTxl4hjya9dsfEZGsM4uGOUdax60lEI4twsXGXf7crmz8Gk__GhupTrWxA7RFRVt76',
+        'info_dict': {
+            'id': '88f3109a-f503-4d0f-a9f7-9f39ac745d84',
+            'display_id': '2431867_001',
+            'title': 'こながめでたい日2024の視聴ページ | PIA LIVE STREAM(ぴあライブストリーム)',
+            'live_status': 'was_live',
+            'comment_count': int,
+        },
+        'params': {
+            'getcomments': True,
+            'skip_download': True,
+            'ignore_no_formats_error': True,
+        },
+        'skip': 'The video is no longer available',
+    }, {
+        'url': 'https://player.pia-live.jp/stream/4JagFBEIM14s_hK9aXHKf3k3F3bY5eoHFQxu68TC6krJdu0GVBVbVy01IwpJ6J3qBEm3d9TCTt1d0eWpsZGj7DrOjVOmS7GAWGwyscMgiThopJvzgWC4H5b-7XQjAfRZ',
+        'info_dict': {
+            'id': '9ce8b8ba-f6d1-4d1f-83a0-18c3148ded93',
+            'display_id': '2431867_002',
+            'title': 'こながめでたい日2024の視聴ページ | PIA LIVE STREAM(ぴあライブストリーム)',
+            'live_status': 'was_live',
+            'comment_count': int,
+        },
+        'params': {
+            'getcomments': True,
+            'skip_download': True,
+            'ignore_no_formats_error': True,
+        },
+        'skip': 'The video is no longer available',
+    }]
+
+    def _extract_var(self, variable, html):
+        return self._search_regex(
+            rf'(?:var|const|let)\s+{variable}\s*=\s*(["\'])(?P<value>(?:(?!\1).)+)\1',
+            html, f'variable {variable}', group='value')
+
+    def _real_extract(self, url):
+        video_key = self._match_id(url)
+        webpage = self._download_webpage(url, video_key)
+
+        program_code = self._extract_var('programCode', webpage)
+        article_code = self._extract_var('articleCode', webpage)
+        title = self._html_extract_title(webpage)
+
+        if get_element_html_by_class('play-end', webpage):
+            raise ExtractorError('The video is no longer available', expected=True, video_id=program_code)
+
+        if start_info := clean_html(get_element_by_class('play-waiting__date', webpage)):
+            date, time = self._search_regex(
+                r'(?P<date>\d{4}/\d{1,2}/\d{1,2})\([月火水木金土日]\)(?P