Merge pull request #20957 from iptv-org/patch-2025.03.1

Patch 2025.03.1
PopeyeTheSai10r 4 weeks ago committed by GitHub
commit ac5a955d19

@@ -6,9 +6,9 @@ labels: ['streams:add']
 body:
   - type: input
     attributes:
-      label: Channel ID (required)
-      description: Unique channel ID from [iptv-org.github.io](https://iptv-org.github.io/). If you can't find the channel you want in the list, please let us know through this [form](https://github.com/iptv-org/database/issues/new?assignees=&labels=channels%3Aadd&projects=&template=channels_add.yml&title=Add%3A+) before posting your request.
-      placeholder: 'BBCAmericaEast.us'
+      label: Stream ID (required)
+      description: "ID of the stream consisting of `<channel_id>` or `<channel_id>@<feed_id>`. Full list of supported channels with corresponding ID could be found on [iptv-org.github.io](https://iptv-org.github.io/). If you can't find the channel you want in the list, please let us know through this [form](https://github.com/iptv-org/database/issues/new?assignees=&labels=channels%3Aadd&projects=&template=channels_add.yml&title=Add%3A+) before posting your request."
+      placeholder: 'BBCAmerica.us@East'
     validations:
       required: true
@@ -28,9 +28,12 @@ body:
         - 2160p
         - 1280p
         - 1080p
+        - 1080i
         - 720p
         - 576p
+        - 576i
         - 480p
+        - 480i
         - 360p
   - type: dropdown

@@ -19,9 +19,9 @@ body:
   - type: input
     attributes:
-      label: Channel ID
-      description: Channel ID from [iptv-org.github.io](https://iptv-org.github.io/).
-      placeholder: 'BBCAmericaEast.us'
+      label: Stream ID
+      description: "ID of the stream consisting of `<channel_id>` or `<channel_id>@<feed_id>`. Full list of supported channels with corresponding ID could be found on [iptv-org.github.io](https://iptv-org.github.io/). If you can't find the channel you want in the list, please let us know through this [form](https://github.com/iptv-org/database/issues/new?assignees=&labels=channels%3Aadd&projects=&template=channels_add.yml&title=Add%3A+) before posting your request."
+      placeholder: 'BBCAmerica.us@East'
   - type: dropdown
     attributes:
@@ -31,9 +31,12 @@ body:
         - 2160p
         - 1280p
         - 1080p
+        - 1080i
         - 720p
         - 576p
+        - 576i
         - 480p
+        - 480i
         - 360p
         - '~'

@@ -1,7 +1,7 @@
 name: 🚧 Report broken stream
 description: Report a broken or unstable stream
 title: 'Broken: '
-labels: ['broken stream']
+labels: ['broken stream', 'streams:remove']
 body:
   - type: markdown

@@ -20,15 +20,23 @@ Regardless of which option you choose, before posting your request please do the
 - Make sure the link you want to add works stably. To check this, open it in one of the players (for example, [VLC player](https://www.videolan.org/vlc/index.html)) and watch the broadcast for at least a minute (some test streams are interrupted after 15-30 seconds).
 - Make sure the link is not already in the playlist. This can be done by [searching](https://github.com/search?q=repo%3Aiptv-org%2Fiptv+http%3A%2F%2Fexample.com&type=code) the repository.
-- Find the ID of the channel you want to add in our [database](https://iptv-org.github.io/). If this particular channel is not in the database, then leave a request to add it [here](https://github.com/iptv-org/database/issues/new/choose) and wait until it is approved before continuing.
-- Make sure the channel is not blocklisted. This can be done by checking the [blocklist.csv](https://github.com/iptv-org/database/blob/master/data/blocklist.csv) file.
+- Find the ID of the channel you want on [iptv-org.github.io](https://iptv-org.github.io/). If your desired channel is not on the list you can leave a request to add it [here](https://github.com/iptv-org/database/issues/new/choose).
+- Make sure the channel is not blocklisted. It can also be done through [iptv-org.github.io](https://iptv-org.github.io/).
 - The link does not lead to the Xtream Codes server. [Why don't you accept links to Xtream Codes server?](FAQ.md#why-dont-you-accept-links-to-xtream-codes-server)
 - If you know that the broadcast only works in certain countries or it is periodically interrupted, do not forget to indicate this in the request.
-A requests without a valid channel ID or working link to the stream will be closed immediately.
+A requests without a valid stream ID or working link to the stream will be closed immediately.
 Note all links in playlists are sorted automatically by scripts so there is no need to sort them manually. For more info, see [Scripts](#scripts).
+### How to fix the stream description?
+Most of the stream description (channel name, categories, languages, broadcast area, logo) we load from the [iptv-org/database](https://github.com/iptv-org/database) using the stream ID.
+So first of all, make sure that the desired stream has the correct ID. A full list of all supported channels and their corresponding IDs can be found on [iptv-org.github.io](https://iptv-org.github.io/). To change the stream ID of any link in the playlist, just fill out this [form](https://github.com/iptv-org/iptv/issues/new?assignees=&labels=streams%3Aedit&projects=&template=2_streams_edit.yml&title=Edit%3A+).
+If, however, you have found an error in the database itself, this is the place to go: [How to edit channel description?](https://github.com/iptv-org/database/blob/master/CONTRIBUTING.md#how-to-edit-channel-description)
 ### How to distinguish a link to an Xtream Codes server from a regular one?
 Most of them have this form:
@@ -52,6 +60,37 @@ The only thing before publishing your report is to make sure that:
 An issue without a valid link will be closed immediately.
+### How to find a broken stream?
+For starters, you can just try to open the playlist in [VLC player](https://www.videolan.org/vlc/). The player outputs all errors to the log (Tools -> Messages) so you'll be able to determine pretty accurately why a link isn't working.
+Another way to test links is to use the NPM script. To do this, first make sure you have [Node.js](https://nodejs.org/en) installed on your system. Then go to the `iptv` folder using [Console](https://en.wikipedia.org/wiki/Windows_Console) (or [Terminal](<https://en.wikipedia.org/wiki/Terminal_(macOS)>) if you have macOS) and run the command:
+```sh
+npm run playlist:test path/to/playlist.m3u
+```
+This command will run an automatic check of all links in the playlist and display their status:
+```sh
+npm run playlist:test streams/fr.m3u
+streams/fr.m3u
+┌─────┬───────────────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┐
+│     │ tvg-id                    │ url                                                                                                    │ status                    │
+├─────┼───────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┤
+│ 0   │ 6ter.fr                   │ https://origin-caf900c010ea8046.live.6cloud.fr/out/v1/29c7a579af3348b48230f76cd75699a5/dash_short...  │ LOADING...                │
+│ 1   │ 20MinutesTV.fr            │ https://lives.digiteka.com/stream/86d3e867-a272-496b-8412-f59aa0104771/index.m3u8                     │ FFMPEG_STREAMS_NOT_FOUND  │
+│ 2   │                           │ https://video1.getstreamhosting.com:1936/8420/8420/playlist.m3u8                                      │ OK                        │
+│ 3   │ ADNTVPlus.fr              │ https://samsunguk-adn-samsung-fre-qfrlc.amagi.tv/playlist/samsunguk-adn-samsung-fre/playlist.m3u8     │ HTTP_FORBIDDEN            │
+│ 4   │ Africa24.fr               │ https://edge12.vedge.infomaniak.com/livecast/ik:africa24/manifest.m3u8                                │ OK                        │
+│ 5   │ Africa24English.fr        │ https://edge17.vedge.infomaniak.com/livecast/ik:africa24sport/manifest.m3u8                           │ OK                        │
+│ 6   │ AfricanewsEnglish.fr      │ https://37c774660687468c821a51190046facf.mediatailor.us-east-1.amazonaws.com/v1/master/04fd913bb2...  │ HTTP_GATEWAY_TIMEOUT      │
+│ 7   │ AlpedHuezTV.fr            │ https://edge.vedge.infomaniak.com/livecast/ik:adhtv/chunklist.m3u8                                    │ HTTP_NOT_FOUND            │
+```
+After that, all you have to do is report any broken streams you find.
 ### How do I remove my channel from playlist?
 To request removal of a link to a channel from the repository, you need to fill out this [form](https://github.com/iptv-org/iptv/issues/new?assignees=&labels=removal+request&projects=&template=-removal-request.yml&title=Remove%3A+) and wait for the request to be reviewed (this usually takes no more than 1 business day). And if the request is approved, links to the channel will be immediately removed from the repository.
@@ -65,22 +104,22 @@ Please note that we only accept removal requests from channel owners and their o
 For a stream to be approved, its description must follow this template:
 ```
-#EXTINF:-1 tvg-id="CHANNEL_ID",CHANNEL_NAME (RESOLUTION) [LABEL]
+#EXTINF:-1 tvg-id="STREAM_ID",CHANNEL_NAME (RESOLUTION) [LABEL]
 STREAM_URL
 ```
 | Attribute | Description | Required | Valid values |
 | --------- | ----------- | -------- | ------------ |
-| `CHANNEL_ID` | Channel ID. | Optional | Full list of supported channels with corresponding ID could be found on [iptv-org.github.io](https://iptv-org.github.io/). |
+| `STREAM_ID` | ID of the stream. Full list of supported channels with corresponding ID could be found on [iptv-org.github.io](https://iptv-org.github.io/). | Optional | `<channel_id>` or `<channel_id>@<feed_id>` |
 | `CHANNEL_NAME` | Full name of the channel. May contain any characters except: `,`, `[`, `]`. | Required | - |
 | `RESOLUTION` | Maximum stream resolution. | Optional | `2160p`, `1080p`, `720p`, `480p`, `360p` etc |
 | `LABEL` | Specified in cases where the broadcast for some reason may not be available to some users. | Optional | `Geo-blocked` or `Not 24/7` |
 | `STREAM_URL` | Stream URL. | Required | - |
 Example:
 ```xml
-#EXTINF:-1 tvg-id="ExampleTV.ua",Example TV (720p) [Not 24/7]
+#EXTINF:-1 tvg-id="ExampleTV.ua@HD",Example TV (720p) [Not 24/7]
 https://example.com/playlist.m3u8
 ```
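
The scripts touched later in this patch derive the channel and feed portions of a stream ID with a plain `split('@')`. A minimal sketch of that convention (the helper name and types are illustrative, not part of the repository):

```ts
// Sketch only: splits a stream ID such as 'BBCAmerica.us@East' into its parts.
// A bare channel ID like 'BBCAmerica.us' yields an undefined feed ID.
function parseStreamId(streamId: string): { channelId: string; feedId?: string } {
  const [channelId, feedId] = streamId.split('@')
  return { channelId, feedId }
}

parseStreamId('BBCAmerica.us@East') // { channelId: 'BBCAmerica.us', feedId: 'East' }
parseStreamId('BBCAmerica.us')      // { channelId: 'BBCAmerica.us', feedId: undefined }
```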

package-lock.json (generated)

@@ -10,7 +10,7 @@
   "dependencies": {
     "@eslint/eslintrc": "^3.3.0",
     "@eslint/js": "^9.21.0",
-    "@freearhey/core": "^0.2.1",
+    "@freearhey/core": "^0.7.0",
     "@octokit/core": "^6.1.4",
     "@octokit/plugin-paginate-rest": "^11.4.3",
     "@octokit/plugin-rest-endpoint-methods": "^7.1.3",
@@ -1063,9 +1063,9 @@
       }
     },
     "node_modules/@freearhey/core": {
-      "version": "0.2.1",
-      "resolved": "https://registry.npmjs.org/@freearhey/core/-/core-0.2.1.tgz",
-      "integrity": "sha512-kEdIxZClykKhGpgyCSlkwuVuSCCAWr3J5YvOUMJQDPgVAYvT5VbD8MYKPm+OwNi9T4HFmF6qqY90qwKJPoOXCA==",
+      "version": "0.7.0",
+      "resolved": "https://registry.npmjs.org/@freearhey/core/-/core-0.7.0.tgz",
+      "integrity": "sha512-HXkKPYGY7ife7JAc1q/Qxzy0WUdSnyt3rHThCShZHgnH3rz0tpkjHFW7LNegB3he0IKn/Zc95/YSOQ97Fq8ctA==",
       "dependencies": {
         "@types/fs-extra": "^11.0.2",
         "@types/lodash": "^4.14.198",
@@ -1078,6 +1078,7 @@
         "node-gzip": "^1.1.2",
         "normalize-url": "^6.1.0",
         "object-treeify": "^2.1.1",
+        "run-script-os": "^1.1.6",
         "signale": "^1.4.0"
       }
     },
@@ -6059,6 +6060,15 @@
         "queue-microtask": "^1.2.2"
       }
     },
+    "node_modules/run-script-os": {
+      "version": "1.1.6",
+      "resolved": "https://registry.npmjs.org/run-script-os/-/run-script-os-1.1.6.tgz",
+      "integrity": "sha512-ql6P2LzhBTTDfzKts+Qo4H94VUKpxKDFz6QxxwaUZN0mwvi7L3lpOI7BqPCq7lgDh3XLl0dpeXwfcVIitlrYrw==",
+      "bin": {
+        "run-os": "index.js",
+        "run-script-os": "index.js"
+      }
+    },
     "node_modules/semver": {
       "version": "7.6.3",
       "resolved": "https://registry.npmjs.org/semver/-/semver-7.6.3.tgz",
@@ -7455,9 +7465,9 @@
       }
     },
     "@freearhey/core": {
-      "version": "0.2.1",
-      "resolved": "https://registry.npmjs.org/@freearhey/core/-/core-0.2.1.tgz",
-      "integrity": "sha512-kEdIxZClykKhGpgyCSlkwuVuSCCAWr3J5YvOUMJQDPgVAYvT5VbD8MYKPm+OwNi9T4HFmF6qqY90qwKJPoOXCA==",
+      "version": "0.7.0",
+      "resolved": "https://registry.npmjs.org/@freearhey/core/-/core-0.7.0.tgz",
+      "integrity": "sha512-HXkKPYGY7ife7JAc1q/Qxzy0WUdSnyt3rHThCShZHgnH3rz0tpkjHFW7LNegB3he0IKn/Zc95/YSOQ97Fq8ctA==",
      "requires": {
        "@types/fs-extra": "^11.0.2",
        "@types/lodash": "^4.14.198",
@@ -7470,6 +7480,7 @@
        "node-gzip": "^1.1.2",
        "normalize-url": "^6.1.0",
        "object-treeify": "^2.1.1",
+        "run-script-os": "^1.1.6",
        "signale": "^1.4.0"
      },
      "dependencies": {
@@ -11122,6 +11133,11 @@
        "queue-microtask": "^1.2.2"
      }
    },
+    "run-script-os": {
+      "version": "1.1.6",
+      "resolved": "https://registry.npmjs.org/run-script-os/-/run-script-os-1.1.6.tgz",
+      "integrity": "sha512-ql6P2LzhBTTDfzKts+Qo4H94VUKpxKDFz6QxxwaUZN0mwvi7L3lpOI7BqPCq7lgDh3XLl0dpeXwfcVIitlrYrw=="
+    },
    "semver": {
      "version": "7.6.3",
      "resolved": "https://registry.npmjs.org/semver/-/semver-7.6.3.tgz",

@@ -39,7 +39,7 @@
   "dependencies": {
     "@eslint/eslintrc": "^3.3.0",
     "@eslint/js": "^9.21.0",
-    "@freearhey/core": "^0.2.1",
+    "@freearhey/core": "^0.7.0",
     "@octokit/core": "^6.1.4",
     "@octokit/plugin-paginate-rest": "^11.4.3",
     "@octokit/plugin-rest-endpoint-methods": "^7.1.3",

@@ -1,21 +1,37 @@
-import { Logger, Storage } from '@freearhey/core'
-import { API_DIR, STREAMS_DIR } from '../../constants'
+import { Logger, Storage, Collection } from '@freearhey/core'
+import { API_DIR, STREAMS_DIR, DATA_DIR } from '../../constants'
 import { PlaylistParser } from '../../core'
-import { Stream } from '../../models'
+import { Stream, Channel, Feed } from '../../models'
+import { uniqueId } from 'lodash'
 async function main() {
   const logger = new Logger()
+  logger.info('loading api data...')
+  const dataStorage = new Storage(DATA_DIR)
+  const channelsData = await dataStorage.json('channels.json')
+  const channels = new Collection(channelsData).map(data => new Channel(data))
+  const channelsGroupedById = channels.keyBy((channel: Channel) => channel.id)
+  const feedsData = await dataStorage.json('feeds.json')
+  const feeds = new Collection(feedsData).map(data =>
+    new Feed(data).withChannel(channelsGroupedById)
+  )
+  const feedsGroupedByChannelId = feeds.groupBy((feed: Feed) =>
+    feed.channel ? feed.channel.id : uniqueId()
+  )
   logger.info('loading streams...')
   const streamsStorage = new Storage(STREAMS_DIR)
-  const parser = new PlaylistParser({ storage: streamsStorage })
+  const parser = new PlaylistParser({
+    storage: streamsStorage,
+    channelsGroupedById,
+    feedsGroupedByChannelId
+  })
   const files = await streamsStorage.list('**/*.m3u')
   let streams = await parser.parse(files)
   streams = streams
-    .map(data => new Stream(data))
-    .orderBy([(stream: Stream) => stream.channel])
+    .orderBy((stream: Stream) => stream.getId())
     .map((stream: Stream) => stream.toJSON())
   logger.info(`found ${streams.count()} streams`)
   logger.info('saving to .api/streams.json...')
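
The same preparation step recurs across the updated scripts: channels are keyed by ID and feeds are grouped by the ID of their channel, with lodash's `uniqueId()` as a fallback key so feeds without a resolvable channel don't collapse into one shared group. A rough standalone sketch of the pattern using plain lodash (the `Collection` helpers are assumed to behave similarly; the data shapes are illustrative):

```ts
import { uniqueId, keyBy, groupBy } from 'lodash'

// Illustrative data shapes; the real Channel and Feed models live in ../../models.
const channelsData = [{ id: 'BBCAmerica.us', name: 'BBC America' }]
const feedsData = [
  { id: 'East', channel: 'BBCAmerica.us' },
  { id: 'West', channel: 'BBCAmerica.us' },
  { id: 'Orphan', channel: null }
]

const channelsGroupedById = keyBy(channelsData, channel => channel.id)
const feedsGroupedByChannelId = groupBy(feedsData, feed =>
  // uniqueId() as a fallback key keeps feeds without a channel from
  // being collapsed into one shared group
  feed.channel ? feed.channel : uniqueId()
)

console.log(Object.keys(feedsGroupedByChannelId)) // e.g. [ 'BBCAmerica.us', '1' ]
```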

@@ -12,7 +12,9 @@ async function main() {
     client.download('countries.json'),
     client.download('languages.json'),
     client.download('regions.json'),
-    client.download('subdivisions.json')
+    client.download('subdivisions.json'),
+    client.download('feeds.json'),
+    client.download('timezones.json')
   ]
   await Promise.all(requests)

@@ -1,25 +1,36 @@
 import { Logger, Storage, Collection } from '@freearhey/core'
 import { STREAMS_DIR, DATA_DIR } from '../../constants'
 import { PlaylistParser } from '../../core'
-import { Stream, Playlist, Channel } from '../../models'
+import { Stream, Playlist, Channel, Feed } from '../../models'
 import { program } from 'commander'
+import { uniqueId } from 'lodash'
 program.argument('[filepath]', 'Path to file to validate').parse(process.argv)
 async function main() {
-  const storage = new Storage(STREAMS_DIR)
+  const streamsStorage = new Storage(STREAMS_DIR)
   const logger = new Logger()
-  logger.info('loading channels from api...')
+  logger.info('loading data from api...')
   const dataStorage = new Storage(DATA_DIR)
-  const channelsContent = await dataStorage.json('channels.json')
-  const groupedChannels = new Collection(channelsContent)
-    .map(data => new Channel(data))
-    .keyBy((channel: Channel) => channel.id)
+  const channelsData = await dataStorage.json('channels.json')
+  const channels = new Collection(channelsData).map(data => new Channel(data))
+  const channelsGroupedById = channels.keyBy((channel: Channel) => channel.id)
+  const feedsData = await dataStorage.json('feeds.json')
+  const feeds = new Collection(feedsData).map(data =>
+    new Feed(data).withChannel(channelsGroupedById)
+  )
+  const feedsGroupedByChannelId = feeds.groupBy(feed =>
+    feed.channel ? feed.channel.id : uniqueId()
+  )
   logger.info('loading streams...')
-  const parser = new PlaylistParser({ storage })
-  const files = program.args.length ? program.args : await storage.list('**/*.m3u')
+  const parser = new PlaylistParser({
+    storage: streamsStorage,
+    channelsGroupedById,
+    feedsGroupedByChannelId
+  })
+  const files = program.args.length ? program.args : await streamsStorage.list('**/*.m3u')
   let streams = await parser.parse(files)
   logger.info(`found ${streams.count()} streams`)
@@ -35,8 +46,8 @@ async function main() {
   logger.info('removing wrong id...')
   streams = streams.map((stream: Stream) => {
-    if (groupedChannels.missing(stream.channel)) {
-      stream.channel = ''
+    if (!stream.channel || channelsGroupedById.missing(stream.channel.id)) {
+      stream.id = ''
     }
     return stream
@@ -46,22 +57,22 @@
   streams = streams.orderBy(
     [
       (stream: Stream) => stream.name,
-      (stream: Stream) => parseInt(stream.quality.replace('p', '')),
-      (stream: Stream) => stream.label,
+      (stream: Stream) => stream.getVerticalResolution(),
+      (stream: Stream) => stream.getLabel(),
       (stream: Stream) => stream.url
     ],
     ['asc', 'desc', 'asc', 'asc']
   )
   logger.info('saving...')
-  const groupedStreams = streams.groupBy((stream: Stream) => stream.filepath)
+  const groupedStreams = streams.groupBy((stream: Stream) => stream.getFilepath())
   for (let filepath of groupedStreams.keys()) {
     const streams = groupedStreams.get(filepath) || []
     if (!streams.length) return
     const playlist = new Playlist(streams, { public: false })
-    await storage.save(filepath, playlist.toString())
+    await streamsStorage.save(filepath, playlist.toString())
   }
 }
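
`getVerticalResolution()` and `getLabel()` replace the inline sort keys used before. The `Stream` model itself is not part of this diff, so the following is only an assumed equivalent of the old `parseInt(stream.quality.replace('p', ''))` expression, extended to the interlaced qualities (`1080i`, `576i`, `480i`) that the issue templates now accept:

```ts
// Assumed behaviour only: the Stream model is not shown in this diff.
// Mirrors the previous inline sort key, parseInt(quality.replace('p', '')),
// while also tolerating interlaced values.
function getVerticalResolution(quality?: string): number {
  if (!quality) return 0
  const match = quality.match(/^(\d+)[pi]$/)
  return match ? parseInt(match[1], 10) : 0
}

getVerticalResolution('1080p') // 1080
getVerticalResolution('1080i') // 1080
getVerticalResolution('')      // 0
```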

@ -1,14 +1,23 @@
import { Logger, Storage, Collection, File } from '@freearhey/core' import { Logger, Storage, Collection } from '@freearhey/core'
import { PlaylistParser } from '../../core' import { PlaylistParser } from '../../core'
import { Stream, Category, Channel, Language, Country, Region, Subdivision } from '../../models' import {
import _ from 'lodash' Stream,
Category,
Channel,
Language,
Country,
Region,
Subdivision,
Feed,
Timezone
} from '../../models'
import { uniqueId } from 'lodash'
import { import {
CategoriesGenerator, CategoriesGenerator,
CountriesGenerator, CountriesGenerator,
LanguagesGenerator, LanguagesGenerator,
RegionsGenerator, RegionsGenerator,
IndexGenerator, IndexGenerator,
IndexNsfwGenerator,
IndexCategoryGenerator, IndexCategoryGenerator,
IndexCountryGenerator, IndexCountryGenerator,
IndexLanguageGenerator, IndexLanguageGenerator,
@ -19,123 +28,136 @@ import { DATA_DIR, LOGS_DIR, STREAMS_DIR } from '../../constants'
async function main() { async function main() {
const logger = new Logger() const logger = new Logger()
const dataStorage = new Storage(DATA_DIR) const dataStorage = new Storage(DATA_DIR)
const generatorsLogger = new Logger({
stream: await new Storage(LOGS_DIR).createStream(`generators.log`)
})
logger.info('loading data from api...') logger.info('loading data from api...')
const channelsContent = await dataStorage.json('channels.json') const categoriesData = await dataStorage.json('categories.json')
const channels = new Collection(channelsContent).map(data => new Channel(data)) const countriesData = await dataStorage.json('countries.json')
const categoriesContent = await dataStorage.json('categories.json') const languagesData = await dataStorage.json('languages.json')
const categories = new Collection(categoriesContent).map(data => new Category(data)) const regionsData = await dataStorage.json('regions.json')
const countriesContent = await dataStorage.json('countries.json') const subdivisionsData = await dataStorage.json('subdivisions.json')
const countries = new Collection(countriesContent).map(data => new Country(data)) const timezonesData = await dataStorage.json('timezones.json')
const languagesContent = await dataStorage.json('languages.json') const channelsData = await dataStorage.json('channels.json')
const languages = new Collection(languagesContent).map(data => new Language(data)) const feedsData = await dataStorage.json('feeds.json')
const regionsContent = await dataStorage.json('regions.json')
const regions = new Collection(regionsContent).map(data => new Region(data)) logger.info('preparing data...')
const subdivisionsContent = await dataStorage.json('subdivisions.json') const subdivisions = new Collection(subdivisionsData).map(data => new Subdivision(data))
const subdivisions = new Collection(subdivisionsContent).map(data => new Subdivision(data)) const subdivisionsGroupedByCode = subdivisions.keyBy(
(subdivision: Subdivision) => subdivision.code
)
const subdivisionsGroupedByCountryCode = subdivisions.groupBy(
(subdivision: Subdivision) => subdivision.countryCode
)
let regions = new Collection(regionsData).map(data =>
new Region(data).withSubdivisions(subdivisions)
)
const regionsGroupedByCode = regions.keyBy((region: Region) => region.code)
const categories = new Collection(categoriesData).map(data => new Category(data))
const categoriesGroupedById = categories.keyBy((category: Category) => category.id)
const languages = new Collection(languagesData).map(data => new Language(data))
const languagesGroupedByCode = languages.keyBy((language: Language) => language.code)
const countries = new Collection(countriesData).map(data =>
new Country(data)
.withRegions(regions)
.withLanguage(languagesGroupedByCode)
.withSubdivisions(subdivisionsGroupedByCountryCode)
)
const countriesGroupedByCode = countries.keyBy((country: Country) => country.code)
regions = regions.map((region: Region) => region.withCountries(countriesGroupedByCode))
const timezones = new Collection(timezonesData).map(data =>
new Timezone(data).withCountries(countriesGroupedByCode)
)
const timezonesGroupedById = timezones.keyBy((timezone: Timezone) => timezone.id)
const channels = new Collection(channelsData).map(data =>
new Channel(data)
.withCategories(categoriesGroupedById)
.withCountry(countriesGroupedByCode)
.withSubdivision(subdivisionsGroupedByCode)
)
const channelsGroupedById = channels.keyBy((channel: Channel) => channel.id)
const feeds = new Collection(feedsData).map(data =>
new Feed(data)
.withChannel(channelsGroupedById)
.withLanguages(languagesGroupedByCode)
.withTimezones(timezonesGroupedById)
.withBroadcastCountries(
countriesGroupedByCode,
regionsGroupedByCode,
subdivisionsGroupedByCode
)
.withBroadcastRegions(regions)
.withBroadcastSubdivisions(subdivisionsGroupedByCode)
)
const feedsGroupedByChannelId = feeds.groupBy((feed: Feed) =>
feed.channel ? feed.channel.id : uniqueId()
)
logger.info('loading streams...') logger.info('loading streams...')
let streams = await loadStreams({ channels, categories, languages }) const storage = new Storage(STREAMS_DIR)
let totalStreams = streams.count() const parser = new PlaylistParser({
streams = streams.uniqBy((stream: Stream) => (stream.channel || _.uniqueId()) + stream.timeshift) storage,
channelsGroupedById,
feedsGroupedByChannelId
})
const files = await storage.list('**/*.m3u')
let streams = await parser.parse(files)
const totalStreams = streams.count()
streams = streams.uniqBy((stream: Stream) =>
stream.hasId() ? stream.getChannelId() + stream.getFeedId() : uniqueId()
)
logger.info(`found ${totalStreams} streams (including ${streams.count()} unique)`) logger.info(`found ${totalStreams} streams (including ${streams.count()} unique)`)
const generatorsLogger = new Logger({ logger.info('sorting streams...')
stream: await new Storage(LOGS_DIR).createStream(`generators.log`) streams = streams.orderBy(
}) [
(stream: Stream) => stream.getId(),
(stream: Stream) => stream.getVerticalResolution(),
(stream: Stream) => stream.getLabel()
],
['asc', 'asc', 'desc']
)
logger.info('generating categories/...') logger.info('generating categories/...')
await new CategoriesGenerator({ categories, streams, logger: generatorsLogger }).generate() await new CategoriesGenerator({ categories, streams, logger: generatorsLogger }).generate()
logger.info('generating countries/...') logger.info('generating countries/...')
await new CountriesGenerator({ await new CountriesGenerator({
countries, countries,
streams, streams,
regions,
subdivisions,
logger: generatorsLogger logger: generatorsLogger
}).generate() }).generate()
logger.info('generating languages/...') logger.info('generating languages/...')
await new LanguagesGenerator({ streams, logger: generatorsLogger }).generate() await new LanguagesGenerator({ streams, logger: generatorsLogger }).generate()
logger.info('generating regions/...') logger.info('generating regions/...')
await new RegionsGenerator({ await new RegionsGenerator({
streams, streams,
regions, regions,
subdivisions,
logger: generatorsLogger logger: generatorsLogger
}).generate() }).generate()
logger.info('generating index.m3u...') logger.info('generating index.m3u...')
await new IndexGenerator({ streams, logger: generatorsLogger }).generate() await new IndexGenerator({ streams, logger: generatorsLogger }).generate()
logger.info('generating index.category.m3u...') logger.info('generating index.category.m3u...')
await new IndexCategoryGenerator({ streams, logger: generatorsLogger }).generate() await new IndexCategoryGenerator({ streams, logger: generatorsLogger }).generate()
logger.info('generating index.country.m3u...') logger.info('generating index.country.m3u...')
await new IndexCountryGenerator({ await new IndexCountryGenerator({
streams, streams,
countries,
regions,
subdivisions,
logger: generatorsLogger logger: generatorsLogger
}).generate() }).generate()
logger.info('generating index.language.m3u...') logger.info('generating index.language.m3u...')
await new IndexLanguageGenerator({ streams, logger: generatorsLogger }).generate() await new IndexLanguageGenerator({ streams, logger: generatorsLogger }).generate()
logger.info('generating index.region.m3u...') logger.info('generating index.region.m3u...')
await new IndexRegionGenerator({ streams, regions, logger: generatorsLogger }).generate() await new IndexRegionGenerator({ streams, regions, logger: generatorsLogger }).generate()
} }
main() main()
async function loadStreams({
channels,
categories,
languages
}: {
channels: Collection
categories: Collection
languages: Collection
}) {
const groupedChannels = channels.keyBy(channel => channel.id)
const groupedCategories = categories.keyBy(category => category.id)
const groupedLanguages = languages.keyBy(language => language.code)
const storage = new Storage(STREAMS_DIR)
const parser = new PlaylistParser({ storage })
const files = await storage.list('**/*.m3u')
let streams = await parser.parse(files)
streams = streams
.orderBy(
[
(stream: Stream) => stream.channel,
(stream: Stream) => parseInt(stream.quality.replace('p', '')),
(stream: Stream) => stream.label
],
['asc', 'asc', 'desc', 'asc']
)
.map((stream: Stream) => {
const channel: Channel | undefined = groupedChannels.get(stream.channel)
if (channel) {
const channelCategories = channel.categories
.map((id: string) => groupedCategories.get(id))
.filter(Boolean)
const channelLanguages = channel.languages
.map((id: string) => groupedLanguages.get(id))
.filter(Boolean)
stream.categories = channelCategories
stream.languages = channelLanguages
stream.broadcastArea = channel.broadcastArea
stream.isNSFW = channel.isNSFW
if (channel.logo) stream.logo = channel.logo
} else {
const file = new File(stream.filepath)
const [_, countryCode] = file.name().match(/^([a-z]{2})(_|$)/) || [null, null]
const defaultBroadcastArea = countryCode ? [`c/${countryCode.toUpperCase()}`] : []
stream.broadcastArea = new Collection(defaultBroadcastArea)
}
return stream
})
return streams
}

@@ -1,7 +1,7 @@
 import { Logger, Storage, Collection } from '@freearhey/core'
-import { ROOT_DIR, STREAMS_DIR } from '../../constants'
+import { ROOT_DIR, STREAMS_DIR, DATA_DIR } from '../../constants'
 import { PlaylistParser, StreamTester, CliTable } from '../../core'
-import { Stream } from '../../models'
+import { Stream, Feed, Channel } from '../../models'
 import { program } from 'commander'
 import { eachLimit } from 'async-es'
 import commandExists from 'command-exists'
@@ -38,8 +38,6 @@ const logger = new Logger()
 const tester = new StreamTester()
 async function main() {
-  const storage = new Storage(ROOT_DIR)
   if (await isOffline()) {
     logger.error(chalk.red('Internet connection is required for the script to work'))
@@ -56,9 +54,25 @@
     return
   }
+  logger.info('loading channels from api...')
+  const dataStorage = new Storage(DATA_DIR)
+  const channelsData = await dataStorage.json('channels.json')
+  const channels = new Collection(channelsData).map(data => new Channel(data))
+  const channelsGroupedById = channels.keyBy((channel: Channel) => channel.id)
+  const feedsData = await dataStorage.json('feeds.json')
+  const feeds = new Collection(feedsData).map(data =>
+    new Feed(data).withChannel(channelsGroupedById)
+  )
+  const feedsGroupedByChannelId = feeds.groupBy(feed => feed.channel)
   logger.info('loading streams...')
-  const parser = new PlaylistParser({ storage })
-  const files = program.args.length ? program.args : await storage.list(`${STREAMS_DIR}/*.m3u`)
+  const rootStorage = new Storage(ROOT_DIR)
+  const parser = new PlaylistParser({
+    storage: rootStorage,
+    channelsGroupedById,
+    feedsGroupedByChannelId
+  })
+  const files = program.args.length ? program.args : await rootStorage.list(`${STREAMS_DIR}/*.m3u`)
   streams = await parser.parse(files)
   logger.info(`found ${streams.count()} streams`)
@@ -89,7 +103,7 @@
 main()
 async function runTest(stream: Stream) {
-  const key = stream.filepath + stream.channel + stream.url
+  const key = stream.filepath + stream.getId() + stream.url
   results[key] = chalk.white('LOADING...')
   const result = await tester.test(stream)
@@ -125,11 +139,11 @@
     ]
   })
   streams.forEach((stream: Stream, index: number) => {
-    const status = results[stream.filepath + stream.channel + stream.url] || chalk.gray('PENDING')
+    const status = results[stream.filepath + stream.getId() + stream.url] || chalk.gray('PENDING')
     const row = {
       '': index,
-      'tvg-id': stream.channel.length > 25 ? stream.channel.slice(0, 22) + '...' : stream.channel,
+      'tvg-id': stream.getId().length > 25 ? stream.getId().slice(0, 22) + '...' : stream.getId(),
       url: stream.url.length > 100 ? stream.url.slice(0, 97) + '...' : stream.url,
       status
     }

@ -1,45 +1,63 @@
import { Logger, Storage, Collection, Dictionary } from '@freearhey/core' import { Logger, Storage, Collection, Dictionary } from '@freearhey/core'
import { DATA_DIR, STREAMS_DIR } from '../../constants' import { DATA_DIR, STREAMS_DIR } from '../../constants'
import { IssueLoader, PlaylistParser } from '../../core' import { IssueLoader, PlaylistParser } from '../../core'
import { Stream, Playlist, Channel, Issue } from '../../models' import { Stream, Playlist, Channel, Feed, Issue } from '../../models'
import validUrl from 'valid-url' import validUrl from 'valid-url'
import { uniqueId } from 'lodash'
let processedIssues = new Collection() let processedIssues = new Collection()
let streams: Collection
let groupedChannels: Dictionary
let issues: Collection
async function main() { async function main() {
const logger = new Logger({ disabled: true }) const logger = new Logger({ disabled: true })
const loader = new IssueLoader() const loader = new IssueLoader()
logger.info('loading issues...') logger.info('loading issues...')
issues = await loader.load() const issues = await loader.load()
logger.info('loading channels from api...') logger.info('loading channels from api...')
const dataStorage = new Storage(DATA_DIR) const dataStorage = new Storage(DATA_DIR)
const channelsContent = await dataStorage.json('channels.json') const channelsData = await dataStorage.json('channels.json')
groupedChannels = new Collection(channelsContent) const channels = new Collection(channelsData).map(data => new Channel(data))
.map(data => new Channel(data)) const channelsGroupedById = channels.keyBy((channel: Channel) => channel.id)
.keyBy((channel: Channel) => channel.id) const feedsData = await dataStorage.json('feeds.json')
const feeds = new Collection(feedsData).map(data =>
new Feed(data).withChannel(channelsGroupedById)
)
const feedsGroupedByChannelId = feeds.groupBy((feed: Feed) =>
feed.channel ? feed.channel.id : uniqueId()
)
logger.info('loading streams...') logger.info('loading streams...')
const streamsStorage = new Storage(STREAMS_DIR) const streamsStorage = new Storage(STREAMS_DIR)
const parser = new PlaylistParser({ storage: streamsStorage }) const parser = new PlaylistParser({
storage: streamsStorage,
feedsGroupedByChannelId,
channelsGroupedById
})
const files = await streamsStorage.list('**/*.m3u') const files = await streamsStorage.list('**/*.m3u')
streams = await parser.parse(files) const streams = await parser.parse(files)
logger.info('removing broken streams...') logger.info('removing broken streams...')
await removeStreams(loader) await removeStreams({ streams, issues })
logger.info('edit stream description...') logger.info('edit stream description...')
await editStreams(loader) await editStreams({
streams,
issues,
channelsGroupedById,
feedsGroupedByChannelId
})
logger.info('add new streams...') logger.info('add new streams...')
await addStreams(loader) await addStreams({
streams,
issues,
channelsGroupedById,
feedsGroupedByChannelId
})
logger.info('saving...') logger.info('saving...')
const groupedStreams = streams.groupBy((stream: Stream) => stream.filepath) const groupedStreams = streams.groupBy((stream: Stream) => stream.getFilepath())
for (let filepath of groupedStreams.keys()) { for (let filepath of groupedStreams.keys()) {
let streams = groupedStreams.get(filepath) || [] let streams = groupedStreams.get(filepath) || []
streams = streams.filter((stream: Stream) => stream.removed === false) streams = streams.filter((stream: Stream) => stream.removed === false)
@ -54,7 +72,7 @@ async function main() {
main() main()
async function removeStreams(loader: IssueLoader) { async function removeStreams({ streams, issues }: { streams: Collection; issues: Collection }) {
const requests = issues.filter( const requests = issues.filter(
issue => issue.labels.includes('streams:remove') && issue.labels.includes('approved') issue => issue.labels.includes('streams:remove') && issue.labels.includes('approved')
) )
@ -62,22 +80,35 @@ async function removeStreams(loader: IssueLoader) {
const data = issue.data const data = issue.data
if (data.missing('brokenLinks')) return if (data.missing('brokenLinks')) return
const brokenLinks = data.getString('brokenLinks').split(/\r?\n/).filter(Boolean) const brokenLinks = data.getString('brokenLinks') || ''
let changed = false let changed = false
brokenLinks.forEach(link => { brokenLinks
const found: Stream = streams.first((_stream: Stream) => _stream.url === link.trim()) .split(/\r?\n/)
if (found) { .filter(Boolean)
found.removed = true .forEach(link => {
changed = true const found: Stream = streams.first((_stream: Stream) => _stream.url === link.trim())
} if (found) {
}) found.removed = true
changed = true
}
})
if (changed) processedIssues.add(issue.number) if (changed) processedIssues.add(issue.number)
}) })
} }
async function editStreams(loader: IssueLoader) { async function editStreams({
streams,
issues,
channelsGroupedById,
feedsGroupedByChannelId
}: {
streams: Collection
issues: Collection
channelsGroupedById: Dictionary
feedsGroupedByChannelId: Dictionary
}) {
const requests = issues.filter( const requests = issues.filter(
issue => issue.labels.includes('streams:edit') && issue.labels.includes('approved') issue => issue.labels.includes('streams:edit') && issue.labels.includes('approved')
) )
@ -86,59 +117,110 @@ async function editStreams(loader: IssueLoader) {
if (data.missing('streamUrl')) return if (data.missing('streamUrl')) return
let stream = streams.first( let stream: Stream = streams.first(
(_stream: Stream) => _stream.url === data.getString('streamUrl') (_stream: Stream) => _stream.url === data.getString('streamUrl')
) as Stream )
if (!stream) return if (!stream) return
if (data.has('channelId')) { const streamId = data.getString('streamId') || ''
const channel = groupedChannels.get(data.getString('channelId')) const [channelId, feedId] = streamId.split('@')
if (!channel) return if (channelId) {
stream
stream.channel = data.getString('channelId') .setChannelId(channelId)
stream.filepath = `${channel.country.toLowerCase()}.m3u` .setFeedId(feedId)
stream.line = -1 .withChannel(channelsGroupedById)
stream.name = channel.name .withFeed(feedsGroupedByChannelId)
.updateId()
.updateName()
.updateFilepath()
} }
if (data.has('label')) stream.label = data.getString('label') const label = data.getString('label') || ''
if (data.has('quality')) stream.quality = data.getString('quality') const quality = data.getString('quality') || ''
if (data.has('httpUserAgent')) stream.httpUserAgent = data.getString('httpUserAgent') const httpUserAgent = data.getString('httpUserAgent') || ''
if (data.has('httpReferrer')) stream.httpReferrer = data.getString('httpReferrer') const httpReferrer = data.getString('httpReferrer') || ''
if (data.has('label')) stream.setLabel(label)
if (data.has('quality')) stream.setQuality(quality)
if (data.has('httpUserAgent')) stream.setHttpUserAgent(httpUserAgent)
if (data.has('httpReferrer')) stream.setHttpReferrer(httpReferrer)
processedIssues.add(issue.number) processedIssues.add(issue.number)
}) })
} }
async function addStreams(loader: IssueLoader) { async function addStreams({
streams,
issues,
channelsGroupedById,
feedsGroupedByChannelId
}: {
streams: Collection
issues: Collection
channelsGroupedById: Dictionary
feedsGroupedByChannelId: Dictionary
}) {
const requests = issues.filter( const requests = issues.filter(
issue => issue.labels.includes('streams:add') && issue.labels.includes('approved') issue => issue.labels.includes('streams:add') && issue.labels.includes('approved')
) )
requests.forEach((issue: Issue) => { requests.forEach((issue: Issue) => {
const data = issue.data const data = issue.data
if (data.missing('channelId') || data.missing('streamUrl')) return if (data.missing('streamId') || data.missing('streamUrl')) return
if (streams.includes((_stream: Stream) => _stream.url === data.getString('streamUrl'))) return if (streams.includes((_stream: Stream) => _stream.url === data.getString('streamUrl'))) return
if (!validUrl.isUri(data.getString('streamUrl'))) return const stringUrl = data.getString('streamUrl') || ''
if (!isUri(stringUrl)) return
const channel = groupedChannels.get(data.getString('channelId')) const streamId = data.getString('streamId') || ''
const [channelId] = streamId.split('@')
const channel: Channel = channelsGroupedById.get(channelId)
if (!channel) return if (!channel) return
const label = data.getString('label') || ''
const quality = data.getString('quality') || ''
const httpUserAgent = data.getString('httpUserAgent') || ''
const httpReferrer = data.getString('httpReferrer') || ''
const stream = new Stream({ const stream = new Stream({
channel: data.getString('channelId'), tvg: {
url: data.getString('streamUrl'), id: streamId,
label: data.getString('label'), name: '',
quality: data.getString('quality'), url: '',
httpUserAgent: data.getString('httpUserAgent'), logo: '',
httpReferrer: data.getString('httpReferrer'), rec: '',
filepath: `${channel.country.toLowerCase()}.m3u`, shift: ''
},
name: data.getString('channelName') || channel.name,
url: stringUrl,
group: {
title: ''
},
http: {
'user-agent': httpUserAgent,
referrer: httpReferrer
},
line: -1, line: -1,
name: data.getString('channelName') || channel.name raw: '',
timeshift: '',
catchup: {
type: '',
source: '',
days: ''
}
}) })
.withChannel(channelsGroupedById)
.withFeed(feedsGroupedByChannelId)
.setLabel(label)
.setQuality(quality)
.updateName()
.updateFilepath()
streams.add(stream) streams.add(stream)
processedIssues.add(issue.number) processedIssues.add(issue.number)
}) })
} }
function isUri(string: string) {
return validUrl.isUri(encodeURI(string))
}
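
The new `isUri` helper wraps `valid-url`'s `isUri` check in `encodeURI`, which lets submitted URLs that contain spaces or non-ASCII characters pass validation after percent-encoding. A quick illustration (the URL is made up; `valid-url` returns the string when valid and `undefined` otherwise):

```ts
import validUrl from 'valid-url'

const raw = 'https://example.com/play list.m3u8' // hypothetical URL containing a space

// A raw space is not a valid URI character, so the unencoded form is rejected
validUrl.isUri(raw) // undefined

// encodeURI percent-encodes the space, so the same link is accepted
validUrl.isUri(encodeURI(raw)) // 'https://example.com/play%20list.m3u8'
```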

@@ -1,9 +1,9 @@
 import { Logger, Storage, Collection, Dictionary } from '@freearhey/core'
 import { PlaylistParser } from '../../core'
-import { Channel, Stream, Blocked } from '../../models'
+import { Channel, Stream, Blocked, Feed } from '../../models'
 import { program } from 'commander'
 import chalk from 'chalk'
-import _ from 'lodash'
+import { uniqueId } from 'lodash'
 import { DATA_DIR, STREAMS_DIR } from '../../constants'
 program.argument('[filepath]', 'Path to file to validate').parse(process.argv)
@@ -17,41 +17,52 @@ type LogItem = {
 async function main() {
   const logger = new Logger()
-  logger.info(`loading blocklist...`)
+  logger.info('loading data from api...')
   const dataStorage = new Storage(DATA_DIR)
-  const channelsContent = await dataStorage.json('channels.json')
-  const channels = new Collection(channelsContent).map(data => new Channel(data))
+  const channelsData = await dataStorage.json('channels.json')
+  const channels = new Collection(channelsData).map(data => new Channel(data))
+  const channelsGroupedById = channels.keyBy((channel: Channel) => channel.id)
+  const feedsData = await dataStorage.json('feeds.json')
+  const feeds = new Collection(feedsData).map(data =>
+    new Feed(data).withChannel(channelsGroupedById)
+  )
+  const feedsGroupedByChannelId = feeds.groupBy((feed: Feed) =>
+    feed.channel ? feed.channel.id : uniqueId()
+  )
   const blocklistContent = await dataStorage.json('blocklist.json')
   const blocklist = new Collection(blocklistContent).map(data => new Blocked(data))
-  logger.info(`found ${blocklist.count()} records`)
+  const blocklistGroupedByChannelId = blocklist.keyBy((blocked: Blocked) => blocked.channelId)
   logger.info('loading streams...')
   const streamsStorage = new Storage(STREAMS_DIR)
-  const parser = new PlaylistParser({ storage: streamsStorage })
+  const parser = new PlaylistParser({
+    storage: streamsStorage,
+    channelsGroupedById,
+    feedsGroupedByChannelId
+  })
   const files = program.args.length ? program.args : await streamsStorage.list('**/*.m3u')
   const streams = await parser.parse(files)
   logger.info(`found ${streams.count()} streams`)
   let errors = new Collection()
   let warnings = new Collection()
-  let groupedStreams = streams.groupBy((stream: Stream) => stream.filepath)
-  for (const filepath of groupedStreams.keys()) {
-    const streams = groupedStreams.get(filepath)
+  let streamsGroupedByFilepath = streams.groupBy((stream: Stream) => stream.getFilepath())
+  for (const filepath of streamsGroupedByFilepath.keys()) {
+    const streams = streamsGroupedByFilepath.get(filepath)
     if (!streams) continue
     const log = new Collection()
     const buffer = new Dictionary()
     streams.forEach((stream: Stream) => {
-      const invalidId =
-        stream.channel && !channels.first((channel: Channel) => channel.id === stream.channel)
-      if (invalidId) {
-        log.add({
-          type: 'warning',
-          line: stream.line,
-          message: `"${stream.channel}" is not in the database`
-        })
+      if (stream.channelId) {
+        const channel = channelsGroupedById.get(stream.channelId)
+        if (!channel) {
+          log.add({
+            type: 'warning',
+            line: stream.line,
+            message: `"${stream.id}" is not in the database`
+          })
+        }
       }
       const duplicate = stream.url && buffer.has(stream.url)
@@ -65,19 +76,19 @@ async function main() {
         buffer.set(stream.url, true)
       }
-      const blocked = blocklist.first(blocked => stream.channel === blocked.channel)
+      const blocked = stream.channel ? blocklistGroupedByChannelId.get(stream.channel.id) : false
      if (blocked) {
        if (blocked.reason === 'dmca') {
          log.add({
            type: 'error',
            line: stream.line,
-            message: `"${stream.channel}" is on the blocklist due to claims of copyright holders (${blocked.ref})`
+            message: `"${blocked.channelId}" is on the blocklist due to claims of copyright holders (${blocked.ref})`
          })
        } else if (blocked.reason === 'nsfw') {
          log.add({
            type: 'error',
            line: stream.line,
-            message: `"${stream.channel}" is on the blocklist due to NSFW content (${blocked.ref})`
+            message: `"${blocked.channelId}" is on the blocklist due to NSFW content (${blocked.ref})`
          })
        }
      }

@ -1,154 +1,164 @@
import { Logger, Storage, Collection, Dictionary } from '@freearhey/core' import { Logger, Storage, Collection, Dictionary } from '@freearhey/core'
import { DATA_DIR, STREAMS_DIR } from '../../constants' import { DATA_DIR, STREAMS_DIR } from '../../constants'
import { IssueLoader, PlaylistParser } from '../../core' import { IssueLoader, PlaylistParser } from '../../core'
import { Blocked, Channel, Issue, Stream } from '../../models' import { Blocked, Channel, Issue, Stream, Feed } from '../../models'
import { uniqueId } from 'lodash'
async function main() { async function main() {
const logger = new Logger() const logger = new Logger()
const loader = new IssueLoader() const loader = new IssueLoader()
let report = new Collection()
const storage = new Storage(DATA_DIR)
logger.info('loading issues...') logger.info('loading issues...')
const issues = await loader.load() const issues = await loader.load()
logger.info('loading data from api...')
const dataStorage = new Storage(DATA_DIR)
const channelsData = await dataStorage.json('channels.json')
const channels = new Collection(channelsData).map(data => new Channel(data))
const channelsGroupedById = channels.keyBy((channel: Channel) => channel.id)
const feedsData = await dataStorage.json('feeds.json')
const feeds = new Collection(feedsData).map(data =>
new Feed(data).withChannel(channelsGroupedById)
)
const feedsGroupedByChannelId = feeds.groupBy((feed: Feed) =>
feed.channel ? feed.channel.id : uniqueId()
)
const blocklistContent = await dataStorage.json('blocklist.json')
const blocklist = new Collection(blocklistContent).map(data => new Blocked(data))
const blocklistGroupedByChannelId = blocklist.keyBy((blocked: Blocked) => blocked.channelId)
logger.info('loading streams...') logger.info('loading streams...')
const streamsStorage = new Storage(STREAMS_DIR) const streamsStorage = new Storage(STREAMS_DIR)
const parser = new PlaylistParser({ storage: streamsStorage }) const parser = new PlaylistParser({
storage: streamsStorage,
channelsGroupedById,
feedsGroupedByChannelId
})
 const files = await streamsStorage.list('**/*.m3u')
 const streams = await parser.parse(files)
 const streamsGroupedByUrl = streams.groupBy((stream: Stream) => stream.url)
-const streamsGroupedByChannel = streams.groupBy((stream: Stream) => stream.channel)
-
-logger.info('loading channels from api...')
-const channelsContent = await storage.json('channels.json')
-const channelsGroupedById = new Collection(channelsContent)
-  .map(data => new Channel(data))
-  .groupBy((channel: Channel) => channel.id)
-
-logger.info('loading blocklist from api...')
-const blocklistContent = await storage.json('blocklist.json')
-const blocklistGroupedByChannel = new Collection(blocklistContent)
-  .map(data => new Blocked(data))
-  .groupBy((blocked: Blocked) => blocked.channel)
-
-let report = new Collection()
+const streamsGroupedByChannelId = streams.groupBy((stream: Stream) => stream.channelId)
+
+logger.info('checking broken streams reports...')
+const brokenStreamReports = issues.filter(issue =>
+  issue.labels.find((label: string) => label === 'broken stream')
+)
+brokenStreamReports.forEach((issue: Issue) => {
+  const brokenLinks = issue.data.getArray('brokenLinks') || []
+
+  if (!brokenLinks.length) {
+    const result = {
+      issueNumber: issue.number,
+      type: 'broken stream',
+      streamId: undefined,
+      streamUrl: undefined,
+      status: 'missing_link'
+    }
+
+    report.add(result)
+  } else {
+    for (const streamUrl of brokenLinks) {
+      const result = {
+        issueNumber: issue.number,
+        type: 'broken stream',
+        streamId: undefined,
+        streamUrl: truncate(streamUrl),
+        status: 'pending'
+      }
+
+      if (streamsGroupedByUrl.missing(streamUrl)) {
+        result.status = 'wrong_link'
+      }
+
+      report.add(result)
+    }
+  }
+})

 logger.info('checking streams:add requests...')
 const addRequests = issues.filter(issue => issue.labels.includes('streams:add'))
 const addRequestsBuffer = new Dictionary()
 addRequests.forEach((issue: Issue) => {
-  const channelId = issue.data.getString('channelId') || undefined
-  const streamUrl = issue.data.getString('streamUrl')
+  const streamId = issue.data.getString('streamId') || ''
+  const streamUrl = issue.data.getString('streamUrl') || ''
+  const [channelId] = streamId.split('@')

-  const result = new Dictionary({
+  const result = {
     issueNumber: issue.number,
     type: 'streams:add',
-    channelId,
-    streamUrl,
+    streamId: streamId || undefined,
+    streamUrl: truncate(streamUrl),
     status: 'pending'
-  })
+  }

-  if (!channelId) result.set('status', 'missing_id')
-  else if (!streamUrl) result.set('status', 'missing_link')
-  else if (blocklistGroupedByChannel.has(channelId)) result.set('status', 'blocked')
-  else if (channelsGroupedById.missing(channelId)) result.set('status', 'wrong_id')
-  else if (streamsGroupedByUrl.has(streamUrl)) result.set('status', 'on_playlist')
-  else if (addRequestsBuffer.has(streamUrl)) result.set('status', 'duplicate')
-  else result.set('status', 'pending')
+  if (!channelId) result.status = 'missing_id'
+  else if (!streamUrl) result.status = 'missing_link'
+  else if (blocklistGroupedByChannelId.has(channelId)) result.status = 'blocked'
+  else if (channelsGroupedById.missing(channelId)) result.status = 'wrong_id'
+  else if (streamsGroupedByUrl.has(streamUrl)) result.status = 'on_playlist'
+  else if (addRequestsBuffer.has(streamUrl)) result.status = 'duplicate'
+  else result.status = 'pending'

   addRequestsBuffer.set(streamUrl, true)

-  report.add(result.data())
+  report.add(result)
 })

 logger.info('checking streams:edit requests...')
-const editRequests = issues.filter(issue => issue.labels.find(label => label === 'streams:edit'))
+const editRequests = issues.filter(issue =>
+  issue.labels.find((label: string) => label === 'streams:edit')
+)
 editRequests.forEach((issue: Issue) => {
-  const channelId = issue.data.getString('channelId') || undefined
-  const streamUrl = issue.data.getString('streamUrl') || undefined
+  const streamId = issue.data.getString('streamId') || ''
+  const streamUrl = issue.data.getString('streamUrl') || ''
+  const [channelId] = streamId.split('@')

-  const result = new Dictionary({
+  const result = {
     issueNumber: issue.number,
     type: 'streams:edit',
-    channelId,
-    streamUrl,
+    streamId: streamId || undefined,
+    streamUrl: truncate(streamUrl),
     status: 'pending'
-  })
+  }

-  if (!streamUrl) result.set('status', 'missing_link')
-  else if (streamsGroupedByUrl.missing(streamUrl)) result.set('status', 'invalid_link')
-  else if (channelId && channelsGroupedById.missing(channelId)) result.set('status', 'invalid_id')
-
-  report.add(result.data())
-})
-
-logger.info('checking broken streams reports...')
-const brokenStreamReports = issues.filter(issue =>
-  issue.labels.find(label => label === 'broken stream')
-)
-brokenStreamReports.forEach((issue: Issue) => {
-  const brokenLinks = issue.data.getArray('brokenLinks') || []
-
-  if (!brokenLinks.length) {
-    const result = new Dictionary({
-      issueNumber: issue.number,
-      type: 'broken stream',
-      channelId: undefined,
-      streamUrl: undefined,
-      status: 'missing_link'
-    })
-
-    report.add(result.data())
-  } else {
-    for (const streamUrl of brokenLinks) {
-      const result = new Dictionary({
-        issueNumber: issue.number,
-        type: 'broken stream',
-        channelId: undefined,
-        streamUrl: undefined,
-        status: 'pending'
-      })
-
-      if (streamsGroupedByUrl.missing(streamUrl)) {
-        result.set('streamUrl', streamUrl)
-        result.set('status', 'wrong_link')
-      }
-
-      report.add(result.data())
-    }
-  }
+  if (!streamUrl) result.status = 'missing_link'
+  else if (streamsGroupedByUrl.missing(streamUrl)) result.status = 'invalid_link'
+  else if (channelId && channelsGroupedById.missing(channelId)) result.status = 'invalid_id'
+
+  report.add(result)
 })

 logger.info('checking channel search requests...')
 const channelSearchRequests = issues.filter(issue =>
-  issue.labels.find(label => label === 'channel search')
+  issue.labels.find((label: string) => label === 'channel search')
 )
 const channelSearchRequestsBuffer = new Dictionary()
 channelSearchRequests.forEach((issue: Issue) => {
-  const channelId = issue.data.getString('channelId')
+  const streamId = issue.data.getString('channelId') || ''
+  const [channelId] = streamId.split('@')

-  const result = new Dictionary({
+  const result = {
     issueNumber: issue.number,
     type: 'channel search',
-    channelId,
+    streamId: streamId || undefined,
     streamUrl: undefined,
     status: 'pending'
-  })
+  }

-  if (!channelId) result.set('status', 'missing_id')
-  else if (channelsGroupedById.missing(channelId)) result.set('status', 'invalid_id')
-  else if (channelSearchRequestsBuffer.has(channelId)) result.set('status', 'duplicate')
-  else if (blocklistGroupedByChannel.has(channelId)) result.set('status', 'blocked')
-  else if (streamsGroupedByChannel.has(channelId)) result.set('status', 'fulfilled')
+  if (!channelId) result.status = 'missing_id'
+  else if (channelsGroupedById.missing(channelId)) result.status = 'invalid_id'
+  else if (channelSearchRequestsBuffer.has(channelId)) result.status = 'duplicate'
+  else if (blocklistGroupedByChannelId.has(channelId)) result.status = 'blocked'
+  else if (streamsGroupedByChannelId.has(channelId)) result.status = 'fulfilled'
   else {
     const channelData = channelsGroupedById.get(channelId)
-    if (channelData.length && channelData[0].closed) result.set('status', 'closed')
+    if (channelData.length && channelData[0].closed) result.status = 'closed'
   }

   channelSearchRequestsBuffer.set(channelId, true)

-  report.add(result.data())
+  report.add(result)
 })

 report = report.orderBy(item => item.issueNumber).filter(item => item.status !== 'pending')
@ -157,3 +167,10 @@ async function main() {
 }

 main()
+
+function truncate(string: string, limit: number = 100) {
+  if (!string) return string
+  if (string.length < limit) return string
+  return string.slice(0, limit) + '...'
+}

@ -41,7 +41,7 @@ export class ApiClient {
   }

   async download(filename: string) {
-    const stream = await this.storage.createStream(`/temp/data/${filename}`)
+    const stream = await this.storage.createStream(`temp/data/${filename}`)

     const bar = this.progressBar.create(0, 0, { filename })

@ -1,9 +1,10 @@
 import { Table } from 'console-table-printer'
+import { ComplexOptions } from 'console-table-printer/dist/src/models/external-table'

 export class CliTable {
   table: Table

-  constructor(options?) {
+  constructor(options?: ComplexOptions | string[]) {
     this.table = new Table(options)
   }

@ -18,7 +18,7 @@ export class IssueData {
     return Boolean(this._data.get(key))
   }

-  getString(key: string): string {
+  getString(key: string): string | undefined {
     const deleteSymbol = '~'

     return this._data.get(key) === deleteSymbol ? '' : this._data.get(key)

@ -16,7 +16,7 @@ export class IssueLoader {
   }

   let issues: object[] = []
   if (TESTING) {
-    issues = (await import('../../tests/__data__/input/issues/all.js')).default
+    issues = (await import('../../tests/__data__/input/playlist_update/issues.js')).default
   } else {
     issues = await octokit.paginate(octokit.rest.issues.listForRepo, {
       owner: OWNER,

@ -3,11 +3,10 @@ import { Issue } from '../models'
 import { IssueData } from './issueData'

 const FIELDS = new Dictionary({
+  'Stream ID': 'streamId',
   'Channel ID': 'channelId',
-  'Channel ID (required)': 'channelId',
+  'Feed ID': 'feedId',
   'Stream URL': 'streamUrl',
-  'Stream URL (optional)': 'streamUrl',
-  'Stream URL (required)': 'streamUrl',
   'Broken Link': 'brokenLinks',
   'Broken Links': 'brokenLinks',
   Label: 'label',
@ -18,8 +17,7 @@ const FIELDS = new Dictionary({
   'HTTP Referrer': 'httpReferrer',
   'What happened to the stream?': 'reason',
   Reason: 'reason',
-  Notes: 'notes',
-  'Notes (optional)': 'notes'
+  Notes: 'notes'
 })

 export class IssueParser {
@ -30,7 +28,7 @@ export class IssueParser {
     fields.forEach((field: string) => {
       const parsed = typeof field === 'string' ? field.split(/\r?\n/).filter(Boolean) : []
       let _label = parsed.shift()
-      _label = _label ? _label.trim() : ''
+      _label = _label ? _label.replace(/ \(optional\)| \(required\)/, '').trim() : ''
       let _value = parsed.join('\r\n')
       _value = _value ? _value.trim() : ''
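
A standalone sketch of the label normalization introduced above, showing why the separate '(optional)'/'(required)' entries could be dropped from FIELDS; normalize() is a hypothetical helper used only for illustration:

const normalize = (label: string): string =>
  label.replace(/ \(optional\)| \(required\)/, '').trim()

normalize('Stream URL (required)') // 'Stream URL' -> maps to 'streamUrl'
normalize('Notes (optional)')      // 'Notes'      -> maps to 'notes'
normalize('Stream ID')             // unchanged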

@ -1,4 +1,5 @@
 export type LogItem = {
+  type: string
   filepath: string
   count: number
 }

@ -1,12 +1,22 @@
-import { Collection, Storage } from '@freearhey/core'
+import { Collection, Storage, Dictionary } from '@freearhey/core'
 import parser from 'iptv-playlist-parser'
 import { Stream } from '../models'

+type PlaylistPareserProps = {
+  storage: Storage
+  feedsGroupedByChannelId: Dictionary
+  channelsGroupedById: Dictionary
+}
+
 export class PlaylistParser {
   storage: Storage
+  feedsGroupedByChannelId: Dictionary
+  channelsGroupedById: Dictionary

-  constructor({ storage }: { storage: Storage }) {
+  constructor({ storage, feedsGroupedByChannelId, channelsGroupedById }: PlaylistPareserProps) {
     this.storage = storage
+    this.feedsGroupedByChannelId = feedsGroupedByChannelId
+    this.channelsGroupedById = channelsGroupedById
   }

   async parse(files: string[]): Promise<Collection> {
@ -21,41 +31,18 @@ export class PlaylistParser {
   }

   async parseFile(filepath: string): Promise<Collection> {
-    const streams = new Collection()
     const content = await this.storage.load(filepath)
     const parsed: parser.Playlist = parser.parse(content)

-    parsed.items.forEach((item: parser.PlaylistItem) => {
-      const { name, label, quality } = parseTitle(item.name)
-      const stream = new Stream({
-        channel: item.tvg.id,
-        name,
-        label,
-        quality,
-        filepath,
-        line: item.line,
-        url: item.url,
-        httpReferrer: item.http.referrer,
-        httpUserAgent: item.http['user-agent']
-      })
-
-      streams.add(stream)
+    const streams = new Collection(parsed.items).map((data: parser.PlaylistItem) => {
+      const stream = new Stream(data)
+        .withFeed(this.feedsGroupedByChannelId)
+        .withChannel(this.channelsGroupedById)
+        .setFilepath(filepath)
+
+      return stream
     })

     return streams
   }
 }
-
-function parseTitle(title: string): { name: string; label: string; quality: string } {
-  const [, label] = title.match(/ \[(.*)\]$/) || [null, '']
-  title = title.replace(new RegExp(` \\[${escapeRegExp(label)}\\]$`), '')
-  const [, quality] = title.match(/ \(([0-9]+p)\)$/) || [null, '']
-  title = title.replace(new RegExp(` \\(${quality}\\)$`), '')
-
-  return { name: title, label, quality }
-}
-
-function escapeRegExp(text) {
-  return text.replace(/[-[\]{}()*+?.,\\^$|#\s]/g, '\\$&')
-}
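
A rough wiring sketch for the new constructor, assuming `feeds` and `channels` Collections were already loaded from the API data elsewhere and that STREAMS_DIR is a placeholder constant:

const streamsStorage = new Storage(STREAMS_DIR) // placeholder directory constant
const files = await streamsStorage.list('**/*.m3u')

const playlistParser = new PlaylistParser({
  storage: streamsStorage,
  feedsGroupedByChannelId: feeds.groupBy((feed: Feed) => feed.channelId),
  channelsGroupedById: channels.keyBy((channel: Channel) => channel.id)
})

const streams = await playlistParser.parse(files)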

@ -11,15 +11,15 @@ export class StreamTester {
   async test(stream: Stream) {
     if (TESTING) {
-      const results = (await import('../../tests/__data__/input/test_results/all.js')).default
+      const results = (await import('../../tests/__data__/input/playlist_test/results.js')).default

       return results[stream.url]
     } else {
       return this.checker.checkStream({
         url: stream.url,
         http: {
-          referrer: stream.httpReferrer,
-          'user-agent': stream.httpUserAgent
+          referrer: stream.getHttpReferrer(),
+          'user-agent': stream.getHttpUserAgent()
         }
       })
     }

@ -29,11 +29,7 @@ export class CategoriesGenerator implements Generator {
       const categoryStreams = streams
         .filter((stream: Stream) => stream.hasCategory(category))
         .map((stream: Stream) => {
-          const streamCategories = stream.categories
-            .map((category: Category) => category.name)
-            .sort()
-          const groupTitle = stream.categories ? streamCategories.join(';') : ''
-          stream.groupTitle = groupTitle
+          stream.groupTitle = stream.getCategoryNames().join(';')

           return stream
         })
@ -41,13 +37,17 @@ export class CategoriesGenerator implements Generator {
       const playlist = new Playlist(categoryStreams, { public: true })
       const filepath = `categories/${category.id}.m3u`
       await this.storage.save(filepath, playlist.toString())
-      this.logger.info(JSON.stringify({ filepath, count: playlist.streams.count() }))
+      this.logger.info(
+        JSON.stringify({ type: 'category', filepath, count: playlist.streams.count() })
+      )
     })

-    const undefinedStreams = streams.filter((stream: Stream) => stream.noCategories())
+    const undefinedStreams = streams.filter((stream: Stream) => !stream.hasCategories())
     const playlist = new Playlist(undefinedStreams, { public: true })
     const filepath = 'categories/undefined.m3u'
     await this.storage.save(filepath, playlist.toString())
-    this.logger.info(JSON.stringify({ filepath, count: playlist.streams.count() }))
+    this.logger.info(
+      JSON.stringify({ type: 'category', filepath, count: playlist.streams.count() })
+    )
   }
 }
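
A small sketch of the group-title value this now produces, assuming a stream whose channel carries the categories 'Animation' and 'Kids' (illustrative names):

stream.getCategoryNames()           // ['Animation', 'Kids'] (names are returned sorted)
stream.getCategoryNames().join(';') // 'Animation;Kids' -> written as the stream's group-title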

@ -1,12 +1,10 @@
import { Generator } from './generator' import { Generator } from './generator'
import { Collection, Storage, Logger } from '@freearhey/core' import { Collection, Storage, Logger } from '@freearhey/core'
import { Country, Region, Subdivision, Stream, Playlist } from '../models' import { Country, Subdivision, Stream, Playlist } from '../models'
import { PUBLIC_DIR } from '../constants' import { PUBLIC_DIR } from '../constants'
type CountriesGeneratorProps = { type CountriesGeneratorProps = {
streams: Collection streams: Collection
regions: Collection
subdivisions: Collection
countries: Collection countries: Collection
logger: Logger logger: Logger
} }
@ -14,55 +12,37 @@ type CountriesGeneratorProps = {
export class CountriesGenerator implements Generator { export class CountriesGenerator implements Generator {
streams: Collection streams: Collection
countries: Collection countries: Collection
regions: Collection
subdivisions: Collection
storage: Storage storage: Storage
logger: Logger logger: Logger
constructor({ streams, countries, regions, subdivisions, logger }: CountriesGeneratorProps) { constructor({ streams, countries, logger }: CountriesGeneratorProps) {
this.streams = streams this.streams = streams
this.countries = countries this.countries = countries
this.regions = regions
this.subdivisions = subdivisions
this.storage = new Storage(PUBLIC_DIR) this.storage = new Storage(PUBLIC_DIR)
this.logger = logger this.logger = logger
} }
async generate(): Promise<void> { async generate(): Promise<void> {
const streams = this.streams const streams = this.streams
.orderBy([stream => stream.getTitle()]) .orderBy((stream: Stream) => stream.getTitle())
.filter((stream: Stream) => stream.isSFW()) .filter((stream: Stream) => stream.isSFW())
const regions = this.regions.filter((region: Region) => region.code !== 'INT')
this.countries.forEach(async (country: Country) => { this.countries.forEach(async (country: Country) => {
const countrySubdivisions = this.subdivisions.filter( const countryStreams = streams.filter((stream: Stream) =>
(subdivision: Subdivision) => subdivision.country === country.code stream.isBroadcastInCountry(country)
) )
const countrySubdivisionsCodes = countrySubdivisions.map(
(subdivision: Subdivision) => `s/${subdivision.code}`
)
const countryAreaCodes = regions
.filter((region: Region) => region.countries.includes(country.code))
.map((region: Region) => `r/${region.code}`)
.concat(countrySubdivisionsCodes)
.add(`c/${country.code}`)
const countryStreams = streams.filter(stream =>
stream.broadcastArea.intersects(countryAreaCodes)
)
if (countryStreams.isEmpty()) return if (countryStreams.isEmpty()) return
const playlist = new Playlist(countryStreams, { public: true }) const playlist = new Playlist(countryStreams, { public: true })
const filepath = `countries/${country.code.toLowerCase()}.m3u` const filepath = `countries/${country.code.toLowerCase()}.m3u`
await this.storage.save(filepath, playlist.toString()) await this.storage.save(filepath, playlist.toString())
this.logger.info(JSON.stringify({ filepath, count: playlist.streams.count() })) this.logger.info(
JSON.stringify({ type: 'country', filepath, count: playlist.streams.count() })
)
countrySubdivisions.forEach(async (subdivision: Subdivision) => { country.getSubdivisions().forEach(async (subdivision: Subdivision) => {
const subdivisionStreams = streams.filter(stream => const subdivisionStreams = streams.filter((stream: Stream) =>
stream.broadcastArea.includes(`s/${subdivision.code}`) stream.isBroadcastInSubdivision(subdivision)
) )
if (subdivisionStreams.isEmpty()) return if (subdivisionStreams.isEmpty()) return
@ -70,16 +50,22 @@ export class CountriesGenerator implements Generator {
const playlist = new Playlist(subdivisionStreams, { public: true }) const playlist = new Playlist(subdivisionStreams, { public: true })
const filepath = `subdivisions/${subdivision.code.toLowerCase()}.m3u` const filepath = `subdivisions/${subdivision.code.toLowerCase()}.m3u`
await this.storage.save(filepath, playlist.toString()) await this.storage.save(filepath, playlist.toString())
this.logger.info(JSON.stringify({ filepath, count: playlist.streams.count() })) this.logger.info(
JSON.stringify({ type: 'subdivision', filepath, count: playlist.streams.count() })
)
}) })
}) })
const internationalStreams = streams.filter(stream => stream.isInternational()) const undefinedStreams = streams.filter((stream: Stream) => !stream.hasBroadcastArea())
if (internationalStreams.notEmpty()) { const undefinedPlaylist = new Playlist(undefinedStreams, { public: true })
const playlist = new Playlist(internationalStreams, { public: true }) const undefinedFilepath = 'countries/undefined.m3u'
const filepath = 'countries/int.m3u' await this.storage.save(undefinedFilepath, undefinedPlaylist.toString())
await this.storage.save(filepath, playlist.toString()) this.logger.info(
this.logger.info(JSON.stringify({ filepath, count: playlist.streams.count() })) JSON.stringify({
} type: 'country',
filepath: undefinedFilepath,
count: undefinedPlaylist.streams.count()
})
)
} }
} }
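
A sketch of the per-country filtering the generator now relies on; `france` stands for a Country instance with code 'FR' and is purely illustrative:

// The broadcast-area expansion now happens once, inside Feed, so the generator
// only needs a single predicate per country.
const franceStreams = streams.filter((stream: Stream) => stream.isBroadcastInCountry(france))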

@ -26,14 +26,14 @@ export class IndexCategoryGenerator implements Generator {
     let groupedStreams = new Collection()
     streams.forEach((stream: Stream) => {
-      if (stream.noCategories()) {
+      if (!stream.hasCategories()) {
         const streamClone = stream.clone()
         streamClone.groupTitle = 'Undefined'
         groupedStreams.add(streamClone)
         return
       }

-      stream.categories.forEach((category: Category) => {
+      stream.getCategories().forEach((category: Category) => {
         const streamClone = stream.clone()
         streamClone.groupTitle = category.name
         groupedStreams.push(streamClone)
@ -48,6 +48,6 @@ export class IndexCategoryGenerator implements Generator {
     const playlist = new Playlist(groupedStreams, { public: true })
     const filepath = 'index.category.m3u'
     await this.storage.save(filepath, playlist.toString())
-    this.logger.info(JSON.stringify({ filepath, count: playlist.streams.count() }))
+    this.logger.info(JSON.stringify({ type: 'index', filepath, count: playlist.streams.count() }))
   }
 }

@ -1,29 +1,20 @@
import { Generator } from './generator' import { Generator } from './generator'
import { Collection, Storage, Logger } from '@freearhey/core' import { Collection, Storage, Logger } from '@freearhey/core'
import { Stream, Playlist, Country, Subdivision, Region } from '../models' import { Stream, Playlist, Country } from '../models'
import { PUBLIC_DIR } from '../constants' import { PUBLIC_DIR } from '../constants'
type IndexCountryGeneratorProps = { type IndexCountryGeneratorProps = {
streams: Collection streams: Collection
regions: Collection
countries: Collection
subdivisions: Collection
logger: Logger logger: Logger
} }
export class IndexCountryGenerator implements Generator { export class IndexCountryGenerator implements Generator {
streams: Collection streams: Collection
countries: Collection
regions: Collection
subdivisions: Collection
storage: Storage storage: Storage
logger: Logger logger: Logger
constructor({ streams, regions, countries, subdivisions, logger }: IndexCountryGeneratorProps) { constructor({ streams, logger }: IndexCountryGeneratorProps) {
this.streams = streams this.streams = streams
this.countries = countries
this.regions = regions
this.subdivisions = subdivisions
this.storage = new Storage(PUBLIC_DIR) this.storage = new Storage(PUBLIC_DIR)
this.logger = logger this.logger = logger
} }
@ -32,10 +23,10 @@ export class IndexCountryGenerator implements Generator {
let groupedStreams = new Collection() let groupedStreams = new Collection()
this.streams this.streams
.orderBy(stream => stream.getTitle()) .orderBy((stream: Stream) => stream.getTitle())
.filter(stream => stream.isSFW()) .filter((stream: Stream) => stream.isSFW())
.forEach(stream => { .forEach((stream: Stream) => {
if (stream.noBroadcastArea()) { if (!stream.hasBroadcastArea()) {
const streamClone = stream.clone() const streamClone = stream.clone()
streamClone.groupTitle = 'Undefined' streamClone.groupTitle = 'Undefined'
groupedStreams.add(streamClone) groupedStreams.add(streamClone)
@ -48,7 +39,7 @@ export class IndexCountryGenerator implements Generator {
groupedStreams.add(streamClone) groupedStreams.add(streamClone)
} }
this.getStreamBroadcastCountries(stream).forEach((country: Country) => { stream.getBroadcastCountries().forEach((country: Country) => {
const streamClone = stream.clone() const streamClone = stream.clone()
streamClone.groupTitle = country.name streamClone.groupTitle = country.name
groupedStreams.add(streamClone) groupedStreams.add(streamClone)
@ -65,40 +56,6 @@ export class IndexCountryGenerator implements Generator {
const playlist = new Playlist(groupedStreams, { public: true }) const playlist = new Playlist(groupedStreams, { public: true })
const filepath = 'index.country.m3u' const filepath = 'index.country.m3u'
await this.storage.save(filepath, playlist.toString()) await this.storage.save(filepath, playlist.toString())
this.logger.info(JSON.stringify({ filepath, count: playlist.streams.count() })) this.logger.info(JSON.stringify({ type: 'index', filepath, count: playlist.streams.count() }))
}
getStreamBroadcastCountries(stream: Stream) {
const groupedRegions = this.regions.keyBy((region: Region) => region.code)
const groupedCountries = this.countries.keyBy((country: Country) => country.code)
const groupedSubdivisions = this.subdivisions.keyBy(
(subdivision: Subdivision) => subdivision.code
)
let broadcastCountries = new Collection()
stream.broadcastArea.forEach(broadcastAreaCode => {
const [type, code] = broadcastAreaCode.split('/')
switch (type) {
case 'c':
broadcastCountries.add(code)
break
case 'r':
if (code !== 'INT' && groupedRegions.has(code)) {
broadcastCountries = broadcastCountries.concat(groupedRegions.get(code).countries)
}
break
case 's':
if (groupedSubdivisions.has(code)) {
broadcastCountries.add(groupedSubdivisions.get(code).country)
}
break
}
})
return broadcastCountries
.uniq()
.map(code => groupedCountries.get(code))
.filter(Boolean)
} }
} }

@ -27,6 +27,6 @@ export class IndexGenerator implements Generator {
     const playlist = new Playlist(sfwStreams, { public: true })
     const filepath = 'index.m3u'
     await this.storage.save(filepath, playlist.toString())
-    this.logger.info(JSON.stringify({ filepath, count: playlist.streams.count() }))
+    this.logger.info(JSON.stringify({ type: 'index', filepath, count: playlist.streams.count() }))
   }
 }

@ -22,17 +22,17 @@ export class IndexLanguageGenerator implements Generator {
async generate(): Promise<void> { async generate(): Promise<void> {
let groupedStreams = new Collection() let groupedStreams = new Collection()
this.streams this.streams
.orderBy(stream => stream.getTitle()) .orderBy((stream: Stream) => stream.getTitle())
.filter(stream => stream.isSFW()) .filter((stream: Stream) => stream.isSFW())
.forEach(stream => { .forEach((stream: Stream) => {
if (stream.noLanguages()) { if (!stream.hasLanguages()) {
const streamClone = stream.clone() const streamClone = stream.clone()
streamClone.groupTitle = 'Undefined' streamClone.groupTitle = 'Undefined'
groupedStreams.add(streamClone) groupedStreams.add(streamClone)
return return
} }
stream.languages.forEach((language: Language) => { stream.getLanguages().forEach((language: Language) => {
const streamClone = stream.clone() const streamClone = stream.clone()
streamClone.groupTitle = language.name streamClone.groupTitle = language.name
groupedStreams.add(streamClone) groupedStreams.add(streamClone)
@ -47,6 +47,6 @@ export class IndexLanguageGenerator implements Generator {
const playlist = new Playlist(groupedStreams, { public: true }) const playlist = new Playlist(groupedStreams, { public: true })
const filepath = 'index.language.m3u' const filepath = 'index.language.m3u'
await this.storage.save(filepath, playlist.toString()) await this.storage.save(filepath, playlist.toString())
this.logger.info(JSON.stringify({ filepath, count: playlist.streams.count() })) this.logger.info(JSON.stringify({ type: 'index', filepath, count: playlist.streams.count() }))
} }
} }

@ -25,6 +25,6 @@ export class IndexNsfwGenerator implements Generator {
     const playlist = new Playlist(allStreams, { public: true })
     const filepath = 'index.nsfw.m3u'
     await this.storage.save(filepath, playlist.toString())
-    this.logger.info(JSON.stringify({ filepath, count: playlist.streams.count() }))
+    this.logger.info(JSON.stringify({ type: 'index', filepath, count: playlist.streams.count() }))
   }
 }

@ -28,14 +28,21 @@ export class IndexRegionGenerator implements Generator {
.orderBy((stream: Stream) => stream.getTitle()) .orderBy((stream: Stream) => stream.getTitle())
.filter((stream: Stream) => stream.isSFW()) .filter((stream: Stream) => stream.isSFW())
.forEach((stream: Stream) => { .forEach((stream: Stream) => {
if (stream.noBroadcastArea()) { if (stream.isInternational()) {
const streamClone = stream.clone()
streamClone.groupTitle = 'International'
groupedStreams.push(streamClone)
return
}
if (!stream.hasBroadcastArea()) {
const streamClone = stream.clone() const streamClone = stream.clone()
streamClone.groupTitle = 'Undefined' streamClone.groupTitle = 'Undefined'
groupedStreams.push(streamClone) groupedStreams.push(streamClone)
return return
} }
this.getStreamRegions(stream).forEach((region: Region) => { stream.getBroadcastRegions().forEach((region: Region) => {
const streamClone = stream.clone() const streamClone = stream.clone()
streamClone.groupTitle = region.name streamClone.groupTitle = region.name
groupedStreams.push(streamClone) groupedStreams.push(streamClone)
@ -43,41 +50,14 @@ export class IndexRegionGenerator implements Generator {
}) })
groupedStreams = groupedStreams.orderBy((stream: Stream) => { groupedStreams = groupedStreams.orderBy((stream: Stream) => {
if (stream.groupTitle === 'Undefined') return 'ZZ' if (stream.groupTitle === 'International') return 'ZZ'
if (stream.groupTitle === 'Undefined') return 'ZZZ'
return stream.groupTitle return stream.groupTitle
}) })
const playlist = new Playlist(groupedStreams, { public: true }) const playlist = new Playlist(groupedStreams, { public: true })
const filepath = 'index.region.m3u' const filepath = 'index.region.m3u'
await this.storage.save(filepath, playlist.toString()) await this.storage.save(filepath, playlist.toString())
this.logger.info(JSON.stringify({ filepath, count: playlist.streams.count() })) this.logger.info(JSON.stringify({ type: 'index', filepath, count: playlist.streams.count() }))
}
getStreamRegions(stream: Stream) {
let streamRegions = new Collection()
stream.broadcastArea.forEach(broadcastAreaCode => {
const [type, code] = broadcastAreaCode.split('/')
switch (type) {
case 'r':
const groupedRegions = this.regions.keyBy((region: Region) => region.code)
streamRegions.add(groupedRegions.get(code))
break
case 's':
const [countryCode] = code.split('-')
const subdivisionRegions = this.regions.filter((region: Region) =>
region.countries.includes(countryCode)
)
streamRegions = streamRegions.concat(subdivisionRegions)
break
case 'c':
const countryRegions = this.regions.filter((region: Region) =>
region.countries.includes(code)
)
streamRegions = streamRegions.concat(countryRegions)
break
}
})
return streamRegions
} }
} }
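
A short illustration (under the assumption that group titles are compared as plain strings) of why mapping the two special groups to 'ZZ' and 'ZZZ' sorts them after every real region name while keeping 'International' ahead of 'Undefined':

const key = (title: string) =>
  title === 'International' ? 'ZZ' : title === 'Undefined' ? 'ZZZ' : title

const ordered = ['Americas', 'International', 'Asia', 'Undefined'].sort((a, b) =>
  key(a).localeCompare(key(b))
)
// ordered -> ['Americas', 'Asia', 'International', 'Undefined']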

@ -18,35 +18,40 @@ export class LanguagesGenerator implements Generator {
async generate(): Promise<void> { async generate(): Promise<void> {
const streams = this.streams const streams = this.streams
.orderBy(stream => stream.getTitle()) .orderBy((stream: Stream) => stream.getTitle())
.filter(stream => stream.isSFW()) .filter((stream: Stream) => stream.isSFW())
let languages = new Collection() let languages = new Collection()
streams.forEach((stream: Stream) => { streams.forEach((stream: Stream) => {
languages = languages.concat(stream.languages) languages = languages.concat(stream.getLanguages())
}) })
languages languages
.filter(Boolean)
.uniqBy((language: Language) => language.code) .uniqBy((language: Language) => language.code)
.orderBy((language: Language) => language.name) .orderBy((language: Language) => language.name)
.forEach(async (language: Language) => { .forEach(async (language: Language) => {
const languageStreams = streams.filter(stream => stream.hasLanguage(language)) const languageStreams = streams.filter((stream: Stream) => stream.hasLanguage(language))
if (languageStreams.isEmpty()) return if (languageStreams.isEmpty()) return
const playlist = new Playlist(languageStreams, { public: true }) const playlist = new Playlist(languageStreams, { public: true })
const filepath = `languages/${language.code}.m3u` const filepath = `languages/${language.code}.m3u`
await this.storage.save(filepath, playlist.toString()) await this.storage.save(filepath, playlist.toString())
this.logger.info(JSON.stringify({ filepath, count: playlist.streams.count() })) this.logger.info(
JSON.stringify({ type: 'language', filepath, count: playlist.streams.count() })
)
}) })
const undefinedStreams = streams.filter(stream => stream.noLanguages()) const undefinedStreams = streams.filter((stream: Stream) => !stream.hasLanguages())
if (undefinedStreams.isEmpty()) return if (undefinedStreams.isEmpty()) return
const playlist = new Playlist(undefinedStreams, { public: true }) const playlist = new Playlist(undefinedStreams, { public: true })
const filepath = 'languages/undefined.m3u' const filepath = 'languages/undefined.m3u'
await this.storage.save(filepath, playlist.toString()) await this.storage.save(filepath, playlist.toString())
this.logger.info(JSON.stringify({ filepath, count: playlist.streams.count() })) this.logger.info(
JSON.stringify({ type: 'language', filepath, count: playlist.streams.count() })
)
} }
} }

@ -1,53 +1,61 @@
import { Generator } from './generator' import { Generator } from './generator'
import { Collection, Storage, Logger } from '@freearhey/core' import { Collection, Storage, Logger } from '@freearhey/core'
import { Playlist, Subdivision, Region } from '../models' import { Playlist, Region, Stream } from '../models'
import { PUBLIC_DIR } from '../constants' import { PUBLIC_DIR } from '../constants'
type RegionsGeneratorProps = { type RegionsGeneratorProps = {
streams: Collection streams: Collection
regions: Collection regions: Collection
subdivisions: Collection
logger: Logger logger: Logger
} }
export class RegionsGenerator implements Generator { export class RegionsGenerator implements Generator {
streams: Collection streams: Collection
regions: Collection regions: Collection
subdivisions: Collection
storage: Storage storage: Storage
logger: Logger logger: Logger
constructor({ streams, regions, subdivisions, logger }: RegionsGeneratorProps) { constructor({ streams, regions, logger }: RegionsGeneratorProps) {
this.streams = streams this.streams = streams
this.regions = regions this.regions = regions
this.subdivisions = subdivisions
this.storage = new Storage(PUBLIC_DIR) this.storage = new Storage(PUBLIC_DIR)
this.logger = logger this.logger = logger
} }
async generate(): Promise<void> { async generate(): Promise<void> {
const streams = this.streams const streams = this.streams
.orderBy(stream => stream.getTitle()) .orderBy((stream: Stream) => stream.getTitle())
.filter(stream => stream.isSFW()) .filter((stream: Stream) => stream.isSFW())
this.regions.forEach(async (region: Region) => { this.regions.forEach(async (region: Region) => {
if (region.code === 'INT') return if (region.isWorldwide()) return
const regionSubdivisionsCodes = this.subdivisions const regionStreams = streams.filter((stream: Stream) => stream.isBroadcastInRegion(region))
.filter((subdivision: Subdivision) => region.countries.indexOf(subdivision.country) > -1)
.map((subdivision: Subdivision) => `s/${subdivision.code}`)
const regionCodes = region.countries
.map((code: string) => `c/${code}`)
.concat(regionSubdivisionsCodes)
.add(`r/${region.code}`)
const regionStreams = streams.filter(stream => stream.broadcastArea.intersects(regionCodes))
const playlist = new Playlist(regionStreams, { public: true }) const playlist = new Playlist(regionStreams, { public: true })
const filepath = `regions/${region.code.toLowerCase()}.m3u` const filepath = `regions/${region.code.toLowerCase()}.m3u`
await this.storage.save(filepath, playlist.toString()) await this.storage.save(filepath, playlist.toString())
this.logger.info(JSON.stringify({ filepath, count: playlist.streams.count() })) this.logger.info(
JSON.stringify({ type: 'region', filepath, count: playlist.streams.count() })
)
}) })
const internationalStreams = streams.filter((stream: Stream) => stream.isInternational())
const internationalPlaylist = new Playlist(internationalStreams, { public: true })
const internationalFilepath = 'regions/int.m3u'
await this.storage.save(internationalFilepath, internationalPlaylist.toString())
this.logger.info(
JSON.stringify({
type: 'region',
filepath: internationalFilepath,
count: internationalPlaylist.streams.count()
})
)
const undefinedStreams = streams.filter((stream: Stream) => !stream.hasBroadcastArea())
const playlist = new Playlist(undefinedStreams, { public: true })
const filepath = 'regions/undefined.m3u'
await this.storage.save(filepath, playlist.toString())
this.logger.info(JSON.stringify({ type: 'region', filepath, count: playlist.streams.count() }))
} }
} }

@ -5,13 +5,13 @@ type BlockedProps = {
 }

 export class Blocked {
-  channel: string
+  channelId: string
   reason: string
   ref: string

-  constructor({ ref, reason, channel }: BlockedProps) {
-    this.channel = channel
-    this.reason = reason
-    this.ref = ref
+  constructor(data: BlockedProps) {
+    this.channelId = data.channel
+    this.reason = data.reason
+    this.ref = data.ref
   }
 }

@ -0,0 +1,11 @@
type BroadcastAreaProps = {
code: string
}
export class BroadcastArea {
code: string
constructor(data: BroadcastAreaProps) {
this.code = data.code
}
}

@ -1,4 +1,4 @@
-type CategoryProps = {
+type CategoryData = {
   id: string
   name: string
 }
@ -7,8 +7,8 @@ export class Category {
   id: string
   name: string

-  constructor({ id, name }: CategoryProps) {
-    this.id = id
-    this.name = name
+  constructor(data: CategoryData) {
+    this.id = data.id
+    this.name = data.name
   }
 }

@ -1,17 +1,16 @@
import { Collection } from '@freearhey/core' import { Collection, Dictionary } from '@freearhey/core'
import { Category, Country, Subdivision } from './index'
type ChannelProps = { type ChannelData = {
id: string id: string
name: string name: string
alt_names: string[] alt_names: string[]
network: string network: string
owners: string[] owners: Collection
country: string country: string
subdivision: string subdivision: string
city: string city: string
broadcast_area: string[] categories: Collection
languages: string[]
categories: string[]
is_nsfw: boolean is_nsfw: boolean
launched: string launched: string
closed: string closed: string
@ -24,56 +23,86 @@ export class Channel {
id: string id: string
name: string name: string
altNames: Collection altNames: Collection
network: string network?: string
owners: Collection owners: Collection
country: string countryCode: string
subdivision: string country?: Country
city: string subdivisionCode?: string
broadcastArea: Collection subdivision?: Subdivision
languages: Collection cityName?: string
categories: Collection categoryIds: Collection
categories?: Collection
isNSFW: boolean isNSFW: boolean
launched: string launched?: string
closed: string closed?: string
replacedBy: string replacedBy?: string
website: string website?: string
logo: string logo: string
constructor({ constructor(data: ChannelData) {
id, this.id = data.id
name, this.name = data.name
alt_names, this.altNames = new Collection(data.alt_names)
network, this.network = data.network || undefined
owners, this.owners = new Collection(data.owners)
country, this.countryCode = data.country
subdivision, this.subdivisionCode = data.subdivision || undefined
city, this.cityName = data.city || undefined
broadcast_area, this.categoryIds = new Collection(data.categories)
languages, this.isNSFW = data.is_nsfw
categories, this.launched = data.launched || undefined
is_nsfw, this.closed = data.closed || undefined
launched, this.replacedBy = data.replaced_by || undefined
closed, this.website = data.website || undefined
replaced_by, this.logo = data.logo
website, }
logo
}: ChannelProps) { withSubdivision(subdivisionsGroupedByCode: Dictionary): this {
this.id = id if (!this.subdivisionCode) return this
this.name = name
this.altNames = new Collection(alt_names) this.subdivision = subdivisionsGroupedByCode.get(this.subdivisionCode)
this.network = network
this.owners = new Collection(owners) return this
this.country = country }
this.subdivision = subdivision
this.city = city withCountry(countriesGroupedByCode: Dictionary): this {
this.broadcastArea = new Collection(broadcast_area) this.country = countriesGroupedByCode.get(this.countryCode)
this.languages = new Collection(languages)
this.categories = new Collection(categories) return this
this.isNSFW = is_nsfw }
this.launched = launched
this.closed = closed withCategories(groupedCategories: Dictionary): this {
this.replacedBy = replaced_by this.categories = this.categoryIds
this.website = website .map((id: string) => groupedCategories.get(id))
this.logo = logo .filter(Boolean)
return this
}
getCountry(): Country | undefined {
return this.country
}
getSubdivision(): Subdivision | undefined {
return this.subdivision
}
getCategories(): Collection {
return this.categories || new Collection()
}
hasCategories(): boolean {
return !!this.categories && this.categories.notEmpty()
}
hasCategory(category: Category): boolean {
return (
!!this.categories &&
this.categories.includes((_category: Category) => _category.id === category.id)
)
}
isSFW(): boolean {
return this.isNSFW === false
} }
} }
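
A rough sketch of the intended resolution chain for the reworked Channel model, assuming the grouped lookups (countriesGroupedByCode, subdivisionsGroupedByCode, categoriesGroupedById) were built with keyBy() from the API data and `data` is one entry of channels.json:

const channel = new Channel(data)
  .withCountry(countriesGroupedByCode)
  .withSubdivision(subdivisionsGroupedByCode)
  .withCategories(categoriesGroupedById)

channel.getCountry()    // Country | undefined
channel.getCategories() // Collection (empty if nothing was attached)
channel.isSFW()         // true unless is_nsfw was set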

@ -1,20 +1,58 @@
type CountryProps = { import { Collection, Dictionary } from '@freearhey/core'
import { Region, Language } from '.'
type CountryData = {
code: string code: string
name: string name: string
languages: string[] lang: string
flag: string flag: string
} }
export class Country { export class Country {
code: string code: string
name: string name: string
languages: string[]
flag: string flag: string
languageCode: string
language?: Language
subdivisions?: Collection
regions?: Collection
constructor(data: CountryData) {
this.code = data.code
this.name = data.name
this.flag = data.flag
this.languageCode = data.lang
}
withSubdivisions(subdivisionsGroupedByCountryCode: Dictionary): this {
this.subdivisions = subdivisionsGroupedByCountryCode.get(this.code) || new Collection()
return this
}
withRegions(regions: Collection): this {
this.regions = regions.filter(
(region: Region) => region.code !== 'INT' && region.includesCountryCode(this.code)
)
return this
}
withLanguage(languagesGroupedByCode: Dictionary): this {
this.language = languagesGroupedByCode.get(this.languageCode)
return this
}
getLanguage(): Language | undefined {
return this.language
}
getRegions(): Collection {
return this.regions || new Collection()
}
constructor({ code, name, languages, flag }: CountryProps) { getSubdivisions(): Collection {
this.code = code return this.subdivisions || new Collection()
this.name = name
this.languages = languages
this.flag = flag
} }
} }

@ -0,0 +1,200 @@
import { Collection, Dictionary } from '@freearhey/core'
import { Country, Language, Region, Channel, Subdivision } from './index'
type FeedData = {
channel: string
id: string
name: string
is_main: boolean
broadcast_area: Collection
languages: Collection
timezones: Collection
video_format: string
}
export class Feed {
channelId: string
channel?: Channel
id: string
name: string
isMain: boolean
broadcastAreaCodes: Collection
broadcastCountryCodes: Collection
broadcastCountries?: Collection
broadcastRegionCodes: Collection
broadcastRegions?: Collection
broadcastSubdivisionCodes: Collection
broadcastSubdivisions?: Collection
languageCodes: Collection
languages?: Collection
timezoneIds: Collection
timezones?: Collection
videoFormat: string
constructor(data: FeedData) {
this.channelId = data.channel
this.id = data.id
this.name = data.name
this.isMain = data.is_main
this.broadcastAreaCodes = new Collection(data.broadcast_area)
this.languageCodes = new Collection(data.languages)
this.timezoneIds = new Collection(data.timezones)
this.videoFormat = data.video_format
this.broadcastCountryCodes = new Collection()
this.broadcastRegionCodes = new Collection()
this.broadcastSubdivisionCodes = new Collection()
this.broadcastAreaCodes.forEach((areaCode: string) => {
const [type, code] = areaCode.split('/')
switch (type) {
case 'c':
this.broadcastCountryCodes.add(code)
break
case 'r':
this.broadcastRegionCodes.add(code)
break
case 's':
this.broadcastSubdivisionCodes.add(code)
break
}
})
}
withChannel(channelsGroupedById: Dictionary): this {
this.channel = channelsGroupedById.get(this.channelId)
return this
}
withLanguages(languagesGroupedByCode: Dictionary): this {
this.languages = this.languageCodes
.map((code: string) => languagesGroupedByCode.get(code))
.filter(Boolean)
return this
}
withTimezones(timezonesGroupedById: Dictionary): this {
this.timezones = this.timezoneIds
.map((id: string) => timezonesGroupedById.get(id))
.filter(Boolean)
return this
}
withBroadcastSubdivisions(subdivisionsGroupedByCode: Dictionary): this {
this.broadcastSubdivisions = this.broadcastSubdivisionCodes.map((code: string) =>
subdivisionsGroupedByCode.get(code)
)
return this
}
withBroadcastCountries(
countriesGroupedByCode: Dictionary,
regionsGroupedByCode: Dictionary,
subdivisionsGroupedByCode: Dictionary
): this {
let broadcastCountries = new Collection()
if (this.isInternational()) {
this.broadcastCountries = broadcastCountries
return this
}
this.broadcastCountryCodes.forEach((code: string) => {
broadcastCountries.add(countriesGroupedByCode.get(code))
})
this.broadcastRegionCodes.forEach((code: string) => {
const region: Region = regionsGroupedByCode.get(code)
if (region) {
region.countryCodes.forEach((countryCode: string) => {
broadcastCountries.add(countriesGroupedByCode.get(countryCode))
})
}
})
this.broadcastSubdivisionCodes.forEach((code: string) => {
const subdivision: Subdivision = subdivisionsGroupedByCode.get(code)
if (subdivision) {
broadcastCountries.add(countriesGroupedByCode.get(subdivision.countryCode))
}
})
this.broadcastCountries = broadcastCountries.uniq().filter(Boolean)
return this
}
withBroadcastRegions(regions: Collection): this {
if (!this.broadcastCountries) return this
const countriesCodes = this.broadcastCountries.map((country: Country) => country.code)
this.broadcastRegions = regions.filter((region: Region) => {
if (region.code === 'INT') return false
return region.countryCodes.intersects(countriesCodes)
})
return this
}
hasBroadcastArea(): boolean {
return (
this.isInternational() || (!!this.broadcastCountries && this.broadcastCountries.notEmpty())
)
}
getBroadcastCountries(): Collection {
return this.broadcastCountries || new Collection()
}
getBroadcastRegions(): Collection {
return this.broadcastRegions || new Collection()
}
getTimezones(): Collection {
return this.timezones || new Collection()
}
getLanguages(): Collection {
return this.languages || new Collection()
}
hasLanguages(): boolean {
return !!this.languages && this.languages.notEmpty()
}
hasLanguage(language: Language): boolean {
return (
!!this.languages &&
this.languages.includes((_language: Language) => _language.code === language.code)
)
}
isInternational(): boolean {
return this.broadcastAreaCodes.includes('r/INT')
}
isBroadcastInSubdivision(subdivision: Subdivision): boolean {
if (this.isInternational()) return false
return this.broadcastSubdivisionCodes.includes(subdivision.code)
}
isBroadcastInCountry(country: Country): boolean {
if (this.isInternational()) return false
return this.getBroadcastCountries().includes(
(_country: Country) => _country.code === country.code
)
}
isBroadcastInRegion(region: Region): boolean {
if (this.isInternational()) return false
return this.getBroadcastRegions().includes((_region: Region) => _region.code === region.code)
}
}
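
A sketch of how a feed's broadcast area might be resolved before the country/region checks are used; the grouped Dictionaries and the `regions` Collection are assumed to come from the loaded API data, and `canada` stands for a Country instance:

const feed = new Feed(data)
  .withBroadcastCountries(countriesGroupedByCode, regionsGroupedByCode, subdivisionsGroupedByCode)
  .withBroadcastRegions(regions)

feed.isInternational()            // true only when broadcast_area contains 'r/INT'
feed.isBroadcastInCountry(canada) // always false for international feeds, otherwise a code match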

@ -8,3 +8,6 @@ export * from './language'
 export * from './country'
 export * from './region'
 export * from './subdivision'
+export * from './feed'
+export * from './broadcastArea'
+export * from './timezone'

@ -1,4 +1,4 @@
-type LanguageProps = {
+type LanguageData = {
   code: string
   name: string
 }
@ -7,8 +7,8 @@ export class Language {
   code: string
   name: string

-  constructor({ code, name }: LanguageProps) {
-    this.code = code
-    this.name = name
+  constructor(data: LanguageData) {
+    this.code = data.code
+    this.name = data.name
   }
 }

@ -1,6 +1,7 @@
import { Collection } from '@freearhey/core' import { Collection, Dictionary } from '@freearhey/core'
import { Subdivision } from '.'
type RegionProps = { type RegionData = {
code: string code: string
name: string name: string
countries: string[] countries: string[]
@ -9,11 +10,43 @@ type RegionProps = {
export class Region { export class Region {
code: string code: string
name: string name: string
countries: Collection countryCodes: Collection
countries?: Collection
subdivisions?: Collection
constructor({ code, name, countries }: RegionProps) { constructor(data: RegionData) {
this.code = code this.code = data.code
this.name = name this.name = data.name
this.countries = new Collection(countries) this.countryCodes = new Collection(data.countries)
}
withCountries(countriesGroupedByCode: Dictionary): this {
this.countries = this.countryCodes.map((code: string) => countriesGroupedByCode.get(code))
return this
}
withSubdivisions(subdivisions: Collection): this {
this.subdivisions = subdivisions.filter(
(subdivision: Subdivision) => this.countryCodes.indexOf(subdivision.countryCode) > -1
)
return this
}
getSubdivisions(): Collection {
return this.subdivisions || new Collection()
}
getCountries(): Collection {
return this.countries || new Collection()
}
includesCountryCode(code: string): boolean {
return this.countryCodes.includes((countryCode: string) => countryCode === code)
}
isWorldwide(): boolean {
return this.code === 'INT'
} }
} }

@ -1,64 +1,193 @@
import { URL, Collection } from '@freearhey/core' import { URL, Collection, Dictionary } from '@freearhey/core'
import { Category, Language } from './index' import { Feed, Channel, Category, Region, Subdivision, Country, Language } from './index'
import parser from 'iptv-playlist-parser'
type StreamProps = { export class Stream {
name: string name: string
url: string url: string
filepath: string id?: string
groupTitle: string
channelId?: string
channel?: Channel
feedId?: string
feed?: Feed
filepath?: string
line: number line: number
channel?: string label?: string
verticalResolution?: number
isInterlaced?: boolean
httpReferrer?: string httpReferrer?: string
httpUserAgent?: string httpUserAgent?: string
label?: string
quality?: string
}
export class Stream {
channel: string
filepath: string
line: number
httpReferrer: string
label: string
name: string
quality: string
url: string
httpUserAgent: string
logo: string
broadcastArea: Collection
categories: Collection
languages: Collection
isNSFW: boolean
groupTitle: string
removed: boolean = false removed: boolean = false
constructor({ constructor(data: parser.PlaylistItem) {
channel, if (!data.name) throw new Error('"name" property is required')
filepath, if (!data.url) throw new Error('"url" property is required')
line,
httpReferrer, const [channelId, feedId] = data.tvg.id.split('@')
label, const { name, label, quality } = parseTitle(data.name)
name, const { verticalResolution, isInterlaced } = parseQuality(quality)
quality,
url, this.id = data.tvg.id || undefined
httpUserAgent this.feedId = feedId || undefined
}: StreamProps) { this.channelId = channelId || undefined
this.channel = channel || '' this.line = data.line
this.filepath = filepath this.label = label || undefined
this.line = line
this.httpReferrer = httpReferrer || ''
this.label = label || ''
this.name = name this.name = name
this.quality = quality || '' this.verticalResolution = verticalResolution || undefined
this.url = url this.isInterlaced = isInterlaced || undefined
this.httpUserAgent = httpUserAgent || '' this.url = data.url
this.logo = '' this.httpReferrer = data.http.referrer || undefined
this.broadcastArea = new Collection() this.httpUserAgent = data.http['user-agent'] || undefined
this.categories = new Collection()
this.languages = new Collection()
this.isNSFW = false
this.groupTitle = 'Undefined' this.groupTitle = 'Undefined'
} }
withChannel(channelsGroupedById: Dictionary): this {
if (!this.channelId) return this
this.channel = channelsGroupedById.get(this.channelId)
return this
}
withFeed(feedsGroupedByChannelId: Dictionary): this {
if (!this.channelId) return this
const channelFeeds = feedsGroupedByChannelId.get(this.channelId) || []
if (this.feedId) this.feed = channelFeeds.find((feed: Feed) => feed.id === this.feedId)
if (!this.feedId && !this.feed) this.feed = channelFeeds.find((feed: Feed) => feed.isMain)
return this
}
setId(id: string): this {
this.id = id
return this
}
setChannelId(channelId: string): this {
this.channelId = channelId
return this
}
setFeedId(feedId: string | undefined): this {
this.feedId = feedId
return this
}
setLabel(label: string): this {
this.label = label
return this
}
setQuality(quality: string): this {
const { verticalResolution, isInterlaced } = parseQuality(quality)
this.verticalResolution = verticalResolution || undefined
this.isInterlaced = isInterlaced || undefined
return this
}
setHttpUserAgent(httpUserAgent: string): this {
this.httpUserAgent = httpUserAgent
return this
}
setHttpReferrer(httpReferrer: string): this {
this.httpReferrer = httpReferrer
return this
}
setFilepath(filepath: string): this {
this.filepath = filepath
return this
}
updateFilepath(): this {
if (!this.channel) return this
this.filepath = `${this.channel.countryCode.toLowerCase()}.m3u`
return this
}
getChannelId(): string {
return this.channelId || ''
}
getFeedId(): string {
if (this.feedId) return this.feedId
if (this.feed) return this.feed.id
return ''
}
getFilepath(): string {
return this.filepath || ''
}
getHttpReferrer(): string {
return this.httpReferrer || ''
}
getHttpUserAgent(): string {
return this.httpUserAgent || ''
}
getQuality(): string {
if (!this.verticalResolution) return ''
let quality = this.verticalResolution.toString()
if (this.isInterlaced) quality += 'i'
else quality += 'p'
return quality
}
hasId(): boolean {
return !!this.id
}
hasQuality(): boolean {
return !!this.verticalResolution
}
getVerticalResolution(): number {
if (!this.hasQuality()) return 0
return parseInt(this.getQuality().replace(/p|i/, ''))
}
updateName(): this {
if (!this.channel) return this
this.name = this.channel.name
if (this.feed && !this.feed.isMain) {
this.name += ` ${this.feed.name}`
}
return this
}
updateId(): this {
if (!this.channel) return this
if (this.feed) {
this.id = `${this.channel.id}@${this.feed.id}`
} else {
this.id = this.channel.id
}
return this
}
normalizeURL() { normalizeURL() {
const url = new URL(this.url) const url = new URL(this.url)
@ -81,43 +210,82 @@ export class Stream {
return !!this.channel return !!this.channel
} }
hasCategories(): boolean { getBroadcastRegions(): Collection {
return this.categories.notEmpty() return this.feed ? this.feed.getBroadcastRegions() : new Collection()
}
getBroadcastCountries(): Collection {
return this.feed ? this.feed.getBroadcastCountries() : new Collection()
}
hasBroadcastArea(): boolean {
return this.feed ? this.feed.hasBroadcastArea() : false
}
isSFW(): boolean {
return this.channel ? this.channel.isSFW() : true
} }
noCategories(): boolean { hasCategories(): boolean {
return this.categories.isEmpty() return this.channel ? this.channel.hasCategories() : false
} }
hasCategory(category: Category): boolean { hasCategory(category: Category): boolean {
return this.categories.includes((_category: Category) => _category.id === category.id) return this.channel ? this.channel.hasCategory(category) : false
}
getCategoryNames(): string[] {
return this.getCategories()
.map((category: Category) => category.name)
.sort()
.all()
}
getCategories(): Collection {
return this.channel ? this.channel.getCategories() : new Collection()
}
getLanguages(): Collection {
return this.feed ? this.feed.getLanguages() : new Collection()
}
hasLanguages() {
return this.feed ? this.feed.hasLanguages() : false
}
hasLanguage(language: Language) {
return this.feed ? this.feed.hasLanguage(language) : false
}
getBroadcastAreaCodes(): Collection {
return this.feed ? this.feed.broadcastAreaCodes : new Collection()
} }
noLanguages(): boolean { isBroadcastInSubdivision(subdivision: Subdivision): boolean {
return this.languages.isEmpty() return this.feed ? this.feed.isBroadcastInSubdivision(subdivision) : false
} }
hasLanguage(language: Language): boolean { isBroadcastInCountry(country: Country): boolean {
return this.languages.includes((_language: Language) => _language.code === language.code) return this.feed ? this.feed.isBroadcastInCountry(country) : false
} }
noBroadcastArea(): boolean { isBroadcastInRegion(region: Region): boolean {
return this.broadcastArea.isEmpty() return this.feed ? this.feed.isBroadcastInRegion(region) : false
} }
isInternational(): boolean { isInternational(): boolean {
return this.broadcastArea.includes('r/INT') return this.feed ? this.feed.isInternational() : false
} }
isSFW(): boolean { getLogo(): string {
return this.isNSFW === false return this?.channel?.logo || ''
} }
getTitle(): string { getTitle(): string {
let title = `${this.name}` let title = `${this.name}`
if (this.quality) { if (this.getQuality()) {
title += ` (${this.quality})` title += ` (${this.getQuality()})`
} }
if (this.label) { if (this.label) {
@ -127,15 +295,26 @@ export class Stream {
return title return title
} }
getLabel(): string {
return this.label || ''
}
getId(): string {
return this.id || ''
}
data() { data() {
return { return {
id: this.id,
channel: this.channel, channel: this.channel,
feed: this.feed,
filepath: this.filepath, filepath: this.filepath,
httpReferrer: this.httpReferrer,
label: this.label, label: this.label,
name: this.name, name: this.name,
quality: this.quality, verticalResolution: this.verticalResolution,
isInterlaced: this.isInterlaced,
url: this.url, url: this.url,
httpReferrer: this.httpReferrer,
httpUserAgent: this.httpUserAgent, httpUserAgent: this.httpUserAgent,
line: this.line line: this.line
} }
@ -143,7 +322,8 @@ export class Stream {
toJSON() { toJSON() {
return { return {
channel: this.channel || null, channel: this.channelId || null,
feed: this.feedId || null,
url: this.url, url: this.url,
referrer: this.httpReferrer || null, referrer: this.httpReferrer || null,
user_agent: this.httpUserAgent || null user_agent: this.httpUserAgent || null
@ -151,10 +331,10 @@ export class Stream {
} }
toString(options: { public: boolean }) { toString(options: { public: boolean }) {
let output = `#EXTINF:-1 tvg-id="${this.channel}"` let output = `#EXTINF:-1 tvg-id="${this.getId()}"`
if (options.public) { if (options.public) {
output += ` tvg-logo="${this.logo}" group-title="${this.groupTitle}"` output += ` tvg-logo="${this.getLogo()}" group-title="${this.groupTitle}"`
} }
if (this.httpReferrer) { if (this.httpReferrer) {
@ -180,3 +360,29 @@ export class Stream {
return output return output
} }
} }
function parseTitle(title: string): {
name: string
label: string
quality: string
} {
const [, label] = title.match(/ \[(.*)\]$/) || [null, '']
title = title.replace(new RegExp(` \\[${escapeRegExp(label)}\\]$`), '')
const [, quality] = title.match(/ \(([0-9]+p)\)$/) || [null, '']
title = title.replace(new RegExp(` \\(${quality}\\)$`), '')
return { name: title, label, quality }
}
function escapeRegExp(text) {
return text.replace(/[-[\]{}()*+?.,\\^$|#\s]/g, '\\$&')
}
function parseQuality(quality: string): { verticalResolution: number; isInterlaced: boolean } {
let [, verticalResolutionString] = quality.match(/^(\d+)/) || [null, undefined]
const isInterlaced = /i$/i.test(quality)
let verticalResolution = 0
if (verticalResolutionString) verticalResolution = parseInt(verticalResolutionString)
return { verticalResolution, isInterlaced }
}
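
For context: parseTitle() splits a playlist entry's display title into its name, an optional bracketed label, and a quality token, and parseQuality() then turns that token into a numeric vertical resolution plus an interlaced flag. A minimal standalone sketch of the same steps in one helper (the name is illustrative, not one of this module's exports, and the quality pattern is widened here to also accept the interlaced suffixes such as 1080i that the issue templates now list):

// Illustrative restatement of parseTitle + parseQuality; not the module's actual API.
function parseTrackTitle(title: string) {
  const [, label] = title.match(/ \[(.*)\]$/) || [null, '']
  let name = title.replace(/ \[.*\]$/, '')
  const [, quality] = name.match(/ \((\d+[pi])\)$/i) || [null, '']
  name = name.replace(/ \(\d+[pi]\)$/i, '')
  const [, digits] = (quality || '').match(/^(\d+)/) || [null, undefined]
  return {
    name,
    label,
    verticalResolution: digits ? parseInt(digits) : 0,
    isInterlaced: /i$/i.test(quality || '') // '1080i' -> true, '1080p' -> false
  }
}

// parseTrackTitle('Andorra TV (720p) [Not 24/7]')
// -> { name: 'Andorra TV', label: 'Not 24/7', verticalResolution: 720, isInterlaced: false }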

@ -1,4 +1,7 @@
type SubdivisionProps = { import { Dictionary } from '@freearhey/core'
import { Country } from '.'
type SubdivisionData = {
code: string code: string
name: string name: string
country: string country: string
@ -7,11 +10,18 @@ type SubdivisionProps = {
export class Subdivision { export class Subdivision {
code: string code: string
name: string name: string
country: string countryCode: string
country?: Country
constructor(data: SubdivisionData) {
this.code = data.code
this.name = data.name
this.countryCode = data.country
}
withCountry(countriesGroupedByCode: Dictionary): this {
this.country = countriesGroupedByCode.get(this.countryCode)
constructor({ code, name, country }: SubdivisionProps) { return this
this.code = code
this.name = name
this.country = country
} }
} }
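
The reworked Subdivision stores only the raw country code at construction time and resolves the full Country later via withCountry(). A hedged sketch of that wiring, reusing the keyBy()/get() lookups the table generators in this diff already use; the sample record and the countries.json field names below are assumptions, not taken from this patch:

import { Collection, Dictionary } from '@freearhey/core'
import { Country, Subdivision } from '.' // import path is illustrative

// Sample record; in the project this comes from data/countries.json.
const countries = new Collection([{ code: 'CA', name: 'Canada', flag: '🇨🇦', languages: ['eng', 'fra'] }])
  .map(data => new Country(data))
const countriesGroupedByCode: Dictionary = countries.keyBy((country: Country) => country.code)

const ontario = new Subdivision({ code: 'CA-ON', name: 'Ontario', country: 'CA' }).withCountry(
  countriesGroupedByCode
)
// ontario.country is now the Canada instance, or stays undefined if the code is unknown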

@ -0,0 +1,30 @@
import { Collection, Dictionary } from '@freearhey/core'
type TimezoneData = {
id: string
utc_offset: string
countries: string[]
}
export class Timezone {
id: string
utcOffset: string
countryCodes: Collection
countries?: Collection
constructor(data: TimezoneData) {
this.id = data.id
this.utcOffset = data.utc_offset
this.countryCodes = new Collection(data.countries)
}
withCountries(countriesGroupedByCode: Dictionary): this {
this.countries = this.countryCodes.map((code: string) => countriesGroupedByCode.get(code))
return this
}
getCountries(): Collection {
return this.countries || new Collection()
}
}
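
Timezone follows the same pattern: it keeps the raw country codes and only materialises Country objects once withCountries() is called, with getCountries() falling back to an empty Collection otherwise. A brief hedged sketch (the sample values are illustrative):

import { Timezone } from '.' // import path is illustrative

const tz = new Timezone({ id: 'America/Toronto', utc_offset: '-05:00', countries: ['CA'] })

tz.getCountries().isEmpty() // true: withCountries() has not been called yet
// After tz.withCountries(countriesGroupedByCode) the same call returns the resolved Country objects.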

@ -11,6 +11,7 @@ export class CategoryTable implements Table {
const dataStorage = new Storage(DATA_DIR) const dataStorage = new Storage(DATA_DIR)
const categoriesContent = await dataStorage.json('categories.json') const categoriesContent = await dataStorage.json('categories.json')
const categories = new Collection(categoriesContent).map(data => new Category(data)) const categories = new Collection(categoriesContent).map(data => new Category(data))
const categoriesGroupedById = categories.keyBy((category: Category) => category.id)
const parser = new LogParser() const parser = new LogParser()
const logsStorage = new Storage(LOGS_DIR) const logsStorage = new Storage(LOGS_DIR)
@ -19,13 +20,12 @@ export class CategoryTable implements Table {
let data = new Collection() let data = new Collection()
parser parser
.parse(generatorsLog) .parse(generatorsLog)
.filter((logItem: LogItem) => logItem.filepath.includes('categories/')) .filter((logItem: LogItem) => logItem.type === 'category')
.forEach((logItem: LogItem) => { .forEach((logItem: LogItem) => {
const file = new File(logItem.filepath) const file = new File(logItem.filepath)
const categoryId = file.name() const categoryId = file.name()
const category: Category = categories.first( const category: Category = categoriesGroupedById.get(categoryId)
(category: Category) => category.id === categoryId
)
data.add([ data.add([
category ? category.name : 'ZZ', category ? category.name : 'ZZ',
category ? category.name : 'Undefined', category ? category.name : 'Undefined',
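
The category table generator above now selects log entries by an explicit type field and resolves categories through a keyBy() dictionary instead of scanning with first() and matching on the file path; the country, language, and region tables below follow the same pattern. A rough sketch of that filtering step, assuming generators.log is newline-delimited JSON as the fixture later in this diff shows (parseLog is a hypothetical stand-in for the project's LogParser, not its actual API):

// Hypothetical stand-in for LogParser: one JSON object per line of generators.log.
type LogItem = { type: string; filepath: string; count: number }

function parseLog(contents: string): LogItem[] {
  return contents
    .split('\n')
    .filter(line => line.trim().length > 0)
    .map(line => JSON.parse(line) as LogItem)
}

const log =
  '{"type":"category","filepath":"categories/news.m3u","count":1}\n' +
  '{"type":"region","filepath":"regions/eur.m3u","count":3}\n'
const categoryItems = parseLog(log).filter(item => item.type === 'category')
// replaces the old logItem.filepath.includes('categories/') check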

@ -12,34 +12,31 @@ export class CountryTable implements Table {
const countriesContent = await dataStorage.json('countries.json') const countriesContent = await dataStorage.json('countries.json')
const countries = new Collection(countriesContent).map(data => new Country(data)) const countries = new Collection(countriesContent).map(data => new Country(data))
const countriesGroupedByCode = countries.keyBy((country: Country) => country.code)
const subdivisionsContent = await dataStorage.json('subdivisions.json') const subdivisionsContent = await dataStorage.json('subdivisions.json')
const subdivisions = new Collection(subdivisionsContent).map(data => new Subdivision(data)) const subdivisions = new Collection(subdivisionsContent).map(data => new Subdivision(data))
const subdivisionsGroupedByCode = subdivisions.keyBy(
(subdivision: Subdivision) => subdivision.code
)
const parser = new LogParser() const parser = new LogParser()
const logsStorage = new Storage(LOGS_DIR) const logsStorage = new Storage(LOGS_DIR)
const generatorsLog = await logsStorage.load('generators.log') const generatorsLog = await logsStorage.load('generators.log')
const parsed = parser.parse(generatorsLog)
let data = new Collection() let data = new Collection()
parser
.parse(generatorsLog) parsed
.filter( .filter((logItem: LogItem) => logItem.type === 'subdivision')
(logItem: LogItem) =>
logItem.filepath.includes('countries/') || logItem.filepath.includes('subdivisions/')
)
.forEach((logItem: LogItem) => { .forEach((logItem: LogItem) => {
const file = new File(logItem.filepath) const file = new File(logItem.filepath)
const code = file.name().toUpperCase() const code = file.name().toUpperCase()
const [countryCode, subdivisionCode] = code.split('-') || ['', ''] const [countryCode, subdivisionCode] = code.split('-') || ['', '']
const country = countriesGroupedByCode.get(countryCode)
if (subdivisionCode) { if (country && subdivisionCode) {
const subdivision = subdivisions.first( const subdivision = subdivisionsGroupedByCode.get(code)
(subdivision: Subdivision) => subdivision.code === code
)
if (subdivision) { if (subdivision) {
const country = countries.first(
(country: Country) => country.code === subdivision.country
)
data.add([ data.add([
`${country.name}/${subdivision.name}`, `${country.name}/${subdivision.name}`,
`&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;${subdivision.name}`, `&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;${subdivision.name}`,
@ -47,18 +44,28 @@ export class CountryTable implements Table {
`<code>https://iptv-org.github.io/iptv/${logItem.filepath}</code>` `<code>https://iptv-org.github.io/iptv/${logItem.filepath}</code>`
]) ])
} }
} else if (countryCode === 'INT') { }
})
parsed
.filter((logItem: LogItem) => logItem.type === 'country')
.forEach((logItem: LogItem) => {
const file = new File(logItem.filepath)
const code = file.name().toUpperCase()
const [countryCode] = code.split('-') || ['', '']
const country = countriesGroupedByCode.get(countryCode)
if (country) {
data.add([ data.add([
'ZZ', country.name,
'🌍 International', `${country.flag} ${country.name}`,
logItem.count, logItem.count,
`<code>https://iptv-org.github.io/iptv/${logItem.filepath}</code>` `<code>https://iptv-org.github.io/iptv/${logItem.filepath}</code>`
]) ])
} else { } else {
const country = countries.first((country: Country) => country.code === countryCode)
data.add([ data.add([
country.name, 'ZZ',
`${country.flag} ${country.name}`, 'Undefined',
logItem.count, logItem.count,
`<code>https://iptv-org.github.io/iptv/${logItem.filepath}</code>` `<code>https://iptv-org.github.io/iptv/${logItem.filepath}</code>`
]) ])

@ -11,6 +11,7 @@ export class LanguageTable implements Table {
const dataStorage = new Storage(DATA_DIR) const dataStorage = new Storage(DATA_DIR)
const languagesContent = await dataStorage.json('languages.json') const languagesContent = await dataStorage.json('languages.json')
const languages = new Collection(languagesContent).map(data => new Language(data)) const languages = new Collection(languagesContent).map(data => new Language(data))
const languagesGroupedByCode = languages.keyBy((language: Language) => language.code)
const parser = new LogParser() const parser = new LogParser()
const logsStorage = new Storage(LOGS_DIR) const logsStorage = new Storage(LOGS_DIR)
@ -19,13 +20,11 @@ export class LanguageTable implements Table {
let data = new Collection() let data = new Collection()
parser parser
.parse(generatorsLog) .parse(generatorsLog)
.filter((logItem: LogItem) => logItem.filepath.includes('languages/')) .filter((logItem: LogItem) => logItem.type === 'language')
.forEach((logItem: LogItem) => { .forEach((logItem: LogItem) => {
const file = new File(logItem.filepath) const file = new File(logItem.filepath)
const languageCode = file.name() const languageCode = file.name()
const language: Language = languages.first( const language: Language = languagesGroupedByCode.get(languageCode)
(language: Language) => language.code === languageCode
)
data.add([ data.add([
language ? language.name : 'ZZ', language ? language.name : 'ZZ',

@ -11,6 +11,7 @@ export class RegionTable implements Table {
const dataStorage = new Storage(DATA_DIR) const dataStorage = new Storage(DATA_DIR)
const regionsContent = await dataStorage.json('regions.json') const regionsContent = await dataStorage.json('regions.json')
const regions = new Collection(regionsContent).map(data => new Region(data)) const regions = new Collection(regionsContent).map(data => new Region(data))
const regionsGroupedByCode = regions.keyBy((region: Region) => region.code)
const parser = new LogParser() const parser = new LogParser()
const logsStorage = new Storage(LOGS_DIR) const logsStorage = new Storage(LOGS_DIR)
@ -19,22 +20,35 @@ export class RegionTable implements Table {
let data = new Collection() let data = new Collection()
parser parser
.parse(generatorsLog) .parse(generatorsLog)
.filter((logItem: LogItem) => logItem.filepath.includes('regions/')) .filter((logItem: LogItem) => logItem.type === 'region')
.forEach((logItem: LogItem) => { .forEach((logItem: LogItem) => {
const file = new File(logItem.filepath) const file = new File(logItem.filepath)
const regionCode = file.name().toUpperCase() const regionCode = file.name().toUpperCase()
const region: Region = regions.first((region: Region) => region.code === regionCode) const region: Region = regionsGroupedByCode.get(regionCode)
if (region) { if (region) {
data.add([ data.add([
region.name, region.name,
region.name,
logItem.count,
`<code>https://iptv-org.github.io/iptv/${logItem.filepath}</code>`
])
} else {
data.add([
'ZZZ',
'Undefined',
logItem.count, logItem.count,
`<code>https://iptv-org.github.io/iptv/${logItem.filepath}</code>` `<code>https://iptv-org.github.io/iptv/${logItem.filepath}</code>`
]) ])
} }
}) })
data = data.orderBy(item => item[0]) data = data
.orderBy(item => item[0])
.map(item => {
item.shift()
return item
})
const table = new HTMLTable(data.all(), [ const table = new HTMLTable(data.all(), [
{ name: 'Region', align: 'left' }, { name: 'Region', align: 'left' },

@ -1 +0,0 @@
[{"channel":null,"url":"http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8","referrer":null,"user_agent":null},{"channel":null,"url":"http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index2.m3u8","referrer":"http://imn.iq","user_agent":"Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148"},{"channel":"AndorraTV.ad","url":"https://iptv-all.lanesh4d0w.repl.co/andorra/atv","referrer":null,"user_agent":null},{"channel":"BBCNews.uk","url":"http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index.m3u8","referrer":null,"user_agent":null},{"channel":"LDPRTV.ru","url":"http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8","referrer":null,"user_agent":null},{"channel":"MeteoMedia.ca","url":"http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8","referrer":null,"user_agent":null},{"channel":"VisitXTV.nl","url":"https://stream.visit-x.tv/vxtv/ngrp:live_all/30fps.m3u8","referrer":null,"user_agent":null},{"channel":"Zoo.ad","url":"https://iptv-all.lanesh4d0w.repl.co/andorra/zoo","referrer":null,"user_agent":null}]

@ -1,5 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="BBCNews.uk" tvg-logo="https://raw.githubusercontent.com/Tapiosinn/tv-logos/master/countries/united-kingdom/bbc-news-uk.png" group-title="General;News",BBC News HD
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index.m3u8
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="General",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="BBCNews.uk" tvg-logo="https://raw.githubusercontent.com/Tapiosinn/tv-logos/master/countries/united-kingdom/bbc-news-uk.png" group-title="General;News",BBC News HD
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index.m3u8

@ -1,11 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined" http-referrer="http://imn.iq" http-user-agent="Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148",Andorra TV (720p) [Not 24/7]
#EXTVLCOPT:http-referrer=http://imn.iq
#EXTVLCOPT:http-user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index2.m3u8
#EXTINF:-1 tvg-id="AndorraTV.ad" tvg-logo="" group-title="Undefined",ATV
https://iptv-all.lanesh4d0w.repl.co/andorra/atv
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined",Daawah TV
http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8
#EXTINF:-1 tvg-id="Zoo.ad" tvg-logo="" group-title="Undefined",Zoo (720p)
https://iptv-all.lanesh4d0w.repl.co/andorra/zoo

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="Weather",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="VisitXTV.nl" tvg-logo="https://i.imgur.com/RJ9wbNF.jpg" group-title="XXX",Visit-X TV
https://stream.visit-x.tv/vxtv/ngrp:live_all/30fps.m3u8

@ -1,5 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="AndorraTV.ad" tvg-logo="" group-title="Undefined",ATV
https://iptv-all.lanesh4d0w.repl.co/andorra/atv
#EXTINF:-1 tvg-id="Zoo.ad" tvg-logo="" group-title="Undefined",Zoo (720p)
https://iptv-all.lanesh4d0w.repl.co/andorra/zoo

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="Weather",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined",Daawah TV
http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="General",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8

@ -1,19 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="BBCNews.uk" tvg-logo="https://raw.githubusercontent.com/Tapiosinn/tv-logos/master/countries/united-kingdom/bbc-news-uk.png" group-title="General",BBC News HD
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index.m3u8
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="General",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8
#EXTINF:-1 tvg-id="BBCNews.uk" tvg-logo="https://raw.githubusercontent.com/Tapiosinn/tv-logos/master/countries/united-kingdom/bbc-news-uk.png" group-title="News",BBC News HD
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index.m3u8
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="Weather",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined" http-referrer="http://imn.iq" http-user-agent="Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148",Andorra TV (720p) [Not 24/7]
#EXTVLCOPT:http-referrer=http://imn.iq
#EXTVLCOPT:http-user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index2.m3u8
#EXTINF:-1 tvg-id="AndorraTV.ad" tvg-logo="" group-title="Undefined",ATV
https://iptv-all.lanesh4d0w.repl.co/andorra/atv
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined",Daawah TV
http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8
#EXTINF:-1 tvg-id="Zoo.ad" tvg-logo="" group-title="Undefined",Zoo (720p)
https://iptv-all.lanesh4d0w.repl.co/andorra/zoo

@ -1,17 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="AndorraTV.ad" tvg-logo="" group-title="Andorra",ATV
https://iptv-all.lanesh4d0w.repl.co/andorra/atv
#EXTINF:-1 tvg-id="Zoo.ad" tvg-logo="" group-title="Andorra",Zoo (720p)
https://iptv-all.lanesh4d0w.repl.co/andorra/zoo
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="Canada",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="India",Daawah TV
http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="Russia",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8
#EXTINF:-1 tvg-id="BBCNews.uk" tvg-logo="https://raw.githubusercontent.com/Tapiosinn/tv-logos/master/countries/united-kingdom/bbc-news-uk.png" group-title="International",BBC News HD
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index.m3u8
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined" http-referrer="http://imn.iq" http-user-agent="Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148",Andorra TV (720p) [Not 24/7]
#EXTVLCOPT:http-referrer=http://imn.iq
#EXTVLCOPT:http-user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index2.m3u8

@ -1,17 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="BBCNews.uk" tvg-logo="https://raw.githubusercontent.com/Tapiosinn/tv-logos/master/countries/united-kingdom/bbc-news-uk.png" group-title="English",BBC News HD
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index.m3u8
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="Russian",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8
#EXTINF:-1 tvg-id="AndorraTV.ad" tvg-logo="" group-title="Valencian",ATV
https://iptv-all.lanesh4d0w.repl.co/andorra/atv
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined" http-referrer="http://imn.iq" http-user-agent="Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148",Andorra TV (720p) [Not 24/7]
#EXTVLCOPT:http-referrer=http://imn.iq
#EXTVLCOPT:http-user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index2.m3u8
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined",Daawah TV
http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="Undefined",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8
#EXTINF:-1 tvg-id="Zoo.ad" tvg-logo="" group-title="Undefined",Zoo (720p)
https://iptv-all.lanesh4d0w.repl.co/andorra/zoo

@ -1,17 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined" http-referrer="http://imn.iq" http-user-agent="Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148",Andorra TV (720p) [Not 24/7]
#EXTVLCOPT:http-referrer=http://imn.iq
#EXTVLCOPT:http-user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index2.m3u8
#EXTINF:-1 tvg-id="AndorraTV.ad" tvg-logo="" group-title="Undefined",ATV
https://iptv-all.lanesh4d0w.repl.co/andorra/atv
#EXTINF:-1 tvg-id="BBCNews.uk" tvg-logo="https://raw.githubusercontent.com/Tapiosinn/tv-logos/master/countries/united-kingdom/bbc-news-uk.png" group-title="General;News",BBC News HD
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index.m3u8
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined",Daawah TV
http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="Weather",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8
#EXTINF:-1 tvg-id="Zoo.ad" tvg-logo="" group-title="Undefined",Zoo (720p)
https://iptv-all.lanesh4d0w.repl.co/andorra/zoo
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="General",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8

@ -1,45 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="Americas",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Asia",Daawah TV
http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="Asia",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Asia-Pacific",Daawah TV
http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="Commonwealth of Independent States",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8
#EXTINF:-1 tvg-id="AndorraTV.ad" tvg-logo="" group-title="Europe",ATV
https://iptv-all.lanesh4d0w.repl.co/andorra/atv
#EXTINF:-1 tvg-id="Zoo.ad" tvg-logo="" group-title="Europe",Zoo (720p)
https://iptv-all.lanesh4d0w.repl.co/andorra/zoo
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="Europe",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8
#EXTINF:-1 tvg-id="AndorraTV.ad" tvg-logo="" group-title="Europe, the Middle East and Africa",ATV
https://iptv-all.lanesh4d0w.repl.co/andorra/atv
#EXTINF:-1 tvg-id="Zoo.ad" tvg-logo="" group-title="Europe, the Middle East and Africa",Zoo (720p)
https://iptv-all.lanesh4d0w.repl.co/andorra/zoo
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="Europe, the Middle East and Africa",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="North America",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="Northern America",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="South Asia",Daawah TV
http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8
#EXTINF:-1 tvg-id="AndorraTV.ad" tvg-logo="" group-title="Worldwide",ATV
https://iptv-all.lanesh4d0w.repl.co/andorra/atv
#EXTINF:-1 tvg-id="BBCNews.uk" tvg-logo="https://raw.githubusercontent.com/Tapiosinn/tv-logos/master/countries/united-kingdom/bbc-news-uk.png" group-title="Worldwide",BBC News HD
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index.m3u8
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Worldwide",Daawah TV
http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="Worldwide",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8
#EXTINF:-1 tvg-id="Zoo.ad" tvg-logo="" group-title="Worldwide",Zoo (720p)
https://iptv-all.lanesh4d0w.repl.co/andorra/zoo
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="Worldwide",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined" http-referrer="http://imn.iq" http-user-agent="Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148",Andorra TV (720p) [Not 24/7]
#EXTVLCOPT:http-referrer=http://imn.iq
#EXTVLCOPT:http-user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index2.m3u8

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="BBCNews.uk" tvg-logo="https://raw.githubusercontent.com/Tapiosinn/tv-logos/master/countries/united-kingdom/bbc-news-uk.png" group-title="General;News",BBC News HD
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index.m3u8

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="General",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8

@ -1,11 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined" http-referrer="http://imn.iq" http-user-agent="Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148",Andorra TV (720p) [Not 24/7]
#EXTVLCOPT:http-referrer=http://imn.iq
#EXTVLCOPT:http-user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148
http://1111296894.rsc.cdn77.org/LS-ATL-54548-6/index2.m3u8
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined",Daawah TV
http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="Weather",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8
#EXTINF:-1 tvg-id="Zoo.ad" tvg-logo="" group-title="Undefined",Zoo (720p)
https://iptv-all.lanesh4d0w.repl.co/andorra/zoo

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="Weather",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined",Daawah TV
http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8

@ -1,5 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined",Daawah TV
http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="General",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="General",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8

@ -1,7 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="AndorraTV.ad" tvg-logo="" group-title="Undefined",ATV
https://iptv-all.lanesh4d0w.repl.co/andorra/atv
#EXTINF:-1 tvg-id="Zoo.ad" tvg-logo="" group-title="Undefined",Zoo (720p)
https://iptv-all.lanesh4d0w.repl.co/andorra/zoo
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="General",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8

@ -1,7 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="AndorraTV.ad" tvg-logo="" group-title="Undefined",ATV
https://iptv-all.lanesh4d0w.repl.co/andorra/atv
#EXTINF:-1 tvg-id="Zoo.ad" tvg-logo="" group-title="Undefined",Zoo (720p)
https://iptv-all.lanesh4d0w.repl.co/andorra/zoo
#EXTINF:-1 tvg-id="LDPRTV.ru" tvg-logo="https://iptvx.one/icn/ldpr-tv.png" group-title="General",ЛДПР ТВ (1080p)
http://46.46.143.222:1935/live/mp4:ldpr.stream/blocked.m3u8

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="Weather",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="Weather",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="" tvg-logo="" group-title="Undefined",Daawah TV
http://51.15.246.58:8081/daawahtv/daawahtv2/playlist.m3u8

@ -1,3 +0,0 @@
#EXTM3U
#EXTINF:-1 tvg-id="MeteoMedia.ca" tvg-logo="https://s1.twnmm.com/images/en_ca/mobile/logos/twn-mobile-logo.png" group-title="Weather",Meteomedia
http://encodercdn1.frontline.ca/encoder181/output/Meteo_Media_720p/playlist.m3u8

@ -1,70 +0,0 @@
{"filepath":"categories/auto.m3u","count":0}
{"filepath":"categories/animation.m3u","count":0}
{"filepath":"categories/business.m3u","count":0}
{"filepath":"categories/classic.m3u","count":0}
{"filepath":"categories/comedy.m3u","count":0}
{"filepath":"categories/cooking.m3u","count":0}
{"filepath":"categories/culture.m3u","count":0}
{"filepath":"categories/documentary.m3u","count":0}
{"filepath":"categories/education.m3u","count":0}
{"filepath":"categories/entertainment.m3u","count":0}
{"filepath":"categories/family.m3u","count":0}
{"filepath":"categories/general.m3u","count":2}
{"filepath":"categories/kids.m3u","count":0}
{"filepath":"categories/legislative.m3u","count":0}
{"filepath":"categories/lifestyle.m3u","count":0}
{"filepath":"categories/movies.m3u","count":0}
{"filepath":"categories/music.m3u","count":0}
{"filepath":"categories/news.m3u","count":1}
{"filepath":"categories/outdoor.m3u","count":0}
{"filepath":"categories/relax.m3u","count":0}
{"filepath":"categories/religious.m3u","count":0}
{"filepath":"categories/series.m3u","count":0}
{"filepath":"categories/science.m3u","count":0}
{"filepath":"categories/shop.m3u","count":0}
{"filepath":"categories/sports.m3u","count":0}
{"filepath":"categories/travel.m3u","count":0}
{"filepath":"categories/weather.m3u","count":1}
{"filepath":"categories/xxx.m3u","count":1}
{"filepath":"categories/undefined.m3u","count":4}
{"filepath":"countries/ad.m3u","count":2}
{"filepath":"countries/ca.m3u","count":1}
{"filepath":"subdivisions/ca-on.m3u","count":1}
{"filepath":"countries/in.m3u","count":1}
{"filepath":"countries/ru.m3u","count":1}
{"filepath":"countries/int.m3u","count":1}
{"filepath":"index.category.m3u","count":8}
{"filepath":"index.country.m3u","count":7}
{"filepath":"index.language.m3u","count":7}
{"filepath":"index.m3u","count":7}
{"filepath":"index.region.m3u","count":21}
{"filepath":"languages/eng.m3u","count":1}
{"filepath":"languages/rus.m3u","count":1}
{"filepath":"languages/cat.m3u","count":1}
{"filepath":"languages/undefined.m3u","count":4}
{"filepath":"regions/afr.m3u","count":0}
{"filepath":"regions/amer.m3u","count":1}
{"filepath":"regions/apac.m3u","count":1}
{"filepath":"regions/arab.m3u","count":0}
{"filepath":"regions/asean.m3u","count":0}
{"filepath":"regions/asia.m3u","count":2}
{"filepath":"regions/carib.m3u","count":0}
{"filepath":"regions/cas.m3u","count":0}
{"filepath":"regions/cenamer.m3u","count":0}
{"filepath":"regions/cis.m3u","count":1}
{"filepath":"regions/emea.m3u","count":3}
{"filepath":"regions/eur.m3u","count":3}
{"filepath":"regions/hispam.m3u","count":0}
{"filepath":"regions/lac.m3u","count":0}
{"filepath":"regions/latam.m3u","count":0}
{"filepath":"regions/maghreb.m3u","count":0}
{"filepath":"regions/mena.m3u","count":0}
{"filepath":"regions/mideast.m3u","count":0}
{"filepath":"regions/nam.m3u","count":1}
{"filepath":"regions/noram.m3u","count":1}
{"filepath":"regions/nord.m3u","count":0}
{"filepath":"regions/oce.m3u","count":0}
{"filepath":"regions/sas.m3u","count":1}
{"filepath":"regions/southam.m3u","count":0}
{"filepath":"regions/ssa.m3u","count":0}
{"filepath":"regions/wafr.m3u","count":0}

@ -1,5 +1,5 @@
#EXTM3U #EXTM3U
#EXTINF:-1 tvg-id="NPO1.nl",NPO 1 (342p) [Geo-blocked] #EXTINF:-1 tvg-id="NPO1.nl@SD",NPO 1 (342p) [Geo-blocked]
http://resolver.streaming.api.nos.nl/livestream?url=/live/npo/tvlive/npo1/npo1.isml/.m3u8 http://resolver.streaming.api.nos.nl/livestream?url=/live/npo/tvlive/npo1/npo1.isml/.m3u8
#EXTINF:-1 tvg-id="NPO2.nl",NPO 2 (342p) #EXTINF:-1 tvg-id="NPO2.nl",NPO 2 (342p)
http://resolver.streaming.api.nos.nl/livestream?url=/live/npo/tvlive/npo2/npo2.isml/.m3u8 http://resolver.streaming.api.nos.nl/livestream?url=/live/npo/tvlive/npo2/npo2.isml/.m3u8
@ -7,3 +7,5 @@ http://resolver.streaming.api.nos.nl/livestream?url=/live/npo/tvlive/npo2/npo2.i
#EXTVLCOPT:http-referrer=http://imn.iq #EXTVLCOPT:http-referrer=http://imn.iq
#EXTVLCOPT:http-user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 #EXTVLCOPT:http-user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148
http://stream.tvtap.net:8081/live/nl-npo2.stream/playlist.m3u8 http://stream.tvtap.net:8081/live/nl-npo2.stream/playlist.m3u8
#EXTINF:-1 tvg-id="NPO2.nl",NPO 2 [Geo-blocked]
http://resolver.streaming.api.nos.nl/livestream?url=/live/npo/tvlive/npo2/npo22.isml/.m3u8

@ -0,0 +1,76 @@
{"type":"category","filepath":"categories/auto.m3u","count":0}
{"type":"category","filepath":"categories/cooking.m3u","count":0}
{"type":"category","filepath":"categories/comedy.m3u","count":0}
{"type":"category","filepath":"categories/documentary.m3u","count":0}
{"type":"category","filepath":"categories/business.m3u","count":0}
{"type":"category","filepath":"categories/classic.m3u","count":0}
{"type":"category","filepath":"categories/entertainment.m3u","count":0}
{"type":"category","filepath":"categories/education.m3u","count":0}
{"type":"category","filepath":"categories/animation.m3u","count":0}
{"type":"category","filepath":"categories/family.m3u","count":0}
{"type":"category","filepath":"categories/kids.m3u","count":0}
{"type":"category","filepath":"categories/culture.m3u","count":0}
{"type":"category","filepath":"categories/lifestyle.m3u","count":0}
{"type":"category","filepath":"categories/general.m3u","count":3}
{"type":"category","filepath":"categories/outdoor.m3u","count":0}
{"type":"category","filepath":"categories/music.m3u","count":0}
{"type":"category","filepath":"categories/legislative.m3u","count":0}
{"type":"category","filepath":"categories/series.m3u","count":0}
{"type":"category","filepath":"categories/news.m3u","count":1}
{"type":"category","filepath":"categories/movies.m3u","count":0}
{"type":"category","filepath":"categories/relax.m3u","count":0}
{"type":"category","filepath":"categories/religious.m3u","count":0}
{"type":"category","filepath":"categories/weather.m3u","count":1}
{"type":"category","filepath":"categories/science.m3u","count":0}
{"type":"category","filepath":"categories/shop.m3u","count":0}
{"type":"category","filepath":"categories/xxx.m3u","count":1}
{"type":"category","filepath":"categories/sports.m3u","count":0}
{"type":"category","filepath":"categories/travel.m3u","count":0}
{"type":"category","filepath":"categories/undefined.m3u","count":6}
{"type":"country","filepath":"countries/ad.m3u","count":1}
{"type":"country","filepath":"countries/ca.m3u","count":1}
{"type":"country","filepath":"countries/kg.m3u","count":1}
{"type":"country","filepath":"countries/kz.m3u","count":1}
{"type":"country","filepath":"countries/tj.m3u","count":1}
{"type":"country","filepath":"countries/ru.m3u","count":1}
{"type":"country","filepath":"countries/tm.m3u","count":1}
{"type":"country","filepath":"countries/undefined.m3u","count":4}
{"type":"country","filepath":"countries/uz.m3u","count":1}
{"type":"language","filepath":"languages/cat.m3u","count":1}
{"type":"language","filepath":"languages/rus.m3u","count":1}
{"type":"subdivision","filepath":"subdivisions/ca-on.m3u","count":1}
{"type":"language","filepath":"languages/undefined.m3u","count":7}
{"type":"language","filepath":"languages/eng.m3u","count":1}
{"type":"region","filepath":"regions/afr.m3u","count":0}
{"type":"region","filepath":"regions/apac.m3u","count":0}
{"type":"region","filepath":"regions/amer.m3u","count":1}
{"type":"region","filepath":"regions/arab.m3u","count":0}
{"type":"region","filepath":"regions/asean.m3u","count":0}
{"type":"region","filepath":"regions/cenamer.m3u","count":0}
{"type":"region","filepath":"regions/asia.m3u","count":2}
{"type":"region","filepath":"regions/carib.m3u","count":0}
{"type":"region","filepath":"regions/cis.m3u","count":2}
{"type":"region","filepath":"regions/hispam.m3u","count":0}
{"type":"region","filepath":"regions/emea.m3u","count":3}
{"type":"region","filepath":"regions/lac.m3u","count":0}
{"type":"region","filepath":"regions/cas.m3u","count":1}
{"type":"region","filepath":"regions/latam.m3u","count":0}
{"type":"region","filepath":"regions/eur.m3u","count":3}
{"type":"region","filepath":"regions/nam.m3u","count":1}
{"type":"region","filepath":"regions/mena.m3u","count":0}
{"type":"region","filepath":"regions/noram.m3u","count":1}
{"type":"region","filepath":"regions/mideast.m3u","count":0}
{"type":"region","filepath":"regions/maghreb.m3u","count":0}
{"type":"region","filepath":"regions/ssa.m3u","count":0}
{"type":"region","filepath":"regions/nord.m3u","count":0}
{"type":"region","filepath":"regions/oce.m3u","count":0}
{"type":"region","filepath":"regions/southam.m3u","count":0}
{"type":"region","filepath":"regions/wafr.m3u","count":0}
{"type":"region","filepath":"regions/int.m3u","count":2}
{"type":"region","filepath":"regions/sas.m3u","count":0}
{"type":"region","filepath":"regions/undefined.m3u","count":4}
{"type":"index","filepath":"index.m3u","count":10}
{"type":"index","filepath":"index.category.m3u","count":11}
{"type":"index","filepath":"index.country.m3u","count":14}
{"type":"index","filepath":"index.language.m3u","count":10}
{"type":"index","filepath":"index.region.m3u","count":20}

@ -0,0 +1,3 @@
#EXTM3U
#EXTINF:-1 tvg-id="AdaTV.cy",AdaTV
https://ythls.onrender.com/channel/UC40TUSUx490U5uR1lZt3Ajg.m3u8

@ -2,4 +2,4 @@
#EXTINF:-1 tvg-id="TFX.fr" http-referrer="https://pkpakiplay.xyz/" http-user-agent="Mozilla/5.0 (iPhone; CPU iPhone OS 17_7 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.0 Mobile/15E148 Safari/604.1",TFX #EXTINF:-1 tvg-id="TFX.fr" http-referrer="https://pkpakiplay.xyz/" http-user-agent="Mozilla/5.0 (iPhone; CPU iPhone OS 17_7 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.0 Mobile/15E148 Safari/604.1",TFX
#EXTVLCOPT:http-referrer=https://pkpakiplay.xyz/ #EXTVLCOPT:http-referrer=https://pkpakiplay.xyz/
#EXTVLCOPT:http-user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 17_7 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.0 Mobile/15E148 Safari/604.1 #EXTVLCOPT:http-user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 17_7 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.0 Mobile/15E148 Safari/604.1
http://livetv.ktv.zone/13/play.m3u8 https://stitcher-ipv4.pluto.tv/v1/stitch/embed/hls/channel/64c109a4798def0008a6e03e/master.m3u8?advertisingId={PSID}&appVersion=unknown&deviceDNT={TARGETOPT}&deviceId={PSID}&deviceLat=0&deviceLon=0&deviceMake=samsung&deviceModel=samsung&deviceType=samsung-tvplus&deviceVersion=unknown&embedPartner=samsung-tvplus&profileFloor=&profileLimit=&samsung_app_domain={APP_DOMAIN}&samsung_app_name={APP_NAME}&us_privacy=1YNY

@ -1,5 +1,5 @@
#EXTM3U #EXTM3U
#EXTINF:-1 tvg-id="BBCAmericaEast.us" http-user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36 Edge/12.246",BBC America East (720p) #EXTINF:-1 tvg-id="BBCAmerica.us@East" http-user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36 Edge/12.246",BBC America East (720p)
#EXTVLCOPT:http-user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36 Edge/12.246 #EXTVLCOPT:http-user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36 Edge/12.246
https://servilive.com:3126/live/tele2000live.m3u8 https://servilive.com:3126/live/tele2000live.m3u8
#EXTINF:-1 tvg-id="FastTV.us",Fast TV #EXTINF:-1 tvg-id="FastTV.us",Fast TV

@ -155,7 +155,7 @@ Same thing, but split up into separate files:
<tr><td>🇧🇱 Saint Barthélemy</td><td align="right">1</td><td nowrap><code>https://iptv-org.github.io/iptv/countries/bl.m3u</code></td></tr> <tr><td>🇧🇱 Saint Barthélemy</td><td align="right">1</td><td nowrap><code>https://iptv-org.github.io/iptv/countries/bl.m3u</code></td></tr>
<tr><td>🇸🇭 Saint Helena</td><td align="right">1</td><td nowrap><code>https://iptv-org.github.io/iptv/countries/sh.m3u</code></td></tr> <tr><td>🇸🇭 Saint Helena</td><td align="right">1</td><td nowrap><code>https://iptv-org.github.io/iptv/countries/sh.m3u</code></td></tr>
<tr><td>🇰🇳 Saint Kitts and Nevis</td><td align="right">1</td><td nowrap><code>https://iptv-org.github.io/iptv/countries/kn.m3u</code></td></tr> <tr><td>🇰🇳 Saint Kitts and Nevis</td><td align="right">1</td><td nowrap><code>https://iptv-org.github.io/iptv/countries/kn.m3u</code></td></tr>
<tr><td>🌍 International</td><td align="right">1</td><td nowrap><code>https://iptv-org.github.io/iptv/countries/int.m3u</code></td></tr> <tr><td>Undefined</td><td align="right">2</td><td nowrap><code>https://iptv-org.github.io/iptv/countries/undefined.m3u</code></td></tr>
</tbody> </tbody>
</table> </table>
@ -207,6 +207,8 @@ Same thing, but split up into separate files:
<tr><td align="left">South Asia</td><td align="right">1</td><td align="left" nowrap><code>https://iptv-org.github.io/iptv/regions/sas.m3u</code></td></tr> <tr><td align="left">South Asia</td><td align="right">1</td><td align="left" nowrap><code>https://iptv-org.github.io/iptv/regions/sas.m3u</code></td></tr>
<tr><td align="left">Sub-Saharan Africa</td><td align="right">0</td><td align="left" nowrap><code>https://iptv-org.github.io/iptv/regions/ssa.m3u</code></td></tr> <tr><td align="left">Sub-Saharan Africa</td><td align="right">0</td><td align="left" nowrap><code>https://iptv-org.github.io/iptv/regions/ssa.m3u</code></td></tr>
<tr><td align="left">West Africa</td><td align="right">0</td><td align="left" nowrap><code>https://iptv-org.github.io/iptv/regions/wafr.m3u</code></td></tr> <tr><td align="left">West Africa</td><td align="right">0</td><td align="left" nowrap><code>https://iptv-org.github.io/iptv/regions/wafr.m3u</code></td></tr>
<tr><td align="left">Worldwide</td><td align="right">1</td><td align="left" nowrap><code>https://iptv-org.github.io/iptv/regions/int.m3u</code></td></tr>
<tr><td align="left">Undefined</td><td align="right">2</td><td align="left" nowrap><code>https://iptv-org.github.io/iptv/regions/undefined.m3u</code></td></tr>
</tbody> </tbody>
</table> </table>

Some files were not shown because too many files have changed in this diff.