Feed problem, some videos do not show up #1130
Comments
Do you have a list of channel IDs that are affected by this bug? I can check their pubsub subscription status then at https://pubsubhubbub.appspot.com/ |
I can give you the subscriptions file... but I noticed something weird. I had a problem with my instance, I think with the database: while Piped worked okay, the login page did not (pressing the login button did nothing) until I restarted the Docker image. So I assume it's a problem with my own instance; somehow the fetchers do not run, right? |
This should be fixed with TeamPiped/Piped-Backend@3a00940, let me know if it is not. |
Still have the same issue, not all channel feeds are being loaded, although I don't have this issue on this custom instance. |
Only new videos uploaded from 6 hours back would be updated in the feed, this is due to the nature of how PubSub works. Let me know if this happens on new uploads from now. |
@FireMasterK out-of-context issue: I noticed the Piped instance and login (I guess the DB) sometimes disconnect and I can't log in without restarting the whole instance, so do I need to restart it every hour like Invidious? About the feed: my sync stopped 3 days ago, so now I'm waiting for it to pull new videos and I'll tell you whether it worked. |
I have to visit each channel to get new videos in my feed, weird behavior. Is there a command or way to force it to fetch updates from YouTube? |
Are there any logs that you can provide? Run |
I'm away from my PC, so excuse my weird English. Umm, weirdly enough, I get And my config file
|
Try copying the name of the container from |
Hmm, I have the Piped backend running but I can't access its logs; it throws the same error |
Nvm, you have to run |
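To summarize the log-fetching exchange above: `docker logs` takes the container name (or ID) as listed by `docker ps`, not the compose service name. Here's a small sketch with an offline demonstration; the `piped` name pattern and the example container names are assumptions about a typical compose setup:

```shell
#!/bin/sh
# match_container: filter container names matching a pattern from
# `docker ps --format '{{.Names}}'` output on stdin.
match_container() {
  grep -i "$1"
}

# Real usage (assumes Docker is running and names contain "piped"):
#   docker ps --format '{{.Names}}' | match_container piped
#   docker logs --tail 100 <name-from-above>

# Offline demonstration with sample `docker ps` output:
printf 'piped-backend-1\npiped-postgres-1\nnginx\n' | match_container piped
```

This avoids guessing the name, which differs between docker-compose versions (`piped_piped_1` vs. `piped-piped-1`).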
Ohhhhh 😂 Anyways, I can't provide logs as I'm on my phone and can't copy them, and when I send them to a command-line pastebin it sends different output |
I have the same issue; I've tried it on the official instance and a custom one. The subscription won't update until I enter the channel, even if I wait for 6 hours. Some videos that were uploaded 20+ hours ago don't show. |
Can you share some example channel IDs? |
Here are some:
|
I am facing the same problem using the official instance (LinusTechTips, NoCopyrightSounds). Tried different instances and found the same results but with different channels. However, I've found that some videos do not disappear but have the wrong upload time. For example, Linus uploaded a video 15 mins ago, but in the feed it shows as uploaded 6h ago; when I scroll down the feed I eventually find it. Maybe this will help. |
My feed does not update at all; all subscribed channels only get new videos when I visit them directly via the Piped UI. My stack was installed following this guide: https://piped-docs.kavin.rocks/docs/self-hosting/, using the Docker method. This issue just happened recently. Rebuilding the back-end image did not fix it. |
Can confirm that some of my subscribed channels with new videos don't show up in the feed, regardless of instance. One way to make them show up is to go to the channel manually, close the Piped PWA and reopen it, and voilà, the videos for that channel appear in the feed. Edit: even if you go to the channel manually, the next new video an hour or so later again does not show up. A good example of this is the Netflix channel. |
Can confirm the issue still persists. I have all the channels bookmarked, and when they are opened at once the new videos show up in the feed. |
Here are some: |
No, that's the whole point of the script - it fetches your list of subscriptions from the DB and does the same request that you do when manually refreshing each subscription individually. You need to use the linked documentation to configure your machine to periodically run the script, which will refresh all subscriptions every time it runs. |
I see. Thank you |
I created a cron script, is this correct? @tron1point0 |
How is yours different from this one? @dhruvinsh
|
Exactly the same, it just works without |
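For anyone following this cron discussion, here is a hedged sketch of what such an entry could look like. The script path `/opt/piped/refresh-subscriptions.sh` is hypothetical; substitute wherever you saved the refresh script from this thread:

```shell
#!/bin/sh
# A cron entry that runs the refresh script at minute 0 of every hour.
# Field order: minute hour day-of-month month day-of-week command.
CRON_ENTRY='0 * * * * /opt/piped/refresh-subscriptions.sh >/dev/null 2>&1'

# Install it by appending to the current crontab:
#   ( crontab -l 2>/dev/null; echo "$CRON_ENTRY" ) | crontab -

echo "$CRON_ENTRY"
```

Redirecting output to `/dev/null` keeps cron from mailing you the curl progress noise on every run; drop the redirection while debugging.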
How many channels can be refreshed at the same time? Is using the CONTAINER ID OK? |
Both |
How can you verify whether this command has been executed successfully? |
Yes:
|
How can you verify whether this command has been executed successfully? |
If executed correctly with
Where the first Don't forget to replace |
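To make the success check above concrete: with curl's `-f` flag, an HTTP error becomes a non-zero exit status that the shell can test directly. A minimal offline sketch; the real curl call is shown only as a comment, and `refresh` here is a stub wrapper so the example runs without network access:

```shell
#!/bin/sh
# Sketch: detecting whether a refresh command succeeded via exit status.
refresh() {
  # In real use this would be something like:
  #   curl -ksf -o /dev/null "$PIPEDAPI/channel/$1"
  # where -f makes curl exit non-zero on HTTP errors (4xx/5xx).
  # Here we just run the given command as a stand-in.
  "$@"
}

if refresh true; then echo "refresh ok"; fi
if ! refresh false; then echo "refresh failed"; fi
```

The same pattern works inside a cron script: check `$?` after the pipeline (or use `set -e`) and log failures somewhere you will actually see them.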
I use xargs. |
Then you can drop the
$ docker exec piped_postgres_1 psql -Upiped -dpiped -qtAc "SELECT DISTINCT(channel) FROM users_subscribed" | xargs -I\{} curl -k -o /dev/null 'https://[...redacted...]/channel/{}'
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2607 100 2607 0 0 166k 0 --:--:-- --:--:-- --:--:-- 181k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2607 100 2607 0 0 215k 0 --:--:-- --:--:-- --:--:-- 231k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
... |
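The psql-to-xargs pipeline above can also be split so that URL construction is a separate, testable step. In this sketch `PIPEDAPI` and the example channel IDs are placeholders, and the real `docker exec`/`curl` invocation appears only in comments:

```shell
#!/bin/sh
PIPEDAPI="https://pipedapi.example.com"   # hypothetical instance URL

# build_urls: read one channel ID per line on stdin, emit one API URL
# per line on stdout.
build_urls() {
  sed "s|^|$PIPEDAPI/channel/|"
}

# Real usage (container/DB names as used earlier in this thread):
#   docker exec piped_postgres_1 psql -Upiped -dpiped -qtAc \
#     "SELECT DISTINCT(channel) FROM users_subscribed" \
#     | build_urls | xargs -n1 -P4 curl -ksf -o /dev/null

# Offline demonstration with fake channel IDs:
printf 'UC1\nUC2\n' | build_urls
```

Separating the steps makes it easy to swap the fetch stage, e.g. `xargs -P4` for mild parallelism or a `while read` loop with `sleep` for rate limiting.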
I ran the script above (albeit modified, I use k3s) and the feed is still not updated. Any other way I can solve this?
This number also shows up in the logs:
|
Here is a much improved script to do this. I have noticed that the Piped API now takes exactly 1 second for every channel request, which was not the case previously.
#!/bin/bash
PIPEDAPI="https://pipedapi.example.com"
curl -LIsfZkw '%{stderr}%{onerror}%{urlnum}: %{url}: %{errormsg}\n' --retry 1 --config \
<(docker exec -i piped-postgres \
bash -c "PGPASSWORD=\$POSTGRES_PASSWORD psql -U \$POSTGRES_USER -d \$POSTGRES_DB -qtAXc \
\"SELECT DISTINCT 'url=\"$PIPEDAPI\"/channel/' || id FROM public.pubsub;\"") \
>/dev/null |
@TeamPiped Since there seems to be a stable, scriptable workaround, and since a proper fix has been pending for 2.5 years now, could this workaround be included in the official Docker image, so that all Piped instances benefit from it? The issue wasn't that serious 2.5 years ago because it only affected a few videos, but since YouTube's war against adblockers and 3rd-party apps it has become a permanent condition that renders Piped's subscription feature basically useless. This also affects popular dependent projects like LibreTube. Including this workaround in the official images would be very much appreciated. |
Nice! I golfed that a little differently to pass the URLs to curl via
The |
The problem is that this is more of a hack than a proper workaround; it does not scale and is very inefficient, as it queries every channel in the DB every time, even if they have had no new content for years. So this hacky workaround is fine for your private Piped instance, but for big public instances it's kind of unfeasible, unfortunately (especially with the 1 sec per channel rate limiting). I have further improved the script:
Also, if it wasn't clear already, the script expands the env variables that are already present in the postgres container, so there's no need to pass them to the script.
#!/bin/bash
PIPEDAPI="https://pipedapi.example.com"
curl -LIsfZkw '%{stderr}%{onerror}%{urlnum}: %{url}: %{errormsg}\n' --retry 1 --config \
<(docker exec -i piped-postgres \
bash -c "PGPASSWORD=\$POSTGRES_PASSWORD psql -U \$POSTGRES_USER -d \$POSTGRES_DB -qtAXc \
\"SELECT DISTINCT 'url=\"$PIPEDAPI\"/channel/' || channel FROM users_subscribed;\"") \
>/dev/null |
I'd say every workaround is a hack, that's what distinguishes it from a proper fix 😄 The problem is that this issue has persisted for 2.5 years now, and it seems like either nobody is able (this includes myself) or willing to invest the time necessary to develop a proper fix, or it really can't be fixed in a meaningful way. I don't know, and honestly, I don't think the reasons matter in the end. Since subscriptions are one of Piped's key features, I'd consider their breakage (and let's be honest, it's completely broken) major enough to justify a workaround until a proper fix is found and merged. That's just my opinion, though.
Since it's a workaround, downsides - even major ones - are expected. Workarounds are temporary solutions, after all. The question is whether the workaround stays feasible.

Working with a rate limit doesn't really prevent us from applying this workaround, it only limits how many channels we can process in a given time. So for example, if users of a given instance have subscribed to 21,600 unique channels, feeds can only be updated four times a day max. I can only speak for myself here, but I'd definitely prefer a feed with a 6 hour delay over no working feed at all.

I don't run a public Piped instance, so I can't really assess how many unique channels we're talking about for larger instances. For example, kavin.rocks has 421,150 registered users. It's unlikely that subscriptions don't overlap, but I guess we still end up with way too many unique channels on kavin.rocks. So this workaround likely isn't feasible for kavin.rocks, at least not until rate limit evasion measures are taken. The two next largest instances (adminforge.de and drgns.space) have 23,367 and 13,979 registered users respectively. I can only speculate, but I would be surprised (considering what I know now; there definitely could be obstacles I just don't know about, please tell me then) if we can't at least get daily updates (= 86,400 unique channels max) with this workaround on these instances. Smaller instances should be fine and allow for more than one feed update per day.

So, let's work on improving the workaround to make it more feasible for instances of any size and ready to be included in the official Docker images. First, we need a config option to enable/disable this workaround. This could be as simple as adding a Docker env variable. Second, we can't leave calling this script up to a cronjob, because we don't know whether the number of unique channels exceeds the number of channels we can process in the cronjob's frequency.
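As a quick check of the capacity arithmetic in this comment (the ~1 request/second rate comes from the observation earlier in the thread; 21,600 channels is the example figure):

```shell
#!/bin/sh
# At 1 request/second, one day allows at most 86,400 channel refreshes.
SECONDS_PER_DAY=86400
CHANNELS=21600   # example number of unique subscribed channels

# Full feed refreshes possible per day at this rate:
echo "full refreshes per day: $(( SECONDS_PER_DAY / CHANNELS ))"
```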
On the other hand, we don't want to update channels too often. So, let's modify the script so that it runs repeatedly, indefinitely, and pauses if it has processed all channels before the next run is due. The frequency should be configurable, too (let's start with hourly updates). Here's a proof-of-concept with these two additions:
#!/bin/bash
PIPEDAPI="https://pipedapi.example.com"
SUBSCRIPTIONS_WORKAROUND_ENABLED="${SUBSCRIPTIONS_WORKAROUND_ENABLED:-1}"
SUBSCRIPTIONS_WORKAROUND_FREQUENCY="${SUBSCRIPTIONS_WORKAROUND_FREQUENCY:-3600}"
if [ "$SUBSCRIPTIONS_WORKAROUND_ENABLED" == "1" ]; then
while true; do
sleep_until=$(($(date +%s) + SUBSCRIPTIONS_WORKAROUND_FREQUENCY))
curl -LIsfZkw '%{stderr}%{onerror}%{urlnum}: %{url}: %{errormsg}\n' --retry 1 --config \
<(docker exec -i piped-postgres \
bash -c "PGPASSWORD=\$POSTGRES_PASSWORD psql -U \$POSTGRES_USER -d \$POSTGRES_DB -qtAXc \
\"SELECT DISTINCT 'url=\"$PIPEDAPI\"/channel/' || channel FROM users_subscribed;\"") \
>/dev/null
(( $(date +%s) >= sleep_until )) || sleep $(( sleep_until - $(date +%s) ))
done
fi
It still remains a rather simple workaround. That's by design: if it gets too complex, it would be better to either develop a proper fix or at least include the workaround directly in Piped instead of calling Piped's API with
So, let's discuss a few more possible additions. You mentioned that it doesn't make much sense to update channels that don't publish videos frequently. I absolutely agree. A simple solution could be to add two workaround scripts: one frequent version that excludes channels without recent videos (that's as simple as using a different SQL query), and another, less frequently running version that only updates the channels excluded by the first one. This two-tier system could also include other criteria; for example, channels that don't have many subscribers could fall into the low-frequency tier, or could be excluded completely (e.g. if there's just one subscriber). The criteria could be configurable, so that admins can choose what fits their instance best. WDYT? |
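As a sketch of the two-tier idea, here are two hypothetical SQL queries wrapped in shell variables. The `videos` table, its `uploader_id` column, and the millisecond `uploaded` timestamp are assumptions about Piped-Backend's schema, not verified here; check your own database before relying on them:

```shell
#!/bin/sh
# Tier 1 (refresh hourly): channels with an upload in the last 30 days.
TIER1="SELECT DISTINCT channel FROM users_subscribed s WHERE EXISTS (
  SELECT 1 FROM videos v WHERE v.uploader_id = s.channel
  AND v.uploaded > (extract(epoch FROM now() - interval '30 days') * 1000));"

# Tier 2 (refresh daily): everything else.
TIER2="SELECT DISTINCT channel FROM users_subscribed s WHERE NOT EXISTS (
  SELECT 1 FROM videos v WHERE v.uploader_id = s.channel
  AND v.uploaded > (extract(epoch FROM now() - interval '30 days') * 1000));"

# Each tier's query would replace the SELECT in the refresh script, e.g.:
#   docker exec -i piped-postgres bash -c \
#     "PGPASSWORD=\$POSTGRES_PASSWORD psql -U \$POSTGRES_USER -d \$POSTGRES_DB -qtAXc \"$TIER1\""
echo "$TIER1"
```

The 30-day window and the tier boundaries are arbitrary starting points; if they were made configurable, each admin could tune them to their instance's channel count and rate budget.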
Official Instance
Describe the bug
I self-host piped.esmailelbob.xyz and I noticed that videos from channels I subscribed to show up in Invidious but take very long to show up in Piped
To Reproduce
sub to some channels
take the channels file
add it to both Invidious and Piped
Expected behavior
videos show up in my feed when a channel uploads a video
Logs/Errors
hmm, not sure what to add here
Browser, and OS with Version.
No response
Additional context
No response