It's been on the experimental branch for a while now.
Can you run something like iperf3 or OpenSpeedTest between the server and client to prove it's a network throughput issue?
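If you haven't used it before, it's only a couple of commands; a quick sketch (192.168.1.10 is a made-up server IP, swap in your own):

    # on the server
    iperf3 -s

    # on the client
    iperf3 -c 192.168.1.10
    # add -R to test the reverse direction too
    iperf3 -c 192.168.1.10 -R

If both directions come back near line speed, the network probably isn't your problem.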
Do you have a network switch you can add to avoid passing traffic through your router (if it is indeed the problem)?
Have you ensured you aren't unknowingly using Wi-Fi at either end?
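On Linux you can sanity-check that with something like this (assuming the wired interface is eth0, yours may be named differently):

    # confirm the cable link is up and at the expected speed
    sudo ethtool eth0 | grep -E 'Speed|Link detected'

    # or, on a NetworkManager system, see which interface is actually in use
    nmcli device status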
NGINX is a bit more hands-on than some other options, but it's mature, configurable, and there's a huge amount of information out there for setting it up for various use cases.
In my case, it's what I set up when I was first getting into this, and it works, so I don't want to go through setting up anything else.
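For anyone curious, a rough sketch of the kind of reverse-proxy block I mean (media.example.com, the cert paths, and the backend on port 8096 are all placeholders, adjust for your own setup):

    server {
        listen 443 ssl;
        server_name media.example.com;

        ssl_certificate     /etc/letsencrypt/live/media.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/media.example.com/privkey.pem;

        location / {
            # hand everything to the app running locally
            proxy_pass http://127.0.0.1:8096;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # websocket support, some apps need it for live updates
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }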
Thanks for the insightful and helpful comment.
Unraid is great, and I have been using it for over a decade now, but a paid OS on a two-bay NAS seems excessive.
I use Plexamp for that; Jellyfin does it too. You can assign libraries per user quite easily.
So for 3 users you might have 4 libraries: one per user, plus a shared library they all have access to.
the 2.5" size of disks are now mostly direct USB controller disks rather than sata adapters internally.
3.5" disks are still SATA as far as i’ve seen but the actual sku’s of the disks are often the lower grades. like you will get a disk that looks like another good disk but with only 64mb of dram instead of 256 on the one you would buy as a bare internal drive for example so they can end up a bit slower. and warranties are usually void.
Externals used to be my main source of disks, but these days there are better options where it's easier to know exactly what you're getting.
Are you transcoding?
4 Mbit/s per client for 1080p is generally a workable minimum for the average casual watcher if you have H.265-compatible clients (and a decent encoder, like a modern Intel CPU for example), or 6-8 Mbit/s per client if it's H.264 only.
Remember that the bitrate-to-quality curve for live transcoding isn't as good as a slow, non-real-time encode done the brute-force way on a CPU. So if you have a few videos that look great at 4 Mbit/s, don't assume your own transcodes will look quite that nice: you're using a GPU to get it done as quickly as possible with acceptable quality, not as slowly and carefully as possible for the best compression.
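If you want to see the gap for yourself, roughly this kind of comparison (filenames are made up, and the second line assumes an Intel iGPU with working QSV drivers):

    # slow, careful CPU encode - best quality per bit, can take hours
    ffmpeg -i in.mkv -c:v libx265 -preset slower -crf 20 -c:a copy out_cpu.mkv

    # fast hardware encode, like a server does live - needs more bits for the same look
    ffmpeg -i in.mkv -c:v hevc_qsv -global_quality 24 -c:a copy out_gpu.mkv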
You're confusing a container format (MKV) with a video codec (AV1).
MKV is just a container, like a folder or zip file, that holds the video stream (or streams; technically you can have multiple), which could be in H.264, H.265, AV1, etc., along with audio streams, subtitles, and other files that go along with them, like custom fonts, posters, and so on.
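You can see this for yourself by pointing ffprobe at any MKV (assuming you have ffmpeg installed; movie.mkv is whatever file you like):

    # list every stream in the container with its codec
    ffprobe -v error -show_entries stream=index,codec_name,codec_type -of csv=p=0 movie.mkv
    # typical output: 0,h264,video / 1,dts,audio / 2,subrip,subtitle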
As for the codec itself, AV1 done properly is a very good codec, but at visually lossless quality it isn't significantly better than a good H.265 encode unless you do painfully slow CPU encodes rather than fast, efficient GPU encodes. People compressing their entire libraries to AV1 are sacrificing a small amount of quality, and some people are more sensitive to its flaws than others. In my case I try to avoid re-encoding in general. AV1 is also less supported on TVs and media players, so you run into issues with some devices not playing it at all, or having to fall back to CPU decoding.
So I still have my media mostly in untouched original formats. Some of my old movie archives, and things that aren't critical like daily shows, are H.265-encoded for a bit of space saving without risking compatibility issues. Most of my important media and movies are not re-encoded at all; if I rip a Blu-ray, I store the video stream that was on the disc untouched.
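To be clear, that's just a remux, not an encode; something like this with ffmpeg does it in minutes with zero quality loss (filenames made up, and the odd stream type may need extra flags):

    # copy every stream into an MKV untouched, no re-encoding
    ffmpeg -i bluray_rip.m2ts -map 0 -c copy movie.mkv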
N5095? There are lots of reports of that one not supporting everything it should based on the other Jasper Lake chips, with the CPU getting hit for decode when it shouldn't, for example. Also, HDR-to-SDR tone mapping can't be accelerated with VPP on that one as far as I know, so the CPU gets smashed. I think you can do it with OpenCL though.
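If you want to check what the iGPU actually advertises on Linux, vainfo is the usual tool (assuming the render node is /dev/dri/renderD128):

    sudo vainfo --display drm --device /dev/dri/renderD128
    # look for the profiles you care about, e.g. VAProfileHEVCMain10 with VAEntrypointVLD for decode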
Was it an N100? They have a severely limited power budget of 6 W, compared to the N95 at 25 W or so.
I'm running Jellyfin on top of Ubuntu Desktop while also playing retro games. That all sits in a Proxmox VM with other services running alongside it. It's perfectly snappy.
One of my mini PCs is just a little N95, and it can easily transcode 4K HDR to 1080p (HDR or tonemapped SDR) for a couple of clients, with excellent image quality. You could build a nice little server with a modern i3 and 16 GB of RAM and it would smash through 4 or 5 high-bitrate 4K HDR transcodes just fine.
Is that one transcoding client local to you, or are you trying to stream over the web? If it's local, maybe put some of the budget toward a new player for that screen?
I've had good luck with WD Blue NVMe drives (SN550).
I've put several of those into machines at work and have had years without an issue. I'm also running a WD Blue SN550 1TB in my server as one of the caches: 25,000 hours power-on time, >100 TB written, temperatures way higher than they should be, and still over 93% health remaining according to SMART.
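Those numbers all come straight out of SMART, so you can check your own drives the same way (assuming the drive shows up as /dev/nvme0):

    sudo smartctl -a /dev/nvme0
    # the interesting fields: Percentage Used, Data Units Written, Power On Hours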
I played Crysis on a Vuzix VR920 around 2008; that was my first VR other than a Virtual Boy.
Dual 640x480, frame-interleaved 3D at 30 Hz per eye! If you dropped a single frame, the eyes got out of sync and switched! I think I had dual 9600 GTs at the time and it struggled. I think it also struggled on the dual 9800 GTX+ cards I had after that.
Head tracking was purely gyro/accelerometer based and worked very poorly.
That is awesome.
I played Descent 1 and 2 for hours on end back in the day; I never got to play 3, as I didn't have a 3D card yet and they dropped the software renderer option.
I haven't thought about Powder Toy in so many years.
Yip, I have a Linux VM running on one of my boxes in the garage that is plugged into a video matrix, so I can bring it up on any screen in the house. I use the Pi to connect keyboard/mouse/controllers etc. to it when I'm using it.
I use Ubooquity and Komga, both mainly for the OPDS service which I access on various devices.
Ubooquity is good for basic book and file serving, but it does support graphic content too. Komga is very much focussed on graphic content and is very good at it.
I have .solutions and .info domain emails that still get denied by some services, especially anything government or public utility; pain in the arse.
You’d think that at least .info would be pretty well accepted by now.