How do I use this feature? I've been a Firefox user since Quantum and had no idea this was a thing.
For private use? Hot take, but Arch. It's easy to maintain and surprisingly hard to break. I spend essentially zero time on maintenance other than running package updates, and I only reinstall when I get a new computer.
(I say private use only because you'll get weird looks if you use Arch on a server in a professional setting, and it might break if you try to update it after five years of neglect, since there aren't any "releases" to group big changes. In practice I run Arch on my home server too with no issues.)
This is so cool, the first MQTT-based sensor I've set up. I already had a broker set up with HA, but how does HA automatically discover which topic to listen to, know the vendor name, and know how to interpret all the data?
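(From what I can tell, this is HA's MQTT discovery: the device publishes a retained JSON config message under the homeassistant/ discovery prefix that tells HA what the sensor is and which state topic to subscribe to. A minimal sketch of the idea, assuming the paho-mqtt package, a broker on localhost, and made-up names for the sensor and topics:)

```python
# Minimal sketch of Home Assistant MQTT discovery. Assumes the paho-mqtt
# package and a broker on localhost; sensor name, ids and topics are made up.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x constructor; 2.x also wants a callback_api_version
client.connect("localhost", 1883)

config = {
    "name": "Living room temperature",              # hypothetical sensor
    "unique_id": "livingroom_temp_1",
    "state_topic": "mysensor/livingroom/temperature",
    "unit_of_measurement": "°C",
    "device_class": "temperature",
    "device": {"identifiers": ["mysensor-1"], "manufacturer": "ExampleVendor"},
}

# Retained config message on the homeassistant/ discovery prefix: HA picks
# this up, creates the entity, and subscribes to state_topic for values.
client.publish(
    "homeassistant/sensor/livingroom_temp_1/config",
    json.dumps(config),
    retain=True,
)

# The actual measurement then just goes to the state topic.
client.publish("mysensor/livingroom/temperature", "21.5", retain=True)
```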
Interesting, so I guess those API calls are just fetching the cached calendar on my HA Yellow. Wonder why it's so slow, but I guess there's not much to do about that then. :(
Not exactly. My main use case here is for my girlfriend and me to see both of our calendars in one place, and HA had support for it and is a web portal we both have access to. Doing automations on them is secondary.
Currently, whenever I look at the calendar control panel, it loads for a bit while pulling all the calendars, and sometimes times out and shows nothing. I believe this is because it's pulling from Fastmail / iCloud every time and might be rate limited or just have a poor connection. This wouldn't be an issue if the calendars were stored on the instance itself, because then it would only miss the latest entries.
The idea behind self-hosting an app for this is that if HA can't do the caching, maybe the self-hosted app can, and then it wouldn't matter that HA fetches the calendars remotely each time, since the remote would be on the same local network. Having them as separate calendars is still desirable, since that gives some additional information.
Is Immich in a usable state yet? I was looking for a self-hosted image service a while back, but eventually went with pigallery2, mostly due to the extremely simple file storage (just point it at a folder and you're good to go). I do miss being able to manage images/albums from the website, though, and having a more mobile-friendly version. I kind of avoided Immich because the repo says it's under very active development (#scary).
the biggest selling point for me is that I’ll have a mounted folder or two, a shell script for creating the container, and then if I want to move the service to a new computer I just move these files/folders and run the script. it’s awesome. the initial setup is also a lot easier because all dependencies and stuff are bundled with the app.
in short, it’s basically the exe-file of the server world
runs everything as root (not many well-built images with proper user management, it seems)
that’s true I guess, but for the most part shit’s stuck inside the container anyway so how much does it really matter?
you cannot really know which stuff is in the images: you must trust who built it
you kinda can; reading a Dockerfile is pretty much like reading a very basic shell script for the most part. regardless, I do trust most creators of the images I use. most of the images I have running are either created by the people who made the app or official Docker images. if I trust them enough to run their apps, why wouldn't I trust their images?
lots of mess in the system (mounts, fake networks, rules…)
that’s sort of the point, isn’t it? stuff is isolated
I get the same all the time. OP reminded me to check today: JetBrains Toolbox had cached downloads taking up 42 GB in total, a yarn folder with 2.3 GB, a bazel folder with 15 GB (apparently used for building Anki), and 7 GB of paru clones.
All in all it added up to 82 GB.
I'm on Arch, which I consider one of the larger distros, and most such configuration is very simple there. Not sure what rolling Mesa is. I probably wouldn't recommend Ubuntu to anyone who is against using Snap, but there are many distros to choose from if you want KDE as well. It's more a question of why people would go for Hannah Montana Linux (figuratively speaking, some very niche distro).
But to respond to your core point, sure. If you do have a lot of customization needs for whatever reason, then by all means. (I still don’t get it)
I generally don't understand why people go for the smaller ones at all, with some exceptions (e.g. you hate systemd for some reason and really want systemd-less Arch, or you have some super niche preference). I guess it's good that someone does, to prevent the whole scene from being dominated by a single distro. But for 99% of distros it makes very little difference which one you use, except that you'll have fewer resources at your disposal (fewer packages, fewer Stack Overflow threads, fewer everything).
Given your background, it should come as no surprise that it doesn't really matter much.
That said, I recommend Arch with some caveats, mainly with regard to the "very little effort to start using" requirement. If you know how to follow instructions, it should only take about 30-45 minutes to install. On the other hand, it will fit your other requirements of good defaults and not shipping with loads of applications. When you install an app you get that app and nothing else, and the defaults will either be exactly what the upstream defaults would be if you built it yourself, or something very close to that. You also have everything available through the AUR, and after using it for years I've yet to run into an update not going smoothly.
I'm well familiar with EEE; I've used Linux off and on for something like 20 years, back when Microsoft really was the boogeyman. I don't think VS Code qualifies for this category, since it was Microsoft's own project and open source from the start (ish; it has roots in Atom, I think). It was never embraced/extended, and extinguishing their own product makes no sense. (btw I don't even use VS Code, shit vim plugins in my experience, JetBrains all the way)
WSL, IMO, is a concession on Microsoft's part, because most dev tools nowadays are being made primarily with Linux in mind. It's what makes Windows at all usable as a development platform in many situations. And pretty much nothing is developed specifically for WSL. All WSL has over a normal Linux distro is integration with the host system, AFAIK.
I hate Windows and dislike a lot of Microsoft products, but I think we're way past Microsoft being the bad guy. They kinda like Linux now, and probably do more good than bad for it. There are much worse companies in tech; I think Microsoft's worst crimes as of late are creating Teams and being boring.
How is it a pain? You just change the origin on your existing projects, and for new projects you just use the new one from the start.
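(Concretely, "change the origin" is a single git remote set-url call; a minimal sketch with a made-up URL, wrapped in Python only to keep the examples in one language:)

```python
# Point an existing clone at a new remote by rewriting where "origin" goes.
# The URL is made up for illustration.
import subprocess

NEW_URL = "git@git.example.com:me/myproject.git"

# Rewrite the origin remote of the existing project...
subprocess.run(["git", "remote", "set-url", "origin", NEW_URL], check=True)

# ...and confirm the change.
subprocess.run(["git", "remote", "-v"], check=True)
```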
That's a promising idea. Apparently my $TERM is not foot but xterm-256color by default. However, starting with TERM=foot br or TERM=foot broot doesn't seem to enable high-res previews. :(
Anyone know if I can get the high-res image previews in foot? I think I saw something about foot supporting the kitty graphics protocol, but I can't get it to work. If I start it normally I get low-res previews; if I start it with TERM=kitty I get no previews.
Never knew it, very neat!