• 1 Post
  • 17 Comments
Joined 1 year ago
Cake day: September 20th, 2023



  • aleq@lemmy.world to Linux@lemmy.ml · Linux middle ground?
    15 up · 7 down · 2 months ago

    For private use? Hot take, but Arch. It’s easy to maintain and not easy to break at all. I think I spend zero time on maintenance other than running package updates. I only reinstall when I get a new computer.

    (I say for private use only because you’ll get weird looks if you run Arch on a server in a professional setting, and it might break if you try to update it after five years of not doing so, since there aren’t any “releases” to group big changes. In practice I run Arch on my home server too with no issues.)




  • Not exactly. My main use-case here is for my girlfriend and me to see both of our calendars in one place, and HA supports that and gives us a web portal we both have access to. Doing automations on them is secondary.

    Currently, whenever I look at the calendar control panel it loads for a while as it pulls all the calendars, and sometimes it times out and shows nothing. I believe this is because it pulls from Fastmail / iCloud every time and might be rate limited or just have a poor connection. This wouldn’t be an issue if the calendars were stored on the instance itself, because then it would only miss the latest entries.

    The idea behind self-hosting an app for this is that if HA can’t do the caching, maybe the self-hosted app can, and then it wouldn’t matter that HA fetches the calendars remotely each time, since the “remote” would be on the same local network. Having them as separate calendars is still desirable, since that gives some additional information.
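    For what it’s worth, the caching half of this could be a fairly small self-hosted script. Here’s a rough sketch in Python (stdlib only), assuming the calendars are exposed as plain ICS subscription links rather than full CalDAV; the feed URLs, port and refresh interval are made-up placeholders:

    ```python
    # Minimal local calendar cache: periodically download remote ICS feeds and
    # re-serve the cached copies over HTTP on the LAN, so the consumer (e.g. HA)
    # never has to wait on Fastmail/iCloud directly.
    import threading
    import time
    import urllib.request
    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler
    from pathlib import Path

    # Hypothetical subscription URLs, not real ones.
    FEEDS = {
        "fastmail.ics": "https://example.fastmail.com/ical/calendar.ics",
        "icloud.ics": "https://example.icloud.com/published/calendar.ics",
    }
    CACHE_DIR = Path("calendar-cache")
    REFRESH_SECONDS = 15 * 60  # pull every 15 minutes

    def refresh_loop() -> None:
        """Fetch each feed; on failure keep serving the last good copy."""
        CACHE_DIR.mkdir(exist_ok=True)
        while True:
            for name, url in FEEDS.items():
                try:
                    with urllib.request.urlopen(url, timeout=30) as resp:
                        (CACHE_DIR / name).write_bytes(resp.read())
                except OSError:
                    pass  # rate limited or flaky connection: the old copy stays in place
            time.sleep(REFRESH_SECONDS)

    if __name__ == "__main__":
        threading.Thread(target=refresh_loop, daemon=True).start()
        # HA would then subscribe to e.g. http://<lan-host>:8321/fastmail.ics
        handler = partial(SimpleHTTPRequestHandler, directory=str(CACHE_DIR))
        HTTPServer(("0.0.0.0", 8321), handler).serve_forever()
    ```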




  • aleq@lemmy.world to Selfhosted@lemmy.world · Why docker
    5 up · 10 months ago

    the biggest selling point for me is that I’ll have a mounted folder or two, a shell script for creating the container, and then if I want to move the service to a new computer I just move these files/folders and run the script. it’s awesome. the initial setup is also a lot easier because all dependencies and stuff are bundled with the app.

    in short, it’s basically the exe-file of the server world
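    to make that concrete, here’s a hedged sketch of what such a “create the container” script might look like, written with the Python docker SDK instead of a plain shell script; the image name, paths and port are all made-up placeholders:

    ```python
    # Recreate the service container from scratch: a couple of bind-mounted
    # host folders hold all the state, so moving the service to a new machine
    # is just copying those folders and re-running this script.
    import docker  # pip install docker

    DATA_DIR = "/srv/myservice/data"      # hypothetical host folders that hold
    CONFIG_DIR = "/srv/myservice/config"  # everything worth backing up / moving

    client = docker.from_env()
    client.containers.run(
        "ghcr.io/example/myservice:latest",  # hypothetical image
        name="myservice",
        detach=True,
        restart_policy={"Name": "unless-stopped"},
        ports={"8080/tcp": 8080},            # container port -> host port
        volumes={
            DATA_DIR: {"bind": "/data", "mode": "rw"},
            CONFIG_DIR: {"bind": "/config", "mode": "rw"},
        },
    )
    ```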

    runs everything as root (not many well-built images with proper user management, it seems)

    that’s true I guess, but for the most part shit’s stuck inside the container anyway so how much does it really matter?

    you cannot really know what’s in the images: you have to trust whoever built them

    you kinda can: reading a Dockerfile is pretty much like reading a very basic shell script for the most part (there’s a small sketch at the end of this comment for listing an image’s layers, too). regardless, I do trust most creators of images I use. most of the images I have running are either created by the people who made the app, or official docker images. if I trust them enough to run their apps, why wouldn’t I trust their images?

    lots of mess in the system (mounts, fake networks, rules…)

    that’s sort of the point, isn’t it? stuff is isolated
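    as referenced above, here’s a small sketch for peeking at what’s actually inside an image, using the Python docker SDK’s image history (essentially what `docker history <image>` prints on the command line); the image name is just an example:

    ```python
    # Print the recorded build steps (layers) of an image, roughly one line
    # per Dockerfile instruction, plus the size each layer adds.
    import docker  # pip install docker

    client = docker.from_env()
    image = client.images.pull("nginx:latest")  # example image, swap in your own

    for layer in image.history():
        created_by = layer.get("CreatedBy", "")      # the command behind this layer
        size_mb = layer.get("Size", 0) / 1_000_000   # bytes -> MB
        print(f"{size_mb:8.1f} MB  {created_by[:100]}")
    ```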



  • I’m on arch, which I consider one of the larger distros, where most such configuration is very simple. Not sure what rolling mesa is. I probably wouldn’t recommend Ubuntu to anyone who is against using Snap, but there are many distros to choose from if you want KDE as well? It’s more a question of why people would go for Hannah Montana Linux (figuratively speaking, some very niche distro).

    But to respond to your core point, sure. If you do have a lot of customization needs for whatever reason, then by all means. (I still don’t get it)


  • I generally don’t understand why people go for the smaller ones at all. I guess it’s good that someone does, to prevent the whole scene being dominated by a single distro, and there are some exceptions (e.g. you hate systemd for some reason and really want systemd-less Arch, or you have some super niche preference). But for 99% of distros it makes very little difference which one you use, except that you’ll have fewer resources at your disposal (fewer packages, fewer Stack Overflow threads, fewer everything).


  • Given your background it should come as no surprise that it doesn’t really matter much.

    That said, I recommend Arch with some caveats, mainly with regard to the “very little effort to start using” requirement. If you know how to follow instructions, it should only take about 30-45 minutes to install. On the other hand, it will fit your other requirements of good defaults and not shipping with loads of applications. When you install an app you get that app and nothing else, and the defaults will either be exactly what the upstream defaults would be if you built it yourself, or something very close to that. You also have everything available through the AUR, and after using it for years I’ve yet to run into an update not going smoothly.


  • I’m well familiar with EEE; I’ve used Linux off and on for something like 20 years, back when Microsoft really was the boogeyman. I don’t think VS Code qualifies for this category, since it was originally (ish; it has roots in Atom, I think) Microsoft’s own open source project. It was never embraced/extended, and extinguishing their own product makes no sense. (btw I don’t even use VS Code, shit vim plugins in my experience, jetbrains all the way)

    WSL, IMO, is a concession on Microsoft’s part, because most dev tools nowadays are made primarily with Linux in mind. It’s what makes Windows at all usable as a development platform in many situations. And pretty much nothing is developed specifically for WSL. All WSL has over a normal Linux distro is integration with the host system, AFAIK.