For many systems out there, /bin and /lib are no longer a thing. Instead, they are just symlinks to /usr/bin and /usr/lib. And on some systems even /sbin has been merged into /bin (which is in turn linked to /usr/bin).
Not just Linux… 99% of the time you see something weird in the computing world, the reason is going to be “because history.”
The C developers are the ones with the ageist mindset.
The Rust developers are certainly not the ones raising the point “C has always worked, so why should we use another language?”, an argument that ignores the objective advantages of Rust and leans solely on C being the older language.
They very rarely have memory and threading issues
It’s always the “rarely” that gets you. A program that doesn’t crash is awesome, a program that crashes consistently is easy to debug (and most likely would be caught during development anyway), but a program that crashes only once a week? Wooo boy.
People vastly underestimate the value Rust brings by guaranteeing that entire classes of bugs simply cannot happen.
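To make that concrete, here is a toy sketch (mine, purely illustrative) of the kind of thing the compiler enforces: sharing a counter across threads only compiles once it sits behind an Arc<Mutex<…>>, so the “forgot the lock” class of bug never even makes it into the binary.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter. Removing the Mutex (or the Arc) turns this into a
    // compile error rather than a silent data race at runtime.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // Always prints 4000; no interleaving can lose an increment.
    println!("total = {}", *counter.lock().unwrap());
}
```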
It really depends.
If I know I will never open the file in the terminal or batch process it in some way, I will name it using Common Case: “Cool Filename.odt”.
Anything besides that: snake case, preferably prefixed with the current date: “20240901_cool_filename”. (Converting between the two is trivial to script, see the sketch below.)
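Here is an illustrative Rust sketch of that conversion (the helper name is made up; getting the actual date would normally come from something like the chrono crate, so it’s just passed in as a string here):

```rust
// Illustrative helper (name made up): turn "Cool Filename" into the
// date-prefixed snake_case form described above.
fn scripted_name(date: &str, title: &str) -> String {
    let slug: String = title
        .chars()
        .map(|c| if c.is_alphanumeric() { c.to_ascii_lowercase() } else { '_' })
        .collect();
    format!("{date}_{slug}")
}

fn main() {
    assert_eq!(scripted_name("20240901", "Cool Filename"), "20240901_cool_filename");
    println!("{}", scripted_name("20240901", "Cool Filename"));
}
```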
People back then just grossly underestimated how big computing was going to be.
The human brain is not built to predict exponential growth!
Assuming the entire US court system isn’t in the corporate pocket
I love your optimism
You see, shit like this is why I think some of the Eastern philosophers like Xunzi hit the mark on what “God” is: God is not a sentient being, God does not have a conscious mind like we do, God simply is.
Of course, those people didn’t call this higher being “God”; they called it “Heaven”. But I think it really refers to the natural flow of the world, something that is not controlled by us. Maybe the closest equivalent to this concept in the non-Eastern world is “Luck”: people rarely attribute “being lucky” to the actions of <insert deity here>; it simply happens by the flow of this world, not through the action of an all-knowing, all-powerful deity. But like I said, that’s merely the closest approximation of the Heaven concept I can think of.
The side effect of this revelation is that you can’t blame the Heaven for your own misfortunes. The Heaven is not a sentient being, after all!
This also explains why VPN is a possible workaround to this issue.
Your VPN will encapsulate any packet your phone sends out inside a new packet (with its contents encrypted), and this new packet is the one actually sent out to the internet. What TTL does this new packet have? You guessed it: 64. From the ISP’s perspective, it is no different from any other packet sent directly from your phone.
BUT, not all phones pass tethered packets to the VPN client; some send them directly out to the internet. Mine does this! In that case, TTL-based tracking will still work. And some phones seem to have other ways of informing the ISP that the data is tethered, in which case the VPN workaround may fail.
Not sure if it’s still the case today, but back then cellular ISPs could tell you were tethering by looking at the TTL (time to live) value of your packets.
Basically, a packet usually starts with a TTL of 64. After each hop (e.g. from your phone to the ISP’s equipment) the TTL is decremented, becoming 63, then 62, and so on. The main purpose of TTL is to prevent packets from lingering in the network forever, by dropping any packet whose TTL reaches zero. Most packets reach their destinations within 20 hops anyway, so a TTL of 64 is plenty.
Back to the topic. What happens when the ISP receives a packet with a TTL lower than expected, say 61 instead of 62? It realizes that your packet must have gone through an additional hop, for example from your laptop onto your phone, hence the data must be tethered.
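If it helps, here is a toy sketch of that arithmetic (the function and numbers are purely illustrative, assuming the usual initial TTL of 64):

```rust
// Toy model of the TTL heuristic described above (illustrative only).
fn extra_hops(observed_ttl: u8, hops_to_isp: u8) -> u8 {
    // A packet that left the sender at TTL 64 and took `hops_to_isp` hops
    // should arrive with this TTL...
    let expected_ttl = 64u8.saturating_sub(hops_to_isp);
    // ...anything lower means it took extra hops before the phone, i.e. tethering.
    expected_ttl.saturating_sub(observed_ttl)
}

fn main() {
    // Sent directly from the phone, one hop to the ISP: TTL 63 -> 0 extra hops.
    assert_eq!(extra_hops(63, 1), 0);
    // Tethered laptop -> phone -> ISP: TTL 62 -> 1 extra hop, busted.
    assert_eq!(extra_hops(62, 1), 1);
    // VPN on the phone: the outer packet starts fresh at 64, so the ISP
    // again sees 63 and can't tell the difference.
    assert_eq!(extra_hops(63, 1), 0);
    println!("all TTL examples hold");
}
```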
If I remember right, the syncing issue was particularly egregious when running windowed X11 programs on Wayland. So it could be that you just got lucky.
It’s the explicit sync protocol.
The TL;DR is basically: everyone else has supported implicit sync for ages, but Nvidia doesn’t. So now everyone is designing an explicit sync Wayland protocol to work around the issue.
You need to enable DRM KMS on Nvidia.
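For anyone wondering how: on most distros this means turning on the nvidia-drm module’s modeset option, for example via a modprobe config along these lines (the file name is just a convention, and you may need to regenerate your initramfs afterwards; check your distro’s docs):

```
# /etc/modprobe.d/nvidia-drm.conf
options nvidia-drm modeset=1
```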
Mine is simply default KDE. The only visible thing I’ve changed is the wallpaper; my customizations are mostly “invisible” ones like shortcut keys, settings changes, or scripting.
Desktop? I settled on Arch and Fedora.
Server? Debian. Although technically I never distrohopped on servers; I’ve been using Debian since the beginning of time.
Can’t replicate your results here. I play on Wayland, and deliberately force some games to run natively on Wayland (SDL_VIDEODRIVER=wayland), and so far I haven’t noticed any framerate changes beyond statistical noise.
It’s not a fork of wlroots. wlroots is a library to assist developers in creating Wayland compositors.
You ably demonstrate your own inability to listen.
Or was it you?
I’m not sure how you hallucinated that Wayland got 4 years of design and 8 years of implementation.
2012-2021, or to clarify “Late 2012 to early-mid 2021” seems to be 8-point-something years to me. I dunno, did mathematics change recently or something?
With graphics programming relatively in its infancy X11 didn’t require 15 years to become usable
I hope you do understand that graphics weren’t as complicated back then. Compositing of windows was not an idea (at least, not a widespread one) in the 90s. Nor was sandboxing an idea back then. Or multi-display (we hacked it onto X11 later through XRandR). Or HDR nowadays. Or HiDPI. Or touch input and gestures. We software-rendered everything too, so DRI and friends weren’t even thought of.
In a way… you are actually insulting the kernel developers.
That is to say in practical effect actual usage of real apps so dwarfs any overhead that it is immeasurable statistical noise
The concern about battery life is also probably equally pointless.
some of us have actual desktops.
There just aren’t. It’s not blurry.
I don’t have a bunch of screen tearing
Let me summarize this with your own statement, because you certainly just disregarded everything I said:
Your responses make me think you aren’t actually listening for instance
Yeah, you are now just outright ignoring people’s opinions. 2 hours of battery life - statistical noise, pointless. Laptops - who neeeeeeeeds those, we have desktops!! Lack of fractional scaling, which people literally listed as a “disadvantage” of Wayland before it got the protocol - yeah, I guess X11 is magic and somehow things are not blurry on X11, which has the same problem when XRandR is used.
Do I need to quote more?
Also, regarding this:
Wayland development started in 2008 and in 2018 was still a unusable buggy pile of shit.
Maybe you should take note of when Wayland development actually started picking up. 2008 was when the idea came up; 2012 was when the concrete foundation started being laid.
Not to mention that it was 2021 when Fedora and Ubuntu made it the default. Your experience in 2018 is not representative of the Wayland ecosystem in 2021 at all, never mind that it’s now 2023. The three years between 2018 and 2021 saw various applications either implementing their first Wayland support or maturing it. Maybe you should try again before asserting a bunch of outdated opinions.
Wayland was effectively rebuilding the Linux graphics stack from the ground up. (No, it’s not rebuilding the stack for the sake of it. The rebuilding actually started in X.org, but people were severely burned out in the end. Hence Wayland. X.org still contains an atomic KMS implementation, it’s just disabled by default.)
4 years of designing and 8 years of implementation across the entire ecosystem is impressive, not obnoxious.
It’s obnoxious to those of us who discovered Linux 20 years ago rather than last week.
Something makes me think that you weren’t actually using it 20 years ago.
Maybe it’s just my memory of the modelines failing me. Hmmm… did I just hallucinate the XFree86 server taking down my system?
Oh noes, I am getting old. Damn.
Usually I sympathize with sentiments like this (“people use X because of circumstances beyond their control”), but browsers are not one of those cases.
If you have a website that requires the use of Chrome, then just use Chrome for that website! It’s not an either-or thing – you can install both browsers and use Firefox as the primary one.
And that’s what makes this statement so problematic. You don’t gain anything by staying exclusively on Chrome when it and Firefox can work alongside each other.