My router will still block all ports not explicitly allowed for the hosts, regardless of protocol; it's a firewall after all, not just NAT. Just because the host is addressable doesn't mean its ports are reachable.
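For illustration, a default-deny inbound policy like that could look roughly like this with nftables (table/chain names and the SSH allow are just examples):

```
# drop everything inbound by default, IPv6 and IPv4 alike
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0 ; policy drop ; }'
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input iif lo accept
# ICMPv6 must stay open for neighbor discovery, PMTUD etc.
nft add rule inet filter input meta l4proto ipv6-icmp accept
# an explicitly whitelisted service
nft add rule inet filter input tcp dport 22 accept
```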
Testing is actually mandatory; what's not mandatory, though, is doing it before deploying.
what’s feurking
An optional step in the development process
Emacs? When there's ed? Talk about bloat…
Personally, I'd love to see wider usage of S/MIME and/or PGP.
I'd rather see less. https://www.latacora.com/blog/2019/07/16/the-pgp-problem/ is a good summary of the issues, and they have a shorter follow-up post on why encrypting mail in general is a bad idea at https://www.latacora.com/blog/2020/02/19/stop-using-encrypted/
What I take issue with at Actalis is that they don't just sign your public key; you actually get the private key from them, since they generate it server-side. It then depends on how much you trust the issuer.
By definition, that key can no longer be considered “private”.
Could be the kernel itself
Wouldn’t make sense to me because the thread says GNU/Linux and others, though this could relate to Android or distros not using any GNU.
gnupg
Usually not exposed to the network, though. But it's generally a mess, so I wouldn't be too surprised.
Another candidate I have in mind is ntpd, but again that is usually not easily accessible from outside and not used everywhere, as stuff like systemd-timesyncd exists.
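A quick way to check whether any NTP daemon is even listening, assuming iproute2 and systemd are available:

```
# is anything bound to the NTP port?
ss -ulpn | grep ':123'
# systemd-timesyncd is a pure SNTP client and binds no listening port
timedatectl timesync-status
```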
Just want to stress that I'm not sure about it being OpenSSH; it was meant as more of a fun guess than a confident prediction.
Since this affects Linux and others, I’m guessing this is about OpenSSH. But I’m not very certain. Just can’t think of another candidate.
But holy sh, if your software has been running on everything for the last 20 years
This doesn’t sound like glibc as someone in the thread guessed.
I was also with a provider that didn't offer API access for the longest time. When they then increased prices, I switched; now I'm paying a third of their asking price per year at a very good provider.
I guess migrating is difficult if the provider doesn't offer a mechanism to either dump the zone to a file or perform a zone transfer (the latter being part of the standard).
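If the name servers allow it, a zone transfer is one dig invocation away (ns1.example.com / example.com being placeholders):

```
# request a full zone transfer (AXFR, RFC 5936) from the authoritative server
dig @ns1.example.com example.com AXFR
```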
Can only recommend INWX for domains, though my personal requirements aren’t the highest.
A lot of paid cert providers were not so great before LE put the spotlight on the issue; it was more of a scheme to extract money from operators who couldn't afford not to offer TLS / SSL. https://bugzilla.mozilla.org/show_bug.cgi?id=647959 was a famous post that made fun of / criticized the system before LE. This hurt security, and if it weren't free, LE wouldn't have worked.
Also, wildcard certificates are more difficult to automate with Let's Encrypt, since they can only be issued via the DNS-01 challenge.
They are trivial with a non-garbage domain provider.
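E.g. with certbot and a provider that has a DNS plugin; Cloudflare here purely as an example, and the credentials path is made up:

```
# DNS-01 is the only challenge type that can issue wildcards
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d example.com -d '*.example.com'
```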
If you want EV certificates (where the cert company actually calls you up and verifies you're the company you claim to be), you also need to go the paid route.
The process, however, isn't as secure as one might think: https://cyberscoop.com/easy-fake-extended-validation-certificates-research-shows/
In my experience, trustworthiness of certs is not an issue with LE. I sometimes check websites' certs, and if I see they're LE, I'm more like "Good for them".
Basically, an LE cert says "we were able to verify that the operator of this service you're attempting to use controls (parts of) the domain it claims to be part of". Nothing more, nothing less. Which in most cases is enough to secure the connection. It's possibly even a stronger guarantee than what some sketchy cert providers offered in the past, which was more like "we were able to verify that someone sent us money".
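You can check that yourself for any site (example.com as the placeholder):

```
# print subject, issuer and validity of the cert a server presents
openssl s_client -connect example.com:443 -servername example.com \
  </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates
```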
Open source firmware doesn’t mean anything as long as tivoization is happening.
I don't know whether that's the case here, but legislation might make this a requirement.
Once LSD becomes approved medication for ASD, my doctor will hear about my condition very fast
I, a systems guy, have an easier time learning Go than Nix packages.
Go is a simple and elegant imperative language (which does come with its downsides); Nix the DSL is a functional language, which requires a different way of thinking. Systems are usually operated imperatively, so it's normal that you'd find Go easier.
Nix is not an easy language at all, and one might ask whether another language wouldn't do the job better, which is what Guix System kind of explores. But Nix's design goals make a lot of sense.
NTSYNC is one example; I don't know what the current progress is: https://lore.kernel.org/lkml/20240124004028.16826-1-zfigura@codeweavers.com/
It was supposed to be in 6.10, I don’t know if that actually happened
For most network shares I use /mnt/$server.
I use /mnt/$proto/$server, though that level of organization was probably overkill. Whatever…
I do /volumX for additional hard drives.
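If anyone wants a concrete picture, here's roughly what those conventions might look like in /etc/fstab (all hostnames, shares and devices made up):

```
# hypothetical entries following the schemes above
//fileserver/media  /mnt/smb/fileserver  cifs  credentials=/etc/cifs-creds,_netdev  0  0
nas:/export/backup  /mnt/nfs/nas         nfs   _netdev                              0  0
/dev/sdb1           /volum1              ext4  defaults                             0  2
```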
A good first approximation.
So where in this setup would you mount a network share? Or an additional hard drive for storage? The latter is neither removable nor temporary. Also, /run is quite a bit more than what this makes it seem (e.g. user mounts can be located there), and there is practically only one system path for executables (/usr/bin)…
Not saying that the graphic is inherently wrong or bad, but one shouldn't think it's the be-all and end-all.
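To illustrate the /usr/bin and /run points (assuming a merged-/usr distro; output abbreviated):

```
# the classic paths are just symlinks into /usr
$ ls -ld /bin /sbin
lrwxrwxrwx … /bin -> usr/bin
lrwxrwxrwx … /sbin -> usr/sbin
# user (udisks) mounts for removable media typically land under /run too
$ findmnt -R /run
```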
Ah okay, then yes. I was just afraid it’d be a Boeing
Depends, who made it
I have full IPv6; none of the ports I haven't explicitly whitelisted in the firewall can be accessed from the Internet. I can open a host up completely, but that's not the default. This is on the most common brand of consumer routers here.
Just because it’s not NATted doesn’t mean there’s no firewall in place.
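Easy enough to verify from an external host; 2001:db8::1 below is a documentation-prefix placeholder for the real address:

```
# scan a few ports over IPv6; whitelisted ones show open, the rest filtered
nmap -6 -Pn -p 22,80,443 2001:db8::1
```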