hey yeah, no stress!
just lemme know if you’d want someone to brainstorm with.
Lemme know if you need some remote troubleshooting; if schedules permit, we can do screen shares.
I had this issue when I used Kubernetes; SATA SSDs can't keep up. I'm not sure what the Evo 980 is or what it's rated for, but I'd suggest shutting down all container IO and doing a benchmark with fio.
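Something like this is what I'd start with (the test file path, size, and queue depth are just placeholders, adjust them for your drive):

```bash
# 4k random-write IOPS test; stop the containers first so nothing else is hitting the disk.
# /mnt/ssd/fio-test and the sizes below are placeholders.
fio --name=randwrite --filename=/mnt/ssd/fio-test \
    --rw=randwrite --bs=4k --size=4G --runtime=60 --time_based \
    --ioengine=libaio --iodepth=32 --direct=1 --group_reporting
```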
My current setup uses Proxmox: spinning rust configured in RAID 5 on a NAS, with Jellyfin in a container. All Jellyfin transcoding and cache data gets dumped on a WD750 NVMe, while all media is stored on the NAS (max bandwidth is 150 MB/s).
You can monitor the IO using iostat once you've done a benchmark. I'd check for high I/O wait, especially if all of your VMs are on HDDs.
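Something like this is what I usually watch (iostat comes from the sysstat package; device names will differ on your box):

```bash
# Extended stats every 2 seconds, skipping idle devices.
# %iowait shows up in the CPU line; per-device %util near 100% means the disk is saturated.
iostat -xz 2
```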
One of the solutions I had for this issue was to have multiple DNS servers. I solved it by buying a Raspberry Pi Zero W and running a 2nd small Pi-hole instance there. I made sure the Pi Zero W is plugged into a separate circuit in my home.
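If your DHCP server happens to be dnsmasq-based, handing both Pi-holes to clients is roughly this (the IPs and the file path are just placeholders for your setup):

```bash
# DHCP option 6 = DNS servers; list both Pi-hole IPs (placeholders here).
echo 'dhcp-option=6,192.168.1.10,192.168.1.11' | sudo tee -a /etc/dnsmasq.d/dns.conf
sudo systemctl restart dnsmasq
```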
The person you're replying to either lacks comprehension or maybe just wants to be argumentative and doesn't want to understand.
I didn't have a problem with network ports (I use a switch); what I should have considered when purchasing was the number of drive (SATA) ports and the PCIe features (bifurcation, version, number of NVMe slots). I need high IOPS for my research now, and I'm stuck with commodity SSDs in RAID 0 across 3 ports.
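For what it's worth, the RAID 0 part is just mdadm striping across the drives, something like this (device names are placeholders, and this wipes them):

```bash
# Stripe three SATA SSDs into one array; /dev/sdb, /dev/sdc, /dev/sdd are placeholders and WILL be wiped.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0
```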
I've been running a PBS instance (plus networking containers) for 4 years now. There was no cc on file for the first 2 years; there is one on file now, but my use case operates within the free-forever tier.
My instance has not been deleted by them, though I've rebuilt it multiple times since.
The region you are on might be struggling with capacity issues. I use a Middle East region and have never encountered account/VM deletions (yet). In my case, latency isn't an issue, so I don't mind having it on a faraway region.
The job security.
Depends on what kind of service the malicious requests are hitting.
Fail2ban can be used for a wide range of services.
I don't have a public-facing service (except for a honeypot), but I've used fail2ban before on public ssh/webauth/openvpn endpoints.
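As a rough example, a minimal sshd jail looks something like this (the thresholds are placeholders, tune them to your own tolerance):

```bash
# Minimal fail2ban jail for sshd; maxretry/findtime/bantime values are placeholders.
sudo tee /etc/fail2ban/jail.d/sshd.local >/dev/null <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF
sudo systemctl restart fail2ban
```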
For a blog, you might be well served by a WAF. I've used modsec before; not sure if there's anything newer.
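If you go the modsec route with nginx, turning it on is roughly this, assuming libmodsecurity and the ModSecurity-nginx connector module are already installed and loaded, and that main.conf pulls in your rule set:

```bash
# Enable ModSecurity for everything nginx serves (conf.d is included in the http block by default).
cat <<'EOF' | sudo tee /etc/nginx/conf.d/modsec.conf >/dev/null
modsecurity on;
modsecurity_rules_file /etc/nginx/modsec/main.conf;
EOF
sudo nginx -t && sudo systemctl reload nginx
```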
I'd build my own NAS.
I agree with this; what I suggested is not a best practice, and I should have prefaced my post with that.
And I feel your pain! I get calls at both extremes, like people putting in too much security, where the ticket is "P1, everything is down, fly every engineer here" for an NACL/SG they created themselves. The other extreme is deliberate exposure of services to the public internet (other service providers email us and ask us to do something about it, but not our monkeys, shared responsibility, etc.).
Edit: **this will make your OCI instance less secure** and will break integrations with other OCI services. Do not use this in production, ONLY for testing whether the host firewall rules are affecting your app.
I'm currently using Oracle Cloud for my bots. I work in the space (cloud/systems engineering), and the first thing that got me was that the Oracle Ubuntu instances have custom iptables rules in place for security. I'm not sure if that's still the case, but when I last checked a year ago I had to flush iptables before I could use other ports. I didn't really want to deal with another layer of security to manage, as I was just using the ARM servers for my hobby.
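For context, the "flush" was basically the first command here; opening just the port your app needs is the less drastic option (the port number is only an example):

```bash
# Option 1 (what the edit above warns about): wipe the host firewall rules entirely.
sudo iptables -F
# Option 2 (less drastic): allow only the port your app listens on; 8080 is a placeholder.
sudo iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
# Persist across reboots if netfilter-persistent is installed.
sudo netfilter-persistent save
```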
It might be something worth checking; it isn't specific to Lemmy, though.
I found it unintuitive because other major cloud providers do not have any host firewall/security in place (making it easier to manage security using SG/NACL, through the console).
That's the setup I had when I started.
I picked up an x230 with a broken screen and used it as a hypervisor (proxmox 5.4).
I used whatever resources were available to me at the time and learned some weird networking (passing through NICs for a router-on-a-stick configuration).
I used that x230 until the mobo gave up.
hypervisor: proxmox
vms: rhel 9.2