At least what they ended up doing was not some crypto ponzi scheme.
Written from Librewolf, because I’ve had enough.
If the mail is sent unencrypted the admin can read it. What I have is a script that encrypts incoming e-mail with the user’s key, so that messages are stored encrypted on the hard drive. That at least protects against an intruder reading past e-mails. I use a Perl script written by Mike Cardwell for that.
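For illustration, here is a minimal sketch of the same idea in Python rather than Perl (the address is a placeholder, and this is not Mike Cardwell’s script): read an incoming message on stdin, encrypt it to the recipient’s public key with gpg, and write the ciphertext out so the mail store only ever sees the encrypted version.

```python
#!/usr/bin/env python3
# Toy mail-encryption filter: stdin -> gpg --encrypt -> stdout.
# Hook it into the delivery pipeline (procmail, maildrop, sieve filter, ...).
# Assumes the recipient's public key is already in the server's keyring.
import subprocess
import sys

RECIPIENT = "user@example.org"  # hypothetical mailbox owner

def encrypt(message: bytes) -> bytes:
    result = subprocess.run(
        ["gpg", "--batch", "--trust-model", "always",
         "--encrypt", "--armor", "--recipient", RECIPIENT],
        input=message,
        stdout=subprocess.PIPE,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    sys.stdout.buffer.write(encrypt(sys.stdin.buffer.read()))
```

A real filter (like Mike Cardwell’s gpgit) keeps the headers readable, wraps the body as PGP/MIME and skips mail that is already encrypted; the sketch above just encrypts the whole message blob.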
Another service you might like to have for your users is WKD/WKS, so that senders’ clients can automatically fetch the public keys for your users.
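For context, the lookup a sending client does is just an HTTPS GET against a well-known path derived from the address. A rough sketch of that derivation (the "advanced" method; treat it as illustrative and check the WKD draft for the exact rules and test vectors):

```python
# Sketch of how a WKD client derives the key-lookup URL for an address:
# lowercase the local part, SHA-1 it, z-base-32 encode the digest.
import base64
import hashlib

ZBASE32 = "ybndrfg8ejkmcpqxot1uwisza345h769"
RFC4648 = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"

def wkd_hash(local_part: str) -> str:
    digest = hashlib.sha1(local_part.lower().encode()).digest()
    b32 = base64.b32encode(digest).decode()  # 20 bytes -> 32 chars, no padding
    return b32.translate(str.maketrans(RFC4648, ZBASE32))

def wkd_url(address: str) -> str:
    local, domain = address.split("@")
    # "Advanced" method; the "direct" method drops the openpgpkey subdomain
    # and the repeated domain component.
    return (f"https://openpgpkey.{domain}/.well-known/openpgpkey/"
            f"{domain}/hu/{wkd_hash(local)}?l={local}")

print(wkd_url("alice@example.org"))  # hypothetical address
```

You can check a live deployment with something like `gpg -v --auto-key-locate clear,nodefault,wkd --locate-keys user@example.org`.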
It’s easy to overlook with the omnipresent internet, but self-hosting doesn’t require internet. You could host for your fellow students on the local network. If that’s also against the wifi rules you can either ignore that stupid rule or set up your own god damn wifi with hostapd on your machine and let students connect directly to it. It’s probably best to use a machine dedicated to the task for security reasons, as you wouldn’t want curious students to accidentally erase your homework. I wouldn’t use containers or VMs for any of this; I’d just use bare metal like in the good ol’ days. You could also give people shell accounts without having to worry, because it’s a closed network. The options are endless without all the worries of hosting on the internet.
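If you go the hostapd route, the config can be tiny. A minimal sketch, assuming a wlan0 interface whose driver supports AP mode (interface name, SSID and passphrase are placeholders):

```
# /etc/hostapd/hostapd.conf
interface=wlan0
driver=nl80211
ssid=student-net
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=change-me-please
rsn_pairwise=CCMP
```

You’d still need something like dnsmasq to hand out addresses on that interface; no NAT or upstream route is required, since the whole point is that it stays off the internet.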
WAT!? No internet!?
Megaphone appears to be a Spotify advertising platform for podcasts. https://megaphone.spotify.com/
Give us a link to the RSS feed and let’s investigate. I’m not experiencing this.
Reminded me of what happened at the MindTheTech conference half a year ago.
https://peervideo.club/w/p/i4BetLY7RZa5yeNLJriXPW?playlistPosition=3
Automatic Content Recognition (ACR) [42] is widely used for second-party tracking in smart TVs. As shown in Figure 1, ACR periodically captures frames (and/or audio), builds a fingerprint of the content, and then shares it with an ACR server for matching against a database of known content (e.g., movies, ads, live feeds). When the fingerprint matches, the ACR server can determine exactly what piece of content is being watched on the smart TV.
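To make that flow concrete, here is a toy sketch of the capture-fingerprint-lookup loop. The hash, distance threshold and "database" are all stand-ins; real ACR systems use robust perceptual and audio fingerprints matched server-side.

```python
# Toy ACR flow: hash a downscaled grayscale frame, look it up in a
# fingerprint database, report the best match within a distance threshold.

def average_hash(gray_frame):
    """gray_frame: 2D list of 0-255 luma values, already downscaled (e.g. 8x8)."""
    pixels = [p for row in gray_frame for p in row]
    avg = sum(pixels) / len(pixels)
    bits = "".join("1" if p > avg else "0" for p in pixels)
    return int(bits, 2)

def hamming(a, b):
    return bin(a ^ b).count("1")

# Hypothetical server-side database: fingerprint -> content label.
KNOWN_CONTENT = {
    0xF0F0A5A5C3C31818: "Ad spot #1234",
    0x00FF00FF00FF00FF: "Movie XYZ, scene 12",
}

def match(fingerprint, max_distance=5):
    best_fp, label = min(KNOWN_CONTENT.items(),
                         key=lambda kv: hamming(kv[0], fingerprint))
    return label if hamming(best_fp, fingerprint) <= max_distance else None

# Example: an 8x8 frame grabbed from the TV's framebuffer.
frame = [[16 * (r + c) % 256 for c in range(8)] for r in range(8)]
print(match(average_hash(frame)))  # None unless it resembles known content
```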
Netscape Communicator, Netscape Communicator, KHTML, Netscape Communicator
If your general stance is “People want me to give them free shit. I say gtfo”, I understand you.
That’s just not proportional to Mozilla and Firefox. In 2022 they had a total revenue of $595 million¹. That allows them to hire 3,305 software developers at a salary of $180,000. Google was responsible for 81% of that revenue¹. If you remove Google and their influence from the equation you’re left with $113 million, and Mozilla can then hire 628 software developers. I think that would be more than adequate to maintain a browser.
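A quick back-of-the-envelope check of those figures (the $180,000 salary is just the number used above, not an official one):

```python
# Back-of-the-envelope check of the developer headcount figures above.
revenue_2022 = 595_000_000   # total Mozilla revenue, USD
google_share = 0.81          # portion of revenue attributed to Google
salary = 180_000             # assumed per-developer cost, USD

devs_with_google = revenue_2022 // salary                    # 3305
revenue_without_google = revenue_2022 * (1 - google_share)   # ~113 million
devs_without_google = int(revenue_without_google // salary)  # 628

print(devs_with_google, round(revenue_without_google / 1e6), devs_without_google)
```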
lol, I think we’re giving too little credit to the marketing people in tech. I want to read their blogs!
It seems we’re focusing on two different parts of the problem.
Finding the optimal way to classify which images are best compressed in bulk is an interesting problem in itself. In this particular case the person asking had already picked out similar images by hand, and they can be identified by their timestamps, which helps narrow down the similarity comparison. What I wanted to find out was how well the similar images can be compressed with various methods and codecs with minimal loss of quality. My goal was not to use it as a method to classify the images; it was simply to examine how well the compression stage would work with various methods.
It’s a pillar of democracy to protect the autonomy of the people.
It is a human right…
Wait… this is exactly the problem a video codec solves. Scoot and give me some sample data!
I was not talking about classification. What I was talking about was a simple probe of how the compressed size of a collage of similar images compares to the images compressed individually. The hypothesis is that a compression codec would compress images with a similar color distribution better in a spritesheet than if it encoded each image individually. I don’t know, the savings might be negligible, but I’d assume there is something to gain, at least for some compression codecs. I doubt there is much to gain from doing deduplication after compression.
I think you’re overthinking the classification task. These images are very similar and I think comparing the color distribution would be adequate. It would of course be interesting to compare the different methods :)
The first thing I would do when writing such a paper would be to test current compression algorithms by creating a collage of the similar images and seeing how that compares to the total size of the individual images.
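A rough sketch of that experiment with Pillow, assuming a directory of same-sized images (the paths, the horizontal-strip layout and the WebP/quality settings are arbitrary choices):

```python
# Compare the compressed size of a sprite-sheet collage with the sum of
# the same images compressed individually. Assumes equal image dimensions.
import glob
import io

from PIL import Image

paths = sorted(glob.glob("similar/*.jpg"))
images = [Image.open(p).convert("RGB") for p in paths]
w, h = images[0].size

def webp_size(img, quality=85):
    buf = io.BytesIO()
    img.save(buf, format="WEBP", quality=quality)
    return buf.tell()

# Individual encoding: sum of per-image sizes.
individual = sum(webp_size(img) for img in images)

# Collage encoding: paste everything into one horizontal strip.
collage = Image.new("RGB", (w * len(images), h))
for i, img in enumerate(images):
    collage.paste(img, (i * w, 0))
combined = webp_size(collage)

print(f"individual: {individual} bytes, collage: {combined} bytes")
```

A horizontal strip is the crudest layout; a grid, or feeding the frames to an actual video codec as suggested above, would give inter-image prediction a better chance.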
See this: https://github.com/chenxiaolong/avbroot/issues/299
The issue with the Pixel seems to be a build-up of static in the LCD.