- cross-posted to:
- lemmy@lemmy.ml
https://github.com/LemmyNet/lemmy/issues/3245
I posted far more details in the issue than I am putting here.
But just to bring some math in: with the current full-mesh federation model, assuming 10,000 instances, that will require nearly 50 million connections.
Each comment. Each vote. Each post will have to be sent 50 million separate times.
In the proposed hub-and-spoke model, we can reduce that by over 99%, so that each post/vote/comment/etc. only has to be sent 10,000 times (plus n*(n-1)/2 times, where n = the number of hub servers).
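To put rough numbers on that, here is a quick sketch of the connection counts being compared (the 10 hubs below is just an assumed figure for illustration):

```rust
fn main() {
    let n: u64 = 10_000; // instances
    let hubs: u64 = 10;  // assumed number of hub servers, for illustration only

    // Full mesh: every pair of instances maintains a federation link.
    let full_mesh = n * (n - 1) / 2; // 49,995,000

    // Hub/spoke: each instance links to one hub, plus a full mesh among hubs.
    let hub_spoke = n + hubs * (hubs - 1) / 2; // 10,045

    println!("full-mesh links: {full_mesh}");
    println!("hub/spoke links: {hub_spoke}");
    println!(
        "reduction: {:.2}%",
        100.0 * (1.0 - hub_spoke as f64 / full_mesh as f64)
    );
}
```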
The current full-mesh architecture will not scale, and I predict exponential growth will continue.
Let’s work on a solution to this problem together.
Other people in the thread have already made this point: even with a full mesh network, the number of remote calls made for a single activity is equal to the number of instances subscribing to that activity (plus one if the activity originates from an instance that’s not the host of the activity).
A hub/spoke model doesn’t change this, it just moves the load from the host instance to the hub. The number of connections is still the same: if N instances need to receive the activity, N calls will have to be made. If anything this adds 1 more call from the host instance to the hub.
Even peer-to-peer distribution of activities, mentioned by @hazelnoot@beehaw.org, wouldn’t actually change the amount of calls being made. You still have N servers that have to receive the activity, so you need at least N calls overall. What this would do is redistribute the load better over instances, so the host doesn’t have to make all N calls. It would definitely be an improvement, but it would not be easy to implement successfully, and it would almost surely break ActivityPub compatibility.
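To make the counting concrete, here is a rough sketch of per-activity deliveries under each topology, for N subscribing instances. The point is that topology only changes who makes the calls, not how many there are:

```rust
// Per-activity delivery count for n subscribing instances.
// The topology changes *who* makes the calls, not how many are made.
fn deliveries_full_mesh(n: u64) -> u64 { n }     // host -> each subscriber
fn deliveries_hub_spoke(n: u64) -> u64 { 1 + n } // host -> hub, hub -> each subscriber
fn deliveries_p2p(n: u64) -> u64 { n }           // spread across peers, still n receives

fn main() {
    let n = 10_000;
    println!("full mesh: {}", deliveries_full_mesh(n)); // 10000
    println!("hub/spoke: {}", deliveries_hub_spoke(n)); // 10001
    println!("p2p:       {}", deliveries_p2p(n));       // 10000
}
```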
The only thing I can think of that would actually reduce the overall network load is batching: sending multiple activities/updates together in a single message. AFAIK this is not supported by ActivityPub, though, so it would mean breaking compatibility and implementing an entirely updated version of the protocol (which is a massive undertaking).
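Purely for illustration, a batched delivery could look something like the sketch below. None of this exists in ActivityPub or Lemmy today; the envelope shape and field names are made up, and it needs the `serde_json` crate:

```rust
use serde_json::json;

fn main() {
    // Hypothetical envelope that bundles several activities into one HTTP
    // delivery, so n activities cost one request instead of n. Both sides
    // would have to agree on this format out of band.
    let batch = json!({
        "origin": "https://example.instance",
        "activities": [
            { "type": "Like",   "object": "https://example.instance/post/1" },
            { "type": "Create", "object": { "type": "Note", "content": "hi" } }
        ]
    });
    println!("{}", serde_json::to_string_pretty(&batch).unwrap());
}
```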
My logic was to move the load away from the primary instance server, onto a service/server that focuses only on handling federation duties.
My reasoning is to break apart the two workloads and hopefully build a more scalable federation tier that can scale independently of the primary instance server.
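Roughly what I have in mind, sketched as an in-process queue. In practice the boundary would be a network/IPC hop, and all the names here are made up:

```rust
use std::collections::VecDeque;

// Hypothetical outbound item: an activity plus the inboxes it still needs to reach.
struct OutboundActivity {
    activity_json: String,
    remaining_inboxes: Vec<String>,
}

// The instance process only enqueues; a separate federation worker drains the
// queue and makes the actual HTTP deliveries, so the two tiers can scale
// independently of each other.
struct FederationQueue {
    queue: VecDeque<OutboundActivity>,
}

impl FederationQueue {
    fn new() -> Self {
        Self { queue: VecDeque::new() }
    }

    // Called by the instance when an activity is created.
    fn enqueue(&mut self, activity_json: String, inboxes: Vec<String>) {
        self.queue.push_back(OutboundActivity {
            activity_json,
            remaining_inboxes: inboxes,
        });
    }

    // Called by the federation worker loop.
    fn dequeue(&mut self) -> Option<OutboundActivity> {
        self.queue.pop_front()
    }
}

fn main() {
    let mut q = FederationQueue::new();
    q.enqueue(
        r#"{"type":"Create"}"#.to_string(),
        vec!["https://other.instance/inbox".to_string()],
    );
    if let Some(item) = q.dequeue() {
        println!(
            "would deliver {} to {} inbox(es)",
            item.activity_json,
            item.remaining_inboxes.len()
        );
    }
}
```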
I understand the logic, and you’re right to think about how to improve Lemmy’s scalability. But I’m not sure if this is the way to go.
If you build a dedicated federation proxy for an instance, you’ve really just slightly moved the problem. The federation proxy is going to have the same scalability issues, and if anything the total load goes up.
If you build multi-instance hubs, you suddenly introduce a lot of new issues.
I would agree with all of your above points.
That said: https://github.com/LemmyNet/lemmy/issues/3245 The most recent idea has popped up a few times, on both Lemmy and now GitHub, and actually sounds like a potential solution as well.
Just using signed messages between instances, which can be transmitted P2P instead of direct-only.
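The appeal, as I understand it: if each activity carries the origin's signature, any peer can relay it and the receiver verifies the origin rather than the relay. A loose sketch of that acceptance logic, where the signature check is just a placeholder for real public-key verification:

```rust
// Hypothetical relayed-activity envelope: the origin instance signs the
// activity, so it can arrive via any peer and still be verified.
struct SignedActivity {
    origin: String,        // instance that created and signed the activity
    activity_json: String, // the activity itself
    signature: Vec<u8>,    // origin's signature over activity_json
}

// Placeholder check: a real implementation would verify the signature against
// the origin's published public key (e.g. the actor's key in ActivityPub).
fn signature_is_valid(activity: &SignedActivity) -> bool {
    !activity.signature.is_empty() && !activity.activity_json.is_empty()
}

// The receiving instance accepts or rejects based on the origin's signature,
// not on which peer happened to deliver the message.
fn accept_relayed(activity: &SignedActivity, delivered_by: &str) -> bool {
    let ok = signature_is_valid(activity);
    println!(
        "activity from {} delivered by {}: {}",
        activity.origin,
        delivered_by,
        if ok { "accepted" } else { "rejected" }
    );
    ok
}

fn main() {
    let act = SignedActivity {
        origin: "https://origin.instance".to_string(),
        activity_json: r#"{"type":"Like"}"#.to_string(),
        signature: vec![0u8; 64], // fake signature bytes for the sketch
    };
    accept_relayed(&act, "https://some.relay.peer");
}
```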
Yeah, what’s being described there is basically a P2P model. I still think it wouldn’t make a huge difference in the chattiness of the protocol. At best it would redistribute the load for outgoing federation messages, but not for incoming ones. An instance still has to receive each message individually, regardless of where it comes from.