Using rqlite as a P2P solution with only read-only nodes except one


Karsten Ohme

Jan 14, 2024, 11:42:54 AM
to rqlite
My goal is to have a decentralized P2P database in a local-only network (all nodes on 192.168.x.x addresses) where every node holds identical data. Nodes can be added dynamically. The nodes are read-only, but data must somehow be pushed into the system initially from inside, from one of these nodes. The idea would be to have only one writer (I assume this is the leader), selected dynamically. I.e. the first added peer becomes the leader, but as soon as that node shuts down a different node should act as the writer. Is such a setup possible? The deployment should be serverless, i.e. no dedicated write node deployed especially for this purpose may exist in the system.

Thanks.

Philip O'Toole

Jan 16, 2024, 10:42:52 AM
to rql...@googlegroups.com
Inline -- thanks.

On Sun, Jan 14, 2024 at 11:42 AM 'Karsten Ohme' via rqlite <rql...@googlegroups.com> wrote:
My goal is to have a decentralized P2P database

When I hear "decentralized" I always suspect rqlite won't fit the use case. Perhaps it will, but rqlite is not a "decentralized, peer-to-peer system".

in a local only network (all nodes 192.168.x.x addresses) where each node has the identical data.

Yes, each node will have identical data in an rqlite system.
 
Nodes can be dynamically added.

rqlite supports that.
 
The nodes are read-only, but data must somehow be pushed into the system initially from inside, from one of these nodes. The idea would be to have only one writer (I assume this is the leader), selected dynamically.

Yes, the Leader is dynamically selected. It happens as a result of Leader election. If the Leader fails, or becomes unreachable, a new Leader is selected.
I.e. the first added peer becomes the leader, but as soon as that node shuts down a different node should act as the writer.

Yes, rqlite will do this.
 
Is such a setup possible?

I'm not sure. I would need more details. Are the nodes able to continually talk to each other over the network?
 
The deployment should be serverless,

I don't know what this means in this context.
 
i.e. no dedicated write node deployed especially for this purpose may exist in the system.

Thanks.

--
You received this message because you are subscribed to the Google Groups "rqlite" group.
To unsubscribe from this group and stop receiving emails from it, send an email to rqlite+un...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/rqlite/ab0fcc66-4c44-47e8-b834-0ac92f8dcdd7n%40googlegroups.com.

Karsten Ohme

Jan 16, 2024, 11:32:43 AM
to rqlite
Decentralized means here that there is no special server. Each node in the system is identical. E.g. there is also no Consul or etcd leader-election node as there might be in other systems. Also, any node can fail at any time and the system still works.
All nodes are able to talk to each other constantly; they will be in the same local network, e.g. 192.168.x.x addresses. My steps based on the answers would be:

Start an initial node with:

rqlited -node-id 1 -http-addr=$HOST1:4001 -raft-addr=$HOST1:4002 \
  -bootstrap-expect 1 -join $HOST1:4002 data
On the second node and all others I can use:

rqlited -node-id 2 -http-addr host2:4001 -raft-addr host2:4002 -join host1:4002 data

And that's it. Whenever the leader fails, one of the read-only nodes will become the leader. Is this correct?

Is -raft-non-voter=true important somehow?
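(As a sanity check on these steps: the membership that results from the join commands can be inspected over rqlite's HTTP API — a sketch, assuming the host names used above:)

```shell
# List every node the cluster knows about, including whether it is
# reachable and whether it is a voter.
curl http://host1:4001/nodes?pretty
```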

Thanks

Philip O'Toole

Jan 16, 2024, 12:17:55 PM
to rql...@googlegroups.com
Inline.

On Tue, Jan 16, 2024 at 11:32 AM 'Karsten Ohme' via rqlite <rql...@googlegroups.com> wrote:
Decentralized means here that there is no special server. Each node in the system is identical. E.g. there is also no Consul or etcd leader-election node as there might be in other systems. Also, any node can fail at any time and the system still works.
All nodes are able to talk to each other constantly; they will be in the same local network, e.g. 192.168.x.x addresses. My steps based on the answers would be:

Start an initial node with:

rqlited -node-id 1 -http-addr=$HOST1:4001 -raft-addr=$HOST1:4002 \
  -bootstrap-expect 1 -join $HOST1:4002 data
On the second node and all others I can use:

rqlited -node-id 2 -http-addr host2:4001 -raft-addr host2:4002 -join host1:4002 data

And that's it. Whenever the leader fails, one of the read-only nodes will become the leader. Is this correct?

Those nodes are not "read-only". They are called "Followers", and Followers can become the Leader if the Leader fails. "Read-only" is a different type of node: it can never become the Leader and is only useful for serving read traffic. Another term for "read-only" is "non-voter".
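To make the distinction concrete: a true read-only (non-voter) node is started with the `-raft-non-voter` flag — a sketch, assuming the same host-naming scheme as the commands above (host3 is hypothetical):

```shell
# Hypothetical host joining as a read-only (non-voting) node: it
# replicates data and serves reads, but never stands for election.
rqlited -node-id 3 -http-addr host3:4001 -raft-addr host3:4002 \
  -raft-non-voter=true -join host1:4002 data
```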

 Does that help?

Karsten Ohme

Jan 16, 2024, 12:46:14 PM
to rqlite
I think I have understood this now. Then I will start with just one bootstrap node and let the others join. What is a practical limit on the number of nodes? Is 100 still OK?

Philip O'Toole

Jan 16, 2024, 3:20:22 PM
to rql...@googlegroups.com
On Tue, Jan 16, 2024 at 12:46 PM 'Karsten Ohme' via rqlite <rql...@googlegroups.com> wrote:
I think I have understood this now. Then I will start with just one bootstrap node and let the others join. What is a practical limit on the number of nodes? Is 100 still OK?

There is no intrinsic limit, but managing 100 nodes could become unwieldy. With 100 nodes, a minimum of 51 nodes will have to be up at all times.
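The 51-of-100 figure is plain Raft majority arithmetic — a sketch of the calculation, assuming all 100 nodes are voters:

```shell
# Quorum for a cluster of N voting nodes is a majority: floor(N/2) + 1.
n=100
quorum=$(( n / 2 + 1 ))
echo "With $n voters, $quorum must be up to accept writes."
```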

Try it and see.

 

Karsten Ohme

Jan 16, 2024, 4:17:48 PM
to rql...@googlegroups.com
In this scenario even 100 would be up initially, but later 90 might have shut down, and then the 10 remaining should still work. Or does the system remember the 100 nodes, with no way to go back to fewer than 51?

Philip O'Toole

Jan 16, 2024, 4:41:41 PM
to rql...@googlegroups.com
On Tue, Jan 16, 2024 at 4:17 PM 'Karsten Ohme' via rqlite <rql...@googlegroups.com> wrote:
In this scenario even 100 would be up initially, but later 90 might have shut down, and then the 10 remaining should still work.

You need to define "work". You can read from those 10 nodes, but you can't write to them.
 
Or does the system remember the 100 nodes, with no way to go back to fewer than 51?

You would need to explicitly remove nodes (or have them "reaped" automatically). You probably need to read the Cluster Management docs.
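For reference, explicitly removing a departed node is done via rqlite's `/remove` endpoint, covered in those docs — a sketch, assuming node ID "2" has left and host1 is still reachable:

```shell
# Ask any reachable node to remove node "2" from the cluster
# configuration, shrinking the quorum requirement accordingly.
curl -XDELETE http://host1:4001/remove -d '{"id": "2"}'
```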


 

Karsten Ohme

Jan 16, 2024, 5:19:43 PM
to rqlite
By "work" I mean write attempts. My understanding from this thread was that it is possible to have only one leader, which is also the only node handling writes. And if any nodes are added this does not change, i.e. there is still only one node out of n that is the leader and writer. And if this node dies, another node takes over this position.
Or is this wrong, and must there always be a quorum, hence the 51 out of 100?
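(The arithmetic behind Philip's earlier 51-of-100 point can be sketched as follows, assuming all registered nodes are voters: quorum is computed from the registered membership, not from how many nodes happen to be alive.)

```shell
# 100 voters registered, only 10 still running: quorum is still 51,
# so the survivors can serve reads but cannot accept writes until
# the departed nodes are removed from the cluster configuration.
registered=100
up=10
quorum=$(( registered / 2 + 1 ))
if [ "$up" -ge "$quorum" ]; then
  status="writes possible"
else
  status="writes blocked"
fi
echo "$status"
```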
