Cete is a distributed key-value store server written in Go, built on top of BadgerDB.
It provides its functions through gRPC (HTTP/2 + Protocol Buffers) or a traditional RESTful API (HTTP/1.1 + JSON).
Cete implements the Raft consensus algorithm via hashicorp/raft. It achieves consensus across all node instances, ensuring that every change made to the system is applied to a quorum of nodes, or to none at all.
Cete makes it easy to bring up a cluster of BadgerDB instances (a cete of badgers).
- Easy deployment
- Bringing up a cluster
- Database replication
- An easy-to-use HTTP API
- CLI is also available
- Docker container image is available
Once you have satisfied the dependencies, build Cete for Linux as follows:
$ mkdir -p ${GOPATH}/src/github.com/mosuka
$ cd ${GOPATH}/src/github.com/mosuka
$ git clone https://github.com/mosuka/cete.git
$ cd cete
$ make build

If you want to build for another platform, set the GOOS and GOARCH environment variables. For example, build for macOS as follows:
$ make GOOS=darwin build

When the build succeeds, you can see the binary file like so:
$ ls ./bin
cete

If you want to test your changes, run the tests as follows:
$ make test

To create distribution packages, run the following:

$ make GOOS=linux dist
$ make GOOS=darwin dist
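If you also need to target a different CPU architecture, the same pattern applies. The following is only a sketch, assuming the Makefile forwards GOARCH to the Go toolchain in the same way it forwards GOOS (arm64 is just an illustrative choice):

$ make GOOS=linux GOARCH=arm64 build
$ make GOOS=linux GOARCH=arm64 dist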
Starting Cete is as easy as the following:

$ ./bin/cete start --node-id=node1 --data-dir=/tmp/cete/node1 --bind-addr=:6060 --grpc-addr=:5050 --http-addr=:8080

You can now set, get and delete data via the CLI.
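Before working with data, you can check that the node is up with the node subcommand (the same command is used later in the Docker section):

$ ./bin/cete node --grpc-addr=:5050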
To set a value by key, execute the following command:
$ ./bin/cete set --grpc-addr=:5050 --key=key1 value1

To get a value by key, execute the following command:
$ ./bin/cete get --grpc-addr=:5050 --key=key1

The result of the above command is:
value1
To delete a value by key, execute the following command:
$ ./bin/cete delete --grpc-addr=:5050 --key=key1

You can also run the above operations via the HTTP REST API listening on port 8080.
Setting a value by key via HTTP is done as follows:
$ curl -s -X PUT 'http://127.0.0.1:8080/store/key1' -d value1

Getting a value by key via HTTP is done as follows:
$ curl -s -X GET 'http://127.0.0.1:8080/store/key1'

Deleting a value by key via HTTP is done as follows:
$ curl -X DELETE 'http://127.0.0.1:8080/store/key1'

Cete makes it easy to bring up a cluster. The Cete node started above is already running, but it is not fault tolerant. If you need to increase fault tolerance, bring up two more data nodes like so:
$ ./bin/cete start --node-id=node2 --data-dir=/tmp/cete/node2 --bind-addr=:6061 --grpc-addr=:5051 --http-addr=:8081 --join-addr=:5050
$ ./bin/cete start --node-id=node3 --data-dir=/tmp/cete/node3 --bind-addr=:6062 --grpc-addr=:5052 --http-addr=:8082 --join-addr=:5050

The above example runs each Cete node on the same host, so each node must listen on different ports. This would not be necessary if each node ran on a different host.
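For reference, a multi-host layout would use the same flags with real host names instead of bare ports. The following is only a sketch; host1 is a hypothetical host name for the first node, and depending on your network you may also need bind and gRPC addresses that the other nodes can actually reach:

$ ./bin/cete start --node-id=node2 --data-dir=/tmp/cete/node2 --bind-addr=:6060 --grpc-addr=:5050 --http-addr=:8080 --join-addr=host1:5050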
This instructs each new node to join the existing node; each node recognizes the cluster when it starts. You now have a 3-node cluster that can tolerate the failure of one node. You can check the peers with the following command:
$ ./bin/cete cluster --grpc-addr=:5050

The result of the above command is shown in JSON format:
{
"nodes": [
{
"id": "node1",
"bind_addr": ":6060",
"grpc_addr": ":5050",
"http_addr": ":8080",
"leader": true,
"data_dir": "/tmp/cete/node1"
},
{
"id": "node2",
"bind_addr": ":6061",
"grpc_addr": ":5051",
"http_addr": ":8081",
"data_dir": "/tmp/cete/node2"
},
{
"id": "node3",
"bind_addr": ":6062",
"grpc_addr": ":5052",
"http_addr": ":8082",
"data_dir": "/tmp/cete/node3"
}
]
}

A cluster of three or more nodes (an odd number) is recommended. With a single node, data loss is inevitable in failure scenarios, so avoid single-node deployments.
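Since every node keeps the cluster state, the same command should also work against the other nodes' gRPC addresses; this assumes any node can serve the cluster information, which is the usual behavior for Raft-based stores:

$ ./bin/cete cluster --grpc-addr=:5051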
The following command sets a key-value pair through any node in the cluster:
$ ./bin/cete set --grpc-addr=:5050 --key=key1 value1

You can then get the value from the node specified in the above command as follows:
$ ./bin/cete get --grpc-addr=:5050 --key=key1

The result of the above command is:
value1
You can also get the same value from the other nodes in the cluster as follows:
$ ./bin/cete get --grpc-addr=:5051 --key=key1
$ ./bin/cete get --grpc-addr=:5052 --key=key1

The result of either command is:
value1
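The same reads should also be possible over each node's HTTP API, reusing the /store endpoint shown earlier on the other nodes' HTTP ports (this assumes the HTTP API behaves the same on every node):

$ curl -s -X GET 'http://127.0.0.1:8081/store/key1'
$ curl -s -X GET 'http://127.0.0.1:8082/store/key1'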
You can build the Docker container image like so:
$ make docker-build

You can also use the Docker container image already published on docker.io like so:
$ docker pull mosuka/cete:latest

See https://hub.docker.com/r/mosuka/cete/tags/ for the available tags.
To run a Cete data node on Docker, start the node like so:
$ docker run --rm --name cete-node1 \
-p 5050:5050 \
-p 6060:6060 \
-p 8080:8080 \
mosuka/cete:latest cete start \
--node-id=node1 \
--bind-addr=:6060 \
--grpc-addr=:5050 \
--http-addr=:8080 \
--data-dir=/tmp/cete/node1

You can execute a Cete command inside the Docker container as follows:
$ docker exec -it cete-node1 cete node --grpc-addr=:5050
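Because the container publishes its gRPC and HTTP ports to the host, you can also talk to the node from outside the container, for example via the HTTP API shown earlier:

$ curl -s -X PUT 'http://127.0.0.1:8080/store/key1' -d value1
$ curl -s -X GET 'http://127.0.0.1:8080/store/key1'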