- Create a simple front with a client-side direct API call
- Get Twitter dev account
- (Implement long polling? Not sure if I need it once the gRPC implementation is done)
- Move the API call to a custom server
- (Proxy the Twitter API calls for simpler caching?)
- Create a protobuf definition of the service and messages
- Implement the gRPC server (a rough sketch follows this list)
- Implement the gRPC client
- (Cache the tweets in a local Redis if I don't cache the external API calls? Figure out how/whether to leverage protobuf for this.)
- Dockerize
- Deploy somewhere
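To make the protobuf and gRPC items above a bit more concrete, here's a rough sketch (not this repo's actual code) of what the server-streaming side could look like with @grpc/grpc-js and @grpc/proto-loader. The proto path, the package/service names (`tweets.TweetStream`, `StreamTweets`) and the message fields are made up for illustration only.

```ts
// grpc-server.ts — minimal sketch, assuming a proto roughly like:
//
//   syntax = "proto3";
//   package tweets;
//   service TweetStream {
//     rpc StreamTweets(StreamRequest) returns (stream Tweet);
//   }
//   message StreamRequest { string query = 1; }
//   message Tweet { string id = 1; string text = 2; string author = 3; }
//
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

const packageDefinition = protoLoader.loadSync('proto/tweets.proto', {
  keepCase: true,
  defaults: true,
  oneofs: true,
});
const proto = grpc.loadPackageDefinition(packageDefinition) as any;

const server = new grpc.Server();

server.addService(proto.tweets.TweetStream.service, {
  // Server-streaming RPC: keep writing Tweet messages until the client cancels.
  StreamTweets(call: grpc.ServerWritableStream<{ query: string }, any>) {
    const timer = setInterval(() => {
      // Placeholder payload; the real thing would forward tweets from the Twitter API.
      call.write({
        id: Date.now().toString(),
        text: `hello ${call.request.query}`,
        author: 'demo',
      });
    }, 2000);

    call.on('cancelled', () => clearInterval(timer)); // client went away
  },
});

server.bindAsync('0.0.0.0:9090', grpc.ServerCredentials.createInsecure(), (err) => {
  if (err) throw err;
  server.start();
  console.log('gRPC server listening on :9090');
});
```

The browser side would presumably reach this through the Envoy proxy with grpc-web-generated stubs rather than @grpc/grpc-js directly.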
- Client: Next.js or Svelte?
- Server: Next.js, plain Node or something completely different?
- Node's built-in http module or Express?
- API: gRPC or Kafka?
- Caching storage: Redis, or no caching at all?
- Where to deploy?
It turns out there's an official tutorial from Twitter whose description closely resembles what I'm setting out to do. I want to make a couple of different choices, but let's keep it around for easy reference.
(For example, I want to use gRPC instead of WebSockets, and ideally React renders instead of embeds. One could argue that using protocol buffers instead of Twitter's original JSON responses complicates things unnecessarily, but it gives me a chance to learn something new while working on this instead of just replicating somebody else's code.
Also, I might try to run the API off the same server as the front app, instead of making the division along server vs. client code. I haven't decided yet whether this will make things easier or more complicated.)
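To sketch what "same server" could mean in practice: a Next.js API route could proxy the Twitter calls, so the front app and the API live in one process. The route path, response shape and the `TWITTER_BEARER_TOKEN` variable name are hypothetical; only the Twitter API v2 recent-search endpoint is real.

```ts
// pages/api/tweets.ts — hypothetical sketch of serving the API from the Next.js app itself
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Recent search is used purely as an illustration; a live stream would need a
  // different mechanism (long polling, SSE, or the gRPC/Envoy route).
  const query = typeof req.query.q === 'string' ? req.query.q : 'typescript';

  const twitterRes = await fetch(
    `https://api.twitter.com/2/tweets/search/recent?query=${encodeURIComponent(query)}`,
    { headers: { Authorization: `Bearer ${process.env.TWITTER_BEARER_TOKEN}` } } // env name assumed
  );

  if (!twitterRes.ok) {
    res.status(twitterRes.status).json({ error: 'Twitter API call failed' });
    return;
  }

  res.status(200).json(await twitterRes.json());
}
```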
You'll need:
- docker
- node 18
- yarn
yarn
cp .env.sample .env # then edit .env to add a Twitter bearer token
yarn dev
That should do it. It spins up a gRPC server, an Envoy proxy and a Next.js server. Then go have a look at localhost:3000.
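For reference, here's roughly how the bearer token from `.env` ends up being used on the server side. This is only a sketch of the Twitter API v2 filtered stream call using Node 18's built-in fetch; the env variable name and the exact handling in this repo may differ.

```ts
// twitter-stream.ts — sketch of consuming the Twitter API v2 filtered stream
const STREAM_URL = 'https://api.twitter.com/2/tweets/search/stream';

async function streamTweets(onTweet: (tweet: unknown) => void) {
  const res = await fetch(STREAM_URL, {
    headers: { Authorization: `Bearer ${process.env.TWITTER_BEARER_TOKEN}` }, // name assumed; check .env.sample
  });
  if (!res.ok || !res.body) {
    throw new Error(`Stream request failed: ${res.status}`);
  }

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  let newlineIndex: number;

  // Twitter sends newline-delimited JSON; split the byte stream on newlines.
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    while ((newlineIndex = buffer.indexOf('\n')) >= 0) {
      const line = buffer.slice(0, newlineIndex).trim();
      buffer = buffer.slice(newlineIndex + 1);
      if (line) onTweet(JSON.parse(line)); // empty lines are just keep-alives
    }
  }
}

streamTweets((tweet) => console.log(tweet)).catch(console.error);
```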
yarn build
yarn start
# yarn start:next # localhost:3000
# yarn start:envoy # localhost:8080
# yarn start:grpc # localhost:9090
This bit could be improved. Envoy is already dockerized (its setup comes from elsewhere); I'd like to dockerize the other two as well and maybe run everything through docker-compose. A rough sketch of what that could look like is below.
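This compose file is purely hypothetical: the service names, ports, Dockerfiles, the Envoy image tag and the config path are assumptions, not the current setup.

```yaml
# docker-compose.yml — hypothetical sketch, not the current setup
services:
  grpc:
    build: .                      # would need a Dockerfile for the gRPC server
    command: yarn start:grpc
    env_file: .env
    ports:
      - "9090:9090"
  envoy:
    image: envoyproxy/envoy:v1.24-latest   # version just an example
    volumes:
      - ./envoy.yaml:/etc/envoy/envoy.yaml # config path assumed
    ports:
      - "8080:8080"
    depends_on:
      - grpc
  next:
    build: .                      # would need a Dockerfile for the Next.js app
    command: yarn start:next
    ports:
      - "3000:3000"
    depends_on:
      - envoy
```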
- tests
- ci/cd flows
- kubernetes
- cleaning up disused streams doesn't work the way I thought