
Alternative using binary instead of json for updates #455

@vincentfretin

Description

p2pcf

https://webspaces.space by @gfodor uses a fork of Hubs' networked-aframe here with a specific P2PCFAdapter.js adapter, using p2pcf so that only a Cloudflare worker needs to be hosted for the signaling part. This adapter falls into the WebRTC mesh topology like easyrtc, not SFU like the janus adapter.

Note: the networked-aframe repo in the networked-aframe organization has all the changes from Hubs' fork, plus more examples and fixes for the easyrtc adapter and the networked-video-source component for A-Frame 1.5.0.

Looking at the diff, there are a lot of changes to use flatbuffers instead of JSON for naf updates, if I understand it correctly.
I'm not sure whether one can just switch their own naf project that uses a custom registered networked schema to this fork; @gfodor can shed some light on whether this is possible and correct me if I'm saying something technically wrong here.
Anyway, I think it's an interesting code base to learn from, so I'm creating this issue for more visibility.
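I haven't checked the exact schema the fork defines, but to give an idea of what a flatbuffers description of a naf update could look like, here is a hypothetical sketch (all field names are my guesses, not the fork's actual schema):

```
// NafUpdate.fbs — hypothetical sketch, NOT the fork's actual schema
table ComponentData {
  index: ushort;    // index into the registered schema's component list
  value: [ubyte];   // component value, already serialized to bytes
}

table UpdateOp {
  network_id: string;
  owner_id: string;
  last_owner_time: long;
  persistent: bool;
  components: [ComponentData];
}

root_type UpdateOp;
```

With a schema like this, flatc generates encode/decode code and readers can access fields without a full parse step, which is part of the CPU argument below.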

I guess using binary transfer instead of JSON is better for bandwidth and CPU because there is no JSON serialization/deserialization.
It would be interesting if someone knows of studies or benchmarks on this subject.
For 8 people in peer to peer it probably doesn't make much of a difference. For 30 people or more using an SFU it can probably start to make a difference for the browser on low-end devices. For the server part it probably also means it can support a lot more rooms if it uses less CPU.
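To make the bandwidth argument concrete, here is a minimal sketch (my own invented layout, not naf's actual wire format) comparing a JSON-encoded position update with a hand-packed binary one using DataView:

```javascript
// Minimal sketch comparing JSON vs binary encoding of a position update.
// The field layout is invented for illustration; it is NOT naf's wire format.

const update = { networkId: 'abc123', x: 1.25, y: 2.5, z: -3.75 };

// JSON path: serialize to a UTF-8 byte buffer.
const jsonBytes = new TextEncoder().encode(JSON.stringify(update));

// Binary path: 1-byte id length + id bytes + three 32-bit floats.
function encodeBinary({ networkId, x, y, z }) {
  const idBytes = new TextEncoder().encode(networkId);
  const buf = new ArrayBuffer(1 + idBytes.length + 12);
  const view = new DataView(buf);
  view.setUint8(0, idBytes.length);
  new Uint8Array(buf, 1, idBytes.length).set(idBytes);
  const base = 1 + idBytes.length;
  view.setFloat32(base, x);
  view.setFloat32(base + 4, y);
  view.setFloat32(base + 8, z);
  return new Uint8Array(buf);
}

const binBytes = encodeBinary(update);
console.log(jsonBytes.length, binBytes.length); // 49 vs 19 bytes for this update
```

The gap grows with float precision (JSON prints decimal digits, binary is fixed 4 bytes per float), and the binary path also skips the string parse on receive.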

wsrtc

Another interesting code base is webaverse's wsrtc, which only uses WebSocket and transfers audio over it, using the opus codec in WebAssembly and transferring other data in binary. Below is my analysis from when I studied this code in Aug 2023, but I never did anything with it.

- The index.js file in the wsrtc repo to run a simple Node.js server with express and ws hasn't been updated; the webaverse project has its own integration, search for wsrtc in the webaverse app repo.
- This wsrtc code is not documented and is only used in the webaverse project as a dependency.
- I thought at first wsrtc was using WebCodecs with the opus codec for audio, but that code is actually commented out; an alternative implementation based on a WebWorker with libopus in WebAssembly and an AudioWorklet is used to encode and decode audio chunks with the opus codec, instead of MediaStreamTrackProcessor/AudioEncoder/AudioDecoder from the WebCodecs API, which is not yet available in all browsers.
- Encoded audio chunks are sent over a WebSocket opened with binaryType arraybuffer (instead of the default blob).
- The protocol implemented here to transfer other types like number, string, and Uint8Array is well thought out: everything is encoded as bytes to be transferred over the same WebSocket. More complex state with Map and Array, for example the listing of players, uses zjs, a fork of yjs optimized for byte transfer.
- This solution should work in all browsers; the only constraint I can see is the use of TextEncoder.encodeInto, which is available since Safari 14.5 on iOS and Safari 14.1 on macOS.
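As a rough illustration of that kind of tagged byte protocol, here is my own sketch (not wsrtc's actual framing, tags and layout are invented) that encodes number, string, and Uint8Array values with a type tag and length prefix, writing strings directly into the output buffer with TextEncoder.encodeInto:

```javascript
// Sketch of a tagged binary protocol for number / string / Uint8Array values.
// Tags and layout are invented for illustration; wsrtc's real framing differs.
const TAG_NUMBER = 0, TAG_STRING = 1, TAG_BYTES = 2;

function encodeValue(value) {
  if (typeof value === 'number') {
    const buf = new Uint8Array(1 + 8);
    buf[0] = TAG_NUMBER;
    new DataView(buf.buffer).setFloat64(1, value);
    return buf;
  }
  if (typeof value === 'string') {
    // UTF-8 needs at most 3 bytes per UTF-16 code unit; encodeInto writes
    // straight into the destination, avoiding an intermediate allocation.
    const buf = new Uint8Array(1 + 4 + value.length * 3);
    buf[0] = TAG_STRING;
    const { written } = new TextEncoder().encodeInto(value, buf.subarray(5));
    new DataView(buf.buffer).setUint32(1, written);
    return buf.subarray(0, 5 + written);
  }
  // Uint8Array payload: tag + length + raw bytes.
  const buf = new Uint8Array(1 + 4 + value.length);
  buf[0] = TAG_BYTES;
  new DataView(buf.buffer).setUint32(1, value.length);
  buf.set(value, 5);
  return buf;
}

function decodeValue(buf) {
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  switch (buf[0]) {
    case TAG_NUMBER:
      return view.getFloat64(1);
    case TAG_STRING: {
      const len = view.getUint32(1);
      return new TextDecoder().decode(buf.subarray(5, 5 + len));
    }
    case TAG_BYTES: {
      const len = view.getUint32(1);
      return buf.slice(5, 5 + len);
    }
  }
}
```

A real protocol would also pack several values per message and version the tag set; for complex Map/Array state wsrtc delegates to zjs instead of hand-rolling it like this.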

The solution here doesn't support screen sharing with a video codec. Recreating a video codec in WebAssembly in an efficient way would really be too much work.
I see in the zjs repo that support for images in the encoder/decoder was added, meaning it can send ImageData objects over the WebSocket. ImageData can be obtained by drawing an HTMLVideoElement to a canvas, so some sort of degraded screen sharing with an image updated every few seconds could be implemented; to be tested.
