Design Decisions
This document covers miscellaneous design decisions behind Rivet actors. It is not required reading for using Rivet; it's intended for curious programmers & potential contributors.
Notice
There is a lot of pseudo-code in this document that is not valid Rivet code.
Unary RPC + events vs unary/client-streaming/server-streaming/bidirectional RPC
Libraries like gRPC provide 4 types of RPCs for different streaming requirements.
Cognitive load
This design would add too much cognitive load to getting started with Rivet. While the 4 RPC types are not complicated on their own, developers using Rivet are already learning the actor model, so we want to minimize the number of new concepts they have to learn.
Familiarity with events
Almost every language - especially JavaScript - makes frequent use of the `foo.on(event, callback)` pattern. Designing Rivet's realtime actor functionality around events is therefore the easiest approach for most developers to understand.
Complexity compared to events
Just to subscribe to an event, a developer would have to implement a server-streaming RPC & their own event plumbing in order to receive realtime events. Additionally, streaming RPCs require much more complicated cleanup code than a built-in event system.
For example:
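(As with the rest of this document, the code below is illustrative pseudo-code: the `ChatRoom` class, the `watchMessages` streaming RPC, and the `AbortSignal`-based cleanup are assumed shapes, not a real API.)

```typescript
interface ChatMessage {
  author: string;
  body: string;
}

class ChatRoom {
  // Event plumbing the developer must build just to fan messages out to streams.
  #listeners = new Set<(message: ChatMessage) => void>();

  sendMessage(author: string, body: string): void {
    const message: ChatMessage = { author, body };
    for (const listener of this.#listeners) listener(message);
  }

  // Server-streaming RPC: one long-lived stream per subscriber.
  async *watchMessages(signal: AbortSignal): AsyncGenerator<ChatMessage> {
    const queue: ChatMessage[] = [];
    let wake: (() => void) | undefined;

    const listener = (message: ChatMessage) => {
      queue.push(message);
      wake?.();
    };
    this.#listeners.add(listener);
    const onAbort = () => wake?.();
    signal.addEventListener("abort", onAbort);

    try {
      while (!signal.aborted) {
        if (queue.length === 0) {
          // Park until a new message arrives or the subscriber disconnects.
          await new Promise<void>((resolve) => (wake = resolve));
          wake = undefined;
          continue;
        }
        yield queue.shift()!;
      }
    } finally {
      // Easy to get wrong: forgetting either line leaks listeners or abort handlers.
      this.#listeners.delete(listener);
      signal.removeEventListener("abort", onAbort);
    }
  }
}
```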
This is significantly more difficult to understand than the equivalent in Rivet:
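(The `Actor` base class, `_broadcast`, and the client-side `get`/`on` calls below are likewise pseudo-code sketches of the event-based design, not the exact Rivet API.)

```typescript
// Event-based equivalent (illustrative pseudo-code, not valid Rivet code).
class ChatRoom extends Actor {
  // Plain RPC: broadcast an event to every connected client.
  sendMessage(rpc: Rpc<ChatRoom>, author: string, body: string): void {
    this._broadcast("messageReceived", { author, body });
  }
}

// Client side: the familiar `.on(event, callback)` pattern.
const chatRoom = await client.get({ name: "chat_room" });
chatRoom.on("messageReceived", ({ author, body }) => {
  console.log(`${author}: ${body}`);
});
```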
Parallel RPC handlers vs serial message handlers
Traditional "actors" use "messages" to communicate with actors. (Sometimes messages can have a response, similar to RPCs). The actors usually process messages in serial and can optionally parallelize by spawning background tasks if needed.
Rivet allows RPCs to execute in parallel (though ordering is preserved per-connection).
Cognitive load
The primary reason is that writing & understanding Rivet actors is dead simple, since calling an RPC looks like calling a method on a class.
Writing a message handler that can do multiple things requires defining an ADT for the message variants & setting up a receive loop. Compare the legibility of these two actors:
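First, a sketch of the message-handler version, assuming a hypothetical `mailbox` of typed messages (illustrative pseudo-code):

```typescript
// Message-handler version (illustrative pseudo-code).
type CounterMessage =
  | { type: "increment"; amount: number; reply: (count: number) => void }
  | { type: "getCount"; reply: (count: number) => void };

class Counter {
  #count = 0;

  // Serial receive loop: every message variant funnels through one switch.
  async run(mailbox: AsyncIterable<CounterMessage>): Promise<void> {
    for await (const message of mailbox) {
      switch (message.type) {
        case "increment":
          this.#count += message.amount;
          message.reply(this.#count);
          break;
        case "getCount":
          message.reply(this.#count);
          break;
      }
    }
  }
}
```

And a sketch of the RPC version (again pseudo-code; `Actor` and `Rpc` are assumed names):

```typescript
// RPC version (illustrative pseudo-code, not valid Rivet code).
class Counter extends Actor {
  #count = 0;

  increment(rpc: Rpc<Counter>, amount: number): number {
    this.#count += amount;
    return this.#count;
  }

  getCount(rpc: Rpc<Counter>): number {
    return this.#count;
  }
}
```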
The RPC version is much more straightforward to understand and maintain. It looks like normal object-oriented code that most developers are familiar with.
Accidental performance bottlenecks with serial processing
If developers use an `await` inside a serial message loop, they'll unintentionally slow down their actor: the actor can't receive the next message until the awaited work finishes, even when the messages are unrelated. For example, this code is deceptively slow:
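(`mailbox`, `database`, and the message shapes below are hypothetical; this is pseudo-code in the same style as the rest of this document.)

```typescript
// Serial receive loop that is deceptively slow.
for await (const message of mailbox) {
  switch (message.type) {
    case "saveProfile":
      // This await blocks the entire actor: no other message is processed
      // until the database write finishes, even unrelated reads.
      await database.put(message.userId, message.profile);
      message.reply({ ok: true });
      break;
    case "getProfile":
      message.reply(await database.get(message.userId));
      break;
  }
}
```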
Opt-in serial message handling
It's still easy to opt-in to serial message handling if it makes sense. For example:
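(One way to sketch this is a promise chain acting as a simple queue inside the actor; as before, `Actor` and `Rpc` are assumed names, not the exact Rivet API.)

```typescript
// Opting in to serial processing behind a normal RPC interface
// (illustrative pseudo-code, not valid Rivet code).
class Counter extends Actor {
  #count = 0;
  // Every operation is chained onto this promise, so they run one at a time.
  #queue: Promise<unknown> = Promise.resolve();

  increment(rpc: Rpc<Counter>, amount: number): Promise<number> {
    return this.#enqueue(() => {
      this.#count += amount;
      return this.#count;
    });
  }

  getCount(rpc: Rpc<Counter>): Promise<number> {
    return this.#enqueue(() => this.#count);
  }

  // Queue a task behind all previously queued tasks.
  #enqueue<T>(task: () => T | Promise<T>): Promise<T> {
    const result = this.#queue.then(task);
    // Keep the chain alive even if a task throws, so later tasks still run.
    this.#queue = result.catch(() => {});
    return result;
  }
}
```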
(Technically this example doesn't need a queue since the queued promises don't do anything async, but the point stands.)
This implementation maintains the clean RPC interface while ensuring all operations happen serially through a message queue.
Actor tags vs actor IDs & supervisors
Traditionally, actor systems have an actor ID (e.g. a "process ID", or PID, in Erlang) that identifies both the actor's identity & the machine it's running on. Actor PIDs are managed by "supervisors" that keep track of all of the actors and handle crashes.
Ease of use of tags
Actor tags are much easier to read & understand than actor PIDs.
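For instance, compare a tag-based lookup (the `client.get` call below is illustrative pseudo-code) with an opaque PID:

```typescript
// Addressing an actor by tags: the lookup describes what you want.
const chatRoom = await client.get({
  name: "chat_room",
  channel: "random",
});

// Versus an opaque PID, which says nothing about what the actor is:
// Erlang: <0.124.0>
```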
Rivet durability vs supervisor restarts
In most actor systems, restart & reschedule behavior is handled by a supervisor: if an actor crashes, the supervisor spawns a new actor and saves the new actor's ID.
Rivet actors are durable, meaning they automatically reschedule in case of a failure. This means the location where an actor is running may change without any mechanism for notifying existing handles of a new actor ID, which is why Rivet addresses actors by tags instead.
Ease of use of durability
Using tags instead of actor IDs & supervisors is insanely easy to understand. Actors already come with a few difficult concepts; taking durability out of the picture makes it easier for more developers to work with actors.
Supervisors still exist & non-durable actors
Rivet can run non-durable actors and use the traditional actor model if needed. This is a core part of how the dedicated game server example works: the matchmaker actor handles the lifecycle of the game server actors itself.
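A rough sketch of that relationship - `client.create`, `client.destroy`, and the `durable` flag below are assumed names, not the exact Rivet API:

```typescript
// A matchmaker actor managing non-durable game server actors
// (illustrative pseudo-code).
class Matchmaker extends Actor {
  async createLobby(rpc: Rpc<Matchmaker>, region: string): Promise<string> {
    // The matchmaker, not a supervisor, decides when game servers start.
    const gameServer = await client.create({
      name: "game_server",
      region,
      durable: false,
    });
    return gameServer.id;
  }

  async destroyLobby(rpc: Rpc<Matchmaker>, gameServerId: string): Promise<void> {
    // ...and when they stop.
    await client.destroy(gameServerId);
  }
}
```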