
Backpressure in ActorPools #28

Closed
johnpyp opened this issue May 29, 2024 · 6 comments
Labels: enhancement (New feature or request), good first issue (Good for newcomers)

Comments

johnpyp commented May 29, 2024

For single actors I think unbounded channels work fine, because conceptually they're sequential anyway (of course you can do more complex things).

However, for actor pools I want the ability to fill up all the actors in a given pool and then stop sending messages until they're ready to accept more. I can't use .send().await like I would with a normal actor, because that effectively makes the pool meaningless: it only keeps one worker busy at a time. I can't use send_async unless the data is trivially small, because it has no backpressure and will effectively leak memory.

In this case, I think it's fine to have a model of something like...

pool.send_async()
pool.wait_until_available().await

This does of course have an inherent race condition (availability can change between the two calls), which probably doesn't matter for a lot of use cases, but it could be baked into a combined send method like...

pool.send_when_available().await

For users who want more backpressure than the pool's actor count provides, they can easily add a buffered channel in front of the pool (or another actor that itself buffers).


johnpyp commented May 29, 2024

I see there's a similar issue, #3, though that one seems more generic than just pools. I'd be ok with either approach: making it generic for actors and pools, or keeping it a pool-specific implementation detail.

@tqwewe tqwewe self-assigned this May 29, 2024
@tqwewe tqwewe added enhancement New feature or request good first issue Good for newcomers labels May 29, 2024

tqwewe commented May 29, 2024

Hi @johnpyp

I can't use .send().await like I would in a normal actor, because that's effectively just making the pool meaningless.

Could you clarify this a little? Do you mean calling .send().await on the pool itself? If so, that should provide essentially the same backpressure, since each sender is awaited until its message has been processed.

Would the pool.send_when_available().await method await the response of the message, or is this more like send_async(), but waiting first until a worker is available?


johnpyp commented May 29, 2024

Right, the latter, i.e. send_async_when_available().await.

The first option of just using .send doesn't work because it only fills up one actor at a time, so I'd have to set up a parallel feeder anyway, kind of defeating the purpose.


tqwewe commented May 29, 2024

I see. I think this could be achieved fairly easily using a Semaphore, where a permit is sent along with each message and dropped when the worker handles it. This would provide backpressure, only awaiting when all permits are in use.

I wonder if this should simply be the behaviour of .send_async().await on the pool, or if it should be a separate method.

Though, is this issue specific to the actor pool, or does the same apply to regular actors too, I wonder?


tqwewe commented May 29, 2024

On second thought, I think changing regular actors to use bounded channels would likely be beneficial: unbounded channels are definitely a potential memory leak, and bounded channels would also close this issue.


tqwewe commented Jun 11, 2024

Closed in #29

@tqwewe tqwewe closed this as completed Jun 11, 2024