Backpressure for Redis Sink · Issue #23096 · vectordotdev/vector · GitHub
Backpressure for Redis Sink #23096

Open
sc68cal opened this issue May 23, 2025 · 0 comments
Labels
type: feature A value-adding code addition that introduces new functionality.

Comments

@sc68cal
sc68cal commented May 23, 2025

This is a feature request, related to #21943.

Along with the original discussion author, I have recently run into this as well. I have Vector consuming a Kafka data source, parsing log messages with grok patterns, and placing the results into a Redis sink running on the same machine, where they are processed by a different component (a sketch of this topology follows below). The problem is that there is no way to apply backpressure back to Kafka based on the size of the Redis list used to store the intermediate results. The processors consuming from Redis are not keeping up, so the Redis list continues to grow. I have multiple instances of this processing system connected to the same Kafka consumer group, but since there is no way to push backpressure into Kafka, the system has no way to balance itself: one node sits at a list length of 0, while another node that processes more slowly has a Redis list that grows without bound, which disrupts the processing of events overall.
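For concreteness, the pipeline looks roughly like the sketch below. This is a minimal illustration, not my actual configuration: the component names, topic, grok pattern, and Redis key are placeholders.

```toml
# Sketch of the topology described above; names and values are placeholders.
[sources.kafka_in]
type = "kafka"
bootstrap_servers = "kafka:9092"
group_id = "vector-parsers"
topics = ["raw-logs"]

[transforms.parse]
type = "remap"
inputs = ["kafka_in"]
source = '''
. = parse_grok!(.message, "%{COMBINEDAPACHELOG}")
'''

[sinks.redis_out]
type = "redis"
inputs = ["parse"]
endpoint = "redis://127.0.0.1:6379/0"
data_type = "list"
key = "parsed-logs"
encoding.codec = "json"
```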

This feature would be really great to have. I can certainly look into how to implement it, but I'll be the first to admit my Rust is barely past Rustlings and a bit of light reading and hacking, and I've only used Vector as a consumer.

Use Cases

The end goal is a way to limit the growth of a Redis list (e.g. via LLEN) when the Redis sink is configured to write to a list. There may not be an equivalent mechanism for the channel data type.

Attempted Solutions

I attempted to configure the buffer parameters for the Redis sink, but those parameters do not influence the system in the right way: the bottleneck is how fast the consumers on the other side of the Redis list drain it, not how quickly Vector can place data into Redis. See the sketch after this paragraph.
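For reference, the following is the kind of buffer configuration I tried (values are illustrative). As I understand it, even `when_full = "block"` only applies backpressure once Vector's own in-process buffer for the sink fills up, which depends on how fast the sink can push into Redis rather than on how long the Redis list has grown.

```toml
# Buffer settings of the kind attempted; values are illustrative.
[sinks.redis_out.buffer]
type = "memory"
max_events = 500      # caps Vector's in-process buffer for this sink
when_full = "block"   # blocks upstream only when this buffer is full,
                      # not when the Redis list itself is too long
```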

Proposal

Provide some means to check the length of a Redis list that is being used as a sink and, when that length exceeds a configured threshold, slow down consumption from the sources that the sink is connected to. In my use case it is fine for the list to grow somewhat above the threshold while consumption is being slowed; the important part is that at some point the list length stops growing so the consumers can catch up. A hypothetical configuration shape is sketched below.
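Purely as an illustration of the shape this could take (these options do not exist in Vector today; the names are made up):

```toml
[sinks.redis_out]
type = "redis"
inputs = ["parse"]
endpoint = "redis://127.0.0.1:6379/0"
data_type = "list"
key = "parsed-logs"

# Hypothetical options, not part of Vector today. The idea: poll LLEN on
# `key` every `check_interval_secs`; once it exceeds `max_list_length`,
# stop requesting events so backpressure propagates to the Kafka source,
# then resume once the list drains back below the threshold.
#max_list_length = 100000
#check_interval_secs = 5
```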

References

Discussion in #21943

Version

0.47.0

@sc68cal sc68cal added the type: feature A value-adding code addition that introduces new functionality. label May 23, 2025