Bring back `PlacementTarget` to `IPlacementFilterDirector.Filter` #9477
Comments
Looks like the removal was done to allow decisions to be cached and not depend on per-call data to make a decision. I wonder if there's a way that both could be supported as needed?
@ReubenBond What would be the scope of caching the filter results? Right now the decision is filtered every time compatible silos are pulled. `GrainVersionManifest` does cache compatible silos (before filtering) based on `_clusterManifestProvider.Current` and version comparison. I guess filtering could similarly cache results (for filters that don't require `RequestContext`) per filter/type, perhaps via a built-in base class. Need to work out how that would be exposed in a non-confusing way.
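A rough sketch of what such a caching base class could look like. This is only an illustration of the idea from this comment, not an Orleans API: `CachedPlacementFilterDirector`, the exact `Filter` signature, and the invalidation strategy (reuse while the candidate silo set is unchanged) are all assumptions.

```csharp
// Hypothetical sketch only: caches a filter's result and reuses it while
// the candidate silo set (which changes with membership) is unchanged.
// Names and signatures are assumptions, not real Orleans APIs.
public abstract class CachedPlacementFilterDirector : IPlacementFilterDirector
{
    private SiloAddress[] _cachedCandidates;
    private SiloAddress[] _cachedResult;

    public IEnumerable<SiloAddress> Filter(
        PlacementFilterStrategy strategy,
        PlacementFilterContext context,
        IEnumerable<SiloAddress> candidates)
    {
        var silos = candidates as SiloAddress[] ?? candidates.ToArray();

        // Reuse the cached answer while the candidate set is unchanged.
        if (_cachedResult is not null && silos.SequenceEqual(_cachedCandidates))
            return _cachedResult;

        _cachedCandidates = silos;
        _cachedResult = FilterCore(strategy, context, silos).ToArray();
        return _cachedResult;
    }

    // Derived filters implement only the (assumed deterministic) logic.
    protected abstract IEnumerable<SiloAddress> FilterCore(
        PlacementFilterStrategy strategy,
        PlacementFilterContext context,
        IEnumerable<SiloAddress> candidates);
}
```

This only works for filters whose decisions are deterministic per membership view, which is exactly the assumption questioned later in this thread.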
@miguelhasse could you help us understand your scenario more? What are your placement director & filter doing in this case? Could you have a custom placement director with no filter? The current tuple contains
For the use case: the built-in filters use the current silo's metadata and compare it against other candidate silos' metadata to filter them down. No per-call data from `RequestContext` is required for that case, so the current implementation works. Miguel is looking at a property on the request (a region that is presumably potentially different for each call) to compare against candidate silo metadata, so no caching on just the tuple is possible there.

If we only ever have the current interface (no `RequestContext`), then we could cache on that tuple between membership updates. But if we did allow `RequestContext` to be used, potentially as an opt-in, then we wouldn't be able to do any caching, at least not after the first filter that used `RequestContext`. That's because the filters after that could be getting a different set of silos from the previous filter and may give different answers based on that, even for the same context tuple.

The implementation using a custom placement director does work, but it has the downside that it is basically both filtering and selection in one. It's currently implemented with random selection. It would arguably be better if it had `ResourceOptimizedPlacement` selection behavior instead (over the region-filtered candidates), but that means pulling in and copying that implementation. If it were instead done with a dedicated filter, then selection would be a separate concern: it could be omitted to get the default behavior (and upgrade to `ResourceOptimizedPlacement` should that become the default), or `ResourceOptimizedPlacement`, `ActivationCountBasedPlacement`, or any other placement could be specified separately from filtering, without copying code from internal sealed classes.
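For reference, the region scenario as a dedicated filter might look roughly like this. Note the signature (taking `PlacementTarget`) reflects the API being requested in this issue, not the current one, and `RegionPlacementFilterDirector`, the `"region"` metadata key, and the metadata lookup are illustrative assumptions:

```csharp
// Illustrative sketch: a filter that narrows candidates to silos whose
// metadata matches a per-call "region" value from the request context.
// Because the value is per-call, this filter's result cannot be cached.
public sealed class RegionPlacementFilterDirector : IPlacementFilterDirector
{
    private readonly ISiloMetadataCache _metadata; // assumed injectable metadata source

    public RegionPlacementFilterDirector(ISiloMetadataCache metadata)
        => _metadata = metadata;

    public IEnumerable<SiloAddress> Filter(
        PlacementFilterStrategy strategy,
        PlacementTarget target,
        IEnumerable<SiloAddress> candidates)
    {
        // Per-call data from RequestContext; fall through if absent.
        if (target.RequestContextData is not { } ctx
            || !ctx.TryGetValue("region", out var region))
            return candidates;

        // Keep only silos whose metadata declares the same region.
        return candidates.Where(silo =>
            _metadata.GetSiloMetadata(silo).Metadata
                .TryGetValue("region", out var siloRegion)
            && Equals(siloRegion, region));
    }
}
```

Selection (random, resource-optimized, activation-count) then remains a separate, swappable concern, which is the point being argued above.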
I guess that even without the `RequestContext`, there could be arbitrary logic in the filtering. A filter could give each candidate silo a random chance of being filtered out, resulting in a different answer on each activation. Not necessarily the most useful filter, but I don't think that's an invalid implementation? It seems like it would be hard to enforce that filter implementations must be (or may optionally declare that they are) consistent and cacheable in their behavior, even if the majority case may end up being so.
Here's an attempt at allowing both types of filters (without and with `RequestContext` available). But I still think that we may not be able to assume that a filter would necessarily always return the same results between membership updates.
You've convinced me. We could potentially implement an API (either now or later) which allows filter results to be cached by having each filter provide cache keys alongside the results. What do you think? Perhaps for now we go with the simple route (no caching) and implement caching later.
In that case, we could add the `RequestContext` back in (use `PlacementTarget`), then have some opt-in mechanism (an `ICacheablePlacementFilterDirector` interface) that provides the cache key and indicates that the user expects to be called only once per membership/cache key (or whatever the caching requirements are). That would be similar to the `[Immutable]` semantics, where the intent isn't strictly enforced but can be specified by the developer.
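A rough sketch of that opt-in shape, as described in this comment. The interface name and member are assumptions drawn from this discussion, not a committed Orleans API:

```csharp
// Hypothetical opt-in contract sketched from the discussion above.
// A filter implementing this declares that, for a given membership view
// and cache key, its Filter result may be cached and reused. Like
// [Immutable], the declaration expresses intent rather than being
// strictly enforced by the runtime.
public interface ICacheablePlacementFilterDirector : IPlacementFilterDirector
{
    // Returns a key describing everything the filter's decision depends
    // on (e.g. grain type, a RequestContext value). Calls with the same
    // key within one membership version may reuse a cached result.
    object GetCacheKey(PlacementTarget target);
}
```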
Here's adding `PlacementTarget` back.
The replacement of `PlacementTarget` with `PlacementFilterContext` broke my use case and implementation of `IPlacementFilterDirector.Filter`. I need the `RequestContextData` and `GrainId` properties to filter silo placement by checking these properties against silo metadata. So, for my use case, filter caching would never be an option, and the change made the placement filter useless.

Please bring back `PlacementTarget` as a parameter in `IPlacementFilterDirector.Filter`, or provide some other way to do placement filtering based on `RequestContextData` and `GrainId`.

Originally posted by @ReubenBond in #9271 (comment)