memory cost too high · Issue #8 · diegommm/adaptivepool · GitHub

Open
Nyx2022 opened this issue Nov 30, 2024 · 4 comments

@Nyx2022
Nyx2022 commented Nov 30, 2024

Hi, I'm using this lib now, but I found that when I call Get and Put very frequently, the memory still increases to a high value for my application.
[image: Go pprof flamegraph over ~2 minutes]
As you can see, the relative cost is too high.
My config is:
var BufferPool1 = adaptivepool.New( // still not quite sure how to use this
    adaptivepool.BytesBufferProvider{},
    adaptivepool.NormalEstimator{
        Threshold: 2,     // reuse buffer if its Cap is in Mean ± 2 * StdDev
        MinCost:   51200, // minimum cost (bytes) of newly created items
    },
    50, // bias towards the latest 50 elements to increase adaptability
)

Can you tell me how I can keep the memory cost at a small value? Thank you in advance.

@diegommm
Owner

Hi @Nyx2022! Some questions:

  1. Can you clarify what the image you shared is showing? Is that the total memory allocated by Get since the program started? Is that the total memory currently allocated through Get?
  2. Do you have stats on how often Get and Put are being called?
  3. Do you have stats about the data being allocated? (i.e. do you know if it approximately follows a Normal Distribution or anything else?) As an example: as per the current docs, NormalEstimator will not suggest a smaller value than MinCost (if it's positive). This means that if your traffic is mostly items much smaller than that, you would always be allocating far more than necessary, which could translate into over-allocation (see the sketch after this list). In that respect:
  4. How did you arrive at the minimum cost of 51200?
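
To illustrate the concern in question 3, here is a minimal sketch. It assumes, per the docs mentioned above, that a positive MinCost acts as a lower bound on the size of newly created items, and that Get returns a *bytes.Buffer when BytesBufferProvider is used; the workload numbers are made up:

package main

import (
    "fmt"

    "github.com/diegommm/adaptivepool"
)

func main() {
    // Same estimator settings as in the original report.
    pool := adaptivepool.New(
        adaptivepool.BytesBufferProvider{},
        adaptivepool.NormalEstimator{
            Threshold: 2,
            MinCost:   51200, // newly created items cost at least ~50 KiB
        },
        50,
    )

    // Hypothetical workload: many tiny payloads of a few hundred bytes.
    payload := make([]byte, 200)
    for i := 0; i < 1000; i++ {
        buf := pool.Get() // assumption: a *bytes.Buffer via BytesBufferProvider
        buf.Reset()       // defensive, in case the provider does not clear buffers on Put
        buf.Write(payload)
        // With MinCost at 51200, even a fresh buffer for a 200-byte payload
        // may be backed by a ~50 KiB allocation.
        pool.Put(buf)
    }
    fmt.Println("done")
}

If most of the real traffic looks like this, each buffer carries far more capacity than the payload needs, which is the over-allocation described in question 3.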

Regarding your question:

Can you tell me how I can keep the memory cost at a small value?

Do you mean how to control the memory cost of an item which requires very little memory? If that's so, remember that AdaptivePool is meant to preallocate statistically when you do not have better information. That is, if most of the time you need to allocate within a certain size range, but in a very specific case you know you will need less than that, then in the latter case you should either use a different AdaptivePool or handle allocations differently. Example:

  1. You have a web server serving 2 endpoints and want to allocate for the payloads received in the requests.
  2. One of the endpoints is a DELETE endpoint and has a small payload of a few bytes.
  3. The other is a PUT endpoint and receives a mid-sized payload containing the full description of an entity.

In the example above, the recommendation would be to use one AdaptivePool for each endpoint. That is because the distribution of memory cost differs between the two endpoints, and you will benefit from treating them separately.
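
A minimal sketch of that recommendation follows. It assumes Get returns a *bytes.Buffer when BytesBufferProvider is used and that Put accepts it back; the routes, handlers, and estimator values are hypothetical placeholders:

package main

import (
    "io"
    "net/http"

    "github.com/diegommm/adaptivepool"
)

var (
    // Pool for the DELETE endpoint: payloads of a few bytes, no MinCost floor.
    deletePool = adaptivepool.New(
        adaptivepool.BytesBufferProvider{},
        adaptivepool.NormalEstimator{Threshold: 2},
        50,
    )
    // Pool for the PUT endpoint: mid-sized entity payloads, tuned separately.
    putPool = adaptivepool.New(
        adaptivepool.BytesBufferProvider{},
        adaptivepool.NormalEstimator{Threshold: 2, MinCost: 4096},
        50,
    )
)

func handleDelete(w http.ResponseWriter, r *http.Request) {
    buf := deletePool.Get() // assumption: a *bytes.Buffer via BytesBufferProvider
    defer deletePool.Put(buf)
    buf.Reset() // defensive, in case the provider does not clear buffers on Put
    io.Copy(buf, r.Body) // small payload goes into a small pooled buffer
    // ... handle the delete ...
}

func handlePut(w http.ResponseWriter, r *http.Request) {
    buf := putPool.Get()
    defer putPool.Put(buf)
    buf.Reset()
    io.Copy(buf, r.Body) // mid-sized payload goes into a separately tuned pool
    // ... handle the update ...
}

func main() {
    http.HandleFunc("/entity", func(w http.ResponseWriter, r *http.Request) {
        switch r.Method {
        case http.MethodDelete:
            handleDelete(w, r)
        case http.MethodPut:
            handlePut(w, r)
        }
    })
    http.ListenAndServe(":8080", nil)
}

The point is that each pool then observes a single payload-size distribution, so its estimates are not skewed by the other endpoint's traffic.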

Please, let me know if this helped. I'm thinking of adding some debugging operations, and I would be glad to hear your thoughts.

Kind regards.

@diegommm diegommm self-assigned this Dec 24, 2024
@Nyx2022
Author
Nyx2022 commented Dec 25, 2024

Hi @Nyx2022! Some questions:

  1. Can you clarify what the image you shared is showing? Is that the total memory allocated by Get since the program started? Is that the total memory currently allocated through Get?
    The image is a Go pprof flamegraph over 2 minutes. No. No.
  2. Do you have stats on how often Get and Put are being called?
    No, but it is called in a marshal func, so the call frequency is very high at times.
  3. Do you have stats about the data being allocated? (i.e. do you know if it approximately follows a Normal Distribution or anything else?) As an example: as per the current docs, NormalEstimator will not suggest a smaller value than MinCost (if it's positive). This means that if your traffic is mostly items much smaller than that, you would always be allocating far more than necessary, which could translate into over-allocation. In that respect:
    No.
  4. How did you arrive at the minimum cost of 51200?
    I do not fully understand the meaning of each param, so I only tested it with 512 and 51200.

Please, let me know if this helped. I'm thinking of adding some debugging operations, and I would be glad to hear your thoughts.

Kind regards. 

@diegommm
Owner

Question just in case: are you calling Put and passing it the previously allocated buffer you obtained with Get/GetWithCost once you no longer need to use it?

I do not fully understand the meaning of each param, so I only tested it with 512 and 51200.

Were the results with 512 worse?

Regarding the meaning of the parameters: Have you read the docs describing them? If the meaning of any parameter is unclear in the docs, please, let me know what could be confusing so I can improve them.

The image is a Go pprof flamegraph over 2 minutes. No. No.

Can you explain what the Go pprof flamegraph is displaying, then?

Note that it is understandable that allocations are made if the pool is empty, e.g., if you have just started the program and are requesting a lot of memory.

The goal of the library is to reuse some of those allocations and also allocate more efficiently, not to remove allocations. If your program does require a high number of allocations but doesn't release them with Put (e.g. it really needs to hold on to them), then all of those allocations will still be provisioned.
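
For reference, a minimal sketch of that Get → use → Put lifecycle in a marshal-style helper. The helper itself is hypothetical, and it assumes Get returns a *bytes.Buffer when BytesBufferProvider is used:

package main

import (
    "encoding/json"
    "fmt"

    "github.com/diegommm/adaptivepool"
)

var bufPool = adaptivepool.New(
    adaptivepool.BytesBufferProvider{},
    adaptivepool.NormalEstimator{Threshold: 2, MinCost: 512},
    50,
)

// marshalWithPool is a hypothetical helper: it borrows a buffer, encodes into
// it, copies the result out, and returns the buffer so the pool can reuse it.
// If Put were never called, every call would keep its own allocation alive.
func marshalWithPool(v any) ([]byte, error) {
    buf := bufPool.Get() // assumption: a *bytes.Buffer via BytesBufferProvider
    defer bufPool.Put(buf)
    buf.Reset() // defensive, in case the provider does not clear buffers on Put

    if err := json.NewEncoder(buf).Encode(v); err != nil {
        return nil, err
    }
    // Copy the bytes out before Put runs, since the buffer may be reused
    // (and overwritten) by another caller afterwards.
    out := make([]byte, buf.Len())
    copy(out, buf.Bytes())
    return out, nil
}

func main() {
    b, err := marshalWithPool(map[string]int{"answer": 42})
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s", b)
}

If the returned bytes had to outlive the call without that copy, the buffer could not be Put back, and the pool would not be able to reuse it.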

@Nyx2022
Author
Nyx2022 commented Jan 12, 2025

I do call the Put func after I no longer use the buffer.
512 is not worse.
The Go pprof flamegraph is a performance graph of one executable.
