core.link: Streamline counters [DRAFT] by lukego · Pull Request #570 · snabbco/snabb · GitHub

core.link: Streamline counters [DRAFT] #570


Closed
lukego wants to merge 1 commit

Conversation

lukego
Member
@lukego lukego commented Jul 21, 2015

Reduce the amount of counter-related code and data in struct link.

The ring indexes (read, write) are now upgraded to 64-bit integers that do not wrap around [*]. This means that we don't need separate packet tx/rx counters, because the ring indexes serve that purpose directly. (We do have to be careful to handle wrap-around when actually looking up elements in the ring based on the index.)
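
As a rough illustration, here is a minimal Lua/FFI sketch of the idea; this is not the actual core.link code, and the field names, the ring size, and the one-slot-free capacity convention are assumptions:

local ffi = require("ffi")

local size = 1024 -- assumed ring capacity (power of two)

ffi.cdef[[
struct link {
  struct packet *packets[1024];
  uint64_t read, write; /* monotonic 64-bit indexes double as rx/tx counters */
  uint64_t txdrop;      /* drops still counted explicitly */
};
]]

-- The counters fall out of the indexes directly:
local function nreadable (r) return tonumber(r.write - r.read) end
local function empty (r) return r.read == r.write end
local function full (r) return r.write - r.read == size - 1 end -- one slot kept free

-- Wrap-around only matters when locating a slot in the ring:
local function slot (index) return tonumber(index % size) end

local function receive (r)
   local p = r.packets[slot(r.read)]
   r.read = r.read + 1
   return p
end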

The byte counters are simply removed. Do we really need them? (How about if we tracked only packets-per-second plus perhaps a system-wide average packet size?)
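
To make that parenthetical concrete, a hypothetical sketch (the names and the smoothing factor are made up, not existing Snabb code):

-- Track a system-wide exponentially weighted average packet size;
-- byte throughput can then be estimated from the packet rate alone.
local avg_size, alpha = 0, 0.01

local function account (packet_length)
   avg_size = avg_size + alpha * (packet_length - avg_size)
end

-- estimated bytes/s is then roughly packets/s * avg_size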

Dropped packets are still counted directly because this is important information and I don't immediately see a way to derive it indirectly.

[*] 64-bit packet counters really should not wrap around. Even at one billion packets per second it would take more than 500 years. However, if this is not a safe assumption then we could detect wrap around and account for it on the counters.

(Google: "2^64/(1 billion per second)" => "584.554531 years".)

@eugeneia @javierguerragiraldez @alexandergall what do you guys think about this idea? I mean in terms of information loss, impact on the code, and conflicts with other open PRs?

@javierguerragiraldez
Contributor

I think full() is wrong when read >= size-1. This should work:

function full (r)
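    -- with non-wrapping 64-bit indexes, the ring is full when write is size-1 ahead of read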
    return r.write == r.read + size - 1
end

But all in all, is there a measurable difference? I remember doing some benchmarks and finding no difference between

while not empty(in) and not full(out) do ... end

and

for _ = 1, min(nreadable(in),nwritable(out)) do ... end

even though the first form does a check on every iteration, while the second one checks only at the start and runs a plain integer count from there on. I guess the index pointers become essentially free once they are cached.
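
For reference, here is roughly how those two loop shapes could look in an app's push() method, sketched with the core.link helpers (empty, full, nreadable, nwritable, receive, transmit); `input` and `output` stand in for the app's links:

local link = require("core.link")
local min = math.min

-- Form 1: test both ring conditions on every iteration.
local function push_checked (input, output)
   while not link.empty(input) and not link.full(output) do
      link.transmit(output, link.receive(input))
   end
end

-- Form 2: compute the iteration count once, then run a plain counted loop.
local function push_counted (input, output)
   for _ = 1, min(link.nreadable(input), link.nwritable(output)) do
      link.transmit(output, link.receive(input))
   end
end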

@eugeneia
Member

I am always for simplicity, but I wonder what the benefit is here. Did rx/txbytes actually cost us much? It seems like an interesting metric to me, so I would consider keeping it.

@lukego
Member Author
lukego commented Jul 22, 2015

@eugeneia That is a fair point. I see no urgent need to make this change now. We could revisit this idea in the future if/when we are trying to squeeze more performance out of the basic app network and see if it really helps.

I'll close this PR for now.
