Replies: 11 comments 7 replies
-
Different app chains will want different semantics for their tx dissemination. I think the best possible default would be that every validator registers an IP address that it will receive txs over. HTTP/3 would be a good fit for standardized responses.
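A minimal sketch of what that default could look like, assuming a hypothetical `Registry` mapping validators to the endpoints they registered and a `SubmitTx` helper (none of these are CometBFT APIs); plain HTTP is used for brevity, since HTTP/3 in Go would need a separate library such as quic-go:

```go
// Hypothetical sketch: validators advertise a tx-submission endpoint and
// clients POST transactions directly to it instead of gossiping them.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// Registry maps a validator's address to the base URL it registered for
// receiving transactions.
type Registry map[string]string

// SubmitTx sends a raw transaction directly to the registered endpoint of
// the given validator (e.g. the next proposer).
func SubmitTx(r Registry, validator string, tx []byte) error {
	baseURL, ok := r[validator]
	if !ok {
		return fmt.Errorf("no endpoint registered for validator %s", validator)
	}
	resp, err := http.Post(baseURL+"/tx", "application/octet-stream", bytes.NewReader(tx))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("tx rejected: %s", resp.Status)
	}
	return nil
}

func main() {
	reg := Registry{"validator-1": "http://10.0.0.1:26659"}
	if err := SubmitTx(reg, "validator-1", []byte("example-tx")); err != nil {
		fmt.Println("submit failed:", err)
	}
}
```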
-
The core change needed to remove the mempool is that CometBFT needs to provide an API that enables running CheckTx against a transaction, and a mechanism to call out to an external process when reaping txs for the block.
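As a rough sketch of those two hooks, using hypothetical interface names (`TxChecker`, `TxReaper`) that are not part of CometBFT's current API surface:

```go
// Hypothetical interfaces only; not existing CometBFT APIs.
package mempoolapi

import "context"

// TxChecker lets an external mempool run CheckTx on a single transaction
// through the node, instead of maintaining its own ABCI connection.
type TxChecker interface {
	CheckTx(ctx context.Context, tx []byte) (code uint32, err error)
}

// TxReaper is implemented by an external process that supplies the
// transactions to include when the node reaps txs for the next block.
type TxReaper interface {
	ReapTxs(ctx context.Context, maxBytes, maxGas int64) ([][]byte, error)
}
```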
-
new rpc design
Current state of block parts
-
Thanks everyone,
-
Espresso Systems employs a CDN for fast point-to-point communication between validators, with a fallback mechanism that uses p2p gossip. Perhaps Comet could be inspired by a similar design.
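A toy illustration of that fallback shape, with `cdnSend` and `gossipBroadcast` as hypothetical stand-ins (this is not Espresso's or CometBFT's actual code):

```go
// Illustrative only: prefer the fast point-to-point/CDN path, fall back to gossip.
package main

import (
	"errors"
	"log"
)

// cdnSend pushes the tx over the fast point-to-point/CDN path.
func cdnSend(tx []byte) error { return errors.New("cdn unavailable") }

// gossipBroadcast floods the tx over the regular p2p gossip layer.
func gossipBroadcast(tx []byte) error {
	log.Printf("gossiping %d bytes", len(tx))
	return nil
}

// disseminate prefers the CDN path and falls back to gossip on failure.
func disseminate(tx []byte) error {
	if err := cdnSend(tx); err != nil {
		log.Printf("cdn send failed (%v), falling back to gossip", err)
		return gossipBroadcast(tx)
	}
	return nil
}

func main() {
	_ = disseminate([]byte("example-tx"))
}
```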
-
Adding this here #1565
-
Hi guys, feedback is welcome on this PR #1585 about an ADR for a…
-
Implementation of ADR 111
-
Organization of a working group to explore mempool improvements

Logistics
Where: CometBFT community call, calendar instructions available upon joining this Google group

Notes
Will be captured here.

Agenda
Under construction. Please don't hesitate to comment if any specific items should be added to the agenda.
-
The current mempool implementation exhibits a thundering herd problem when a CometBFT network is under sustained load, due to contention on the ABCI mutex. I'm not really sure this should go into the baseline v1 mempool behavior, so I'm documenting it here instead. Symptoms include increased round failures and empty blocks while the network is under heavy load. Here is a proposed mechanism for some naive mitigations against this: add a parameter capping the max mempool size. What does the patch do? If there are 1000 txs in the mempool, there will be roughly 1000 ABCI mutex lock acquisitions for ABCI recheck.
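To make the contention concrete, here is an illustrative sketch (not the actual CometBFT mempool code) of why recheck takes one shared ABCI lock per pending tx, with `maxMempoolTxs` standing in for the proposed size cap:

```go
// Illustrative only. Each tx rechecked after a block takes the shared ABCI
// mutex, so 1000 pending txs mean roughly 1000 serialized lock acquisitions
// competing with consensus-critical ABCI calls.
package main

import (
	"fmt"
	"sync"
)

var abciMu sync.Mutex // shared mutex guarding the ABCI connection

// recheckTx re-validates one tx over ABCI; one lock acquisition per tx.
func recheckTx(tx []byte) {
	abciMu.Lock()
	defer abciMu.Unlock()
	// ... CheckTx call over the ABCI connection would go here ...
	_ = tx
}

// recheckAll shows how capping mempool size also bounds the number of
// lock acquisitions (and therefore the contention) during recheck.
func recheckAll(txs [][]byte, maxMempoolTxs int) {
	if len(txs) > maxMempoolTxs {
		txs = txs[:maxMempoolTxs]
	}
	for _, tx := range txs {
		recheckTx(tx)
	}
	fmt.Printf("rechecked %d txs\n", len(txs))
}

func main() {
	pending := make([][]byte, 1000)
	recheckAll(pending, 100)
}
```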
-
"Instead we add a new msg type to the mempool reactor where the mempools sends messages to its's peers that says stop sending me txs for x blocks. The reactor should track what peers it sent this message to. If the peers keeps sending txs after recieving this message before the target height has been reached then drop the peer and blacklist them. So I think instead we setup a similar structure to the txSender structure where we track the height of the peers that. have sent a message that they are not receiving txs and wait the peer state has exceeded the target height before sending more messages. |
-
The CometBFT mempool has a number of significant flaws.
It is both practically and conceptually broken.
The mempool layer is extremely inefficient: each tx floods across the network and is rebroadcast many times.
The mempool provides no mechanism to apply back pressure, prioritize dissemination, or inform users about what fees are needed.
This essentially forms a DoS vector by creating thundering herd problems that can slow block production.
It makes no sense to flood txs into the network when they just need to reach the next proposer.