Batch commits to the state and block store · Issue #1040 · cometbft/cometbft
Closed
3 tasks done
jmalicevic opened this issue Jun 27, 2023 · 2 comments
Labels
P:storage-optimization Priority: Give operators greater control over storage and storage optimization
Milestone
2023-Q4

Comments

jmalicevic (Contributor) commented Jun 27, 2023

Currently, state is committed to the block store and state store using a series of asynchronous calls, followed by a synchronous call at the end.

We need to investigate whether explicitly batching these writes improves performance. In particular, consider the work done in:
tendermint/tendermint#6067
tendermint/tendermint#6018

and issue: tendermint/tendermint#5888
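
For reference, a minimal sketch of the two write patterns against the cometbft-db interface; the keys and values below are placeholders, not the actual block/state store schema:

package example

import (
	dbm "github.com/cometbft/cometbft-db"
)

// saveUnbatched mirrors the current pattern: several asynchronous
// writes, then one synchronous write that forces the data to disk.
func saveUnbatched(db dbm.DB) error {
	if err := db.Set([]byte("key-a"), []byte("value-a")); err != nil { // async
		return err
	}
	if err := db.Set([]byte("key-b"), []byte("value-b")); err != nil { // async
		return err
	}
	return db.SetSync([]byte("key-c"), []byte("value-c")) // sync, fsyncs
}

// saveBatched mirrors the proposed pattern: accumulate all writes in
// one batch and commit it with a single synchronous write.
func saveBatched(db dbm.DB) error {
	batch := db.NewBatch()
	defer batch.Close()
	if err := batch.Set([]byte("key-a"), []byte("value-a")); err != nil {
		return err
	}
	if err := batch.Set([]byte("key-b"), []byte("value-b")); err != nil {
		return err
	}
	if err := batch.Set([]byte("key-c"), []byte("value-c")); err != nil {
		return err
	}
	// Either all three keys persist or none of them do.
	return batch.WriteSync()
}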

The goal of this is twofold:

  • Increase atomicity guarantees
  • Improve performance

DoD:

  • Understand, from a DB perspective, whether atomicity is actually increased or not
  • Evaluate whether there are corner cases where the batch becomes very big, and understand how the DB handles this underneath (memory vs. disk)
  • State is saved in batches, with an indication of the performance impact
@jmalicevic jmalicevic mentioned this issue Jun 27, 2023
@jmalicevic jmalicevic added the P:storage-optimization Priority: Give operators greater control over storage and storage optimization label Jun 27, 2023
@jmalicevic jmalicevic added this to the 2023-Q3 milestone Jun 27, 2023
@jmalicevic jmalicevic changed the title from "Batch commits to the state and block store (work done for Tendermint v0.36)" to "Batch commits to the state and block store" Jun 27, 2023
@jmalicevic jmalicevic moved this from Todo to Ready for Review in CometBFT 2023 Sep 7, 2023
@jmalicevic jmalicevic modified the milestones: 2023-Q3, 2023-Q4 Oct 16, 2023
jmalicevic (Contributor, Author) commented

This task has been completed, but initial Go benchmarking suggests that performance actually degrades. The plan is to rerun the experiments once we change the key ordering during Q4 2023. That will help make the final decision on whether this change is to be merged or rejected.

melekes (Contributor) commented Dec 5, 2023

> Understand, from a DB perspective, whether atomicity is actually increased or not

Atomicity is increased, as mentioned in tendermint/tendermint#6018, because the batch is applied all-or-nothing.

> Evaluate whether there are corner cases where the batch becomes very big, and understand how the DB handles this underneath (memory vs. disk)

LSMs don't handle big batches very well (1). The ideal batch size for goleveldb is ~100 kB (2), which is more or less equal to our block part size (64 kB).

Here are the results from running Anton's benchmark on my local PC:

Benchmark

// Package clause and imports reconstructed so this snippet compiles on
// its own. It is assumed to live in CometBFT's store package; exact
// import paths may differ across versions.
package store

import (
	"fmt"
	"os"
	"testing"

	dbm "github.com/cometbft/cometbft-db"

	"github.com/cometbft/cometbft/internal/test"
	sm "github.com/cometbft/cometbft/state"
	"github.com/cometbft/cometbft/types"
	cmttime "github.com/cometbft/cometbft/types/time"
)

func BenchmarkBlockStore_SaveBlockWithExtendedCommit(b *testing.B) {
	dbTypes := []dbm.BackendType{
		dbm.GoLevelDBBackend,
	}
	NTXs := int64(1024)

	for _, dbType := range dbTypes {
		state, bs, cleanup := makeStateAndBlockStoreSpecificBackend(dbType)
		b.Run(fmt.Sprintf("DBType:%s;NTxs:%d", dbType, NTXs),
			func(b *testing.B) {
				for i := 0; i < b.N; i++ {
					height := bs.Height() + 1
					block := state.MakeBlock(height, test.MakeNTxs(height, NTXs), new(types.Commit), nil, state.Validators.GetProposer().Address)
					validPartSet, err := block.MakePartSet(types.BlockPartSizeBytes)
					if err != nil {
						b.Error(err)
					}
					// makeTestExtCommit is an existing helper in the store tests.
					seenCommit := makeTestExtCommit(block.Header.Height, cmttime.Now())

					bs.SaveBlockWithExtendedCommit(block, validPartSet, seenCommit)
				}
			})

		cleanup()
	}

}

type cleanupFunc func()

func makeStateAndBlockStoreSpecificBackend(backendType dbm.BackendType) (sm.State, *BlockStore, cleanupFunc) {
	config := test.ResetTestRoot("blockchain_reactor_test")
	blockDB, err := dbm.NewDB("block", backendType, config.DBDir())
	if err != nil {
		panic(fmt.Errorf("error creating %s: %w", backendType, err))
	}
	stateDB, err := dbm.NewDB("state", backendType, config.DBDir())
	if err != nil {
		panic(fmt.Errorf("error creating %s: %w", backendType, err))
	}
	stateStore := sm.NewStore(stateDB, sm.StoreOptions{
		DiscardABCIResponses: false,
	})
	state, err := stateStore.LoadFromDBOrGenesisFile(config.GenesisFile())
	if err != nil {
		panic(fmt.Errorf("error constructing state from genesis file: %w", err))
	}
	return state, NewBlockStore(blockDB), func() { os.RemoveAll(config.RootDir) }
}
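
Assuming the snippet above lives in CometBFT's store package, the benchmark can be reproduced with something like:

go test -run '^$' -bench BenchmarkBlockStore_SaveBlockWithExtendedCommit ./store/
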
Results (ns/op, lower is better):

  batching    / 1 MB block  / goleveldb:   155921071 ns/op
  NO batching / 1 MB block  / goleveldb:   134169606 ns/op
  batching    / 10 MB block / goleveldb:  1135896143 ns/op
  NO batching / 10 MB block / goleveldb:  1096632817 ns/op

If we want to support blocks bigger than 100 kB on average, we should not write a whole block as a single batch. This becomes more important for big blocks (>1 MB): in the numbers above, batching is ~16% slower for a 1 MB block and ~4% slower for a 10 MB block. The current max block size is 4 MB, which implies that CometBFT must handle such a scenario efficiently, and that won't be the case with a single batch.
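
One way to reconcile this with batching would be to flush in chunks near goleveldb's sweet spot instead of accumulating one arbitrarily large batch per block. A hedged sketch against the cometbft-db interface; chunkedWrite and the 100 kB threshold are illustrative, not existing CometBFT code:

package example

import (
	dbm "github.com/cometbft/cometbft-db"
)

const maxBatchBytes = 100 * 1024 // ~100 kB, goleveldb's reported sweet spot

// chunkedWrite persists key/value pairs in batches of roughly
// maxBatchBytes rather than one batch for the whole block. Note the
// trade-off: atomicity now holds per chunk, not per block.
func chunkedWrite(db dbm.DB, pairs [][2][]byte) error {
	batch := db.NewBatch()
	size := 0
	for _, kv := range pairs {
		if err := batch.Set(kv[0], kv[1]); err != nil {
			batch.Close()
			return err
		}
		size += len(kv[0]) + len(kv[1])
		if size >= maxBatchBytes {
			// Flush this chunk asynchronously and start a new batch;
			// cometbft-db batches cannot be reused after Write.
			if err := batch.Write(); err != nil {
				batch.Close()
				return err
			}
			batch.Close()
			batch = db.NewBatch()
			size = 0
		}
	}
	// Final synchronous write persists the tail and fsyncs.
	err := batch.WriteSync()
	batch.Close()
	return err
}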

Now, the current average block size on the Cosmos Hub is 64 kB, with peaks ranging from 200 kB to 700 kB; for Osmosis, it's 74 kB. If we want to target the current customers, then creating a single batch makes sense.

@melekes melekes closed this as completed Dec 11, 2023
@github-project-automation github-project-automation bot moved this from Ready for Review to Done in CometBFT 2023 Dec 11, 2023