ADR-101: Backport documentation to v0.38.x-experimental #1477
Merged

Commits (10)
- 9bf3f03 docs: gRPC services and data companion pruning services (#1307) (andynog)
- d48c6a3 Add note on legacy grpc to intro (hvanz)
- 999751e Update docs (hvanz)
- 979b103 Apply suggestions from code review (hvanz)
- e6f750a Update text based on reviews (hvanz)
- 39e22e4 Merge branch 'v0.38.x-experimental' into hvanz/backport-pr1307-v0.38 (andynog)
- 3b464bb Update docs/data-companion/pruning.md (hvanz)
- 8a36955 Merge branch 'v0.38.x-experimental' into hvanz/backport-pr1307-v0.38 (hvanz)
- 250069e added a few more details related to grpc and pruning (#1477) (andynog)
- 256f189 Merge branch 'v0.38.x-experimental' into hvanz/backport-pr1307-v0.38 (andynog)
@@ -0,0 +1,12 @@
---
order: false
parent:
  title: Data Companion
  order: 7
---

# Guides

- [Introduction](./intro.md)
- [gRPC services](./grpc.md)
- [Pruning service](./pruning.md)
@@ -0,0 +1,217 @@
---
order: 1
parent:
  title: gRPC services
  order: 2
---

# Fetching data from the node

One of the most important steps in creating a Data Companion service is to extract the necessary data from the node.
Fortunately, CometBFT provides gRPC endpoints that allow you to fetch data such as `version`, `block` and
`block results`.

This documentation aims to provide a detailed explanation of CometBFT's gRPC services that can be used to retrieve
the data you need.

## Enabling the gRPC services

To use the gRPC services, you must enable them in CometBFT's configuration settings.

In the `[grpc]` section of the configuration:

```
#######################################################
### gRPC Server Configuration Options ###
#######################################################

#
# Note that the gRPC server is exposed unauthenticated. It is critical that
# this server not be exposed directly to the public internet. If this service
# must be accessed via the public internet, please ensure that appropriate
# precautions are taken (e.g. fronting with a reverse proxy like nginx with TLS
# termination and authentication, using DDoS protection services like
# CloudFlare, etc.).
#

[grpc]
```

Add the address for the non-privileged (regular) services, for example:

```
laddr = "tcp://0.0.0.0:26090"
```

> Note that this address MUST be different from the `grpc_laddr` within the `[rpc]` configuration section. The listener
> at that endpoint does not support the new gRPC services (Block, BlockResults, etc.), and vice versa.
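
To make the distinction concrete, here is a minimal configuration sketch showing the two sections side by side. The port number is only an example; the legacy `grpc_laddr` under `[rpc]` can be left empty if you do not use the legacy endpoint.

```
[rpc]
# Legacy gRPC listener (deprecated); must not share an address with [grpc] laddr
grpc_laddr = ""

[grpc]
# Listener for the new gRPC services (Version, Block, BlockResults, ...)
laddr = "tcp://0.0.0.0:26090"
```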

The non-privileged gRPC endpoint is **enabled by default**. Each individual service exposed on this endpoint can be
enabled or disabled individually. For example, to enable the `Version` service, in the `[grpc.version_service]` section, ensure
that the `enabled` property is set to `true`:

```
#
# Each gRPC service can be turned on/off, and in some cases configured,
# individually. If the gRPC server is not enabled, all individual services'
# configurations are ignored.
#

# The gRPC version service provides version information about the node and the
# protocols it uses.
[grpc.version_service]
enabled = true
```

Do the same for the `block_service` and the `block_results_service` to enable them:

```
# The gRPC block service returns block information
[grpc.block_service]
enabled = true

# The gRPC block results service returns block results for a given height. If no height
# is given, it will return the block results from the latest height.
[grpc.block_results_service]
enabled = true
```

## Fetching **Block** data

To retrieve `block` data using the gRPC block service, first ensure the service is enabled as described in the section above.

Once the service has been enabled, the Go gRPC client provided by CometBFT can be used to retrieve data from the node.

This client code is a convenient option for retrieving data, as it allows requests to be sent and responses to be
handled in a more idiomatic manner. However, if necessary and desired, the protobuf client can also be used directly.

Here is example code to retrieve a block by its height:

```
import (
	"context"

	"github.com/cometbft/cometbft/rpc/grpc/client"
)

ctx := context.Background()

// Service Client
addr := "0.0.0.0:26090"
conn, err := client.New(ctx, addr, client.WithInsecure())
if err != nil {
	// Do something with the error
}

// `height` is the height of the block to retrieve
block, err := conn.GetBlockByHeight(ctx, height)
if err != nil {
	// Do something with the error
} else {
	// Do something with the `block`
}
```

## Fetching **Block Results** data

To fetch `block results`, you can use code similar to the previous example, invoking instead the method that retrieves
block results.

Here's an example:

```
blockResults, err := conn.GetBlockResults(ctx, height)
if err != nil {
	// Do something with the error
} else {
	// Do something with the `blockResults`
}
```

## Latest height streaming

The Block service also provides a new way to subscribe to a stream of new blocks. Previously, you could connect and
subscribe to new block events using websockets through the RPC service.

One of the advantages of the new streaming service is that you can subscribe to just the latest height.
This way, the gRPC endpoint does not have to transfer entire blocks to keep you updated. Instead, you can fetch the
blocks at your desired pace through the `GetBlockByHeight` method.

To receive the latest height from the stream, call the method that returns a receive-only channel and then
watch for messages that arrive on the channel. The message sent on the channel is a `LatestHeightResult` struct:

```
// LatestHeightResult type used in GetLatestResult and sent to the client
// via a channel
type LatestHeightResult struct {
	Height int64
	Error  error
}
```

Once you get a message, you can check the `Height` field for the latest height (assuming the `Error` field is nil).

Here's an example:

```
import (
	"context"

	"github.com/cometbft/cometbft/rpc/grpc/client"
)

ctx := context.Background()

// Service Client
addr := "0.0.0.0:26090"
conn, err := client.New(ctx, addr, client.WithInsecure())
if err != nil {
	// Do something with the error
}

stream, err := conn.GetLatestHeight(ctx)
if err != nil {
	// Do something with the error
}

for {
	select {
	case <-ctx.Done():
		return
	case latestHeight, ok := <-stream:
		if ok {
			if latestHeight.Error != nil {
				// Do something with the error
			} else {
				// Latest Height -> latestHeight.Height
			}
		} else {
			return
		}
	}
}
```

The ability to monitor new blocks is attractive because it opens up avenues for building dynamic pipelines and ingestion
services based on the producer-consumer pattern.

For instance, upon receiving a notification about a fresh block, you can trigger a method that retrieves the block data and
saves it in a database. The data companion can then set a retain height on the node, allowing the data that is now stored
externally to be pruned.
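
Below is a minimal sketch of this pattern in Go. It reuses only the client calls shown earlier in this document; `storeBlock` is a hypothetical placeholder for your own storage logic, and the exact type returned by `GetBlockByHeight` should be checked against the client package (it is treated opaquely here).

```
package main

import (
	"context"
	"log"

	"github.com/cometbft/cometbft/rpc/grpc/client"
)

// storeBlock is a hypothetical placeholder for your own storage logic,
// e.g. serializing the block and writing it to a database.
func storeBlock(height int64, block any) error {
	// ...
	return nil
}

func main() {
	ctx := context.Background()

	conn, err := client.New(ctx, "0.0.0.0:26090", client.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}

	stream, err := conn.GetLatestHeight(ctx)
	if err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case <-ctx.Done():
			return
		case latest, ok := <-stream:
			if !ok {
				return // stream closed
			}
			if latest.Error != nil {
				log.Println("stream error:", latest.Error)
				continue
			}
			// Fetch the full block at our own pace (producer), then hand it
			// to the storage step (consumer).
			block, err := conn.GetBlockByHeight(ctx, latest.Height)
			if err != nil {
				log.Println("fetch error:", err)
				continue
			}
			if err := storeBlock(latest.Height, block); err != nil {
				log.Println("store error:", err)
			}
		}
	}
}
```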

## Storing the fetched data

In the Data Companion workflow, the second step involves saving the data retrieved from the blockchain onto an external
storage medium, such as a database. This external storage is important because it allows the data to be accessed
and served by custom web services in a more efficient way.

When choosing a database, evaluate your specific needs, including data size, user access, and budget.
For example, the [RPC Companion](https://github.com/cometbft/rpc-companion) uses PostgreSQL as a starting point, but there
are many other options to consider. Choose a database that meets your needs and helps you achieve your objectives.

Before proceeding to the next step, it is crucial to verify that the data has been correctly stored in the external database.
Once you have confirmed that the data has been successfully stored externally, you can update the retain height
information on the node. This update allows the node to prune the information that is now stored externally.
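
As an illustration only, here is one way the storage step might look with PostgreSQL through Go's standard `database/sql` package. The driver, table name, columns, and serialization are hypothetical choices, not part of CometBFT; adapt them to your own schema.

```
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver (one possible choice)
)

func main() {
	ctx := context.Background()

	// Hypothetical connection string; use your own database settings.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost:5432/companion?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Hypothetical schema: one row per block, keyed by height.
	_, err = db.ExecContext(ctx, `
		CREATE TABLE IF NOT EXISTS blocks (
			height BIGINT PRIMARY KEY,
			data   BYTEA NOT NULL
		)`)
	if err != nil {
		log.Fatal(err)
	}

	// blockBytes would be the serialized block fetched via GetBlockByHeight
	// (for example, its protobuf encoding).
	var height int64 = 1
	blockBytes := []byte("...")

	_, err = db.ExecContext(ctx,
		`INSERT INTO blocks (height, data) VALUES ($1, $2)
		 ON CONFLICT (height) DO NOTHING`,
		height, blockBytes)
	if err != nil {
		log.Fatal(err)
	}
}
```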

## Pruning the node data

The third step in the Data Companion workflow is to use the newly introduced gRPC pruning service API to set retain
height values on the node. The pruning service allows the data companion to influence the pruning of blocks and state,
ABCI results (if enabled), block indexer data and transaction indexer data on the node.

For a comprehensive understanding of the pruning service, please see the
[Pruning service](./pruning.md) document.
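
As a rough sketch only: the pruning service is exposed on the privileged gRPC endpoint, and setting a retain height from Go might look like the snippet below. The import path, method name, and height type here are assumptions about the privileged client, not confirmed by this document; check the Pruning service document and the CometBFT source for the exact API.

```
package main

import (
	"context"
	"log"

	// Assumed import path for the privileged (pruning) client; verify against
	// the Pruning service document and your CometBFT version.
	"github.com/cometbft/cometbft/rpc/grpc/client/privileged"
)

func main() {
	ctx := context.Background()

	// The privileged services listen on their own address, configured under
	// [grpc.privileged] in config.toml (the address here is an example).
	conn, err := privileged.New(ctx, "0.0.0.0:26091", privileged.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}

	// Tell the node that everything below this height is stored externally
	// and may be pruned. Method name and height type are assumed.
	var retainHeight uint64 = 1000
	if err := conn.SetBlockRetainHeight(ctx, retainHeight); err != nil {
		log.Fatal(err)
	}
}
```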
@@ -0,0 +1,31 @@
---
order: 1
parent:
  title: Introduction
  order: 1
---

# Introduction

A proposal was made in
[ADR-101](https://github.com/cometbft/cometbft/blob/6f2590df767be4c1824f1cc4070a647c417e6e75/docs/architecture/adr-101-data-companion-pull-api.md)
to introduce new gRPC endpoints that can be used by an external application to fetch data from the node and to control
which data is pruned by the node.

The Data Companion pruning service allows users to keep only the necessary data on the node,
enabling more efficient storage management and improved performance of the node. With this new service, users have
greater control over the pruning mechanism and are therefore better able to optimize the node's storage.

The new pruning service allows granular control over what can be pruned, such as blocks and state, ABCI results (if enabled), block
indexer data and transaction indexer data.

By also using the new gRPC services, it is now possible to retrieve data from the node, such as `block` and `block results`,
in a more efficient way.

The [gRPC services](./grpc.md) document provides practical information and insights that will guide you through the
process of using these services to create a Data Companion service.

Note that this version of CometBFT (v0.38) already includes a gRPC service
(`rpc/grpc/api.go`) that is considered legacy code and will be removed in future
releases. If you need to use the legacy gRPC endpoints, make sure that they
have a different URL than the gRPC services described in this document.