feat: grafloki start/stop by tedim52 · Pull Request #2696 · kurtosis-tech/kurtosis · GitHub

feat: grafloki start/stop #2696


Merged

Merged 49 commits from tedi/grafloki into main on Apr 9, 2025

Conversation

@tedim52 (Collaborator) commented on Apr 3, 2025

Description

Implements `kurtosis grafloki start` and `kurtosis grafloki stop`. These start and stop Grafana and Loki instances on Docker or Kubernetes, depending on the cluster type. The engine is then restarted and configured to send enclave logs to those instances.
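As a rough illustration of the command surface only (the actual Kurtosis CLI wiring differs; the use of cobra, the help strings, and the placeholder bodies here are assumptions), the two subcommands might be registered like this:

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	// Hypothetical sketch; not Kurtosis's actual command wiring.
	graflokiCmd := &cobra.Command{
		Use:   "grafloki",
		Short: "Manage a Grafana + Loki stack for enclave logs",
	}

	startCmd := &cobra.Command{
		Use:   "start",
		Short: "Start Grafana and Loki and point the engine's logs at them",
		RunE: func(cmd *cobra.Command, args []string) error {
			// Placeholder: the real command creates the containers (Docker)
			// or deployments (Kubernetes) and restarts the engine.
			fmt.Println("starting grafana + loki...")
			return nil
		},
	}

	stopCmd := &cobra.Command{
		Use:   "stop",
		Short: "Tear the Grafana and Loki instances back down",
		RunE: func(cmd *cobra.Command, args []string) error {
			fmt.Println("stopping grafana + loki...")
			return nil
		},
	}

	graflokiCmd.AddCommand(startCmd, stopCmd)

	rootCmd := &cobra.Command{Use: "kurtosis"}
	rootCmd.AddCommand(graflokiCmd)
	_ = rootCmd.Execute()
}
```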

Is this change user facing?

YES

References

#2276

@tedim52 tedim52 changed the title from feat: grafloki start/top to feat: grafloki start/stop on Apr 3, 2025
@tedim52 tedim52 added this pull request to the merge queue Apr 9, 2025
Merged via the queue into main with commit 53d823a Apr 9, 2025
52 checks passed
@tedim52 tedim52 deleted the tedi/grafloki branch April 9, 2025 00:30
@odyslam commented on Apr 9, 2025

Hey everyone, I tried building from source on latest main and running grafloki, but Grafana just keeps restarting (full error log at the end of this comment).

It built successfully using the top-level build script, and the CLI output showed it was using the correct (dirty) tag of the kurtosis engine.

Also, a note: since Grafana is usually used together with Prometheus (e.g. ethpandas/optimism) to gather metrics from various services, it would be nice to be able to configure the grafloki Grafana to connect to any Prometheus service running as part of an enclave (or perhaps move Prometheus outside the enclave altogether, so that observability is handled by Kurtosis). That way we wouldn't end up with two Grafana instances running: one gathering metrics and one from grafloki.
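To make that concrete: Grafana provisions datasources from YAML files under /etc/grafana/provisioning/datasources/ (the same path seen in the error log below), so a single provisioning file could declare both the Loki datasource and an existing Prometheus endpoint. A minimal sketch in Go, the repo's language; the datasource names, service hostnames, ports, and file name are assumptions, not what this PR ships:

```go
package main

import "os"

// Hypothetical provisioning file declaring both a Loki and a Prometheus
// datasource so a single Grafana instance serves logs and metrics. The
// names, hostnames, and ports below are placeholders for this sketch.
const datasourcesYAML = `apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
`

func main() {
	// This file would be bind-mounted into the Grafana container under
	// /etc/grafana/provisioning/datasources/.
	if err := os.WriteFile("datasources.yaml", []byte(datasourcesYAML), 0o644); err != nil {
		panic(err)
	}
}
```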

An additional, simpler idea: while fluentd is the default log driver, you could also let the user set a custom driver (at least for Docker), so that one can use journald, for example.
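At the Docker level this is just the container's log config. A hedged sketch using the Docker Go SDK, with a placeholder image and container name (an illustration of the setting, not how Kurtosis wires up its containers):

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Instead of the default fluentd driver, ask Docker to ship this
	// container's logs to journald. Image and name are placeholders.
	resp, err := cli.ContainerCreate(ctx,
		&container.Config{
			Image: "alpine:3.19",
			Cmd:   []string{"echo", "hello from a journald-logged container"},
		},
		&container.HostConfig{
			LogConfig: container.LogConfig{Type: "journald"},
		},
		nil, // networking config
		nil, // platform
		"journald-logging-example",
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("created container:", resp.ID)
}
```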

logger=provisioning t=2025-04-09T11:22:46.661951039Z level=error msg="Failed to provision data sources" error="Datasource provisioning error: open /etc/grafana/provisioning/datasources/loki.yaml: permission denied"
logger=provisioning t=2025-04-09T11:22:46.661988409Z level=error msg="Failed to provision data sources" error="Datasource provisioning error: open /etc/grafana/provisioning/datasources/loki.yaml: permission denied"
logger=server t=2025-04-09T11:22:46.662023279Z level=error msg="Stopped background service" service=*provisioning.ProvisioningServiceImpl reason="Datasource provisioning error: open /etc/grafana/provisioning/datasources/loki.yaml: permission denied"
logger=grafana.update.checker t=2025-04-09T11:22:46.662105969Z level=error msg="Update check failed" error="failed to get stable version from grafana.com: Get \"https://grafana.com/api/grafana/versions/stable\": *provisioning.ProvisioningServiceImpl run error: Datasource provisioning error: open /etc/grafana/provisioning/datasources/loki.yaml: permission denied" duration=129.22µs
logger=infra.usagestats t=2025-04-09T11:22:46.662149039Z level=error msg="Failed to get last sent time" error="context canceled"
logger=serviceaccounts t=2025-04-09T11:22:46.662192079Z level=warn msg="Failed to get usage metrics" error="context canceled"
logger=auth t=2025-04-09T11:22:46.66264372Z level=error msg="Failed to lock and execute cleanup of expired auth token" error="context canceled"
logger=http.server t=2025-04-09T11:22:46.665312086Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
logger=serviceaccounts t=2025-04-09T11:22:46.677729954Z level=error msg="Failed to lock and execute the migration of API keys to service accounts" error="context canceled"
logger=plugin.signature.key_retriever t=2025-04-09T11:22:46.679031477Z level=error msg="Error downloading plugin manifest keys" error="kv get: context canceled"
Error: ✗ *provisioning.ProvisioningServiceImpl run error: Datasource provisioning error: open /etc/grafana/provisioning/datasources/loki.yaml: permission denied

github-merge-queue bot pushed a commit that referenced this pull request Apr 15, 2025
## Description
A user reported that the Grafana container did not have access to the bind-mounted datasource YAML inside the container. This was due to overly restrictive permissions on the original tmp file created by the CLI, which were inherited by the bind-mounted file. To get around this, this change runs the Grafana container as the root user, which fixed their issue.

## Is this change user facing?
YES

### References

#2696 (comment)
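For context on the mechanism behind that fix (an illustration, not the actual Kurtosis code): Go's os.CreateTemp creates files with mode 0600, so when such a file is bind-mounted into a container that runs as a non-root user (as the official Grafana image does by default), reads fail with "permission denied". Widening the file mode before mounting is one alternative to running the container as root:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// os.CreateTemp creates the file with mode 0600: readable only by the
	// user who ran the CLI. Bind-mounting it into a container that runs as
	// a different, non-root user then yields "permission denied".
	f, err := os.CreateTemp("", "loki-datasource-*.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	if _, err := f.WriteString("apiVersion: 1\n"); err != nil {
		panic(err)
	}

	info, _ := os.Stat(f.Name())
	fmt.Printf("before: %s %v\n", f.Name(), info.Mode().Perm()) // -rw-------

	// One alternative to running the container as root: make the file
	// world-readable before bind-mounting it.
	if err := os.Chmod(f.Name(), 0o644); err != nil {
		panic(err)
	}
	info, _ = os.Stat(f.Name())
	fmt.Printf("after:  %s %v\n", f.Name(), info.Mode().Perm()) // -rw-r--r--
}
```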
github-merge-queue bot pushed a commit that referenced this pull request Apr 18, 2025
🤖 I have created a release *beep* *boop*
---


## [1.7.0](1.6.0...1.7.0) (2025-04-17)


### Features

* add grafloki to kurtosis config ([#2707](#2707)) ([b6794e1](b6794e1))
* export service logs (Kubernetes) ([#2693](#2693)) ([c00f357](c00f357))
* grafloki start/stop ([#2696](#2696)) ([53d823a](53d823a))
* service update ([#2689](#2689)) ([4cda391](4cda391))


### Bug Fixes

* **backend/kubernetes:** use default KUBECONFIG resolution ([#2672](#2672)) ([379e49b](379e49b))
* docker auth wasn't being used by engine and API ([#2699](#2699)) ([4eea787](4eea787))
* give grafana root access ([#2706](#2706)) ([b8e2fab](b8e2fab))
* remove k8s ingress on stop user services ([#2715](#2715)) ([e31c0d4](e31c0d4))
* return empty deployment if not found ([#2716](#2716)) ([451c70e](451c70e))

---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).