Openshift/Kubernetes deployment: 401 error with lb running roundrobin on attachments #5120


Open
Dexus opened this issue Sep 5, 2023 · 12 comments · Fixed by #5136
Comments

@Dexus
Contributor
Dexus commented Sep 5, 2023

Issue

I found out that load balancing is not a good idea here, because you need to use sticky sessions for all requests.
How are the sessions managed? Is there a way to distribute them evenly between all nodes/pods, or am I missing a setting?

Server Setup Information

  • Did you test in newest Wekan?: 7.0.9
  • Did you configure root-url correctly so Wekan cards open correctly (see https://github.com/wekan/wekan/wiki/Settings)?
  • Operating System:
  • Deployment Method (Snap/Docker/Sandstorm/bundle/source): docker/deploymentconfig
  • Http frontend if any (Caddy, Nginx, Apache, see config examples from Wekan GitHub wiki first):
  • Node.js Version:
  • MongoDB Version:
  • What webbrowser version are you using (Wekan should work on all modern browsers that support Javascript)?

Problem description

As long as I use roundrobin or random distribution to the backends, I get 401 errors. But once I change it to sticky sessions, it works out of the box without 401 errors.

Reproduction Steps

Deploy Wekan to OpenShift/Kubernetes and create a service with a load balancer using roundrobin/random distribution. Requests to view attachments/images will fail with 401; change it to a sticky mode and you will see them all.
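
To make the reproduction concrete, here is a hypothetical helper script (not part of Wekan; the attachment URL and cookie are placeholders you supply) that fetches one attachment URL repeatedly and counts the status codes:

```ts
// Hypothetical reproduction helper, not part of Wekan. Requires Node.js 18+ for global fetch.
// Usage: npx ts-node repro.ts <attachment-url> <session-cookie-from-a-logged-in-browser>
const url = process.argv[2] ?? "https://wekan.example.com/attachment/<some-attachment-id>"; // placeholder
const cookie = process.argv[3] ?? ""; // placeholder

async function main(): Promise<void> {
  const counts: Record<number, number> = {};
  for (let i = 0; i < 20; i++) {
    // Each request may be routed to a different pod by a round-robin Service.
    const res = await fetch(url, { headers: { cookie }, redirect: "manual" });
    counts[res.status] = (counts[res.status] ?? 0) + 1;
  }
  // Example outcome: { "200": 12, "401": 8 } with round-robin, { "200": 20 } with sticky sessions.
  console.log(counts);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```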

Logs

@xet7
Member
xet7 commented Sep 5, 2023

@Dexus

4.0.9 ? Do you mean 7.0.9 ?

@xet7
Member
xet7 commented Sep 5, 2023

@Dexus

Does fork mode help?

andruschka/pm2-meteor#6 (comment)

@xet7
Member
xet7 commented Sep 6, 2023

@Dexus

What full-stack web frameworks work correctly with roundrobin at Openshift/Kubernetes?

Or would it be better to move attachments from MongoDB GridFS to Minio, and use Minio NPM driver to show attachments from Minio? There is some progress with Minio, but it is not fully complete yet, some beginnings of migration tools are at https://github.com/wekan/minio-metadata
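
For reference, a minimal sketch of what serving an attachment through the Minio NPM driver could look like (endpoint, credentials, and bucket/object names are placeholders; this is not the actual WeKan integration):

```ts
// Minimal sketch using the official "minio" npm client; all names below are placeholders.
import * as Minio from "minio";

const client = new Minio.Client({
  endPoint: "minio.example.com",
  port: 9000,
  useSSL: true,
  accessKey: "ACCESS_KEY",
  secretKey: "SECRET_KEY",
});

// Stream an attachment to an HTTP response. Because the object lives in Minio,
// any pod can serve it; no pod-local state is involved.
function serveAttachment(objectName: string, res: NodeJS.WritableStream): void {
  client.getObject("wekan-attachments", objectName, (err, dataStream) => {
    if (err) {
      console.error(err);
      return;
    }
    dataStream.pipe(res);
  });
}
```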

@Dexus
Contributor Author
Dexus commented Sep 6, 2023

@xet7

Does fork mode help?

andruschka/pm2-meteor#6 (comment)

I'm not sure how I should use this. But I will tell you how we use it.
We use the OpenShift template, but without the MongoDB setup, and create 5 pods of Wekan. We don't use the PM2 Wekan tools. I would expect that the sessions are stored in MongoDB, so users can freely switch between the Wekan servers over the load balancer provided by OpenShift.

What full-stack web frameworks work correctly with roundrobin at Openshift/Kubernetes?

Presumably any that can share their session information with the other instances. That's what big online shops and the like do.

Or would it be better to move attachments from MongoDB GridFS to Minio, and use Minio NPM driver to show attachments from Minio? There is some progress with Minio, but it is not fully complete yet, some beginnings of migration tools are at https://github.com/wekan/minio-metadata

Minio is currently not an option. So for now the only way is GridFS or file storage, but GridFS is preferred here at the moment.

@xet7
Member
xet7 commented Sep 6, 2023

@filipenevola

Is it possible with Meteor to not use sticky sessions, and instead share sessions between all instances via the database?

@filipenevola

No @xet7, today Meteor doesn't store the data of a session in the db; everything is kept in memory on the server. That is why you should always connect to the same container in order to keep state without doing a lot of extra work all the time.
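
To illustrate the failure mode (a generic sketch with invented names, not Meteor internals): each pod keeps its own in-memory session map, so a token created on the pod that handled the login is unknown to every other pod, and the next request answered by another pod gets a 401.

```ts
// Generic illustration, not Meteor code: pod-local, in-memory session state.
// Every pod process has its own, independent copy of this map.
const sessions = new Map<string, { userId: string }>();

// Runs on the pod that handled the login.
function login(userId: string): string {
  const token = Math.random().toString(36).slice(2); // toy token, for illustration only
  sessions.set(token, { userId });
  return token;
}

// Runs on whichever pod the load balancer picks for the next request.
function authorize(token: string): number {
  // On the login pod this succeeds; on any other pod the map is empty, hence 401.
  return sessions.has(token) ? 200 : 401;
}
```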

@Dexus
Contributor Author
Dexus commented Sep 8, 2023

That is really a pity. As soon as a container is overloaded because too many users are routed to the same container, there are only 504 errors and timeouts. That is at least the current state with ~30 users all accessing at the same time. And HA can't be built this way either. What are the plans for this in the future?

@xet7
Member
xet7 commented Sep 8, 2023

@Dexus

Can you test whether round robin works at https://kanboard.org ? What if I added WeKan features to Kanboard?

@xet7
Member
xet7 commented Sep 8, 2023

@Dexus

Please list all software that works well with HA, so that I can look into whether there is something that would help with future versions of WeKan.

@xet7
Member
xet7 commented Sep 8, 2023

@Dexus

I think that, in theory, I could make saving sessions to the database work this way:

  • I would separate the WeKan code into components, using each part like a separate CGI script with Node.js
  • In the login form, after running some auth like password/OAuth2/LDAP etc., save the session to the database, which is shared by all HA instances (see the sketch after this list)
  • Coordinate the work of each instance through the same database, e.g. scheduled tasks etc.
  • I presume I'll need to figure out how to set up Kubernetes, LDAP etc. to run some tests.
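
A rough sketch of the second bullet, assuming a hypothetical `sessions` collection (this is not existing WeKan code): the instance that handles the login writes a token into MongoDB, and any other instance can validate it, so the load balancer may pick any pod.

```ts
// Hypothetical sketch of DB-backed sessions for HA; collection and field names are invented.
import { randomBytes } from "crypto";
import { MongoClient, Collection } from "mongodb";

interface SessionDoc {
  token: string;
  userId: string;
  createdAt: Date;
}

async function getSessions(): Promise<Collection<SessionDoc>> {
  const client = await MongoClient.connect(process.env.MONGO_URL ?? "mongodb://localhost/wekan");
  return client.db().collection<SessionDoc>("sessions");
}

// Called after password/OAuth2/LDAP auth succeeds, on whichever instance handled the login.
async function createSession(sessions: Collection<SessionDoc>, userId: string): Promise<string> {
  const token = randomBytes(32).toString("hex");
  await sessions.insertOne({ token, userId, createdAt: new Date() });
  return token; // returned to the browser, e.g. in a cookie
}

// Any other instance can validate the same token against the shared database.
async function validateSession(sessions: Collection<SessionDoc>, token: string): Promise<string | null> {
  const doc = await sessions.findOne({ token });
  return doc ? doc.userId : null; // null -> respond with 401
}
```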

@xet7
Member
xet7 commented Sep 8, 2023

I will also look what frameworks have some HA related features.

Dexus added a commit to Dexus/wekan that referenced this issue Sep 18, 2023
This would help, but not fix wekan#5120. It now supports running multiple pods for scaling, without the issue that the session is not known on the backend pod.
@Dexus
Contributor Author
Dexus commented Sep 19, 2023

@xet7 this is not fixed with #5136.

@xet7 xet7 reopened this Sep 19, 2023