CPU utilization problems with a large user base


We have been trialing Wekan at a large organisation for nearly a year, starting with Libreboard and then moving on to Wekan. Since the move to Wekan we’ve had major CPU utilization issues with the Node process that result in it failing.

Configuration is as follows (all servers running Red Hat 7.3):

  • 3 virtual servers (2x CPU, 4GB RAM), each hosting a single Wekan instance (currently v0.41) running on Node v4.8.4
  • 3 virtual servers (2x CPU, 4GB RAM), each hosting a MongoDB instance, configured in a ReplicaSet
  • 1 virtual server (2x CPU, 4GB RAM) running Nginx as a load balancer

Our usage figures are as follows:

  • Users: 2785
  • Boards: 3564
  • Lists: 10418
  • Cards: 21361

The problem we’re encountering is that as the number of concurrent users increases (anything above 50), CPU usage for the Node instances climbs to 90–100% until the process can no longer respond to requests from the web browser. At that point I either have to restart the Node process manually, or it fails and is restarted by systemd.
For reference, memory usage on the Node processes is low (<10%), and CPU usage on the primary MongoDB server is 20–30%.
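For context on the systemd restart behaviour, our unit looks roughly like this (a sketch only; the service name, paths, and environment values here are placeholders, not our actual deployment):

```ini
# /etc/systemd/system/wekan.service  (hypothetical unit; adjust paths/values)
[Unit]
Description=Wekan server
After=network.target

[Service]
ExecStart=/usr/bin/node /opt/wekan/main.js
Environment=MONGO_URL=mongodb://db1,db2,db3/wekan?replicaSet=rs0
Environment=ROOT_URL=https://wekan.example.org
Environment=PORT=8080
# Bring the process back automatically when it dies under load.
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```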

I’ve tried to debug this but so far haven’t found a root cause. My suspicion is that there’s a costly process attached to each connected client that ramps up CPU usage, but I don’t know enough about Meteor/Node/MongoDB to track it down. Can anyone suggest routes to investigate?
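In case it helps anyone digging into the same thing, a first step is confirming which instance is actually hot (a sketch; it assumes the processes show up under the name `node` in procps):

```shell
# List the top CPU consumers; with several Wekan instances behind the load
# balancer, the hottest Node process appears first (sorted by CPU, descending).
ps -eo pid,pcpu,pmem,etime,comm --sort=-pcpu | head -n 6
# From there, one route is restarting the hot instance under `node --prof`
# and post-processing the resulting isolate-*.log with `node --prof-process`
# to see where the ticks are going.
```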



I would guess the bottleneck is MongoDB. Using PostgreSQL with ToroDB could speed up at least the read traffic: ToroDB provides a server that lets you replace MongoDB with PostgreSQL (MySQL is also supported), so those options from ToroDB are worth a look.


I’ll be honest, I’m not convinced that MongoDB is the source of the problems. I’ve been looking at the usage stats this morning and not seeing anything out of the ordinary. You can see a brief output from mongostat here. A lot of queries, but barely anything in the queues.

I’m currently trying to set up a Kadira instance to help find the source of the problem. Will report back once I have more info.
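For anyone wanting to try the same, the Kadira agent (the meteorhacks:kadira package, bundled into the app) reads its credentials from the environment when the Node process starts; a minimal sketch with placeholder values:

```shell
# Placeholder credentials -- substitute the app ID/secret from the Kadira UI.
export KADIRA_APP_ID="replace-with-app-id"
export KADIRA_APP_SECRET="replace-with-app-secret"
# With these set, (re)starting the Node process makes the agent report CPU,
# method, and publication traces to the Kadira instance.
```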


Yes, profiling would be much more useful than my guess :slight_smile: Thanks!


I have the same issue.
Configuration is as follows (all servers running CentOS 7.5):

1 virtual server (8x CPU, 16GB RAM), hosting 5 Wekan instances (currently v0.50), a MongoDB instance, and Nginx as a load balancer, all running on Node v4.8.4

If a Wekan process serves more than 20 clients, its CPU usage exceeds 100% and client browsers become slow, while MongoDB sits at only about 50% CPU usage.

Any suggestions? Thanks.


Wekan v0.60, released today, has fixes for this; please test:


We’ve been using v0.60 since the 6th of December and so far performance actually seems worse. Processes need to be restarted more often and the app responds more slowly.

I’m going to look back through the intermediate versions between v0.50 and v0.60 to see if there was some other feature that may have had an impact.