I’m currently hosting more than 10 services, but only Nextcloud sends me errors periodically, and only Nextcloud is super extremely painfully slow. I quit this sh*t. No more troubleshooting and optimization.
There are mainly 4 services in Nextcloud I’m using:

- Files: as a simple server for uploading and downloading binaries
- Calendar (with DAVx5): as a sync server without a web UI
- Notes: simple note-taking
- Network folder: mounted in Dolphin on Linux
Could you recommend alternatives for these? All services are supposed to be exposed over HTTPS, so authentication (a login) is needed. I’ve tried note-taking apps like Joplin and Trilium but didn’t like them.
Thanks in advance.
If NextCloud is slow and throwing errors, it’s probably because the machine you’re running it on is low on RAM and/or CPU.
I bring this up because whatever replacement you try would likely have the same issues.
My NextCloud instance was nearly unusable when I had it on a Raspberry Pi 3, but when I moved it to a container on my faster machine (AMD Ryzen 7 4800U with 16GB of RAM) it now works flawlessly.
The backing database type and the storage it runs on are just as important too.
I agree with this. It needs a good amount of CPU cycles and RAM. A Raspberry Pi struggled for me too.
My NC instance runs on a 24GB RAM, 4-CPU Ampere A1 host (Oracle), and still struggles. YMMV.
And it struggles as a photo backup host on an i5-7xxx and 16GB RAM at home.
It’s not absurdly slow, it’s just…irritating sometimes.
Yeah, I’ve got this in my setup as well and it’s been pretty slow. I thought it was a network thing because I’m currently using T-Mobile home internet, but I’m switching to fiber with 500Mbps up and down soon. I’m really hoping that changes things.
There are performance tuning tweaks you can do on NextCloud like memory caching etc.
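Roughly, the big one is the memory-cache settings in config/config.php. Assuming php-apcu is installed and a Redis server is reachable (host/port here are placeholders), the sketch from the admin docs looks like:

```php
// excerpt for config/config.php — assumes APCu and a reachable Redis
'memcache.local' => '\OC\Memcache\APCu',        // fast per-process cache
'memcache.locking' => '\OC\Memcache\Redis',     // transactional file locking
'memcache.distributed' => '\OC\Memcache\Redis', // shared cache across workers
'redis' => [
  'host' => 'localhost', // or a unix socket path
  'port' => 6379,
],
```

Redis for the locking cache in particular tends to help with the “file is locked” errors people hit on bigger instances.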
Ooo Lovely! I’ll look into that!
What DB are you using?
Postgres.
Also using redis, did all the typical perf checks listed on NC site etc.
Experiencing the same; a good CPU and lots of RAM would resolve the issue.
Even if you ran a basic SQLite Nextcloud, if properly optimized, it could deal with millions of files like it’s nothing. And that is the issue: the bugs and the lacking optimization…
4650G + 64GB RAM + MySQL, and it was constantly file locking on just a 21k-file, 10GB folder.
I have written apps (in Go) that do similar work and process data 100 times faster than Nextcloud. Hell, my scrapers are faster than Nextcloud on a local network, and they’re dealing with external data over the internet.
It’s BADLY designed software that puts the blame on the consumer to get bigger and better hardware, for what is, essentially, early-2000s functionality.
> MySQL and it was file locking on just a 21k 10GB folder constantly
It’ll definitely do that if you keep your database on a network share with spinning disks.
Spin up a container with sqlite in a ram disk and point it to your same data location. Most of the problems go away.
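As a rough sketch of that idea (image tag, paths and credentials are placeholders; it relies on the official image’s SQLITE_DATABASE switch): run a throwaway instance whose app directory, including the SQLite DB, sits on a tmpfs, with your existing files bind-mounted in.

```yaml
# hypothetical throwaway test instance: app dir + SQLite DB live in RAM
services:
  nextcloud-test:
    image: nextcloud:27            # assumed image/tag
    environment:
      SQLITE_DATABASE: nextcloud   # official image switch for SQLite
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: changeme
    volumes:
      - type: tmpfs
        target: /var/www/html      # app + owncloud.db kept in a RAM disk
      - /path/to/your/files:/mnt/files:ro  # same data location, read-only
    ports:
      - "8080:80"
```

Everything in the tmpfs is gone on container removal, so this is only for testing whether the DB location is your bottleneck, not for real use.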
> It’ll definitely do that if you keep your database on a network share with spinning disks.
The database and Nextcloud were on a 4TB NVMe drive, in MySQL with plenty of cache/memory assigned to it. Not my first rodeo…
I’m running it on an SSD as a VM on a 10-year-old laptop and have had very few issues, compared to running on Raspis in the past. It’s not my first rodeo either, and I found that Debian with the NextCloudPi setup script worked best, then restoring from backup. The WebUI is performing great, as are bookmarks, contacts, calendar, video chats and most things I’ve thrown at it. NVMe may be overkill, but the combination of solid CPU, RAM and disk IO should alleviate any problems. My hunch is there are other resource constraints or bottlenecks at play, perhaps DDoS or other attacks (I’ve experienced that for sure, and you can test by dropping your firewall ingress rules to confirm).
Also, this is FOSS, and I find the features and usability are better than anything else out there, especially with Let’s Encrypt.
Same and looking forward to the responses here. Nextcloud is too big and complicated. I deployed Immich to cover for the photo library. Still looking for a good solution for notes though.
Sorry to hear you’ve had a bad experience. I’ve been running the lsio Nextcloud docker container for 4 years without any issues at all.
- Syncthing for files.
- Proton calendar (so not self hosted)
- Joplin, using file based sync with aforementioned syncthing. I saw you didn’t like it though.
- I occasionally use scp
For calendaring, I also went with the option of syncthing via DecSync. I can get my contacts and calendar on Android and Thunderbird, so I can avoid yet another unnecessary webapp.
This does look cool! But I notice that there’s really only one contributor (technically two, but the second only did one tiny commit) and they haven’t contributed any code in over a year. I don’t want to invest too much time migrating to a stale if not dead project.
Honestly, I think that the lack of commits is more due to the application being feature complete than “dead”. I’ve been using it for at least 3 years now and it works quite well.
That’s a fine point! You talked me in to checking it out. Thanks for the recommendation!
What exactly have you tried to do to address your Nextcloud problems?
I have my issues with Nextcloud, but it’s still, by far, the best solution I’ve come across.
I was in the same boat when I was running NC in a container. I switched to a VM, and most of my issues have been resolved, except Collabora. I am currently using the built-in Collabora server, which is slow.
This is concerning to me because I’ve been considering ditching Synology and spinning up nextcloud. I like Synology drive but I’m tired of the underpowered hardware and dumb roadblocks and vendor lock-in nonsense. I’m very curious what you end up doing!
Not OP, but I run it in Docker with Postgres and Redis, behind a reverse proxy. All apps on NC have pretty good performance and I haven’t had any weird issues. It’s on an old Xeon with 32GB, on spinning rust.
Do you have Redis talking to Nextcloud over a unix socket or just regular TCP? The former is apparently another way to speed up Nextcloud, but I’m struggling to figure out how to get containers using the unix socket instead.
I have both Postgres and Redis talking to Nextcloud through their respective unix sockets; I store the sockets in a named volume, so I can mount it on whatever containers need to reach them.
Do you mind sharing your docker config so I can try to replicate it? Thank you.
Sure:
POSTGRES
```yaml
---
version: '3.8'
services:
  postgres:
    container_name: postgres
    image: postgres:14-alpine
    environment:
      POSTGRES_PASSWORD: "XXXXXXXXXXXXXXXX"
      PGDATA: "/var/lib/postgresql/data/pgdata"
    volumes:
      - type: bind
        source: ./data
        target: /var/lib/postgresql/data
      - type: volume
        source: postgres-socket
        target: /run/postgresql
    logging:
      driver: json-file
      options:
        max-size: 2m
    restart: unless-stopped

networks:
  default:
    external:
      name: backend

volumes:
  postgres-socket:
    name: postgres-socket
```
REDIS
```yaml
---
version: '3.8'
services:
  redis:
    image: redis:7.2-alpine
    command:
      - /data/redis.conf
      - --loglevel
      - verbose
    volumes:
      - type: bind
        source: ./data
        target: /data
      - type: volume
        source: redis-socket
        target: /var/run
    logging:
      driver: json-file
      options:
        max-size: 2m
    restart: unless-stopped

networks:
  default:
    external:
      name: backend

volumes:
  redis-socket:
    name: redis-socket
```
Here’s redis.conf, it took me a couple of tries to get it just right:
```
# create a unix domain socket to listen on
unixsocket /var/run/redis/redis.sock
unixsocketperm 666
# protected-mode no
requirepass rrrrrrrrrrrrr
bind 0.0.0.0
port 6379
tcp-keepalive 300
daemonize no
stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum yes
# maximum memory allowed for redis
maxmemory 50M
# how redis will evict old objects - least recently used
maxmemory-policy allkeys-lru
# logging
# levels: debug verbose notice warning
loglevel notice
logfile ""
always-show-logo yes
```
NEXTCLOUD
```yaml
---
version: '3.8'
services:
  nextcloud:
    image: nextcloud:27-fpm
    env_file:
      - data/environment.txt
    volumes:
      - type: bind
        source: ./data/html
        target: /var/www/html
      - type: volume
        source: redis-socket
        target: /redis
      - type: volume
        source: postgres-socket
        target: /postgres
      - type: tmpfs
        target: /tmp:exec
      - type: bind
        source: ./data/zz-docker.conf
        target: /usr/local/etc/php-fpm.d/zz-docker.conf
      - type: bind
        source: ./data/opcache_cli.conf
        target: /usr/local/etc/php/conf.d/opcache_cli.conf
    networks:
      - web
      - backend
    logging:
      driver: json-file
      options:
        max-size: 2m
    restart: unless-stopped

  crond:
    image: nextcloud:27-fpm
    entrypoint: /cron.sh
    env_file:
      - data/environment.txt
    volumes:
      - type: bind
        source: ./data/html
        target: /var/www/html
      - type: bind
        source: ./data/zz-docker.conf
        target: /usr/local/etc/php-fpm.d/zz-docker.conf
      - type: volume
        source: redis-socket
        target: /redis
      - type: volume
        source: postgres-socket
        target: /postgres
      - type: tmpfs
        target: /tmp:exec
    networks:
      - web
      - backend
    logging:
      driver: json-file
      options:
        max-size: 2m
    restart: unless-stopped

  collabora:
    image: collabora/code:23.05.5.4.1
    privileged: true
    environment:
      extra_params: "--o:ssl.enable=false --o:ssl.termination=true"
      aliasgroup1: 'https://my.nextcloud.domain.org:443'
    cap_add:
      - MKNOD
    networks:
      - web
    logging:
      driver: json-file
      options:
        max-size: 2m
    restart: unless-stopped

networks:
  backend:
    external:
      name: backend
  web:
    external:
      name: web

volumes:
  redis-socket:
    name: redis-socket
  postgres-socket:
    name: postgres-socket
```
The environment.txt file is hostnames, logins, passwords, etc…
```
POSTGRES_DB=nextcloud
POSTGRES_USER=xxxxxxx
POSTGRES_PASSWORD=yyyyyyyyyyyyyyyyyyy
POSTGRES_SERVER=postgres
POSTGRES_HOST=/postgres/.s.PGSQL.5432
NEXTCLOUD_ADMIN_USER=aaaaa
NEXTCLOUD_ADMIN_PASSWORD=hhhhhhhhhhhhhhhhhhh
REDIS_HOST=redis
REDIS_HOST_PORT=6379
REDIS_HOST_PASSWORD=rrrrrrrrrrrrr
```
The zz-docker.conf file sets some process tuning and log format, some might not even be necessary:
```ini
[global]
daemonize = no
error_log = /proc/self/fd/2
log_limit = 8192

[www]
access.log = /proc/self/fd/2
access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{mili}d %{kilo}M %C%%"
catch_workers_output = yes
decorate_workers_output = no
clear_env = no
user = www-data
group = www-data
listen = 9000
listen = /var/www/html/.fpm-sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0666
listen.backlog = 512
pm = dynamic
pm.max_children = 16
pm.start_servers = 6
pm.min_spare_servers = 4
pm.max_spare_servers = 6
pm.process_idle_timeout = 30s
pm.max_requests = 512
```
The opcache_cli.conf file has a single line:
```ini
opcache.enable_cli=1
```
I don’t remember why it’s there but it’s working so I’m not touching it :-D
Good luck :-)
I dumped synology and just use proxmox for the automatic ZFS support, then I can run my apps in either containers or VMs and even do GPU passthrough if needed.
A confirmed yet still unresolved bug caused me and about 200 other people to lose data (metadata) for tons of files. Well, at least 200 reacted to the GitHub bug report I filed. I think you can easily find it because it’s the most upvoted yet unresolved issue.
Besides this, it’d often give random errors and just not function properly. My favorites are the unexplained file locks: my brother in Christ, what do you mean, “error while deleting a file”? It’s 2023, holy shit, just delete the damn file. It’s ridiculously unreliable and fragile. They have thousands of bug reports open, yet they focus on pushing new, unwanted social features to become the new Facebook and Zoom. They should definitely focus on fixing the foundation first.
Do you have a link to that bugreport?
Thanks!
Nextcloud is great. I don’t doubt that OP is having problems, and I understand how frustration can set in and one might throw in the towel and look for alternatives, but OP’s experience is atypical. I’ve been running it for years without any issues. I should point out that I only use it for small-scale personal stuff, but it’s good for me. I have it syncing on eight devices, including Linux, MacOS, and Windows desktops; Android phone; iPad; Raspberry Pi. My phone auto-uploads new camera photos. I’m using WebDAV/Fuse mounts on some machines. Everything is solid.
Also not OP. I run Nextcloud on a 10th-gen i3 on spinning rust and performance is good. I run it in an LXC container though, so without Docker.
How did you spin it up in an LXC container? I can’t find any install tutorials or files for that. Do you have a link or something for me?
I created an LXC container and then just installed apache2, PHP and MariaDB by hand with apt, then installed Nextcloud from source.
You can try this tutorial as it’s very close to what I did: https://docs.nextcloud.com/server/latest/admin_manual/installation/example_ubuntu.html
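For reference, the rough shape of that manual install on Debian/Ubuntu looks something like this (the PHP module list is approximate and varies by release; the tarball URL is Nextcloud’s official “latest” release):

```shell
# install the web server, PHP and database by hand
apt install apache2 mariadb-server libapache2-mod-php \
    php-gd php-curl php-zip php-xml php-mbstring php-mysql php-intl php-bcmath php-gmp

# fetch and unpack Nextcloud from the official tarball
wget https://download.nextcloud.com/server/releases/latest.tar.bz2
tar -xjf latest.tar.bz2 -C /var/www/
chown -R www-data:www-data /var/www/nextcloud
```

After that it’s the usual Apache vhost plus the web installer (or `occ maintenance:install`), as the linked tutorial walks through.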
I moved Nextcloud from k8s to a well-provisioned LXC container and ran a couple of performance-boosting commands, and it’s been working wonders since then.
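The usual “performance boosting commands” are the occ maintenance ones from the admin manual; roughly, assuming a standard install under /var/www/nextcloud running as www-data:

```shell
# add the DB indices Nextcloud reports as missing (big speedup on large instances)
sudo -u www-data php /var/www/nextcloud/occ db:add-missing-indices

# convert remaining filecache columns to bigint (clears the admin warning)
sudo -u www-data php /var/www/nextcloud/occ db:convert-filecache-bigint

# switch background jobs from AJAX to cron so they don't block page loads
sudo -u www-data php /var/www/nextcloud/occ background:cron
```

The last one assumes you’ve also set up a system cron entry calling cron.php every 5 minutes.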
I use linuxserver.io’s Nextcloud docker image. While I’ve seen people struggle to set up Nextcloud properly to the point of just giving up and installing the snap version of it, I can count the number of times I’ve needed to do manual interventions with LSIO’s Nextcloud image. It works like a charm.
Second this. Running it on Portainer with their images. An absolute breeze with 8GB RAM and a 2TB SSD.
I just installed it baremetal, works like a charm.
LSIO is amazing, my first stop for container browsing! Followed in second place by hotio.dev.
I don’t know of any decent calendar sync. DecSync seems pretty abandoned.
I’ve been using it for syncing with Android and my Linux machines for 3 years already.
IMO Joplin is a better option for note-taking.
Went from Nextcloud to FileBrowser for web file access, with Resilio/Syncthing under the hood for synchronisation. My family couldn’t be happier, but yeah, we are not using calendar features.
I had nothing but issues when I ran the docker app on Unraid and used the built-in database. I switched over to CasaOS with MariaDB and the Nextcloud container, and it has been solid.
I use Pydio as a cloud drive. I think you could give that a try.