[Bug]: Extremely slow trashbin restore #52268

Open
mrAceT opened this issue Apr 18, 2025 · 1 comment
Labels
0. Needs triage Pending check for reproducibility or if it fits our roadmap 30-feedback bug feature: trashbin performance 🚀

Comments


mrAceT commented Apr 18, 2025

⚠️ This issue respects the following points: ⚠️

Bug description

While synchronizing my Windows share I made an error in the exclude list.
That caused the Linux server to become slow, which is what made me realize the sync was faulty. By then some 40,000 files had already been deleted!

I went to the web interface to start restoring them, and discovered that this is also an extremely server-intensive activity.

Steps to reproduce

  1. Delete many thousands of files on your Windows share
  2. Restore them via the web interface
  3. Watch the running queries with "show full processlist;" (see the sketch after this list)
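
A minimal sketch for step 3 (standard MariaDB/MySQL, nothing Nextcloud-specific, the 5-second threshold is just an example): instead of scanning the full processlist output you can filter only the long-running queries from information_schema:

-- Sketch: show only queries that have been running longer than 5 seconds.
SELECT id, user, time, state, info
FROM information_schema.processlist
WHERE command <> 'Sleep' AND time > 5
ORDER BY time DESC;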

Expected behavior

Since I use AWS S3 as primary storage, I expected the restore to be a simple MySQL update of the folder entry.
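
Purely as an illustration of that expectation (this is NOT how the files_trashbin app actually implements a restore, and it assumes the default oc_ table prefix and that path_hash is the MD5 of path): with S3 as primary storage the object itself would not need to move, so conceptually a restore could reduce to updating the file's row in oc_filecache, roughly like this. Never run such a statement against a production database.

-- Conceptual sketch only; all ids, paths and names below are hypothetical.
UPDATE oc_filecache
SET parent    = 12345,                          -- fileid of the target folder
    path      = 'files/Documents/report.docx',  -- restored path
    path_hash = MD5('files/Documents/report.docx'),
    name      = 'report.docx'
WHERE fileid = 67890;                           -- fileid of the trashed file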

Nextcloud Server version

30

Operating system

RHEL/CentOS

PHP engine version

PHP 8.1

Web server

Apache (supported)

Database engine version

MariaDB

Is this bug present after an update or on a fresh install?

None

Are you using the Nextcloud Server Encryption module?

Encryption is Disabled

What user-backends are you using?

  • Default user-backend (database)
  • LDAP/ Active Directory
  • SSO - SAML
  • Other

Configuration report

-

List of activated Apps

-

Nextcloud Signing status

-

Nextcloud Logs

-

Additional info

With the command "show full processlist;" I saw several instances of the following query, each running for a multitude of seconds.

SELECT filecache.fileid, storage, path, path_hash, filecache.parent, filecache.name, mimetype, mimepart, size, mtime
, storage_mtime, encrypted, etag, filecache.permissions, checksum, unencrypted_size, metadata_etag, creation_time, upload_time
, meta.json AS meta_json, meta.sync_token AS meta_sync_token FROM oc_filecache filecache
LEFT JOIN oc_filecache_extended fe ON filecache.fileid = fe.fileid
LEFT JOIN oc_files_metadata meta ON filecache.fileid = meta.file_id
WHERE (filecache.parent = 88189) AND (storage = 2) ORDER BY name ASC
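
To see why this query takes seconds, one can check the query plan and the available indexes. These are standard MariaDB statements (assuming the default oc_ table prefix); the index names in the output are whatever your installation actually has:

-- Check which index (if any) the slow part of the query can use.
EXPLAIN SELECT fileid, path, name
FROM oc_filecache
WHERE parent = 88189 AND storage = 2
ORDER BY name ASC;

-- List the indexes that exist on oc_filecache.
SHOW INDEX FROM oc_filecache;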

id 88189 is the ID of the trashbin. So it appears that after each restore (i.e. each removal from the trash folder) the remaining list of files in the trashbin is read again.

But with my some 40,000 files still remaining in the trashbin this is a big query! I expect things would speed up dramatically if the MOVE / RESTORE function were separated from the "show trashbin content" query.

In other words: why not first simply RESTORE everything I told it to, and only then, at the very end, read what is left in the trash?

If that could be done, operations on the trashbin would likely be only slightly slower when restoring a single file (since the restore call would be split into two operations). But when the RESTORE (or permanent delete?) action is performed on a multitude of files, I expect the response to be far faster and the load on the server far lower!
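
In database terms the proposal would look roughly like the sketch below: do the cheap per-file work for the whole batch first, and only then run the expensive "list trashbin contents" query once. This is a rough illustration of the idea, not actual Nextcloud code; the ids and columns reuse the ones from the captured query above.

-- Rough sketch of the proposed batching.
START TRANSACTION;
-- ... one cheap per-file update per restored file ...
-- UPDATE oc_filecache SET parent = ..., path = ..., path_hash = MD5(...) WHERE fileid = ...;
COMMIT;

-- Only after the whole batch is done, read the remaining trashbin contents once:
SELECT fileid, path, name, size, mtime
FROM oc_filecache
WHERE parent = 88189 AND storage = 2
ORDER BY name ASC;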

@mrAceT mrAceT added 0. Needs triage Pending check for reproducibility or if it fits our roadmap bug labels Apr 18, 2025
mrAceT (Author) commented Apr 18, 2025

For others finding this thread: this doesn't go fast either, but it causes much less load on the server. Use this command-line occ command:

occ trashbin:restore [user]
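
occ is normally run as the web server user from the Nextcloud installation directory; the path, web user and the "alice" account below are examples (on RHEL/CentOS with Apache the web user is typically apache):

# Example invocation (placeholders, adjust to your installation):
cd /var/www/nextcloud
sudo -u apache php occ trashbin:restore alice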
