Replies: 5 comments 1 reply
-
Do you know about this? https://unix.stackexchange.com/questions/13751/kernel-inotify-watch-limit-reached
-
@eq0cdk ?
-
According to the information in the post, each watched file takes about 1 KB of memory on a 64-bit system. That means monitoring 1,000 files will take ~1 MB of memory; 100,000 files, ~100 MB; 1,000,000 files, ~1 GB. In my opinion, that is a minuscule amount of RAM relative to the number of files being monitored. Also, max_user_watches can be raised persistently by putting fs.inotify.max_user_watches=<value> in /etc/sysctl.d/CONF_NAME.conf and reloading the settings (e.g. with sysctl --system).
Do you think you can do a test run using incron to see if my idea is feasible/viable? I don't think the average user will need to monitor more than 1 million files, which means that in the worst-case scenario RAM usage will be 1 GB. On my system, max_user_watches is set to 524288.
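As a sketch, such a drop-in could look like the following (the filename and the value are examples I'm choosing, not something from this thread):

```
# /etc/sysctl.d/90-inotify.conf  (example filename and example value)
fs.inotify.max_user_watches=1048576
```

After creating the file, the new value can be loaded with `sudo sysctl --system`.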
-
1 GB just for file watches is not an option, imho.
-
This has been an experimental feature since v18.0.
-
I currently have the scan interval set to 1 minute, so there is always a delay of up to 1 minute before I can access new files in albert. This is quite inconvenient in certain cases, such as opening newly downloaded mp3 files or viewing screenshots I've just taken. In general, I can't immediately open files I've recently downloaded to ~/Downloads.
Albert should update its index whenever a file or directory is created, deleted, or renamed. To limit overhead, there should be an option called "Watch directory for changes", which recursively checks whether a file is created, deleted, or renamed in directory A, B, C, etc., and updates the index for that specific directory in response. So, to reiterate, this would be an optional feature; the other directories could still be updated on the scan interval of N minutes.
Since albert already has this behavior for the "Applications" extension (applications are instantly indexed), it only makes sense to implement this feature for files as well.
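For context, the kernel facility such a watcher would rely on (on Linux) is inotify. Below is a minimal, self-contained C sketch — not albert's code — showing how a single directory watch catches a create event. A real implementation would add one watch per (sub)directory, which is where the per-watch kernel memory cost mentioned in this thread comes from.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/inotify.h>
#include <unistd.h>

/* Watch a freshly created temp directory for create/delete/rename
 * events, create one file inside it, and return 1 if the kernel
 * reports an IN_CREATE event for that file. */
int watch_first_create(void) {
    char dir[] = "/tmp/inotify_demo_XXXXXX";
    if (mkdtemp(dir) == NULL)
        return 0;

    int fd = inotify_init();
    if (fd < 0)
        return 0;

    int wd = inotify_add_watch(fd, dir,
                               IN_CREATE | IN_DELETE |
                               IN_MOVED_FROM | IN_MOVED_TO);
    if (wd < 0) {
        close(fd);
        return 0;
    }

    /* Trigger an event: create a file inside the watched directory.
     * The event is queued on fd, so the blocking read below
     * returns immediately. */
    char path[sizeof dir + 16];
    snprintf(path, sizeof path, "%s/new_file", dir);
    FILE *f = fopen(path, "w");
    if (f)
        fclose(f);

    char buf[4096] __attribute__((aligned(8)));
    ssize_t len = read(fd, buf, sizeof buf);

    int ok = 0;
    if (len > 0) {
        struct inotify_event *ev = (struct inotify_event *)buf;
        ok = (ev->mask & IN_CREATE) && ev->len &&
             strcmp(ev->name, "new_file") == 0;
    }

    unlink(path);
    rmdir(dir);
    close(fd);
    return ok;
}
```

Each `inotify_add_watch` call covers exactly one directory (non-recursively), so watching a tree means walking it and adding a watch per directory — that is the source of the `max_user_watches` limit discussed above.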
IMPLEMENTATION IDEAS:
Off the top of my head, incron is one application I know of that allows monitoring files: it executes commands when files change in a given directory, and it can also monitor directories recursively. It is Linux-specific, though. You could add/modify a file /etc/incron.d/albert.conf as follows:
Example:
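A hypothetical entry might look like this (the `albert-update-index` command is invented purely for illustration; incron table lines have the form `<path> <mask> <command>`):

```
# /etc/incron.d/albert.conf -- hypothetical; $@ expands to the watched path, $# to the file name
/home/USER/Downloads IN_CREATE,IN_DELETE,IN_MOVED_FROM,IN_MOVED_TO albert-update-index $@/$#
```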
The only change you'd need is a one-time setup command the user would have to run after installing albert.
What do you think? Thanks for the great application by the way.