I'm having trouble finding where in the code a filter could be added so that the AI runs only on recently modified files. There is a `.smart-connects/multi/` folder that updates every file -- rather than only the recently modified notes.
Would embeddings for files that haven't changed need to be rerun at all? Could Smart Connects keep track of the last run time, then refresh only when triggered or on some interval I could set in the configuration?
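Something like the following is what I have in mind: a mtime check against a recorded last-run timestamp. This is just an illustrative sketch, not the plugin's actual code; the function names and the stamp-file idea are my own assumptions.

```typescript
// Hypothetical sketch of a last-run filter -- not Smart Connects' real implementation.
import * as fs from "node:fs";
import * as path from "node:path";

// Return only the markdown files whose mtime is newer than the recorded last run.
// Flat scan for brevity; a real vault walk would recurse into subfolders.
function filesModifiedSince(vaultDir: string, lastRunMs: number): string[] {
  const changed: string[] = [];
  for (const name of fs.readdirSync(vaultDir)) {
    const full = path.join(vaultDir, name);
    const st = fs.statSync(full);
    if (st.isFile() && name.endsWith(".md") && st.mtimeMs > lastRunMs) {
      changed.push(full);
    }
  }
  return changed;
}

// Persist the run timestamp so the next pass can skip everything older.
function recordRun(stampFile: string): void {
  fs.writeFileSync(stampFile, String(Date.now()));
}

function readLastRun(stampFile: string): number {
  return fs.existsSync(stampFile) ? Number(fs.readFileSync(stampFile, "utf8")) : 0;
}
```

With a stamp file like this, a periodic or manually triggered refresh would only touch the notes edited since the previous pass instead of the whole vault.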
My main objective is to conserve tokens and not duplicate work. There aren't many differences between runs, yet all files get reprocessed and the count of embeds is similar each time.
While it's running, I can only work at about half capacity with my 27k notes. I'd be fine with nightly batch runs of the more intense GPU-based builds rather than regular refreshes while in work mode.
This is why I run a different configuration on my writing machines, without this plug-in. A caching scheme could focus the workload on what actually changed, which would let processing go faster.
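For the caching side, even a simple content-hash check would catch notes whose mtime changed but whose text didn't (e.g. touched by sync). Again, the cache shape and function name below are my own illustration, not the plugin's format:

```typescript
// Hypothetical sketch of a content-hash embed cache -- not the plugin's actual format.
import * as crypto from "node:crypto";

type EmbedCache = Record<string, string>; // note path -> sha256 of content at last embed

// Return true only when the note's content differs from the cached hash,
// i.e. when an embedding genuinely needs to be recomputed.
function needsReembed(cache: EmbedCache, notePath: string, content: string): boolean {
  const hash = crypto.createHash("sha256").update(content).digest("hex");
  if (cache[notePath] === hash) return false; // unchanged since last run: skip
  cache[notePath] = hash; // record so the next run skips this version
  return true;
}
```

Persisting that map between runs would mean unchanged notes cost zero tokens on a refresh.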
P.S. I'm working on recording use cases for writers and content marketers who use Obsidian to write copy. So far, I'm not clear on what running the Smart Connect app is doing other than making my computer churn through tokens.