Run it as a background process to back up your files to S3.
- You can put any local directory under watch.
- As soon as any of the following changes occurs, an event is created:
  - Create a file
  - Delete a file
  - Modify the contents of a file
  - Rename a file
  - Move a file/folder into another one
- An empty folder won't be pushed.
- If a folder contains just one file and you delete it, the folder also gets removed (S3 handles this itself).
- These events are stored in a channel and then consumed; based on each event, the `filepath` and the `action` to take (add to or delete from S3) are kept in memory.
- Every 10 minutes, this data is flushed to the database for persistence; once every 24 hrs (configurable), those files are pushed to S3. A rough sketch of this pipeline follows the list below.
- I used `bbolt` to persist the metadata till it gets pushed to S3.

Why `bbolt`?
- It is very simple to use (trust me, it is! 🫣)
- It is a single-user DB: no hassle, nothing, just `go get` it and you are good to go 😎
- It is a Go-native key/value store.
- It doesn't require a full database server such as Postgres or MySQL.
- Lastly, spilling the secret: it looked interesting because you can import it directly into your project and run it within the application, so I wanted to give it a try 😜 and, given our limited requirements, it fits the use case.
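To make the flow above concrete, here is a minimal, hypothetical sketch of the pipeline. This is not the project's actual code: the bbolt bucket name `pending`, the `"add"`/`"delete"` action strings, and the hard-coded path and intervals are made up for illustration; in the real binary the directory and intervals come from the environment variables described in the setup section below.

```go
package main

import (
	"log"
	"time"

	"github.com/rjeczalik/notify"
	bolt "go.etcd.io/bbolt"
)

func main() {
	// Open (or create) the bbolt file that holds metadata not yet pushed to S3.
	db, err := bolt.Open("cloudkeeper.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Watch the backup directory recursively; the "/..." suffix enables recursion.
	events := make(chan notify.EventInfo, 100)
	if err := notify.Watch("/home/praveen/notifyTest/...", events, notify.All); err != nil {
		log.Fatal(err)
	}
	defer notify.Stop(events)

	pending := map[string]string{} // path -> action ("add" or "delete")
	dbTick := time.NewTicker(10 * time.Minute)
	s3Tick := time.NewTicker(24 * time.Hour)

	for {
		select {
		case ev := <-events:
			// Translate the filesystem event into the S3 action to take later.
			switch ev.Event() {
			case notify.Remove, notify.Rename:
				pending[ev.Path()] = "delete"
			default:
				pending[ev.Path()] = "add"
			}
		case <-dbTick.C:
			// Flush the in-memory metadata to bbolt for persistence.
			err := db.Update(func(tx *bolt.Tx) error {
				b, err := tx.CreateBucketIfNotExists([]byte("pending"))
				if err != nil {
					return err
				}
				for path, action := range pending {
					if err := b.Put([]byte(path), []byte(action)); err != nil {
						return err
					}
				}
				return nil
			})
			if err != nil {
				log.Println("flush to bbolt failed:", err)
				continue
			}
			pending = map[string]string{} // start a fresh in-memory batch
		case <-s3Tick.C:
			// Read the persisted entries back from bbolt, upload/delete them in S3
			// via the AWS SDK, then clear the bucket. Omitted in this sketch.
			log.Println("pushing pending entries to S3")
		}
	}
}
```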
- Clone the repo.
- To install dependencies, just use `go run` (or `go test` or `go build`, for that matter); any external dependencies will automatically (and recursively) be downloaded.
- Set the following environment variables in a `.env` file (replace with your values, these are dummy ones for reference :p), or export them like so:
```sh
export BACKUP_INTERVAL=24
BACKUP_DIR=/home/praveen/notifyTest
S3_BACKUP_INTERVAL=24
S3_BACKUP_INTERVAL_UNIT=hours # one of hour(s)/minute(s)/second(s)
DB_PERSISTENCE_INTERVAL=10
DB_PERSISTENCE_INTERVAL_UNIT=minute # one of hour(s)/minute(s)/second(s)
S3_BUCKET=backupbucket-praveen
S3_BUCKET_PREFIX=experimenting/
AWS_ACCESS_KEY=AKGHYUU67PraveenIsGood36tYUI
AWS_SECRET_ACCESS_KEY=Htyf5JED/E9EPraveenIsGoodwPRLhtyMh6jgdsFT
AWS_REGION=us-east-1
```
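For illustration, here is one way the interval/unit pairs above could be turned into `time.Duration` values using only the standard library. This is not the project's actual config loader; the helper name `durationFromEnv` is hypothetical, and reading the `.env` file itself is left out.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// durationFromEnv combines a numeric variable (e.g. DB_PERSISTENCE_INTERVAL=10)
// with its unit variable (e.g. DB_PERSISTENCE_INTERVAL_UNIT=minute).
func durationFromEnv(valueKey, unitKey string) (time.Duration, error) {
	n, err := strconv.Atoi(os.Getenv(valueKey))
	if err != nil {
		return 0, fmt.Errorf("%s: %w", valueKey, err)
	}
	// Accept "hour"/"hours", "minute"/"minutes", "second"/"seconds".
	switch strings.TrimSuffix(strings.ToLower(os.Getenv(unitKey)), "s") {
	case "hour":
		return time.Duration(n) * time.Hour, nil
	case "minute":
		return time.Duration(n) * time.Minute, nil
	case "second":
		return time.Duration(n) * time.Second, nil
	}
	return 0, fmt.Errorf("%s: unknown unit", unitKey)
}

func main() {
	dbEvery, err := durationFromEnv("DB_PERSISTENCE_INTERVAL", "DB_PERSISTENCE_INTERVAL_UNIT")
	if err != nil {
		panic(err)
	}
	s3Every, err := durationFromEnv("S3_BACKUP_INTERVAL", "S3_BACKUP_INTERVAL_UNIT")
	if err != nil {
		panic(err)
	}
	fmt.Println("flush to DB every", dbEvery, "| push to S3 every", s3Every)
}
```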
- Build it:

```sh
go build -o anyName ./cmd/cloudkeeper
```

- Run:

```sh
./anyName
```
- Now go make changes and see for yourself.
- If you want the script to run as a daemon, in the background and never ending:

```sh
nohup ./upload >>/Home/tmp/log/cloudkeeper.log 2>&1 <&- &
```

Note: Make sure you have `/Home/tmp/log/cloudkeeper.log` created with the necessary permissions. Here, `upload` is the binary executable obtained by running the `go build` command; you can give it any name.

- Don't want to store the logs? Use:

```sh
nohup ./upload 0<&- 1>/dev/null 2>&1 &
```

Writing to `/dev/null` effectively throws away all the output from this program.
If you face any issue, raise an issue here (I promise, I will reply within seconds :xd.. Yes, I am the Flash 🫣).
- Write unit tests.
- DB transactions are not being handled well; improve that.
- Implement this project using a checksum-based approach instead of capturing every FS event, and benchmark both.
- Work on the notification part.
It is working as intended:
- Success message from terminal:
- Data correctly reflecting in my S3 bucket:
Note: Here, I have used the `notify` package, and not `fsnotify`, because the latter does not support recursive watching.
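For reference, a minimal recursive subscription with `notify` looks roughly like this (the path and buffer size are illustrative); the trailing `/...` in the watched path is what gives you the recursion that `fsnotify` does not offer out of the box:

```go
package main

import (
	"log"

	"github.com/rjeczalik/notify"
)

func main() {
	// Buffered channel so events are not dropped while the consumer is busy.
	c := make(chan notify.EventInfo, 100)

	// The "/..." suffix tells notify to watch the whole directory tree.
	if err := notify.Watch("/home/praveen/notifyTest/...", c, notify.All); err != nil {
		log.Fatal(err)
	}
	defer notify.Stop(c)

	for ev := range c {
		log.Println(ev.Event(), ev.Path())
	}
}
```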