Data should be in a human-readable, human-editable JSON or CSV file, and SQLite DB should be generated from that. #6
Comments
Wouldn't that bypass the data management interface in the web application? The system is really conceived to be the reverse: the data is created & managed through the Web UI, and made available in those human/machine-readable formats for downstream consumption.

One concern with the SQLite DB is resilience/backup. The reason that the data is stored alongside the software in SComCat (which is certainly an unusual arrangement) is that in my original deployment I used Litestream to replicate the data to an S3 object store as part of the deployment pipeline.
So, in this arrangement, the authoritative SQLite DB is the one in the S3 store. (Before I archived my version of SComCat, I fetched the latest DB from the S3 store and wrote it into the DB in the GitHub sources.) I can share the K8s deployment manifests and the Litestream config if you're interested. I didn't include them in the SComCat repo because they are somewhat tangential, and also a little idiosyncratic. I was motivated to find a way to use SQLite rather than MySQL/Postgres. Litestream is marvellous, by the way :-)
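(For anyone unfamiliar with Litestream: the arrangement described above amounts to running `litestream replicate` alongside the app, with a config roughly like the sketch below. The paths and bucket name here are placeholders, not the actual deployment values; credentials come from the standard AWS environment variables.)

```yaml
# /etc/litestream.yml -- illustrative sketch; paths and bucket are placeholders.
dbs:
  - path: /srv/scomcat/db/sqlite_database/production.sqlite3
    replicas:
      - url: s3://example-bucket/scomcat
```

Fetching the authoritative copy back out of S3 at deploy time is then a one-liner, e.g. `litestream restore -o db/sqlite_database/production.sqlite3 s3://example-bucket/scomcat` (again with placeholder names).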
@paulwalk, thanks so much for taking the time to write that comment. It's very eye-opening for us -- we didn't even know there was a web UI for data management (because we hadn't yet gotten the login functionality working nor explored what's behind the login door at all). Well, @smpsnr might have known, but I didn't. We really just deployed this quickly to get the data back online, and are still learning our way around. Your comment teaches us in five minutes what might otherwise have taken us hours to figure out. (At some point we might want to incorporate what you wrote into the project documentation.)

I may close this issue and file a better one, then. The real reason I wanted to have the data in a text-y file was so that I would have the option of using my usual text-editing tools to update the data sometimes, since that is much faster and more in-my-usual-flow than going through a web interface. However, there are other ways to achieve that goal: an API would be fine (and maybe there already is one and I just don't know it yet). It's not actually necessary to have a JSON file or whatever in my local clone.
I think someone has managed to log in and reset the default admin password. Hopefully that was you or someone in your team...!! Going back to your main point: originally, I had intended to use text files (actually a mixture of Markdown and YAML) for the data, and then use Hugo (a superb static-website-generator tool) to compile them into a web site. There were a couple of reasons behind my move to a dynamic web application.
@paulwalk Oh, I wasn't arguing against having a database: the site should have one, for faceted search etc. The only question was: what is to be the primary, authoritative source for the data? Would the data live in in-tree, human-editable text files and be loaded into the DB at deployment time? Or would the DB be the primary source for the data, thus facilitating data updates coming in from the web interface?

You make a good case for the latter; it just has the disadvantage that now those of us who are likely to be updating data the most frequently must (by default) use a browser-based interface to do so, instead of our typical text-manipulation tools (e.g., one's local ${EDITOR}). Of course, it's possible to set things up so that the web interface isn't the only way to edit/update data. With some authenticated APIs, those of us who still want to use local text editors could do so. It's just a bit of work to get that all going.
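(As a rough illustration of that last point: local edits could round-trip through an authenticated HTTP API with a small script like the sketch below. The endpoint, token scheme, and payload shape are all hypothetical -- nothing like this necessarily exists in SComCat.)

```ruby
# push_record.rb -- hypothetical sketch; the endpoint and token scheme are assumptions.
require "net/http"
require "json"
require "uri"

# A record edited locally in one's usual $EDITOR, then pushed back up.
payload = JSON.parse(File.read(ARGV.fetch(0, "record.json")))

uri = URI("https://scomcat.example.org/api/records/#{payload.fetch("id")}")
request = Net::HTTP::Put.new(uri, "Content-Type" => "application/json",
                                  "Authorization" => "Bearer #{ENV.fetch("API_TOKEN")}")
request.body = JSON.generate(payload)

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
puts "#{response.code} #{response.message}"
```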
It should be relatively simple to create a Rake task (essentially a CLI entry point into the Rails stack, which lets you write console code that reads/writes the database). Such a task could take structured files as input and add/update DB records; see the sketch below. This is pretty much how the system was originally seeded with data anyway.
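A minimal sketch of such a task, assuming a JSON input file and a hypothetical Record model (neither the model nor the field names are taken from the actual SComCat schema):

```ruby
# lib/tasks/data.rake -- illustrative sketch; "Record" and its fields are assumptions.
require "json"

namespace :data do
  desc "Add/update DB records from a structured JSON file"
  task :import, [:path] => :environment do |_t, args|
    rows = JSON.parse(File.read(args[:path] || "data/records.json"))
    rows.each do |attrs|
      # Upsert keyed on name, so re-running the task updates rather than duplicates.
      record = Record.find_or_initialize_by(name: attrs.fetch("name"))
      record.update!(attrs)
    end
    puts "Imported #{rows.size} records"
  end
end
```

This would be invoked as `bin/rails data:import[data/records.json]`, giving a text-file route into the same DB that the web UI manages.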
Instead of `db/sqlite_database/production.sqlite3`, we should have a human-oriented JSON or CSV or whatever file, and then the deployment process should generate the SQLite database from that.
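(For concreteness, deploy-time generation could look something like the sketch below; the CSV layout and table schema are invented purely for illustration.)

```ruby
# generate_db.rb -- hypothetical deploy step; file names and schema are assumptions.
require "csv"
require "sqlite3"

db = SQLite3::Database.new("db/sqlite_database/production.sqlite3")
db.execute <<~SQL
  CREATE TABLE IF NOT EXISTS records (
    name        TEXT PRIMARY KEY,
    description TEXT
  )
SQL

CSV.foreach("data/records.csv", headers: true) do |row|
  # INSERT OR REPLACE keeps the script idempotent across deployments.
  db.execute("INSERT OR REPLACE INTO records (name, description) VALUES (?, ?)",
             [row["name"], row["description"]])
end
```

In that arrangement the human-oriented CSV stays the authoritative source under version control, and the SQLite file becomes a disposable build artifact.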