Is it too hard to use SQLite on the frontend? #12

Open
FunctionSir opened this issue Aug 27, 2024 · 19 comments

Comments

@FunctionSir
Owner

If it's convenient enough, maybe we should use a DATABASE?

@xioi
Contributor

xioi commented Aug 27, 2024

On the frontend there only seem to be libraries that operate on an entire database file.
What's missing is a library that sends commands to a remote server; after all, JS can only communicate via fetch and WebSocket, so it's quite limited.
For now, it looks like the only option is to deploy a local REST-style server to talk to the frontend...

@GaoNeng-wWw

Consider deploying the backend on Vercel or Cloudflare and having the frontend fetch the data directly.

@vxst
Contributor

vxst commented Aug 28, 2024

I have a preference for PostgreSQL over SQLite. For immutable data, JSON could be sufficient for the frontend. However, I'm concerned that using SQLite with encryption on the frontend might lead to issues, particularly in the context of cryptocurrency. In my experience, the best practice is always to handle authentication in the backend rather than relying on frontend encryption for authentication purposes. No worries if you're not familiar with backend dev; I can do it if needed.

@GaoNeng-wWw

> I have a preference for PostgreSQL over SQLite. For immutable data, JSON could be sufficient for the frontend. However, I'm concerned that using SQLite with encryption on the frontend might lead to issues, particularly in the context of cryptocurrency. In my experience, the best practice is always to handle authentication in the backend rather than relying on frontend encryption for authentication purposes. No worries if you're not familiar with backend dev; I can do it if needed.

If we only need to render data, I think SSG might be better than a backend.

@vxst
Contributor

vxst commented Aug 28, 2024

I believe FunctionSir is talking about securely handling sensitive information. In my old-fashioned mind, sensitive information simply shouldn't be shared with unauthorized parties; it should be protected by API authentication rather than relying on frontend encryption.

@FunctionSir
Owner Author

> I believe FunctionSir is talking about securely handling sensitive information. In my old-fashioned mind, sensitive information simply shouldn't be shared with unauthorized parties; it should be protected by API authentication rather than relying on frontend encryption.

Oh, it seems you misunderstood... I'm sorry for that...
I think it's not easy to keep multiple files consistent. It's also not easy to scale up: if you want to add a new row, you have to change a lot of code... And some operations, like merging someone's suggestions, are not easy either.

If we use SQL, these things become easy. A new institute was founded? Just add an "INSERT INTO INSTITUTE_LIST". Want to add a new article? Just "INSERT INTO RELATED_ARTICLES"... Need more? You can easily do something like "CREATE TABLE RESIDENCIES"...

And by using a DB, we can "compress" all the files into one.

And contributors can just give me a few lines of SQL, and DONE.

If they don't know how to write SQL, we can add a page that helps them generate the SQL lines; that's not very hard.
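
For illustration only, here is a minimal sketch in Python of that kind of helper (the table name and values are hypothetical placeholders, not the project's actual schema); a small web page could do the same thing in the browser:

# Hypothetical helper: turn a contributor's entry into a ready-to-paste INSERT line.
def sql_quote(value: str) -> str:
    # SQLite string literal: wrap in single quotes and double any embedded quotes.
    return "'" + value.replace("'", "''") + "'"

def make_insert(table: str, values: list[str]) -> str:
    return f"INSERT INTO {table} VALUES ({', '.join(sql_quote(v) for v in values)});"

print(make_insert("INSTITUTE_LIST", ["Example Institute", "2024", "https://example.org"]))
# -> INSERT INTO INSTITUTE_LIST VALUES ('Example Institute', '2024', 'https://example.org');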

For the frontend, a SQLite DB is smaller than JSON (maybe not if gzip is used, but the DB won't be too large either...).

SQLite is fast, easy to use, robust, safe, and makes it easy to import the data into something like PostgreSQL later... Also, there is a GUI tool (open source too) that helps people who aren't very good at SQL get things done easily.

P.S. Forgive me... My English is poor, so the sentences might be hard to read and understand...

@xioi
Contributor

xioi commented Aug 28, 2024

Furthermore, could the .json and .csv files be generated automatically by scripts in GitHub Actions?

@FunctionSir
Owner Author

> Furthermore, could the .json and .csv files be generated automatically by scripts in GitHub Actions?

I think they could be.
There is a command for that:
sqlite3 -csv -header XXX.db "SELECT * FROM XXX;" > XXX.csv
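
If the job should also produce JSON, a small Python step could do both exports in one go; this is only a sketch, and the file and table names (data.db, INSTITUTE_LIST) are placeholders:

# Hypothetical CI export: dump one table of the SQLite DB to CSV and JSON.
import csv, json, sqlite3

conn = sqlite3.connect("data.db")
conn.row_factory = sqlite3.Row
rows = conn.execute("SELECT * FROM INSTITUTE_LIST").fetchall()
conn.close()

with open("institute_list.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    if rows:
        writer.writerow(rows[0].keys())            # header row
    writer.writerows(tuple(r) for r in rows)

with open("institute_list.json", "w", encoding="utf-8") as f:
    json.dump([dict(r) for r in rows], f, ensure_ascii=False, indent=2)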

@GaoNeng-wWw

Could we generate the data files from issues, just like https://github.com/type-challenges/type-challenges does?

@vxst
Contributor

vxst commented Aug 28, 2024

  1. Perform a CSV and JSON merge after each edit. This ensures that the content of both file formats stays synchronized.

  2. The merge process steps:

  • Read both the CSV and JSON files to get a merged list.
  • Compare the contents of the CSV and JSON files with the merged list.
  • Make a new commit if the contents of the JSON and CSV are not identical to the merged list.
  • Skip making a new commit if the contents are identical.
  3. Integrate this process into a Continuous Integration (CI) system (a sketch follows this list):
  • Allow team members to edit their preferred format (CSV or JSON).
  • Let the CI system automatically handle the merge job, ensuring consistency between formats.
  4. Advantages:
  • Improves data consistency and reduces human errors.
  • Provides flexibility for team members to choose their preferred format.
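
A minimal sketch of that merge step, assuming hypothetical file names (institute_list.csv / institute_list.json) and that both formats store every value as a string so the two representations can be compared directly:

# Hypothetical CI merge check: keep institute_list.csv and institute_list.json in sync.
import csv, json

def read_csv(path):
    with open(path, newline="", encoding="utf-8") as f:
        return [dict(row) for row in csv.DictReader(f)]

def read_json(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def as_set(rows):
    # Canonical, order-insensitive representation of a list of records.
    return {json.dumps(r, sort_keys=True, ensure_ascii=False) for r in rows}

csv_rows, json_rows = read_csv("institute_list.csv"), read_json("institute_list.json")
merged = as_set(csv_rows) | as_set(json_rows)

if merged == as_set(csv_rows) and merged == as_set(json_rows):
    print("Bingo!")  # both formats already match the merged list: skip the commit
else:
    merged_rows = [json.loads(r) for r in sorted(merged)]
    with open("institute_list.json", "w", encoding="utf-8") as f:
        json.dump(merged_rows, f, ensure_ascii=False, indent=2)
    with open("institute_list.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=merged_rows[0].keys())
        writer.writeheader()
        writer.writerows(merged_rows)
    # The CI job would then commit the rewritten files.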

@vxst
Contributor

vxst commented Aug 28, 2024

It's not really a good idea to put a SQLite database binary in a Git repository, as it defeats the purpose of using Git.

@FunctionSir
Owner Author

> It's not really a good idea to put a SQLite database binary in a Git repository, as it defeats the purpose of using Git.

Hmm... You are absolutely right.
Maybe we can put it in a "bin" dir, and have the file automatically generated from the .sql files?

Like, after creating a DB and exporting it, you get:

BEGIN TRANSACTION;
CREATE TABLE IF NOT EXISTS "A" (
	"Field1"	INTEGER,
	"Field2"	INTEGER,
	"Field3"	INTEGER,
	"Field4"	INTEGER,
	"Field5"	INTEGER,
	"Field6"	INTEGER,
	"Field7"	INTEGER,
	"Field8"	INTEGER
);
CREATE TABLE IF NOT EXISTS "B" (
	"Field1"	INTEGER,
	"Field2"	INTEGER,
	"Field3"	INTEGER,
	"Field4"	INTEGER,
	"Field5"	INTEGER
);
COMMIT;

If someone wants to add something, they just APPEND:

INSERT INTO A
VALUES ('1','2','3','4','5','1','2','3');

INSERT INTO A
VALUES ('1','2','3','4','5','1','2','3');

INSERT INTO A
VALUES ('1','2','3','4','5','1','2','3');

INSERT INTO A
VALUES ('1','2','3','4','5','1','2','3');

@FunctionSir
Owner Author

Then you can generate a new DB.
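
As a sketch of that build step (the paths and file layout here are assumptions, not the repository's actual structure; the sqlite3 CLI's .read command would work just as well):

# Hypothetical build script: rebuild bin/data.db from the checked-in .sql files.
import glob, os, sqlite3

os.makedirs("bin", exist_ok=True)
db_path = "bin/data.db"
if os.path.exists(db_path):
    os.remove(db_path)                      # always rebuild from scratch

conn = sqlite3.connect(db_path)
for sql_file in sorted(glob.glob("sql/*.sql")):
    # Files run in name order, so the schema dump should sort first (e.g. 00_schema.sql).
    with open(sql_file, encoding="utf-8") as f:
        conn.executescript(f.read())
conn.commit()
conn.close()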

@FunctionSir
Owner Author

It's really a kind of odd solution...

@vxst
Contributor

vxst commented Aug 28, 2024

Please don't adopt the coding style of a 1980s bank; in Chinese that's usually called 屎山 (a "mountain of garbage" codebase).

@FunctionSir
Owner Author

Emm... A large amount of SQL really is a kind of mess... that's why I said it's an odd solution.

@vxst
Contributor

vxst commented Aug 28, 2024

CSV is the de facto storage format for tabular data at this scale, and JSON is the de facto format for the web. I believe these two formats would be sufficient. Please KISS and don't add bloat out of a passion for data formats. Additionally, SQL statements are never a de facto storage format for anything that needs sharing; there are too many dialects, and no one actually shares data using them.

This is a tabular dataset, for heaven's sake. SQL is for handling complex references between many related tables.

@FunctionSir
Owner Author

Emm... Right. So we can just use CSV and generate the JSON from it.

@vxst
Contributor

vxst commented Aug 28, 2024

>   1. Perform a CSV and JSON merge after each edit. This ensures that the content of both file formats stays synchronized.
>   2. The merge process steps:
>   • Read both the CSV and JSON files to get a merged list.
>   • Compare the contents of the CSV and JSON files with the merged list.
>   • Make a new commit if the contents of the JSON and CSV are not identical to the merged list.
>   • Skip making a new commit if the contents are identical.
>   3. Integrate this process into a Continuous Integration (CI) system:
>   • Allow team members to edit their preferred format (CSV or JSON).
>   • Let the CI system automatically handle the merge job, ensuring consistency between formats.
>   4. Advantages:
>   • Improves data consistency and reduces human errors.
>   • Provides flexibility for team members to choose their preferred format.

The process above can generate JSON from CSV and CSV from JSON, depending on which one the contributor changed.

Read two arrays, merge them, and compare the two input arrays with the merged array. If there are changes, make a new commit replacing the changed format with the merged array. If there are no changes, output "Bingo!" and stop.

> Emm... Right. So we can just use CSV and generate the JSON from it.
