A copy of HTTP client API for TS (#1371) #1382
Conversation
Moving final reviews here. Hit another problem with key listing.
Followed by:
Also, URLs have a trailing /; they work when retrieving (this is a different table).
Here's the rest of the functional review.

Retrieval causes problems.

If this data is inserted:

It produces this entry in the list keys, not URL encoded:

```
http://127.0.0.1:8098/ts/v1/tables/t/keys/f/f/g space :&?/s/s/ts/2/
```

Issuing the get fails, as you would expect, with bad request:

```
curl 'http://127.0.0.1:8098/ts/v1/tables/t/keys/f/f%2Fg%20space%20%3A%26%3F/s/s/ts/2/'
```

The remaining commands and snippets from this review:

```
riak-shell(14)>INSERT INTO t(f,s,ts,extra) VALUES ('f','s',999,'x');

curl -X POST http://127.0.0.1:8098/ts/v1/tables/t/keys --data "{\"f\":\"f\",\"s\":\"s\",\"ts\":123,\"extra\":1}"; echo

{["ts",api_version,"tables",table,"list_keys"], riak_kv_wm_timeseries_listkeys,

curl -X POST -H 'Content-Type: text/plain' 'http://127.0.0.1:8098/ts/v1/tables/t/keys?timeout=a' --data "{\"f\":\"f\",\"s\":\"s\",\"ts\":123,\"extra\":1}"

io:format("~s\n", [[116,105,109,101,111,117,116,32,110,111,116,32,97,110,32,105,110,116,101,103,101,114,32,118,97,108,117,101,58,32,"\"a\""]]).

curl -X POST -H 'Content-Type: text/plain' 'http://127.0.0.1:8098/ts/v1/tables/t/keys?badparam=a' --data "{\"f\":\"f\",\"s\":\"s\",\"ts\":123,\"extra\":1}"
```
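For reference, here is a minimal sketch (not code from this PR) of percent-encoding the key segments the way the list-keys entries presumably should be, using http_uri:encode/1 from inets; the segment values are taken from the problematic entry above.

```erlang
%% Sketch only: percent-encode each path segment before assembling a
%% list_keys-style URL. http_uri:encode/1 escapes '/', ' ', ':', '&', '?'
%% and the other reserved characters.
Segments = ["f", "f/g space :&?", "s", "s", "ts", "2"],
Encoded  = string:join([http_uri:encode(S) || S <- Segments], "/"),
io:format("http://127.0.0.1:8098/ts/v1/tables/t/keys/~s/~n", [Encoded]).
%% prints .../tables/t/keys/f/f%2Fg%20space%20%3A%26%3F/s/s/ts/2/
```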
Fixed all the issues that recently cropped up, and updated the test accordingly (basho/riak_test#1037, with complementary checks using curl in https://gist.github.com/hmmr/94abdb4d648a44009e93). Because of #1378, there is a massive manual merge ahead.
Encoding of list-keys URLs is fixed. The first issue still seems to be a problem. But note the info message back from curl: "Illegal or missing hexadecimal sequence in chunked-encoding". I've only seen this when the primary keys are short. No errors in the logs that I could find.
Parameter and timeout issues both fixed.
Force-pushed from 6d19303 to 3a8a1d7.
There was a regression in
Force-pushed from 3a8a1d7 to f4aa201.
* rows as tuples, not lists of cells;
* [] for undefined.
Enable the following requests:

* GET /ts/1/table/Table (single-key get)
* DELETE /ts/1/table/Table (single-key delete)
* PUT /ts/1/table/Table (batch put)
* GET /ts/1/table/Table/keys (list_keys)
* GET /ts/1/coverage (coverage for a query)
* POST /ts/1/query (execute SQL query)

Additional parameters (keys, data, query) are to be supplied in the request body, as a JSON object.
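To make the request shape concrete, here is a hedged sketch of driving the query route from Erlang with httpc. The /ts/v1 prefix (as used in the curl commands elsewhere in this thread), the application/json content type, and the table and query text are assumptions, not something specified by this commit.

```erlang
%% Sketch only: POST a SQL query with the query text supplied in the request
%% body as a JSON object, per the commit message above.
ok = application:ensure_started(inets),
Url = "http://127.0.0.1:8098/ts/v1/query",
Body = "{\"query\":\"SELECT * FROM t WHERE ts > 1 AND ts < 999 AND f = 'f' AND s = 's'\"}",
{ok, {{_Vsn, Status, _Reason}, _Headers, RespBody}} =
    httpc:request(post, {Url, [], "application/json", Body}, [], []),
io:format("~b: ~s~n", [Status, RespBody]).
```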
Complementary to 9b62335 by atill. The call to the claimant to determine a table's bucket type status is an expensive operation. It was introduced in a1c1e2e to detect and report the error condition where a TS operation is attempted on a non-activated bucket type, but it was found to cause serious performance degradation. Andy's commit removed the expensive call; this commit captures and reports the error condition in various places in riak_kv_{wm,pb}_timeseries and riak_kv_ts_util.
andalso, check detailed per-table permissions when security is on.
because there were no changes for them to progress to 1.3, whereby they lose the version and become riak_ts-develop.
This improves on 5c56530.
…lete The version with both key-in-json and key-in-url is forked into its own branch, feature-az-http_ts_api_with_keys_in_json_in_body. If a decision is taken to resurrect that branch, there is a complementary branch of the same name in riak-erlang-http-client, which can produce requests with the key in JSON.
* don't support the coverage API call in HTTP callbacks;
* unify permission strings, for shared use between wm and pb callbacks;
* straighten up the function call chain in riak_kv_wm_timeseries, now that all API calls are mapped 1:1 onto HTTP methods;
* in particular, use wrq:path_info/2 to get the table and keys, instead of clunky tokenization where it's no longer necessary (see the sketch below);
* introduce security and validation checks for listkeys.
Also, move around the check for no_such_table, and keep Mod and DDL once found.
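A minimal sketch of the wrq:path_info/2 approach referenced in the list above; the binding name table matches the dispatch rule quoted earlier in this thread, but the function name and return shape are made up, not the actual riak_kv_wm_timeseries code.

```erlang
%% Sketch only: with dispatch rules like
%% {["ts", api_version, "tables", table, "list_keys"], ...}
%% the table segment is available as a path binding, so no manual
%% tokenization of the request path is needed.
table_name(RD) ->
    case wrq:path_info(table, RD) of
        undefined -> {error, no_table_in_path};
        Table     -> {ok, list_to_binary(Table)}
    end.
```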
Move the relevant code to riak_pb_ts_codec. This will allow riakc_ts:get_coverage to be served over TTB.
…to_develop (merge conflicts resolved in src/riak_kv_qry_worker.erl, src/riak_kv_ts_svc.erl, src/riak_kv_ttb_ts.erl).
This plays better with error presentation in riak_kv_ts_svc. Also, treat this error as E_SUBMIT (1001), as it used to be.
This unbreaks some ts_* tests.
This will conveniently detect records from the client that are not tuples.
In riak_kv_wm_timeseries_query, records need to be lists for shipping to the client in JSON format. Let's avoid re-converting records, if only for the HTTP API callbacks, and delegate the conversion to tuples to the callers of riak_kv_api:query.
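A hedged sketch of the caller-side step this commit describes; the function name is made up and this only illustrates the division of labour, not the PR's actual code.

```erlang
%% Sketch only: riak_kv_api:query hands rows back as lists (which the
%% JSON/HTTP callback can ship as-is); callers that want tuple-shaped rows
%% convert at their own call site.
rows_as_tuples(Rows) ->
    [list_to_tuple(Row) || Row <- Rows].
```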
Force-pushed from b9a0bff to 1288556.
It appears short keys, as in CREATE TABLE t (ts TIMESTAMP NOT NULL, PRIMARY KEY((ts), ts)); don't get sext-encoded and are instead sent to the calling process of riak_client:stream_list_keys as tuples, causing the resource streaming fun to silently choke on sext:decode(Tuple). As a result, the HTTP client, expecting a chunked response, receives nothing. This is probably a bug on the side of riak_kv, which we work around by explicitly not attempting sext:decode on tuples.
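The workaround described here boils down to something like the following sketch (illustrative only, not the exact riak_kv code):

```erlang
%% Sketch only: keys normally arrive sext-encoded as binaries, but short
%% (single-field) keys can reach the streaming fun already as tuples,
%% so only decode the binaries.
decode_listed_key(Key) when is_tuple(Key) ->
    Key;
decode_listed_key(Key) when is_binary(Key) ->
    sext:decode(Key).
```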
Force-pushed from c0c89a6 to 549d98b.
@hmmr can you please separate the KV bits that correspond to basho/riak-erlang-client#227 (RIAK-2316) and basho/riak_pb#190 (RIAK-2492) into a dedicated KV PR? It's quite confusing to review the coverage relocation intertwingled with the HTTP work. That would also mean basho/riak_test#1037 would need to be split into HTTP tests vs coverage test changes. Thanks. That would be very, very helpful for Brett and me in reviewing this massive set of changes.
This big PR is now split into a series of #1396, #1397 (which comes together with basho/riak_pb#190 (RIAK-2492) and an updated basho/riak-erlang-client#277), and #1398.
This is the same as #1371 plus one additional commit to fix one dialyzer issue. That fix would be merged from lehoff-b#3 into Torben's branch.