Merge 3.2 into master.
commit 65039a6
Author: jianminzhao <[email protected]>
Date:   Mon Sep 9 10:32:29 2024 -0700

    CBL-6156: Support Inner Unnest Query in JSON (#2131)

    Added tests for multiple indexes, a multi-level index, and an index defined with an N1QL expression.
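
    For context only, a sketch adapted from the tests added in this change (not part of the commit message), where coll stands in for any open C4Collection*:

        // Array index over the nested property "contact.phone", defined with an N1QL expression:
        bool ok = c4coll_createIndex(coll, C4STR("phone"), C4STR("contact.phone"),
                                     kC4N1QLQuery, kC4ArrayIndex, nullptr, nullptr);

        // Shape of an inner (nested) UNNEST query in JSON: interests are unnested from each
        // student, which is itself unnested from the document's students array.
        //   {WHAT: [['.student.id'], ['.interest']],
        //    FROM: [{as: 'doc'},
        //           {as: 'student', unnest: ['.doc.students']},
        //           {as: 'interest', unnest: ['.student.interests']}]}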

commit 163b832
Author: Pasin Suriyentrakorn <[email protected]>
Date:   Tue Aug 27 22:23:06 2024 -0700

    CBL-6193 : Fix address conversion for request when using proxy (#2127)

    * Fixed a crash when converting an address (Address) to a C4Address by using the cast operator instead of casting the pointer, which is no longer valid due to changes in the private member variables (see the sketch after these notes).

    * When creating an Address object, used the path, not the full URL, for the path field.
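
    For illustration only, a toy sketch with hypothetical types (AddressLike, C4AddressLike — not LiteCore's actual classes) of why reinterpreting the object's pointer breaks once private members change, while the class's own cast operator stays valid:

        #include <cstdint>
        #include <string>

        struct C4AddressLike { const char* scheme; const char* hostname; uint16_t port; const char* path; };

        class AddressLike {                       // stand-in for the C++ Address wrapper
          public:
            operator const C4AddressLike*() const { return &_c4; }  // cast operator: returns the embedded struct
          private:
            std::string   _urlStorage;            // a newly added private member shifts the object layout
            C4AddressLike _c4{};
        };

        // Broken:  *(C4AddressLike*)&addr     reinterprets the whole object; the bytes of _urlStorage
        //          are read as a C4AddressLike, yielding garbage (and here, a crash).
        // Fixed:   *(const C4AddressLike*)addr invokes the cast operator and gets the valid embedded struct.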

commit 0b30e0c
Author: jianminzhao <[email protected]>
Date:   Tue Jul 30 16:03:59 2024 -0700

    CBL-6120: Provide an option to enable full sync in the database (#2113)

    Added a flag, kC4DB_DiskSyncFull, to C4DatabaseFlags. This flag is passed down to DataFile::Options, which is consulted when we request a connection to the SQLite database.
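
    A minimal usage sketch, modeled on the test added in this change (the database name is hypothetical; error handling trimmed):

        C4DatabaseConfig2 config = *c4db_getConfig2(db);    // copy an existing connection's config
        config.flags |= kC4DB_DiskSyncFull;                  // flush to disk after each transaction
        C4Error err;
        C4Database* fullSyncDb = c4db_openNamed(C4STR("mydb_fullsync"), &config, &err);
        // The flag is copied into DataFile::Options (options.diskSyncFull) when the SQLite
        // connection is opened; connections opened without it keep the default sync level.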

commit 2bdccb1
Author: jianminzhao <[email protected]>
Date:   Tue Jul 30 08:56:04 2024 -0700

    CBL-6100: Flaky test "REST root level" (#2110)

    Allow the HTTP request several timeouts before giving up. For now, up to 4 timeouts are allowed, each 5 seconds long.
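
    Roughly, the retry policy described above (a sketch under assumptions: sendWithTimeout is a hypothetical helper, not the actual test code):

        constexpr int      kMaxTimeouts = 4;
        constexpr unsigned kTimeoutSecs = 5;
        bool gotResponse = false;
        for ( int attempt = 0; attempt < kMaxTimeouts && !gotResponse; ++attempt ) {
            // hypothetical helper: returns false if no response arrived within kTimeoutSecs
            gotResponse = sendWithTimeout(request, kTimeoutSecs);
        }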

commit 0e613ae
Author: jianminzhao <[email protected]>
Date:   Tue Jul 30 08:55:35 2024 -0700

    CBL-6104: Flaky test, "Multiple Collections Incremental Revisions" (#2112)

    The test occasionally fails because we assumed that all successive revisions get replicated to the destinations, whereas the replicator may skip an obsolete revision, say rev-2, if rev-3 already exists by the time the pusher looks for revisions to push. In our test case we create successive revisions at 500-millisecond intervals. Most of the time, 500 ms is enough to keep the revisions apart when the pusher picks them up. On the Jenkins machines, the logs show that obsolete revisions were found when the test failed.

    We modified the test's success criteria: we now only check that the latest revisions are replicated to the destination, which matches the designed behavior.

commit fb01661
Author: jianminzhao <[email protected]>
Date:   Mon Jul 29 16:43:14 2024 -0700

    CBL-6099: Test "Rapid Restarts" failing frequently on Linux (#2111)

    After the replicator is stopped, it may still take some time to wind down its objects. Our test currently allows 2 seconds for this, which turns out not to be enough when errors like the following occur:

    Sync ERROR Obj=/Repl#21/revfinder#26/ Got LiteCore error: LiteCore NotOpen, "database not open"

    This error is artificial: it arises because we close the database as soon as the replicator reaches the stopped state, rather than when the replicator is deleted. Each error's exception propagates up to the top frame, which takes substantial time when there are many of them.

    I increased the 2-second allowance to 20 seconds. We won't usually wait the full 20 seconds, because the waiter polls the condition every 50 ms; only a genuine leak (objects never wound down) will wait the whole 20 seconds before failing.
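
    An illustrative sketch of such a polling waiter (assumed shape, not the actual test helper): poll the condition every 50 ms, up to a 20-second limit, and return as soon as it holds:

        #include <chrono>
        #include <thread>

        template <class Predicate>
        static bool waitUntil(Predicate done, std::chrono::seconds limit = std::chrono::seconds(20)) {
            auto deadline = std::chrono::steady_clock::now() + limit;
            while ( std::chrono::steady_clock::now() < deadline ) {
                if ( done() ) return true;                                    // met early: no full 20 s wait
                std::this_thread::sleep_for(std::chrono::milliseconds(50));   // poll interval
            }
            return false;   // still not wound down after the limit -> treat as a leak
        }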
callumbirks committed Sep 11, 2024
1 parent 8009302 commit cc7ce8e
Showing 11 changed files with 119 additions and 23 deletions.
15 changes: 8 additions & 7 deletions C/include/c4DatabaseTypes.h
@@ -31,13 +31,14 @@ C4API_BEGIN_DECLS

/** Boolean options for C4DatabaseConfig. */
typedef C4_OPTIONS(uint32_t, C4DatabaseFlags){
kC4DB_Create = 0x01, ///< Create the file if it doesn't exist
kC4DB_ReadOnly = 0x02, ///< Open file read-only
kC4DB_AutoCompact = 0x04, ///< Enable auto-compaction [UNIMPLEMENTED]
kC4DB_VersionVectors = 0x08, ///< Upgrade DB to version vectors instead of rev trees
kC4DB_NoUpgrade = 0x20, ///< Disable upgrading an older-version database
kC4DB_NonObservable = 0x40, ///< Disable database/collection observers, for slightly faster writes
kC4DB_FakeVectorClock = 0x80, ///< Use counters instead of timestamps in version vectors (TESTS ONLY)
kC4DB_Create = 0x01, ///< Create the file if it doesn't exist
kC4DB_ReadOnly = 0x02, ///< Open file read-only
kC4DB_AutoCompact = 0x04, ///< Enable auto-compaction [UNIMPLEMENTED]
kC4DB_VersionVectors = 0x08, ///< Upgrade DB to version vectors instead of rev trees
kC4DB_NoUpgrade = 0x20, ///< Disable upgrading an older-version database
kC4DB_NonObservable = 0x40, ///< Disable database/collection observers, for slightly faster writes
kC4DB_DiskSyncFull = 0x80, ///< Flush to disk after each transaction
kC4DB_FakeVectorClock = 0x0100, ///< Use counters instead of timestamps in version vectors (TESTS ONLY)
};


56 changes: 56 additions & 0 deletions C/tests/c4QueryTest.cc
@@ -783,7 +783,22 @@ N_WAY_TEST_CASE_METHOD(C4QueryTest, "C4Query UNNEST", "[Query][C][Unnest]") {
auto defaultColl = getCollection(db, kC4DefaultCollectionSpec);
REQUIRE(c4coll_createIndex(defaultColl, C4STR("likes"), C4STR("[[\".likes\"]]"), kC4JSONQuery,
kC4ArrayIndex, nullptr, nullptr));
REQUIRE(c4coll_createIndex(defaultColl, C4STR("phone"), C4STR("contact.phone"), kC4N1QLQuery, kC4ArrayIndex,
nullptr, nullptr));
}

// Two UNNESTs for two array properties.
compileSelect(json5("{WHAT: ['.person._id', '.phone'],\
FROM: [{as: 'person'}, \
{as: 'like', unnest: ['.person.likes']},\
{as: 'phone', unnest: ['.person.contact.phone']}],\
WHERE: ['=', ['.like'], 'climbing'],\
ORDER_BY: [['.person.name.first']]}"));
checkExplanation(withIndex);
CHECK(run2()
== (vector<string>{"0000021, 802-4827967", "0000017, 315-7142142", "0000017, 315-0405535",
"0000045, 501-7977106", "0000045, 501-7138486"}));

compileSelect(json5("{WHAT: ['.person._id'],\
FROM: [{as: 'person'}, \
{as: 'like', unnest: ['.person.likes']}],\
@@ -823,6 +838,8 @@ N_WAY_TEST_CASE_METHOD(NestedQueryTest, "C4Query UNNEST objects", "[Query][C][Un
C4Log("-------- Repeating with index --------");
REQUIRE(c4db_createIndex(db, C4STR("shapes"), C4STR("[[\".shapes\"], [\".color\"]]"), kC4ArrayIndex,
nullptr, nullptr));
REQUIRE(c4db_createIndex2(db, C4STR("shapes2"), C4STR("shapes, concat(color, to_string(size))"),
kC4N1QLQuery, kC4ArrayIndex, nullptr, nullptr));
}
compileSelect(json5("{WHAT: ['.shape.color'],\
DISTINCT: true,\
@@ -844,9 +861,48 @@ N_WAY_TEST_CASE_METHOD(NestedQueryTest, "C4Query UNNEST objects", "[Query][C][Un
WHERE: ['=', ['.shape.color'], 'red']}"));
checkExplanation(withIndex);
CHECK(run() == (vector<string>{"11"}));

compileSelect(json5("{WHAT: [['sum()', ['.shape.size']]],\
FROM: [{as: 'doc'}, \
{as: 'shape', unnest: ['.doc.shapes']}],\
WHERE: ['=', ['concat()', ['.shape.color'], ['to_string()',['.shape.size']]], 'red3']}"));
checkExplanation(withIndex);
CHECK(run() == (vector<string>{"3"}));
}
}

N_WAY_TEST_CASE_METHOD(NestedQueryTest, "C4Query Nested UNNEST", "[Query][C]") {
deleteDatabase();
db = c4db_openNamed(kDatabaseName, &dbConfig(), ERROR_INFO());
importJSONLines(sFixturesDir + "students.json");

compileSelect(json5("{WHAT: [['AS', ['.doc.name'], 'college'], ['.student.id'], ['.student.class'], ['.interest']],"
" FROM: [{as: 'doc'},"
" {as: 'student', unnest: ['.doc.students']},"
" {as: 'interest', unnest: ['.student.interests']}]"
"}"));
vector<string> results{
"Univ of Michigan, student_112, 3, violin", "Univ of Michigan, student_112, 3, baseball",
"Univ of Michigan, student_189, 2, violin", "Univ of Michigan, student_189, 2, tennis",
"Univ of Michigan, student_1209, 3, art", "Univ of Michigan, student_1209, 3, writing",
"Univ of Pennsylvania, student_112, 3, piano", "Univ of Pennsylvania, student_112, 3, swimming",
"Univ of Pennsylvania, student_189, 2, violin", "Univ of Pennsylvania, student_189, 2, movies"};

CHECK(run2(nullptr, 4) == results);

deleteDatabase();
db = c4db_openNamed(kDatabaseName, &dbConfig(), ERROR_INFO());
// The only difference from "students.json" is that there is an extra property from student to interests.
importJSONLines(sFixturesDir + "students2.json");

compileSelect(json5("{WHAT: [['AS', ['.doc.name'], 'college'], ['.student.id'], ['.student.class'], ['.interest']],"
" FROM: [{as: 'doc'},"
" {as: 'student', unnest: ['.doc.students']},"
" {as: 'interest', unnest: ['.student.extra.interests']}]"
"}"));
CHECK(run2(nullptr, 4) == results);
}

N_WAY_TEST_CASE_METHOD(C4QueryTest, "C4Query Seek", "[Query][C]") {
compile(json5("['=', ['.', 'contact', 'address', 'state'], 'CA']"));
C4Error error;
22 changes: 14 additions & 8 deletions C/tests/c4QueryTest.hh
@@ -90,15 +90,21 @@ class C4QueryTest : public C4Test {
});
}

// Runs query, returning vector of doc IDs
std::vector<std::string> run2(const char* bindings = nullptr) {
// Runs query, returning vector of rows. Columns are comma separated.
std::vector<std::string> run2(const char* bindings = nullptr, unsigned colnCount = 2) {
REQUIRE(colnCount >= 2);
return runCollecting<std::string>(bindings, [&](C4QueryEnumerator* e) {
REQUIRE(FLArrayIterator_GetCount(&e->columns) >= 2);
fleece::alloc_slice c1 = FLValue_ToString(FLArrayIterator_GetValueAt(&e->columns, 0));
fleece::alloc_slice c2 = FLValue_ToString(FLArrayIterator_GetValueAt(&e->columns, 1));
if ( e->missingColumns & 1 ) c1 = "MISSING"_sl;
if ( e->missingColumns & 2 ) c2 = "MISSING"_sl;
return c1.asString() + ", " + c2.asString();
REQUIRE(FLArrayIterator_GetCount(&e->columns) >= colnCount);
std::string res;
for ( unsigned c = 0; c < colnCount; ++c ) {
if ( c > 0 ) res = res + ", ";
if ( e->missingColumns & (1 << c) ) res += "MISSING";
else {
fleece::alloc_slice c1 = FLValue_ToString(FLArrayIterator_GetValueAt(&e->columns, c));
res += c1.asString();
}
}
return res;
});
}

2 changes: 2 additions & 0 deletions C/tests/data/students.json
@@ -0,0 +1,2 @@
{"type":"university","name":"Univ of Michigan","students":[{"id":"student_112","class":"3","order":"1","interests":["violin","baseball"]},{"id":"student_189","class":"2","order":"5","interests":["violin","tennis"]},{"id":"student_1209","class":"3","order":"15","interests":["art","writing"]}]}
{"type":"university","name":"Univ of Pennsylvania","students":[{"id":"student_112","class":"3","order":"1","interests":["piano","swimming"]},{"id":"student_189","class":"2","order":"5","interests":["violin","movies"]}]}
2 changes: 2 additions & 0 deletions C/tests/data/students2.json
@@ -0,0 +1,2 @@
{"type":"university","name":"Univ of Michigan","students":[{"id":"student_112","class":"3","order":"1","extra":{"interests":["violin","baseball"]}},{"id":"student_189","class":"2","order":"5","extra":{"interests":["violin","tennis"]}},{"id":"student_1209","class":"3","order":"15","extra":{"interests":["art","writing"]}}]}
{"type":"university","name":"Univ of Pennsylvania","students":[{"id":"student_112","class":"3","order":"1","extra":{"interests":["piano","swimming"]}},{"id":"student_189","class":"2","order":"5","extra":{"interests":["violin","movies"]}}]}
1 change: 1 addition & 0 deletions LiteCore/Database/DatabaseImpl.cc
@@ -132,6 +132,7 @@ namespace litecore {
options.create = (_config.flags & kC4DB_Create) != 0;
options.writeable = (_config.flags & kC4DB_ReadOnly) == 0;
options.upgradeable = (_config.flags & kC4DB_NoUpgrade) == 0;
options.diskSyncFull = (_config.flags & kC4DB_DiskSyncFull) != 0;
options.useDocumentKeys = true;
options.encryptionAlgorithm = (EncryptionAlgorithm)_config.encryptionKey.algorithm;
if ( options.encryptionAlgorithm != kNoEncryption ) {
8 changes: 3 additions & 5 deletions LiteCore/Storage/DataFile.cc
@@ -123,11 +123,9 @@ namespace litecore {


const DataFile::Options DataFile::Options::defaults = {
{true}, // sequences
true,
true,
true,
true // create, writeable, useDocumentKeys, upgradeable
{true}, // sequences
true, true, true, true, // create, writeable, useDocumentKeys, upgradeable
false // diskSyncFull
};

DataFile::DataFile(const FilePath& path, Delegate* delegate, const DataFile::Options* options)
3 changes: 3 additions & 0 deletions LiteCore/Storage/DataFile.hh
@@ -75,6 +75,7 @@ namespace litecore {
bool writeable : 1; ///< If false, db is opened read-only
bool useDocumentKeys : 1; ///< Use SharedKeys for Fleece docs
bool upgradeable : 1; ///< DB schema can be upgraded
bool diskSyncFull : 1; ///< SQLite PRAGMA synchronous
EncryptionAlgorithm encryptionAlgorithm; ///< What encryption (if any)
alloc_slice encryptionKey; ///< Encryption key, if encrypting
DatabaseTag dbTag;
@@ -273,6 +274,8 @@ namespace litecore {

void setOptions(const Options& o) { _options = o; }

const Options& getOptions() const { return _options; }

void forOpenKeyStores(function_ref<void(KeyStore&)> fn);

virtual Factory& factory() const = 0;
4 changes: 3 additions & 1 deletion LiteCore/Storage/KeyStore.hh
@@ -11,7 +11,9 @@
//

#pragma once
#define LITECORE_CPP_API 1
#ifndef LITECORE_CPP_API
# define LITECORE_CPP_API 1
#endif
#include "IndexSpec.hh"
#include "RecordEnumerator.hh"
#include <optional>
25 changes: 25 additions & 0 deletions LiteCore/tests/c4BaseTest.cc
@@ -15,6 +15,7 @@
#include "c4ExceptionUtils.hh"
#include "fleece/InstanceCounted.hh"
#include "catch.hpp"
#include "DatabaseImpl.hh"
#include "NumConversion.hh"
#include "Actor.hh"
#include "URLTransformer.hh"
@@ -25,6 +26,7 @@
# include "Error.hh"
# include <winerror.h>
#endif
#include <sstream>

using namespace fleece;
using namespace std;
@@ -138,6 +140,29 @@ TEST_CASE("C4Error Reporting Macros", "[Errors][C]") {
#endif
}

TEST_CASE_METHOD(C4Test, "Database Flag FullSync", "[Database][C]") {
// Ensure that, by default, diskSyncFull is false.
CHECK(!litecore::asInternal(db)->dataFile()->options().diskSyncFull);

C4DatabaseConfig2 config = *c4db_getConfig2(db);
config.flags |= kC4DB_DiskSyncFull;

std::stringstream ss;
ss << std::string(c4db_getName(db)) << "_" << c4_now();
c4::ref<C4Database> dbWithFullSync = c4db_openNamed(slice(ss.str().c_str()), &config, ERROR_INFO());
// The flag in config is passed to DataFile options.
CHECK(litecore::asInternal(dbWithFullSync)->dataFile()->options().diskSyncFull);

config.flags &= ~kC4DB_DiskSyncFull;
c4::ref<C4Database> otherConnection = c4db_openNamed(c4db_getName(dbWithFullSync), &config, ERROR_INFO());
// The flag applies per connection opened with the config.
CHECK(!litecore::asInternal(otherConnection)->dataFile()->options().diskSyncFull);

c4::ref<C4Database> againConnection = c4db_openAgain(dbWithFullSync, ERROR_INFO());
// The flag is passed to database opened by openAgain.
CHECK(litecore::asInternal(againConnection)->dataFile()->options().diskSyncFull);
}

#pragma mark - INSTANCECOUNTED:

namespace {
4 changes: 2 additions & 2 deletions Networking/HTTP/HTTPLogic.cc
@@ -94,8 +94,8 @@ namespace litecore::net {
// the new type can be handled the same way.
if ( _isWebSocket ) {
Address address = {_address.scheme() == "wss"_sl ? "https"_sl : "http"_sl, _address.hostname(),
_address.port(), _address.url()};
rq << string(Address::toURL(*(C4Address*)&address));
_address.port(), _address.path()};
rq << string(Address::toURL(*(C4Address*)address));
} else {
rq << string(_address.url());
}