
Clarify specification around type conversion via Column.get(int, Class) #210

Open
lukaseder opened this issue Apr 9, 2021 · 7 comments

@lukaseder (Contributor)

Feature Request

See also discussion here:
https://groups.google.com/g/r2dbc/c/0fpr5DY3big

Is your feature request related to a problem? Please describe

In https://r2dbc.io/spec/0.9.0.M1/spec/html/#rows, the specification claims that

When you specify a target type, the R2DBC driver tries to convert the value to the target type

This wording is a bit vague and allows for either assumption:

  • The behaviour is deliberately implementation-specific, and clients will have to implement conversions themselves, or
  • The behaviour is implementation-specific to some extent, but some "reasonable" conversions can be expected to work

Describe the solution you'd like

I'm assuming that some "reasonable" conversions can be safely expected to work on all drivers, but this is currently not the case, see e.g.:

I'd like the specification to be more specific. I don't have an opinion on which direction it should go, but as a client implementor, I would like the certainty that certain conversions can safely be expected to work, now and in the future.
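To illustrate what the first reading implies in practice, here is a hedged sketch (purely hypothetical, not taken from any driver or from the spec): if conversions remain implementation-specific, every client ends up maintaining a fallback conversion layer like this around the values it reads back from `Row.get(int, Class)`.

```java
import java.math.BigDecimal;

// Hypothetical client-side fallback: if a driver's Row.get(int, Class)
// doesn't support a "reasonable" conversion, the client converts itself.
// The supported pairs below are illustrative, not normative.
public class ClientConversions {

    @SuppressWarnings("unchecked")
    static <T> T convert(Object value, Class<T> target) {
        if (value == null || target.isInstance(value))
            return (T) value;
        if (target == String.class)
            return (T) value.toString();
        if (value instanceof Number) {
            Number n = (Number) value;
            if (target == Integer.class)
                return (T) Integer.valueOf(n.intValue());
            if (target == Long.class)
                return (T) Long.valueOf(n.longValue());
            if (target == BigDecimal.class)
                return (T) new BigDecimal(n.toString());
        }
        throw new IllegalArgumentException(
            "Unsupported conversion: " + value.getClass().getName()
            + " -> " + target.getName());
    }

    public static void main(String[] args) {
        // A driver might hand back an Integer where the client wants a Long:
        System.out.println(convert(42, Long.class));                  // 42
        System.out.println(convert(new BigDecimal("1.5"), String.class)); // 1.5
    }
}
```

The point is not that this code is hard to write, but that every R2DBC consumer would have to duplicate it, with subtly different supported pairs.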

@lukaseder lukaseder changed the title Clarify specification around Clarify specification around Column.get(int, Class) Apr 9, 2021
@mp911de mp911de changed the title Clarify specification around Column.get(int, Class) Clarify specification around type conversion via Column.get(int, Class) Apr 19, 2021
@lukaseder (Contributor, Author)

I'd like to double down on this lack of specification clarity here. There are only a few drivers yet, and already I don't know whether I should use byte[].class (the JDBC type), ByteBuffer.class (the type R2dbcType.BLOB maps to), or io.r2dbc.spi.Blob. For example:

create table t (b blob);
insert into t values (null);

And then:

Class<?> klass = ...?

System.out.println(
    Flux.from(cf.create())
        .flatMap(c -> c.createStatement("select b from t").execute())
        .flatMap(it -> it.map((r, m) -> Optional.ofNullable(r.get(0, klass))))
        .collectList()
        .block()
);

What do I put for klass? The Oracle driver (to my surprise) only supports io.r2dbc.spi.Blob. That's not an unreasonable type to support, but I would really have expected R2dbcType.BLOB.getJavaType() (which maps to ByteBuffer) to work. However, I don't know whether I should report a bug to https://github.com/oracle/oracle-r2dbc

@mp911de (Member)

mp911de commented Jun 30, 2021

For real BLOBs that can contain a significant amount of data (an amount that doesn't easily fit into your heap), please use Blob, as per https://r2dbc.io/spec/0.8.5.RELEASE/spec/html/#datatypes.mapping.binary. ByteBuffer should be generally supported, but there's a limitation for databases that use locators, such as Oracle. A locator requires additional database interaction, which may cause blocking behaviour upon Row.get(0, ByteBuffer.class). I think that because of this database-specific limitation, they decided not to support ByteBuffer, to avoid surprises.
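For context, this is roughly what materializing a BLOB into one buffer looks like. The sketch below is standalone (no driver involved, names are mine): it aggregates a sequence of ByteBuffer chunks, such as a Blob#stream() would emit, into a single contiguous buffer. This aggregation is the step that pulls the entire BLOB onto the heap, and it is what a locator-based driver would have to do behind a seemingly simple Row.get(0, ByteBuffer.class) call.

```java
import java.nio.ByteBuffer;
import java.util.List;

// Standalone sketch: concatenate already-fetched chunks into one buffer.
// With a locator-based BLOB, fetching those chunks requires further
// round trips to the database, which is the blocking hazard described above.
public class BlobToBuffer {

    static ByteBuffer concat(List<ByteBuffer> chunks) {
        int total = chunks.stream().mapToInt(ByteBuffer::remaining).sum();
        ByteBuffer out = ByteBuffer.allocate(total); // whole BLOB on the heap
        chunks.forEach(out::put);
        out.flip(); // prepare the aggregated buffer for reading
        return out;
    }

    public static void main(String[] args) {
        ByteBuffer all = concat(List.of(
            ByteBuffer.wrap(new byte[] { 1, 2 }),
            ByteBuffer.wrap(new byte[] { 3, 4, 5 })));
        System.out.println(all.remaining()); // 5
    }
}
```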

@lukaseder (Contributor, Author)

I don't remember ever having used those Oracle BLOB features. When I stored images or PDFs in BLOBs, byte[] always sufficed with JDBC. Correctly streaming chunked BLOBs while implementing correct resource management is extremely hard, both for users and for integrators. Maybe a 5GB BLOB doesn't belong in the database, but on a simple file system?

Anyway. Thanks for the clarification on this part.

@mp911de (Member)

mp911de commented Jun 30, 2021

Maybe, a 5GB BLOB doesn't belong in the database but on a simple file system?

Fully agree. According to their documentation, a BLOB can hold "TB's of data", so I mentioned that for full transparency and consideration.

@lukaseder (Contributor, Author)

I've raised the question in this discussion: oracle/oracle-r2dbc#17 (comment). Curious about the findings here.

@Michael-A-McMahon (Contributor)

I'd really like for Oracle R2DBC to support the ByteBuffer mapping of BLOB, but so far I've been unable to identify a good way to do it. I posted my thoughts about this here:
oracle/oracle-r2dbc#32 (comment)

Maybe someone can think of a better solution? If so, then please post on the issue I've linked to above. Any discussion is welcome!

@Michael-A-McMahon (Contributor)

Going back to the original topic: Section 14 of the R2DBC specification provides a "guideline" of SQL-to-Java type mappings. If we can expect all R2DBC drivers to support these mappings, does this offer enough clarity to an R2DBC consumer?
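To make that proposal concrete, here is a sketch of the kind of baseline mapping table a consumer might derive from such a guideline. The entries below are my own illustrative reading, not normative text from the spec; BLOB is deliberately omitted, since its mapping is exactly what this thread disputes.

```java
import java.nio.ByteBuffer;
import java.time.LocalDate;
import java.util.Map;

// Illustrative only: a possible baseline of SQL-to-Java mappings that a
// consumer might expect every driver to honour. Not the spec's actual table.
public class CanonicalMappings {

    static final Map<String, Class<?>> SQL_TO_JAVA = Map.of(
        "CHAR",      String.class,
        "VARCHAR",   String.class,
        "BOOLEAN",   Boolean.class,
        "INTEGER",   Integer.class,
        "BIGINT",    Long.class,
        "DATE",      LocalDate.class,
        "VARBINARY", ByteBuffer.class
    );

    public static void main(String[] args) {
        System.out.println(SQL_TO_JAVA.get("BIGINT")); // class java.lang.Long
    }
}
```

If the spec promoted its guideline table from "guideline" to a minimum requirement, a table like this is what clients could program against.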
