Hi,

I am experiencing this issue with `gpq`:

```
gpq: error: failed to create schema after reading 39 features
```

Based on #142 the answer is clear: there are no non-null values in any of the features for one of the columns. Indeed, if I edit the file and add just one value, everything works fine.

The problem is that, unlike in the linked issue, it is not possible for me to increase the number of rows scanned, because all the rows have nulls, and this case is pretty common with the files I am dealing with.

While this strict behaviour is understandable as a default, it is preventing me from adopting the tool. The `ogr2ogr` behaviour is maybe questionable (in my case the offending column is added as a `string` instead of an `int`), but it at least produces output that is usable.

So perhaps an option like `--drop-non-inferrable-columns` or `--import-ambiguous-columns-as-strings` would be a useful escape hatch for gpq users.

(Pre-processing the JSON is of course an option too, but more involved.)
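For reference, a trimmed-down file along these lines should reproduce the error; the property name `height` is just a made-up placeholder for the all-null column:

```json
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": {"type": "Point", "coordinates": [0, 0]},
      "properties": {"id": 1, "height": null}
    },
    {
      "type": "Feature",
      "geometry": {"type": "Point", "coordinates": [1, 1]},
      "properties": {"id": 2, "height": null}
    }
  ]
}
```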
I agree that there should be a way to handle this. OGR will encode these “unknown” types as JSON strings.
In your case, I imagine you would want an optional integer type. The challenge is coming up with a command line syntax that is convenient and flexible. Referring to a secondary file with a schema, or other complex options, might be nicer.
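To make that concrete, such a secondary file could be something minimal like this (purely illustrative; no such option exists in gpq today, and the field name is hypothetical):

```json
{
  "fields": {
    "height": {"type": "int64", "optional": true}
  }
}
```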
Being able to specify the schema would be nice for a perfect output, yes. I think even just optionally treating unknown fields as strings, or dropping non-inferrable columns altogether, would be a nice improvement though (and acceptable for my personal use case).
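In the meantime, something along these lines works as a pre-processing step; it is a rough sketch that assumes a plain GeoJSON FeatureCollection on disk, and the function and file names are just placeholders:

```python
import json
import sys


def drop_all_null_columns(path_in, path_out):
    """Remove properties that are null in every feature so gpq can infer a schema."""
    with open(path_in) as f:
        collection = json.load(f)

    features = collection["features"]

    # Collect property names that have at least one non-null value.
    inferable = set()
    for feature in features:
        for key, value in (feature.get("properties") or {}).items():
            if value is not None:
                inferable.add(key)

    # Drop the all-null properties from every feature.
    for feature in features:
        props = feature.get("properties") or {}
        feature["properties"] = {k: v for k, v in props.items() if k in inferable}

    with open(path_out, "w") as f:
        json.dump(collection, f)


if __name__ == "__main__":
    # Usage: python drop_null_columns.py input.geojson cleaned.geojson
    drop_all_null_columns(sys.argv[1], sys.argv[2])
```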