dbfs paths not supported #1376
Comments
@MrPowers to the best of my knowledge there is no REST API for DBFS, nor any open "file system provider" for what DBFS actually is. Does Databricks make third-party interoperability with DBFS possible?
@rtyler - yea, I'm not sure. Perhaps I have to figure out another way to get the path to the data.
I have the same issue when using a mounted ADLS Gen2 in an Azure ML Studio job. I want to write, and it fails on writing the log; the Parquet file is correctly written. This is ADLS Gen2 with hierarchical namespace enabled.
I also ran into this issue with Azure ML: writing to mounted storage is not supported. The way I work around it now is that I don't mount, but write to the ADLS Gen2 container directly.
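A minimal sketch of the workaround described above: instead of writing through the mount, address the ADLS Gen2 container directly by its `abfss://` URI. The helper function and the account/container names here are illustrative, not part of the deltalake API.

```python
# Hypothetical helper: build an abfss:// URI so delta-rs can write to the
# ADLS Gen2 container directly instead of going through a mount point.
def adls2_table_uri(account: str, container: str, path: str) -> str:
    """Return the direct abfss:// URI for a table path in an ADLS Gen2 container."""
    return f"abfss://{container}@{account}.dfs.core.windows.net/{path.lstrip('/')}"

uri = adls2_table_uri("myaccount", "mycontainer", "tables/events")
print(uri)  # abfss://mycontainer@myaccount.dfs.core.windows.net/tables/events
# In a real job you would then pass this URI plus credentials, e.g.
# deltalake.write_deltalake(uri, df, storage_options={...})
```

The trade-off, as noted in the next comment, is that the output is no longer tracked as a job output by the mount machinery.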
I solved it the same way, but that means my jobs aren't as clear (the output is not a job output but a hidden API call) 😅 Thanks for responding!
…rd link (#1868): compatible with writing to local file systems that do not support hard links.
# Description
When we write to the local file system, hard links are sometimes not supported (for example on blobfuse, goofys, or s3fs), so handle that case for compatibility. Note that blobfuse has a further problem: rename reports errors, because the file handle was not released before the rename. See #1765 for details. arrow-rs needs a corresponding change, for example: https://github.com/GlareDB/arrow-rs/pull/2/files. Because object_store has been upgraded to 0.8 there are many breaking changes, so that part is left unchanged for now; it will be fixed after upgrading to 0.8 (#1858).
# Related Issue(s)
#1765 #1376
# Documentation
Should work now for mounted storage with the change in #1868.
Environment
Delta-rs version: 0.9.0
Binding: Python
Environment:
Bug
What happened: Tried to instantiate a DeltaTable from a DBFS path, like this:
deltalake.DeltaTable("dbfs:/some-thing/some_dir")
What you expected to happen: I expected this to work. This works:
spark.read.format("delta").load("dbfs:/some-thing/some_dir").show()
How to reproduce it: Create a Delta table in Databricks with a DBFS path and then try to instantiate a deltalake.DeltaTable from it. Should be relatively easy to reproduce.
More details: N/A
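Since the `dbfs:/` scheme itself is not supported, one possible workaround on a Databricks cluster is to go through the `/dbfs` FUSE mount instead of the URI. This is a sketch under that assumption; the helper function is ours, not part of the deltalake API, and whether the FUSE path works depends on the cluster configuration.

```python
def dbfs_to_local(path: str) -> str:
    """Translate a dbfs:/ URI into the /dbfs FUSE mount path that
    Databricks clusters expose (hypothetical helper, not a deltalake API)."""
    prefix = "dbfs:/"
    if not path.startswith(prefix):
        raise ValueError(f"not a DBFS path: {path}")
    return "/dbfs/" + path[len(prefix):]

print(dbfs_to_local("dbfs:/some-thing/some_dir"))  # /dbfs/some-thing/some_dir
# On a cluster you might then try:
# deltalake.DeltaTable(dbfs_to_local("dbfs:/some-thing/some_dir"))
```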