[Feature Request] Save Pins to Databricks #839
Thanks for this suggestion! 🙌 Can you share some specifics about how and what you would like to store in Databricks, perhaps highlighting what is different from the workflows supported by sparklyr?
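(For context, the sparklyr workflow being contrasted presumably looks something like the sketch below. This is a hedged example assuming sparklyr's Databricks Connect backend via the {pysparklyr} package; the cluster ID is a placeholder, and credentials are read from `DATABRICKS_HOST`/`DATABRICKS_TOKEN` environment variables.)

```r
library(sparklyr)

# Assumes the Databricks Connect backend provided by {pysparklyr};
# the cluster ID below is a placeholder.
sc <- spark_connect(
  method     = "databricks_connect",
  cluster_id = "0123-456789-abcdefgh"
)

# sparklyr works in terms of Spark tables...
copy_to(sc, mtcars, "mtcars_spark", overwrite = TRUE)

# ...whereas pins would store arbitrary R objects as files.
dplyr::tbl(sc, "mtcars_spark") |> head()
```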
I have a use case for this, although I'd be interested in suggestions if there's a better solution. I work a lot with survey data that comes in an SPSS format. There are Unity Catalog Volumes, but I can't figure out how to store factors on Databricks while retaining read access from my local machine. Being able to save and read pins in Databricks would solve this, because I could write the data directly to a volume. Thanks for all your work on this package!
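To make the ask concrete, here is a rough sketch of how this could work, assuming a hypothetical `board_databricks()` constructor pointed at a Unity Catalog Volume. The function name and the Volume path are illustrative, modeled on pins' existing `board_s3()`/`board_azure()` boards:

```r
library(pins)

# Hypothetical constructor, modeled on board_s3()/board_azure();
# the Volume path below is a placeholder.
board <- board_databricks("/Volumes/main/survey/pins")

# Writing as RDS would round-trip factors exactly...
board |> pin_write(survey_data, "survey-waves", type = "rds")

# ...and the pin would be readable from Databricks or a local
# session alike, as long as you can authenticate to the workspace.
survey_data <- board |> pin_read("survey-waves")
```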
In general, I think accessing/storing information in Databricks Volumes provides some great benefits.
Through the Python {databricks-sdk} you can look up a volume and its storage location. A pseudo-example for reading in a directory of `.yaml` files:

```python
from databricks.sdk import WorkspaceClient

# catalog, database, and volume are defined elsewhere
name = f"{catalog}.{database}.{volume}"

wc = WorkspaceClient()
volume = wc.volumes.read(name)

spark.read.text(
    paths=volume.storage_location,  # directory
    wholetext=True,                 # one row per file
    pathGlobFilter="*.yaml",
)
```
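As an aside, and only as a sketch I haven't fully verified: the SDK's Files API should also be able to fetch a single Volume file directly, without going through Spark at all, which is closer to what a pins board would need. From R via {reticulate}, that might look like this (the Volume path is a placeholder):

```r
library(reticulate)

sdk <- import("databricks.sdk")
w <- sdk$WorkspaceClient()

# Files API download of one file from a Volume; the path is illustrative.
resp <- w$files$download("/Volumes/{catalog}/{database}/{volume}/data.yaml")

# resp$contents is a Python binary stream; read() returns bytes,
# which reticulate converts to an R raw vector.
yaml_text <- rawToChar(resp$contents$read())
```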
A more complete example in R, via {reticulate} and the {databricks} REST API client:
```r
library(reticulate)

# https://github.com/databrickslabs/databricks-sdk-r
# package for using the REST API in R
library(databricks)

client <- DatabricksClient()

# this can also be accomplished through reticulate
volume <-
  client |>
  read_volume("{catalog}.{database}.{volume}")
location <- volume$storage_location

# grab a cluster through which I can access more data
clusters <-
  client |>
  list_clusters() |>
  subset(startsWith(creator_user_name, "jbarbone"))
cluster <- clusters$cluster_id[1]

# requires the {databricks-sdk} and {databricks-connect} Python packages
db <- import("databricks.sdk")
connect <- import("databricks.connect")

w <- db$WorkspaceClient()
volume <- w$volumes$read("{catalog}.{database}.{volume}")
location <- volume$storage_location

pyspark <- import("pyspark")
spark <-
  connect$DatabricksSession$
  builder$
  profile("DEFAULT")$
  clusterId(cluster)$
  getOrCreate()

content <- spark$read$text(
  paths = location,
  wholetext = TRUE,
  pathGlobFilter = "*.yaml"
)
```

Through the REST API you can find the storage location (https://docs.databricks.com/api/workspace/volumes/list), but you may need to go through the Spark context to actually read the data. The R example seems to work fine for me; locally I just have a `.databrickscfg` file with a `DEFAULT` profile.
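For reference, the `profile("DEFAULT")` call above reads credentials from the Databricks config file, conventionally `~/.databrickscfg`. A minimal profile might look like this (the host and token values are placeholders):

```ini
[DEFAULT]
host  = https://<workspace-url>.cloud.databricks.com
token = <personal-access-token>
```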
Dear Development Team,
Given the increasing collaboration between Posit and Databricks, I believe that the capability to store pins in Databricks, as is already possible with platforms such as S3 and Azure, could prove to be an appealing feature for enterprise clients.
Sincerely,