
[Feature Request] Save Pins to Databricks #839

Open
viv-analytics opened this issue Aug 31, 2024 · 3 comments
Labels
boards 🧑‍🏫 · feature (a feature request or enhancement)

Comments

@viv-analytics

Dear Development Team,

Given the increasing collaboration between Posit and Databricks, I believe the ability to store pins in Databricks, alongside platforms that are already supported such as S3 and Azure, could be an appealing feature for enterprise clients.

Sincerely,

@juliasilge
Member

Thanks for this suggestion! 🙌

Can you share some specifics about how and what you would like to store in Databricks, perhaps highlighting what is different from the workflows supported by sparklyr? Like this:
https://spark.posit.co/deployment/databricks-connect.html
Using sparklyr has some real benefits over something like pins, such as being able to execute SQL queries.
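For reference, a minimal sparklyr connection via Databricks Connect looks roughly like this (the cluster id and table name are placeholders, and DATABRICKS_HOST/DATABRICKS_TOKEN are assumed to be set in the environment):

library(sparklyr)

# connect to a Databricks cluster from a local R session
sc <- spark_connect(
  cluster_id = "<cluster-id>",
  method = "databricks_connect"
)

# sparklyr lets you run SQL directly against the cluster
sdf_sql(sc, "SELECT * FROM samples.nyctaxi.trips LIMIT 10")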

@wklimowicz

I have a use case for this, though I'd be interested in suggestions if there's a better solution.

I work a lot with survey data that comes in the SPSS .sav file format. .sav is great because haven reads in the labels as ordered factors, so it retains the information behind the order of responses like c("Strongly agree", "Agree", ...) or c("Yes", "Maybe", "No"). I want to keep the data in a form where the factors are retained, which rules out standard Databricks tables.
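The read step is roughly this (the file name is a placeholder):

library(haven)

svy <- read_sav("survey.sav")          # value labels arrive as labelled vectors
svy <- as_factor(svy, ordered = TRUE)  # convert to factors, keeping the original response order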

There are Unity Catalog Volumes, but I can't figure out how to store factors on Databricks while retaining read access from my local machine. You can save and read .sav files from a Volume, but only when using a notebook in the browser. When running sparklyr from my local machine, reading only seems to work with spark_read_csv. There is no spark_read_sav, and if I convert to an intermediate format like Parquet, it only seems to work from a notebook (running spark_read_parquet from my local machine errors out; the documentation is a bit sparse, so I'm not sure whether this is a bug or unsupported behaviour).

Pins in Databricks would solve this, because I could write the data directly to board_databricks as Rds or qs. I would also want to use it as the default location for pins anyway: my organisation has board_ms365 disabled, and the permissions on the shared network drive are a bit opaque. This solution would also be in the Databricks spirit of having all your data and governance on a single platform.
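The workflow I have in mind is something like this sketch (board_databricks and its argument are hypothetical here, just to illustrate the idea):

library(pins)

# hypothetical: a board backed by a Unity Catalog Volume
board <- board_databricks("/Volumes/main/default/pins")

# survey_data: the data frame of factors read with haven above
pin_write(board, survey_data, name = "survey_2024", type = "rds")
pin_read(board, "survey_2024")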

Thanks for all your work on this package!

@jmbarbone

In general, I think accessing/storing information in Databricks Volumes provides some great benefits.

  • the storage location is closer to the processing/working location (we're not jumping out of Databricks for files)
  • Volumes are backed by Unity Catalog, so orgs can control access to the information we'd want to write with {pins}
  • Volumes can hold unstructured data
  • Volumes look like file systems: they can hold multiple files and folders

Through the databricks-sdk you can manage (read/write) Volumes: https://docs.databricks.com/en/dev-tools/sdk-python.html#files-in-volumes

A pseudo-example for reading a directory of .yaml files:

from databricks.sdk import WorkspaceClient

# three-level Unity Catalog name; catalog/database/volume are placeholders
name = f"{catalog}.{database}.{volume}"

wc = WorkspaceClient()  # credentials from ~/.databrickscfg or environment variables
volume_info = wc.volumes.read(name)

# `spark` is assumed to be a session from databricks-connect
spark.read.text(
    paths=volume_info.storage_location,  # directory backing the Volume
    wholetext=True,                      # one row per file
    pathGlobFilter="*.yaml",
)

A fuller example in R via {reticulate}:

library(reticulate)

# https://github.com/databrickslabs/databricks-sdk-r
# package for using the Databricks REST API from R
library(databricks)

client <- DatabricksClient()

# this can also be accomplished through reticulate
volume <-
  client |>
  read_volume("{catalog}.{database}.{volume}")  # placeholder three-level name

location <- volume$storage_location

# grab a cluster I have access to
clusters <-
  client |>
  list_clusters() |>
  subset(startsWith(creator_user_name, "jbarbone"))

cluster <- clusters$cluster_id[1]

# requires the databricks-sdk and databricks-connect Python packages
db <- import("databricks.sdk")
connect <- import("databricks.connect")

w <- db$WorkspaceClient()
volume <- w$volumes$read("{catalog}.{database}.{volume}")
location <- volume$storage_location

spark <-
  connect$DatabricksSession$
  builder$
  profile("DEFAULT")$
  clusterId(cluster)$
  getOrCreate()

content <- spark$read$text(
  paths = location,
  wholetext = TRUE,
  pathGlobFilter = "*.yaml"
)

Through the REST API you can find the storage location (https://docs.databricks.com/api/workspace/volumes/list), but you may need to access the Spark context to read the data. The R example seems to work fine for me; locally I just have a ~/.databrickscfg file with some profiles ("DEFAULT" is used above). The cluster/user permissions would need to be set by the user, so what {pins} needs may be pretty minimal?
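For anyone who hasn't set one up, ~/.databrickscfg is just an INI-style file with one section per profile (values here are placeholders):

[DEFAULT]
host  = https://<workspace-instance>.cloud.databricks.com
token = <personal-access-token>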

@juliasilge added the feature (a feature request or enhancement) and boards 🧑‍🏫 labels on Sep 19, 2024