3 changes: 2 additions & 1 deletion NEXT_CHANGELOG.md
@@ -6,7 +6,8 @@

### New Features and Improvements

- * Add `provider_config` support for manual plugin framework resources and data sources([#5127](https://github.com/databricks/terraform-provider-databricks/pull/5127))
+ * Add `provider_config` support for manual plugin framework resources and data sources ([#5127](https://github.com/databricks/terraform-provider-databricks/pull/5127))
+ * Added `databricks_permission` resource for managing permissions on Databricks objects for individual principals ([#5161](https://github.com/databricks/terraform-provider-databricks/pull/5161)).

### Bug Fixes

282 changes: 282 additions & 0 deletions docs/resources/permission.md
@@ -0,0 +1,282 @@
---
subcategory: "Security"
---

# databricks_permission Resource

This resource allows you to manage permissions for a single principal on a Databricks workspace object. Unlike `databricks_permissions`, which manages all principals' permissions for an object at once, this resource is authoritative for a specific object-principal pair only.

-> This resource can only be used with a workspace-level provider!

~> This resource is _authoritative_ for the specified object-principal pair. Configuring this resource will manage the permission for the specified principal only, without affecting permissions for other principals.
**Contributor comment:** Maybe good to note that you can't use both `databricks_permission` and `databricks_permissions` for the same object.

-> Use `databricks_permissions` when you need to manage all permissions for an object in a single resource. Use `databricks_permission` (singular) when you want to manage permissions for individual principals independently.

## Example Usage

### Cluster Permissions

```hcl
resource "databricks_cluster" "shared" {
cluster_name = "Shared Analytics"
spark_version = data.databricks_spark_version.latest.id
node_type_id = data.databricks_node_type.smallest.id
autotermination_minutes = 20

autoscale {
min_workers = 1
max_workers = 10
}
}

resource "databricks_group" "data_engineers" {
display_name = "Data Engineers"
}

# Grant CAN_RESTART permission to a group
resource "databricks_permission" "cluster_de" {
cluster_id = databricks_cluster.shared.id
group_name = databricks_group.data_engineers.display_name
permission_level = "CAN_RESTART"
}

# Grant CAN_ATTACH_TO permission to a user
resource "databricks_permission" "cluster_analyst" {
cluster_id = databricks_cluster.shared.id
user_name = "[email protected]"
permission_level = "CAN_ATTACH_TO"
}
```

### Job Permissions

```hcl
resource "databricks_job" "etl" {
name = "ETL Pipeline"

task {
task_key = "process_data"

notebook_task {
notebook_path = "/Shared/ETL"
}

new_cluster {
num_workers = 2
spark_version = data.databricks_spark_version.latest.id
node_type_id = data.databricks_node_type.smallest.id
}
}
}

# Grant CAN_MANAGE to a service principal
resource "databricks_permission" "job_sp" {
job_id = databricks_job.etl.id
service_principal_name = databricks_service_principal.automation.application_id
permission_level = "CAN_MANAGE"
}

# Grant CAN_VIEW to a group
resource "databricks_permission" "job_viewers" {
job_id = databricks_job.etl.id
group_name = "Data Viewers"
permission_level = "CAN_VIEW"
}
```

### Notebook Permissions

```hcl
resource "databricks_notebook" "analysis" {
path = "/Shared/Analysis"
language = "PYTHON"
content_base64 = base64encode(<<-EOT
# Analysis Notebook
print("Hello, World!")
EOT
)
}

# Grant CAN_RUN to a user
resource "databricks_permission" "notebook_user" {
notebook_path = databricks_notebook.analysis.path
user_name = "[email protected]"
permission_level = "CAN_RUN"
}

# Grant CAN_EDIT to a group
resource "databricks_permission" "notebook_editors" {
notebook_path = databricks_notebook.analysis.path
group_name = "Notebook Editors"
permission_level = "CAN_EDIT"
}
```

### Token Permissions

This resource removes the limitation that all token permissions must be defined in a single `databricks_permissions` resource:

```hcl
# Multiple resources can now manage different principals independently
resource "databricks_permission" "tokens_team_a" {
  authorization    = "tokens"
  group_name       = "Team A"
  permission_level = "CAN_USE"
}

resource "databricks_permission" "tokens_team_b" {
  authorization    = "tokens"
  group_name       = "Team B"
  permission_level = "CAN_USE"
}

resource "databricks_permission" "tokens_service_account" {
  authorization          = "tokens"
  service_principal_name = databricks_service_principal.ci_cd.application_id
  permission_level       = "CAN_USE"
}
```

### SQL Endpoint Permissions

```hcl
resource "databricks_sql_endpoint" "analytics" {
name = "Analytics Warehouse"
cluster_size = "Small"
max_num_clusters = 1
}

resource "databricks_permission" "warehouse_users" {
sql_endpoint_id = databricks_sql_endpoint.analytics.id
group_name = "SQL Users"
permission_level = "CAN_USE"
}
```

## Argument Reference

The following arguments are required:

* `permission_level` - (Required) The permission level to grant. The available permission levels depend on the object type. Common values include `CAN_MANAGE`, `CAN_USE`, `CAN_VIEW`, `CAN_RUN`, `CAN_EDIT`, `CAN_READ`, `CAN_RESTART`, `CAN_ATTACH_TO`.

Exactly one of the following principal identifiers must be specified:

* `user_name` - (Optional) User email address to grant permissions to. Conflicts with `group_name` and `service_principal_name`.
* `group_name` - (Optional) Group name to grant permissions to. Conflicts with `user_name` and `service_principal_name`.
* `service_principal_name` - (Optional) Application ID of the service principal. Conflicts with `user_name` and `group_name`.

Exactly one of the following object identifiers must be specified:
**Contributor comment:** Just curious: do you like the pattern of having a separate field for each permission type? I was thinking that it might be more scalable to split a bit from how `databricks_permissions` works and instead have `object_type` and `object_id` fields. That way, as new object types are added, this resource doesn't need to be updated. I think the IDs are all the same type IIUC. We should then point people to the REST API docs for the set of allowed types.

**Author reply:** It could be easier to read in some cases, but let me try to play with it.

**Author reply (@alexott, Nov 4, 2025):** Also, we use the specific field names to validate which permission levels are supported, and for additional customization. So it makes sense to support only explicit fields. Plus, it keeps compatibility with the `databricks_permissions` resource.

* `cluster_id` - (Optional) ID of the [databricks_cluster](cluster.md).
* `cluster_policy_id` - (Optional) ID of the [databricks_cluster_policy](cluster_policy.md).
* `instance_pool_id` - (Optional) ID of the [databricks_instance_pool](instance_pool.md).
* `job_id` - (Optional) ID of the [databricks_job](job.md).
* `pipeline_id` - (Optional) ID of the [databricks_pipeline](pipeline.md).
* `notebook_id` - (Optional) ID of the [databricks_notebook](notebook.md). Can be used when the notebook is referenced by ID.
* `notebook_path` - (Optional) Path to the [databricks_notebook](notebook.md).
* `directory_id` - (Optional) ID of the [databricks_directory](directory.md).
* `directory_path` - (Optional) Path to the [databricks_directory](directory.md).
* `workspace_file_id` - (Optional) ID of the [databricks_workspace_file](workspace_file.md).
* `workspace_file_path` - (Optional) Path to the [databricks_workspace_file](workspace_file.md).
* `registered_model_id` - (Optional) ID of the [databricks_mlflow_model](mlflow_model.md).
* `experiment_id` - (Optional) ID of the [databricks_mlflow_experiment](mlflow_experiment.md).
* `sql_dashboard_id` - (Optional) ID of the legacy [databricks_sql_dashboard](sql_dashboard.md).
* `sql_endpoint_id` - (Optional) ID of the [databricks_sql_endpoint](sql_endpoint.md).
* `sql_query_id` - (Optional) ID of the [databricks_query](query.md).
* `sql_alert_id` - (Optional) ID of the [databricks_alert](alert.md).
* `dashboard_id` - (Optional) ID of the [databricks_dashboard](dashboard.md) (Lakeview).
* `repo_id` - (Optional) ID of the [databricks_repo](repo.md).
* `repo_path` - (Optional) Path to the [databricks_repo](repo.md).
* `authorization` - (Optional) Type of authorization. Currently supports `tokens` and `passwords`.
* `serving_endpoint_id` - (Optional) ID of the [databricks_model_serving](model_serving.md) endpoint.
* `vector_search_endpoint_id` - (Optional) ID of the [databricks_vector_search_endpoint](vector_search_endpoint.md).

## Attribute Reference

In addition to all arguments above, the following attributes are exported:

* `id` - The ID of the permission in the format `/<object_type>/<object_id>/<principal>`.
* `object_type` - The type of object (e.g., `clusters`, `jobs`, `notebooks`).

## Import

Permissions can be imported using the format `/<object_type>/<object_id>/<principal>`. For example:

```bash
terraform import databricks_permission.cluster_analyst /clusters/0123-456789-abc12345/[email protected]
```
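
The principal segment matches the identifier used for the principal (the user email in the example above). As a hedged illustration, importing a grant made to a service principal on a job would presumably use the application ID as the principal segment; the IDs below are made up:

```bash
# Hypothetical object and principal IDs, for illustration only
terraform import databricks_permission.job_sp /jobs/123/aabbccdd-1122-3344-5566-778899aabbcc
```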

## Comparison with databricks_permissions

### When to use `databricks_permission` (singular)

* You want to manage permissions for individual principals independently
* Different teams manage permissions for different principals on the same object
* You need to avoid the "all-or-nothing" approach of `databricks_permissions`
* You want to add/remove principals without affecting others
* Special cases like token permissions where multiple independent configurations are needed

### When to use `databricks_permissions` (plural)

* You want to manage all permissions for an object in one place
* You have a small, stable set of principals for an object
* You want to ensure no unexpected permissions exist on the object
* You prefer a single source of truth for all permissions on an object

### Example Comparison

**Using `databricks_permissions` (manages ALL principals)**:

```hcl
resource "databricks_permissions" "cluster_all" {
cluster_id = databricks_cluster.shared.id

access_control {
group_name = "Data Engineers"
permission_level = "CAN_RESTART"
}

access_control {
user_name = "[email protected]"
permission_level = "CAN_ATTACH_TO"
}

# Adding a third principal requires modifying this resource
}
```

**Using `databricks_permission` (manages ONE principal per resource)**:

```hcl
resource "databricks_permission" "cluster_de" {
cluster_id = databricks_cluster.shared.id
group_name = "Data Engineers"
permission_level = "CAN_RESTART"
}

resource "databricks_permission" "cluster_analyst" {
cluster_id = databricks_cluster.shared.id
user_name = "[email protected]"
permission_level = "CAN_ATTACH_TO"
}

# Adding a third principal is a separate resource
# No need to modify existing resources
resource "databricks_permission" "cluster_viewer" {
cluster_id = databricks_cluster.shared.id
group_name = "Viewers"
permission_level = "CAN_ATTACH_TO"
}
```
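
Because each grant is its own resource, `databricks_permission` also composes naturally with `for_each`. A minimal sketch (the `restart_groups` variable and its group names are hypothetical):

```hcl
variable "restart_groups" {
  type    = set(string)
  default = ["Data Engineers", "Platform Ops"]
}

# One databricks_permission instance per group
resource "databricks_permission" "cluster_restart" {
  for_each = var.restart_groups

  cluster_id       = databricks_cluster.shared.id
  group_name       = each.value
  permission_level = "CAN_RESTART"
}
```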

## Related Resources

The following resources are used in the same context:

* [databricks_permissions](permissions.md) - Manage all permissions for an object at once
* [databricks_cluster](cluster.md) - Create Databricks clusters
* [databricks_job](job.md) - Create Databricks jobs
* [databricks_notebook](notebook.md) - Manage Databricks notebooks
* [databricks_group](group.md) - Manage Databricks groups
* [databricks_user](user.md) - Manage Databricks users
* [databricks_service_principal](service_principal.md) - Manage Databricks service principals
2 changes: 2 additions & 0 deletions internal/providers/pluginfw/pluginfw_rollout_utils.go
@@ -19,6 +19,7 @@ import (
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/dashboards"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/library"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/notificationdestinations"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/permissions"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/qualitymonitor"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/registered_model"
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/products/serving"
@@ -49,6 +50,7 @@ var migratedDataSources = []func() datasource.DataSource{
var pluginFwOnlyResources = append(
    []func() resource.Resource{
        app.ResourceApp,
+       permissions.ResourcePermission,
    },
    autoGeneratedResources...,
)
@@ -0,0 +1,62 @@
package permissions

import (
    "sync"
)

// objectMutexManager manages mutexes per object ID to prevent concurrent
// operations on the same Databricks object that could lead to race conditions.
//
// This is particularly important for Delete operations where multiple
// databricks_permission resources for the same object might be deleted
// concurrently, each doing GET -> filter -> SET, which could result in
// lost permission updates.
type objectMutexManager struct {
    mutexes map[string]*sync.Mutex
**Contributor comment:** Suggestion: you can use `sync.Map` and then don't need a separate `mapLock`.

    mapLock sync.Mutex
}

// globalObjectMutexManager is the singleton instance used by all permission resources
var globalObjectMutexManager = &objectMutexManager{
    mutexes: make(map[string]*sync.Mutex),
}

// Lock acquires a mutex for the given object ID.
// Each object ID gets its own mutex to allow concurrent operations on different objects
// while serializing operations on the same object.
func (m *objectMutexManager) Lock(objectID string) {
**Contributor comment:** Since this is just used for permissions, maybe we accept object type and object ID as parameters and synthesize an appropriate key.

    m.mapLock.Lock()
    mu, exists := m.mutexes[objectID]
    if !exists {
        mu = &sync.Mutex{}
        m.mutexes[objectID] = mu
    }
    m.mapLock.Unlock()

    // Lock the object-specific mutex (outside the map lock to avoid deadlock)
    mu.Lock()
}

// Unlock releases the mutex for the given object ID.
func (m *objectMutexManager) Unlock(objectID string) {
    m.mapLock.Lock()
    mu, exists := m.mutexes[objectID]
    m.mapLock.Unlock()

    if exists {
        mu.Unlock()
    }
}

// lockObject acquires a lock for the given object ID.
// This should be called at the start of any operation that modifies permissions.
func lockObject(objectID string) {
    globalObjectMutexManager.Lock(objectID)
}

// unlockObject releases the lock for the given object ID.
// This should be called at the end of any operation that modifies permissions.
// Use defer to ensure it's always called even if the operation panics.
func unlockObject(objectID string) {
    globalObjectMutexManager.Unlock(objectID)
}
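
For reference, a minimal sketch of the `sync.Map` variant suggested in the review above (an illustration, not the implementation in this PR):

```go
package permissions

import "sync"

// objectMutexes maps object IDs to their *sync.Mutex.
// Hypothetical sync.Map-based variant of objectMutexManager.
var objectMutexes sync.Map

// lockObject acquires the per-object mutex, creating it on first use.
func lockObject(objectID string) {
    mu, _ := objectMutexes.LoadOrStore(objectID, &sync.Mutex{})
    mu.(*sync.Mutex).Lock()
}

// unlockObject releases the per-object mutex, if one exists.
func unlockObject(objectID string) {
    if mu, ok := objectMutexes.Load(objectID); ok {
        mu.(*sync.Mutex).Unlock()
    }
}
```

Callers would still pair `lockObject(id)` with `defer unlockObject(id)` around the GET -> filter -> SET sequence.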
**Contributor comment (on lines +51 to +62):** Are these used outside of tests? Do we need these?
