
Table Properties#

AWS Data Access Server#


Property Description Example
DATASERVER_RANGER_AUTH_ENABLED Enable/disable Ranger authorization in DataServer.
DATASERVER_V2_WORKDER_THREADS Number of worker threads to process inbound connections. 20
DATASERVER_V2_CHANNEL_CONNECTION_BACKLOG Maximum queue size for inbound connections. 128
DATASERVER_V2_CHANNEL_CONNECTION_POOL Enable the connection pool for outbound requests. The property is disabled by default.
DATASERVER_V2_FRONT_CHANNEL_IDLE_TIMEOUT Idle timeout for inbound connections. 60
DATASERVER_V2_BACK_CHANNEL_IDLE_TIMEOUT Idle timeout for outbound connections; takes effect only if the connection pool is enabled. 60
DATASERVER_HEAP_MIN_MEMORY_MB Add the minimum Java Heap memory in MB used by Dataserver.  1024
DATASERVER_HEAP_MAX_MEMORY_MB Add the maximum Java Heap memory in MB used by Dataserver.  1024
DATASERVER_USE_REGIONAL_ENDPOINT Set this property to enforce default region for all S3 buckets. true
DATASERVER_AWS_REGION Default AWS region for S3 bucket. us-east-1
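Taken together, the Dataserver properties above would appear in the vars YAML roughly as follows; this is an illustrative sketch using the example values from the table, not a recommended tuning:

```yaml
# Dataserver tuning - illustrative values only
DATASERVER_RANGER_AUTH_ENABLED: "true"
DATASERVER_V2_WORKDER_THREADS: "20"
DATASERVER_V2_CHANNEL_CONNECTION_BACKLOG: "128"
DATASERVER_V2_CHANNEL_CONNECTION_POOL: "true"
DATASERVER_V2_FRONT_CHANNEL_IDLE_TIMEOUT: "60"
# The back-channel timeout only takes effect because the connection pool is enabled above
DATASERVER_V2_BACK_CHANNEL_IDLE_TIMEOUT: "60"
DATASERVER_HEAP_MIN_MEMORY_MB: "1024"
DATASERVER_HEAP_MAX_MEMORY_MB: "1024"
DATASERVER_USE_REGIONAL_ENDPOINT: "true"
DATASERVER_AWS_REGION: "us-east-1"
```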

S3#


Property Description Example
DATASERVER_USE_POD_IAM_ROLE Property to enable the creation of an IAM role that will be used for the Dataserver pod. true
DATASERVER_IAM_POLICY_ARN Full IAM policy ARN which needs to be attached to the IAM role associated with the Dataserver pod. arn:aws:iam::aws:policy/AmazonS3FullAccess
DATASERVER_USE_IAM_ROLE If you've given an IAM role permission to access the bucket, enable Use IAM Role.
DATASERVER_S3_AWS_API_KEY If you've used an access key to access the bucket, disable Use IAM Role, and set the AWS API Key. AKIAIOSFODNN7EXAMPLE
DATASERVER_S3_AWS_SECRET_KEY If you've used a secret key to access the bucket, disable Use IAM Role, and set the AWS Secret Key. wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
DATASERVER_V2_S3_ENDPOINT_ENABLE Enable to use a custom S3 endpoint.
DATASERVER_V2_S3_ENDPOINT_SSL Enable/disable this property according to whether SSL is enabled on the MinIO server.
DATASERVER_V2_S3_ENDPOINT_HOST Add the endpoint server host. 192.168.12.142
DATASERVER_V2_S3_ENDPOINT_PORT Add the endpoint server port. 9000
DATASERVER_AWS_REQUEST_INCLUDE_USERINFO

Property to enable adding the session role in CloudWatch logs for requests going through Dataserver.

The value is available under the privacera-user key in the Request Params of the CloudWatch logs.

Set to true if you want to see privacera-user in CloudWatch.

true
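For a custom S3 endpoint such as MinIO, the properties above combine as in the sketch below; the host, port, and keys are placeholders taken from the table's examples:

```yaml
# Custom S3 endpoint (e.g. MinIO) - placeholder values
DATASERVER_USE_IAM_ROLE: "false"
DATASERVER_S3_AWS_API_KEY: "AKIAIOSFODNN7EXAMPLE"
DATASERVER_S3_AWS_SECRET_KEY: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
DATASERVER_V2_S3_ENDPOINT_ENABLE: "true"
# Match this to whether SSL is enabled on the MinIO server
DATASERVER_V2_S3_ENDPOINT_SSL: "false"
DATASERVER_V2_S3_ENDPOINT_HOST: "192.168.12.142"
DATASERVER_V2_S3_ENDPOINT_PORT: "9000"
```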

Azure ADLS#


Property Name Description Example
AZURE_ACCT_SHARED_KEY_PAIRS 

To get the value for this property, go to Azure portal > Storage accounts > select the storage account you want to configure > Access keys > copy the key.

If there are multiple storage accounts, separate them with commas.

storageAccountName:${SAS_KEY}

ENABLE_AZURE_CLI

AZURE_ACCOUNT_NAME

AZURE_SHARED_KEY

Uncomment to use Azure CLI.

The AZURE_ACCT_SHARED_KEY_PAIRS property does not work with this property, so you have to set the AZURE_ACCOUNT_NAME and AZURE_SHARED_KEY properties.

ENABLE_AZURE_CLI: "true"

AZURE_ACCOUNT_NAME: "company-qa-dept"

AZURE_SHARED_KEY: "=0Ty4br:2BIasz>rXm{cqtP8hA;7|TgZZZuTHJTg40z8E5z4UJ':roeJy=d7*/W"

DATASERVER_AZURE_GEN2_SHARED_KEY_AUTH Set to true/false to enable/disable shared key authentication for ADLS Gen2. true
AZURE_TENANTID To get the value for this property, Go to Azure portal > Azure Active Directory > Properties > Tenant ID  5a5cxxx-xxxx-xxxx-xxxx-c3172b33xxxx
AZURE_APP_CLIENT_ID Get the value by following the Pre-requisites section above.  8c08xxxx-xxxx-xxxx-xxxx-6w0c95v0xxxx
AZURE_SUBSCRIPTION_ID To get the value for this property, Go to Azure portal > Select Subscriptions in the left sidebar > Select whichever subscription is needed > Click on overview > Copy the Subscription ID 27e8xxxx-xxxx-xxxx-xxxx-c716258wxxxx
AZURE_RESOURCE_GROUP To get the value for this property, Go to Azure portal > Storage accounts > Select the storage account you want to configure > Click on Overview > Resource Group privacera-dev
BASE64_APP_CLIENT_SECRET

Get the value by following the Pre-requisites section above. 

Note: Add the value in Base64 format in the YAML file. Use echo -n so that a trailing newline is not encoded:

$ echo -n "CLIENT_SECRET" | base64

WncwSaMpleRZ1ZoLThJYWpZd3YzMkFJNEljZGdVN0FfVAo=
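The two Azure authentication styles above are mutually exclusive. A sketch of the Azure CLI style, with placeholder values in angle brackets (the shared key and tenant/client IDs are illustrative, not real credentials):

```yaml
# Azure CLI authentication - placeholders; do not combine with AZURE_ACCT_SHARED_KEY_PAIRS
ENABLE_AZURE_CLI: "true"
AZURE_ACCOUNT_NAME: "company-qa-dept"
AZURE_SHARED_KEY: "<storage-account-access-key>"
AZURE_TENANTID: "5a5cxxx-xxxx-xxxx-xxxx-c3172b33xxxx"
AZURE_APP_CLIENT_ID: "8c08xxxx-xxxx-xxxx-xxxx-6w0c95v0xxxx"
# Base64-encode the client secret first, e.g. echo -n "CLIENT_SECRET" | base64
BASE64_APP_CLIENT_SECRET: "<base64-encoded-client-secret>"
```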

Policysync#

PostgreSQL#


Property Description Example
POSTGRES_JDBC_URL JDBC URL of the PostgreSQL database. Get the URL from the Prerequisites section above. jdbc:postgresql://example.cluster-cxwi0ytczd99i.us-east-1.rds.amazonaws.com:5432
POSTGRES_JDBC_DB Name of the PostgreSQL database. Get the name from the Prerequisites section above.  privacera_db
POSTGRES_JDBC_USERNAME
POSTGRES_JDBC_PASSWORD
User credentials to connect to the PostgreSQL database. Get the credentials from the Prerequisites section above.

POSTGRES_JDBC_USERNAME: "user1"

POSTGRES_JDBC_PASSWORD: "password"

POSTGRES_DEFAULT_USER_PASSWORD Enter a password that will be set by default for new users of the PostgreSQL database.  default1
POSTGRES_AUDIT_ENABLE Under Audit Properties section, property to enable/disable audits on the PostgreSQL database.  true
POSTGRES_AUDIT_SQS_QUEUE_NAME

Under the Advanced tab, name of the SQS queue. Get the name from the Prerequisites section above.

Additional Reading:

  • Learn how to configure an SQS queue in a different AWS account.
sqs_name
POSTGRES_MANAGE_DATABASE_LIST Add the database names to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}
Use comma-separated values to enter multiple databases.
customer,sales
POSTGRES_MANAGE_SCHEMA_LIST Add the database schemas to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}
If the value is kept blank, then all schemas will be managed.
If the value is none, then no schemas will be managed.
If the value is specified as {database_name}.*, then all schemas will be managed.
Use comma-separated values to enter multiple schemas.
customer.customer_schema1,customer.customer_schema2
or
customer.*
POSTGRES_MANAGE_TABLE_LIST Add the database tables to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}.{table_name}
If the value is kept blank, then all tables will be managed.
If the value is none, then no tables will be managed.
If the value is specified as {database_name}.{schema_name}.*, then all tables will be managed.
Use comma-separated values to enter multiple tables.
customer.customer_schema1.table1,customer.customer_schema2.table2
or
customer.customer_schema.*
POSTGRES_MANAGE_VIEW_LIST Add the database views to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}.{view_name}
If the value is kept blank, then all views will be managed.
If the value is none, then no views will be managed.
If the value is specified as {database_name}.{schema_name}.*, then all views will be managed.
Use comma-separated values to enter multiple views.
customer.customer_schema1.view1,customer.customer_schema2.view2
or
customer.customer_schema.*
POSTGRES_MANAGE_ENTITIES   true
POSTGRES_GRANT_UPDATES   true
POSTGRES_IGNORE_USER_LIST Add the names of the users to be ignored. These users will not be provided with access control in a PostgreSQL policy. user1,user2,user3
POSTGRES_IGNORE_GROUP_LIST Add the names of the groups to be ignored. These groups will not be provided with access control in a PostgreSQL policy. group1,group2,group3
POSTGRES_IGNORE_ROLE_LIST Add the roles to be ignored. These roles will not be provided with access control in a PostgreSQL policy. role1,role2,role3
POSTGRES_MANAGE_USER_LIST Add the names of the users to be managed. Only these users will be provided with access control in a PostgreSQL policy. user1,user2,user3
POSTGRES_MANAGE_GROUP_LIST Add the names of the groups to be managed. Only these groups will be provided with access control in a PostgreSQL policy. group1,group2,group3
POSTGRES_MANAGE_ROLE_LIST Add the roles to be managed. Only these roles will be provided with access control in a PostgreSQL policy. role1,role2,role3
POSTGRES_MANAGE_USER_FILTERBY_GROUP Set this property if you want to filter users by their groups. false
POSTGRES_MANAGE_GROUPS Set this property to manage groups. false
POSTGRES_ENABLE_ROW_FILTER Set this property to enable row-level filter. false
POSTGRES_ENABLE_VIEW_BASED_MASKING Set this property to enable view-level masking. false
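Putting the PostgreSQL properties together, a minimal vars sketch might look like this; the JDBC URL, credentials, and manage-list values are placeholders drawn from the examples above:

```yaml
# PostgreSQL PolicySync - placeholder values
POSTGRES_JDBC_URL: "jdbc:postgresql://example.cluster-cxwi0ytczd99i.us-east-1.rds.amazonaws.com:5432"
POSTGRES_JDBC_DB: "privacera_db"
POSTGRES_JDBC_USERNAME: "user1"
POSTGRES_JDBC_PASSWORD: "password"
POSTGRES_DEFAULT_USER_PASSWORD: "default1"
POSTGRES_AUDIT_ENABLE: "true"
POSTGRES_MANAGE_DATABASE_LIST: "customer,sales"
# customer.* manages all schemas in the customer database
POSTGRES_MANAGE_SCHEMA_LIST: "customer.*"
POSTGRES_MANAGE_ENTITIES: "true"
POSTGRES_GRANT_UPDATES: "true"
```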

MSSQL#


Property Name Description Example 
MSSQL_JDBC_URL

JDBC URL for the target MSSQL Server. 

jdbc:sqlserver://${MSSQL_SERVER_NAME}.database.windows.net:1433

MSSQL_JDBC_DB Database where you want to do access control.  
MSSQL_MASTER_DB Name of the master database. Usually, this is simply 'master'.  
MSSQL_JDBC_USERNAME

Name of the Privacera service user 

For local users,

MSSQL_JDBC_USERNAME: "privacera_policysync"

For a user with domain name (Azure AD),

MSSQL_JDBC_USERNAME: "privacera_policysync@example.com"

MSSQL_JDBC_PASSWORD Password for MSSQL_JDBC_USERNAME user.  
MSSQL_AUTHENTICATION_TYPE

Authentication type for the database engine.

If MSSQL_JDBC_USERNAME is a 'local user', set value as below:

MSSQL_AUTHENTICATION_TYPE: "SqlPassword"

If MSSQL_JDBC_USERNAME is an Azure AD user, then set as below: 

MSSQL_AUTHENTICATION_TYPE: "ActiveDirectoryPassword"

MSSQL_DEFAULT_USER_PASSWORD Password string to be assigned to new local users that are created through Privacera PolicySync.  
MSSQL_OWNER_ROLE Owner of controlled database objects (e.g., schemas, tables, views, and columns). Generally, use the same user as assigned to MSSQL_JDBC_USERNAME.  
MSSQL_AUDIT_ENABLE Set 'true', if audits have been configured for the MSSQL server. true
MSSQL_AUDIT_STORAGE_URL

Audits storage URL obtained in Prerequisite section.

If this parameter is left empty or blank, Privacera Platform will target all databases attached to the MSSQL Server. If one or more database names are listed (comma separated values), only those databases will be controlled by Privacera Platform. 

 
MSSQL_MANAGE_DATABASE_LIST Add the database name to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}
Use only a single value for MSSQL.
customer
MSSQL_MANAGE_SCHEMA_LIST Add the database schemas to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}
If the value is kept blank, then all schemas will be managed.
If the value is none, then no schemas will be managed.
If the value is specified as {database_name}.*, then all schemas will be managed.
Use comma-separated values to enter multiple schemas.
customer.customer_schema1,customer.customer_schema2
or
customer.*
MSSQL_MANAGE_TABLE_LIST Add the database tables to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}.{table_name}
If the value is kept blank, then all tables will be managed.
If the value is none, then no tables will be managed.
If the value is specified as {database_name}.{schema_name}.*, then all tables will be managed.
Use comma-separated values to enter multiple tables.
customer.customer_schema1.table1,customer.customer_schema2.table2
or
customer.customer_schema.*
MSSQL_MANAGE_VIEW_LIST Add the database views to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}.{view_name}
If the value is kept blank, then all views will be managed.
If the value is none, then no views will be managed.
If the value is specified as {database_name}.{schema_name}.*, then all views will be managed.
Use comma-separated values to enter multiple views.
customer.customer_schema1.view1,customer.customer_schema2.view2
or
customer.customer_schema.*
MSSQL_MANAGE_ENTITIES

Enable/Disable Manage User/Group/Role

false
MSSQL_GRANT_UPDATES

Enable/Disable Perform Grant and Revokes

false
MSSQL_ENABLE

Enable/Disable PolicySync V1.
If you enable this property, disable the MSSQL_V2_ENABLE property.

true
MSSQL_V2_ENABLE

Enable/Disable PolicySync V2.
If you enable this property, disable the MSSQL_ENABLE property.

true
MSSQL_IGNORE_USER_LIST Add the names of the users to be ignored. These users will not be provided with access control in an MSSQL policy. user1,user2,user3
MSSQL_IGNORE_GROUP_LIST Add the names of the groups to be ignored. These groups will not be provided with access control in an MSSQL policy. group1,group2,group3
MSSQL_IGNORE_ROLE_LIST Add the roles to be ignored. These roles will not be provided with access control in an MSSQL policy. role1,role2,role3
MSSQL_MANAGE_USER_LIST Add the names of the users to be managed. Only these users will be provided with access control in an MSSQL policy. user1,user2,user3
MSSQL_MANAGE_GROUP_LIST Add the names of the groups to be managed. Only these groups will be provided with access control in an MSSQL policy. group1,group2,group3
MSSQL_MANAGE_ROLE_LIST Add the roles to be managed. Only these roles will be provided with access control in an MSSQL policy. role1,role2,role3
MSSQL_MANAGE_USER_FILTERBY_GROUP Set this property if you want to filter users by their groups. false
MSSQL_MANAGE_GROUPS Set this property to manage groups. false
MSSQL_ENABLE_ROW_FILTER Set this property to enable row-level filter. false
MSSQL_ENABLE_VIEW_BASED_MASKING Set this property to enable view-level masking. false
MSSQL_MANAGE_GROUP_POLICY_ONLY false
MSSQL_EXTERNAL_USER_AS_INTERNAL Set this property to create external users as internal users. false
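As a sketch, an Azure AD-backed MSSQL configuration using the V2 connector might combine the properties above as follows; the server, user, and password are placeholders, and note that MSSQL_ENABLE and MSSQL_V2_ENABLE are mutually exclusive:

```yaml
# MSSQL PolicySync (V2 connector, Azure AD user) - placeholder values
MSSQL_JDBC_URL: "jdbc:sqlserver://example-server.database.windows.net:1433"
MSSQL_JDBC_DB: "customer"
MSSQL_MASTER_DB: "master"
MSSQL_JDBC_USERNAME: "privacera_policysync@example.com"
MSSQL_JDBC_PASSWORD: "<password>"
# Use "SqlPassword" instead for a local (non-Azure AD) user
MSSQL_AUTHENTICATION_TYPE: "ActiveDirectoryPassword"
MSSQL_ENABLE: "false"
MSSQL_V2_ENABLE: "true"
MSSQL_GRANT_UPDATES: "true"
```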

Power BI#


Property Name Description Example 
POWER_BI_USERNAME Username for authentication with Power BI.
For authentication, either a username/password or a client secret is needed.
user1
POWER_BI_PASSWORD Password for authentication with Power BI. password
POWER_BI_TENANT_ID Tenant ID associated with the Azure subscription. 5aXcXa2b-fdXX-XXXX-XXXX-c3172bXXaXXe
POWER_BI_CLIENT_ID Service principal ID for authentication with Power BI. 3eeXXXXX-XXXe-XXcf-aXXX-fXad7dXXXXXe
POWER_BI_CLIENT_SECRET Application's client secret for authentication with Power BI.
For authentication, either a username/password or a client secret is needed.
String
POWER_BI_V2_ENABLE Property to enable/disable the PolicySync Power BI connector. true
POWER_BI_MANAGE_WORKSPACE_LIST Add the names of the workspaces to be managed. Only these workspaces will be provided with access control in a Power BI policy.
Regular expressions can be used; for example, demo* will manage all the workspaces named demo1, demo2, etc.
demo1,demo2,demo3
POWER_BI_MANAGE_USER_LIST Add the names of the users to be managed. Only these users will be provided with access control in a Power BI policy.
If the value is empty then no users will be managed.
If the value is specified as '*' then all users will be managed.
user1,user2,user3
POWER_BI_MANAGE_GROUP_LIST Add the names of the groups to be managed. Only these groups will be provided with access control in a Power BI policy.
If the value is empty then no groups will be managed.
If the value is specified as '*' then all groups will be managed.
group1,group2,group3
POWER_BI_IGNORE_WORKSPACE_LIST Add the names of the workspaces to be ignored. These workspaces will not be provided with access control in a Power BI policy. demo1,demo2,demo3
POWER_BI_IGNORE_USER_LIST Add the names of the users to be ignored. These users will not be provided with access control in a Power BI policy. user1,user2,user3
POWER_BI_MANAGE_USER_FILTERBY_GROUP Set this property if you want to filter users by their groups. false
POWER_BI_ENABLE_AUDIT Property to enable/disable audits for Power BI policy. false
POWER_BI_AUDIT_LOAD_KEY load
POWER_BI_GRANT_UPDATES

Property to perform a dry run of the policy configuration on the Power BI service. In dry run mode, you can check the logs to verify that the policy is being applied as desired.

If set to false, dry run mode is enabled and access control is not applied on the Power BI service.

If set to true, dry run mode is disabled and access control is applied on the Power BI service.

true
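For example, to trial a policy configuration in dry-run mode before enforcing it, the properties above might be set as in this sketch; the tenant, client, and secret values are placeholders:

```yaml
# Power BI PolicySync in dry-run mode - placeholder values
POWER_BI_V2_ENABLE: "true"
POWER_BI_TENANT_ID: "<tenant-id>"
POWER_BI_CLIENT_ID: "<service-principal-id>"
POWER_BI_CLIENT_SECRET: "<client-secret>"
# demo* manages all workspaces named demo1, demo2, etc.
POWER_BI_MANAGE_WORKSPACE_LIST: "demo*"
# false = dry run: changes are logged but access control is not applied
POWER_BI_GRANT_UPDATES: "false"
```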

Snowflake#


Property Description  Example
SNOWFLAKE_JDBC_URL JDBC URL of the Snowflake account. jdbc:snowflake://testsnowflake.prod.us-west-2.aws.snowflakecomputing.com
SNOWFLAKE_JDBC_USERNAME The database user used by the Policy Sync process PRIVACERA_SYNC
SNOWFLAKE_JDBC_PASSWORD Password used while creating the database user 6.0GoldPlus
SNOWFLAKE_WAREHOUSE_TO_USE Warehouse which will be used by Policy Sync PRIVACERA_POLICYSYNC_WH
SNOWFLAKE_ROLE_TO_USE Role used by the Policy Sync. PRIVACERA_SYNC_ROLE
SNOWFLAKE_JDBC_DB The database to store masking policies. privacera_db
SNOWFLAKE_DEFAULT_USER_PASSWORD Password to be set when a new user is created. welcome1
SNOWFLAKE_OWNER_ROLE This is the default owner for all user-created resources. Switching roles to the default owner role helps in managing grants/revokes. PRIVACERA_DEFAULT_OWNER
SNOWFLAKE_MANAGE_WAREHOUSE_LIST List of warehouses to be managed by PolicySync. SNOWFLAKE_MANAGE_WAREHOUSE_LIST: "dev_,qa_"
SNOWFLAKE_MANAGE_DATABASE_LIST Add the database names to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}
Use comma-separated values to enter multiple databases.
customer,sales
SNOWFLAKE_MANAGE_SCHEMA_LIST Add the database schemas to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}
If the value is kept blank, then all schemas will be managed.
If the value is none, then no schemas will be managed.
If the value is specified as {database_name}.*, then all schemas will be managed.
Use comma-separated values to enter multiple schemas.
customer.customer_schema1,customer.customer_schema2
or
customer.*
SNOWFLAKE_MANAGE_TABLE_LIST Add the database tables to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}.{table_name}
If the value is kept blank, then all tables will be managed.
If the value is none, then no tables will be managed.
If the value is specified as {database_name}.{schema_name}.*, then all tables will be managed.
Use comma-separated values to enter multiple tables.
customer.customer_schema1.table1,customer.customer_schema2.table2
or
customer.customer_schema.*
SNOWFLAKE_MANAGE_VIEW_LIST Add the database views to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}.{view_name}
If the value is kept blank, then all views will be managed.
If the value is none, then no views will be managed.
If the value is specified as {database_name}.{schema_name}.*, then all views will be managed.
Use comma-separated values to enter multiple views.
customer.customer_schema1.view1,customer.customer_schema2.view2
or
customer.customer_schema.*
SNOWFLAKE_MANAGE_ENTITIES

Enable/Disable Manage User/Group/Role.

Fill in SNOWFLAKE_MANAGE_WAREHOUSE_LIST and SNOWFLAKE_MANAGE_DATABASE_LIST before setting this to true.

true
SNOWFLAKE_GRANT_UPDATES

Enable/Disable Perform Grant and Revokes.

Fill in SNOWFLAKE_MANAGE_WAREHOUSE_LIST and SNOWFLAKE_MANAGE_DATABASE_LIST before setting this to true.

true

SNOWFLAKE_ENABLE_AUDIT_SOURCE_SIMPLE

SNOWFLAKE_ENABLE_AUDIT_SOURCE_ADVANCE

These properties are optional. Uncomment them and add values only if required.

Enable the audit setup based on your Snowflake account settings.

# SNOWFLAKE_ENABLE_AUDIT_SOURCE_SIMPLE: "true"

# SNOWFLAKE_ENABLE_AUDIT_SOURCE_ADVANCE: "false"

SNOWFLAKE_AUDIT_SOURCE_ADVANCE_DB_NAME Under Audit Properties, the database used for access logs. PRIVACERA_ACCESS_LOGS_DB
POLICYSYNC_ENABLE Enable/Disable the complete PolicySync process true
SNOWFLAKE_ENABLE Enable/Disable only the Snowflake PolicySync process true
SNOWFLAKE_MANAGE_ENTITY_PREFIX

Add the prefix for users/groups/roles to be managed, so only users/groups/roles with the specified prefixes will be managed.

Keep it commented to manage all users/groups/roles present in Ranger.

For example, for user Frank you can set this value as fr_, and for user Sally as sa_.

fr_*
SNOWFLAKE_ENTITY_ROLE_PREFIX

Set the prefixes for roles to be created in the database.

For example, for user Frank you can set this value as fr_, and for user Sally as sa_.

fr_
SNOWFLAKE_IGNORE_USER_LIST Add the names of the users to be ignored. These users will not be provided with access control in a Snowflake policy. user1,user2,user3
SNOWFLAKE_IGNORE_GROUP_LIST Add the names of the groups to be ignored. These groups will not be provided with access control in a Snowflake policy. group1,group2,group3
SNOWFLAKE_IGNORE_ROLE_LIST Add the roles to be ignored. These roles will not be provided with access control in a Snowflake policy. role1,role2,role3
SNOWFLAKE_MANAGE_USER_LIST Add the names of the users to be managed. Only these users will be provided with access control in a Snowflake policy. user1,user2,user3
SNOWFLAKE_MANAGE_GROUP_LIST Add the names of the groups to be managed. Only these groups will be provided with access control in a Snowflake policy. group1,group2,group3
SNOWFLAKE_MANAGE_ROLE_LIST Add the roles to be managed. Only these roles will be provided with access control in a Snowflake policy. role1,role2,role3
SNOWFLAKE_MANAGE_USER_FILTERBY_GROUP Set this property if you want to filter users by their groups. false
SNOWFLAKE_MANAGE_GROUPS Set this property to manage groups. false
SNOWFLAKE_ENABLE_ROW_FILTER Set this property to enable row-level filter. false
SNOWFLAKE_ENABLE_VIEW_BASED_MASKING Set this property to enable view-level masking. false
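A minimal Snowflake sketch combining the connection, scope, and toggle properties above; all values are placeholders drawn from the table's examples:

```yaml
# Snowflake PolicySync - placeholder values
SNOWFLAKE_JDBC_URL: "jdbc:snowflake://testsnowflake.prod.us-west-2.aws.snowflakecomputing.com"
SNOWFLAKE_JDBC_USERNAME: "PRIVACERA_SYNC"
SNOWFLAKE_JDBC_PASSWORD: "<password>"
SNOWFLAKE_WAREHOUSE_TO_USE: "PRIVACERA_POLICYSYNC_WH"
SNOWFLAKE_ROLE_TO_USE: "PRIVACERA_SYNC_ROLE"
# Fill in the manage lists before enabling the two toggles below
SNOWFLAKE_MANAGE_WAREHOUSE_LIST: "dev_,qa_"
SNOWFLAKE_MANAGE_DATABASE_LIST: "customer,sales"
SNOWFLAKE_MANAGE_ENTITIES: "true"
SNOWFLAKE_GRANT_UPDATES: "true"
POLICYSYNC_ENABLE: "true"
SNOWFLAKE_ENABLE: "true"
```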

Redshift#


Property Description  Example
REDSHIFT_JDBC_URL

The JDBC URL of the Redshift cluster.

Note: PolicySync uses the Postgres driver for Redshift. Hence, the JDBC URL should start with jdbc:postgresql:// and not jdbc:redshift://.

jdbc:postgresql://<your Redshift connection url>.us-east-1.redshift.amazonaws.com:5439
REDSHIFT_JDBC_DB Database that Privacera will connect to when creating policies in Redshift. privacera_db

REDSHIFT_JDBC_USERNAME

REDSHIFT_JDBC_PASSWORD

Privacera database user who can create policies and users in Redshift. This user needs admin privileges so it can run Grant/Revokes as well as create users in Redshift.

The password can be stored in a JCEKS file and referenced here.

REDSHIFT_JDBC_USERNAME: "PRIVACERA_SYNC"

REDSHIFT_JDBC_PASSWORD: "6.0GoldPlus"

REDSHIFT_DEFAULT_USER_PASSWORD The password for users created by the database user in Redshift. welcome1
REDSHIFT_OWNER_ROLE The owner for all new resources created in Redshift. Without this, new resources will be owned by the creator of the resource, which may or may not be desired. This ensures admins know exactly who the owner of all new resources is. PRIVACERA_SYNC_ROLE
REDSHIFT_AUDIT_ENABLE Under Advanced tab, this property relies on audits being enabled on Redshift side. Privacera will collect the audits in Redshift if this is set to true and the database user as defined above has permissions to collect the audits. true
REDSHIFT_MANAGE_DATABASE_LIST Add the database names to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}
Use comma-separated values to enter multiple databases.
customer,sales
REDSHIFT_MANAGE_SCHEMA_LIST Add the database schemas to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}
If the value is kept blank, then all schemas will be managed.
If the value is none, then no schemas will be managed.
If the value is specified as {database_name}.*, then all schemas will be managed.
Use comma-separated values to enter multiple schemas.
customer.customer_schema1,customer.customer_schema2
or
customer.*
REDSHIFT_MANAGE_TABLE_LIST Add the database tables to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}.{table_name}
If the value is kept blank, then all tables will be managed.
If the value is none, then no tables will be managed.
If the value is specified as {database_name}.{schema_name}.*, then all tables will be managed.
Use comma-separated values to enter multiple tables.
customer.customer_schema1.table1,customer.customer_schema2.table2
or
customer.customer_schema.*
REDSHIFT_MANAGE_VIEW_LIST Add the database views to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}.{view_name}
If the value is kept blank, then all views will be managed.
If the value is none, then no views will be managed.
If the value is specified as {database_name}.{schema_name}.*, then all views will be managed.
Use comma-separated values to enter multiple views.
customer.customer_schema1.view1,customer.customer_schema2.view2
or
customer.customer_schema.*
REDSHIFT_MANAGE_ENTITIES Set to true if users/groups/roles created in Privacera need to be pushed down to Redshift. true
REDSHIFT_GRANT_UPDATES Set to true if Privacera will be used to run Grant/Revokes in Redshift.
POLICYSYNC_ENABLE Set to true to enable the module. true
REDSHIFT_ENABLE Set to true to integrate Redshift with PolicySync. true
REDSHIFT_MANAGE_ENTITY_PREFIX

To manage a single user/group/role, enter its name.

To manage multiple users/groups/roles, add the name prefixes.

dev_,sa_*
REDSHIFT_ENTITY_ROLE_PREFIX Privacera will create roles in Redshift for each user. Provide a prefix for the role to be created in Redshift. This makes it easier to identify roles created by Privacera and manage them. dev_
REDSHIFT_IGNORE_USER_LIST Add the names of the users to be ignored. These users will not be provided with access control in a Redshift policy. user1,user2,user3
REDSHIFT_IGNORE_GROUP_LIST Add the names of the groups to be ignored. These groups will not be provided with access control in a Redshift policy. group1,group2,group3
REDSHIFT_IGNORE_ROLE_LIST Add the roles to be ignored. These roles will not be provided with access control in a Redshift policy. role1,role2,role3
REDSHIFT_MANAGE_USER_LIST Add the names of the users to be managed. Only these users will be provided with access control in a Redshift policy. user1,user2,user3
REDSHIFT_MANAGE_GROUP_LIST Add the names of the groups to be managed. Only these groups will be provided with access control in a Redshift policy. group1,group2,group3
REDSHIFT_MANAGE_ROLE_LIST Add the roles to be managed. Only these roles will be provided with access control in a Redshift policy. role1,role2,role3
REDSHIFT_MANAGE_USER_FILTERBY_GROUP Set this property if you want to filter users by their groups. false
REDSHIFT_MANAGE_GROUPS Set this property to manage groups. false
REDSHIFT_ENABLE_ROW_FILTER Set this property to enable row-level filter. false
REDSHIFT_ENABLE_VIEW_BASED_MASKING Set this property to enable view-level masking. false
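Sketching the Redshift properties together; note the jdbc:postgresql prefix, and treat every value as a placeholder:

```yaml
# Redshift PolicySync - placeholder values; uses the Postgres JDBC driver
REDSHIFT_JDBC_URL: "jdbc:postgresql://example-cluster.us-east-1.redshift.amazonaws.com:5439"
REDSHIFT_JDBC_DB: "privacera_db"
REDSHIFT_JDBC_USERNAME: "PRIVACERA_SYNC"
REDSHIFT_JDBC_PASSWORD: "<password>"
REDSHIFT_OWNER_ROLE: "PRIVACERA_SYNC_ROLE"
REDSHIFT_MANAGE_DATABASE_LIST: "customer,sales"
REDSHIFT_MANAGE_ENTITIES: "true"
REDSHIFT_GRANT_UPDATES: "true"
POLICYSYNC_ENABLE: "true"
REDSHIFT_ENABLE: "true"
```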

BigQuery#


Property Description Example
BIGQUERY_PROJECT_ID Set this property to specify a Google project ID. test-project-12345
BIGQUERY_PROJECT_LOCATION Set this property to specify the geographical region where the taxonomy for the PolicySync should be created. us
BIGQUERY_USE_VM_CREDENTIALS Enable this property to specify if you want to use Google VM attached service account credentials for PolicySync. true
BIGQUERY_OAUTH_SERVICE_ACCOUNT_EMAIL Set this property to specify service account email that you want to use for PolicySync. This needs to be specified if you are not using a Google VM attached service account.  
BIGQUERY_OAUTH_PRIVATE_KEY_FILE_NAME Set this property to specify the service account key that you have created for PolicySync. This needs to be specified if you are not using a Google VM attached service account.  
BIGQUERY_MANAGE_DATASET_LIST Add the datasets from BigQuery to be managed by PolicySync.
Enter the value for the property in the following format:
{dataset_name}
Use comma-separated values to enter multiple datasets.
example_dataset1,example_dataset2,example_dataset_march
BIGQUERY_MANAGE_TABLE_LIST Add the tables to be managed by PolicySync.
Enter the value for the property in the following format:
{dataset_name}.{table_name}
If the value is kept blank, then all tables will be managed.
If the value is none, then no tables will be managed.
If the value is specified as {dataset_name}.*, then all tables will be managed.
Use comma-separated values to enter multiple tables.
example_dataset1.*,example_dataset2.*,example_dataset_march.*
or
example_dataset1.test_table1,example_dataset1.test_table_june.*
BIGQUERY_COLUMN_ACCESS_CONTROL_TYPE

Set this property to specify how column-level access control is handled by PolicySync. Values can be view or tags.

  • view - PolicySync will create a secure view for the table; if any column of the table is restricted, that column will be shown as null to the user in the secure view. Privacera recommends this option, as the tags approach has limitations.
  • tags - PolicySync will use Google taxonomy to put tags on all columns of the table and create taxonomy tag policies that restrict users from accessing non-permitted columns.

view
BIGQUERY_ENABLE_ROW_FILTER Set this property to specify if you want to use the native row filter capability provided by BigQuery to filter data. true
BIGQUERY_ENABLE_VIEW_BASED_ROW_FILTER Set this property to specify if you want to use dynamic secure-view-based row filters. false
BIGQUERY_IGNORE_USER_LIST Add the names of the users to be ignored. These users will not be provided with access control in a BigQuery policy. user1,user2,user3
BIGQUERY_IGNORE_GROUP_LIST Add the names of the groups to be ignored. These groups will not be provided with access control in a BigQuery policy. group1,group2,group3
BIGQUERY_IGNORE_ROLE_LIST Add the roles to be ignored. These roles will not be provided with access control in a BigQuery policy. role1,role2,role3
BIGQUERY_MANAGE_USER_LIST Add the names of the users to be managed. Only these users will be provided with access control in a BigQuery policy. user1,user2,user3
BIGQUERY_MANAGE_GROUP_LIST Add the names of the groups to be managed. Only these groups will be provided with access control in a BigQuery policy. group1,group2,group3
BIGQUERY_MANAGE_ROLE_LIST Add the roles to be managed. Only these roles will be provided with access control in a BigQuery policy. role1,role2,role3
BIGQUERY_MANAGE_USER_FILTERBY_GROUP Set this property if you want to filter users by their groups. false
BIGQUERY_MANAGE_GROUPS Set this property to manage groups. false
BIGQUERY_ENABLE_VIEW_BASED_MASKING Set this property to enable view-level masking. false
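As a sketch, a BigQuery configuration using the recommended view-based column access control might look like this; the project and dataset names are placeholders:

```yaml
# BigQuery PolicySync - placeholder values
BIGQUERY_PROJECT_ID: "test-project-12345"
BIGQUERY_PROJECT_LOCATION: "us"
# Use the Google VM-attached service account credentials
BIGQUERY_USE_VM_CREDENTIALS: "true"
BIGQUERY_MANAGE_DATASET_LIST: "example_dataset1,example_dataset2"
# "view" is recommended; restricted columns appear as null in the secure view
BIGQUERY_COLUMN_ACCESS_CONTROL_TYPE: "view"
BIGQUERY_ENABLE_ROW_FILTER: "true"
```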

Databricks SQL#


Property Description Example
DATABRICKS_SQL_ANALYTICS_JDBC_URL Get its value from the Prerequisites section. jdbc:spark://example.cloud.databricks.com:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/1.0/endpoints/1234567890;
DATABRICKS_SQL_ANALYTICS_JDBC_DB   default
DATABRICKS_SQL_ANALYTICS_JDBC_USERNAME Get its value from the Prerequisites section.  
DATABRICKS_SQL_ANALYTICS_JDBC_PASSWORD Get its value from the Prerequisites section.  
DATABRICKS_SQL_ANALYTICS_HOST_URL Get its value from the Prerequisites section. https://example.cloud.databricks.com
DATABRICKS_SQL_ANALYTICS_OWNER_ROLE Property to change the owner of the newly created resources. {{ DATABRICKS_SQL_ANALYTICS_JDBC_USERNAME }}
DATABRICKS_SQL_ANALYTICS_MANAGE_DATABASE_LIST Add the database names to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}
Get its value from the Prerequisites section.
Use comma-separated values to enter multiple databases.
customer,sales
DATABRICKS_SQL_ANALYTICS_MANAGE_SCHEMA_LIST Add the database schemas to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}
If the value is kept blank, then all schemas will be managed.
If the value is none, then no schemas will be managed.
If the value is specified as {database_name}.*, then all schemas will be managed.
Use comma-separated values to enter multiple schemas.
customer.customer_schema1,customer.customer_schema2
or
customer.*
DATABRICKS_SQL_ANALYTICS_MANAGE_TABLE_LIST Add the database tables to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}.{table_name}
If the value is kept blank, then all tables will be managed.
If the value is none, then no tables will be managed.
If the value is specified as {database_name}.{schema_name}.*, then all tables will be managed.
Use comma-separated values to enter multiple tables.
customer.customer_schema1.table1,customer.customer_schema2.table2
or
customer.customer_schema.*
DATABRICKS_SQL_ANALYTICS_MANAGE_VIEW_LIST Add the database views to be managed by PolicySync.
Enter the value for the property in the following format:
{database_name}.{schema_name}.{view_name}
If the value is kept blank, then all views will be managed.
If the value is none, then no views will be managed.
If the value is specified as {database_name}.{schema_name}.*, then all views will be managed.
Use comma-separated values to enter multiple views.
customer.customer_schema1.view1,customer.customer_schema2.view2
or
customer.customer_schema.*
DATABRICKS_SQL_ANALYTICS_MANAGE_ENTITIES Property to enable/disable management of users/groups/roles. false
DATABRICKS_SQL_ANALYTICS_GRANT_UPDATES Property to enable/disable performing grants and revokes. false
POLICYSYNC_ENABLE Property to enable PolicySync. true
DATABRICKS_SQL_ANALYTICS_ENABLE Property to enable/disable SQL Analytics. true
DATABRICKS_SQL_ANALYTICS_MANAGE_ENTITY_PREFIX

Property to set the prefix for users/groups/roles to be managed; only users/groups/roles with the specified prefixes will be managed.

Keep it commented to manage all users/groups/roles present in Ranger.

dev_,sa_
DATABRICKS_SQL_ANALYTICS_ENTITY_ROLE_PREFIX Property to set the prefixes for roles to be created in the database. priv_
DATABRICKS_SQL_ANALYTICS_IGNORE_USER_LIST Add the names of the users to be ignored. These users will not be provided with access control in a Databricks SQL policy. user1,user2,user3
DATABRICKS_SQL_ANALYTICS_IGNORE_GROUP_LIST Add the names of the groups to be ignored. These groups will not be provided with access control in a Databricks SQL policy. group1,group2,group3
DATABRICKS_SQL_ANALYTICS_IGNORE_ROLE_LIST Add the roles to be ignored. These roles will not be provided with access control in a Databricks SQL policy. role1,role2,role3
DATABRICKS_SQL_ANALYTICS_MANAGE_USER_LIST Add the names of the users to be managed. Only these users will be provided with access control in a Databricks SQL policy. user1,user2,user3
DATABRICKS_SQL_ANALYTICS_MANAGE_GROUP_LIST Add the names of the groups to be managed. Only these groups will be provided with access control in a Databricks SQL policy. group1,group2,group3
DATABRICKS_SQL_ANALYTICS_MANAGE_ROLE_LIST Add the roles to be managed. Only these roles will be provided with access control in a Databricks SQL policy. role1,role2,role3
DATABRICKS_SQL_ANALYTICS_MANAGE_USER_FILTERBY_GROUP Set this property if you want to filter users by their groups. false
DATABRICKS_SQL_ANALYTICS_MANAGE_GROUPS Set this property to manage groups. false
DATABRICKS_SQL_ANALYTICS_ENABLE_VIEW_BASED_ROW_FILTER Set this property to enable view-based row filter. false
DATABRICKS_SQL_ANALYTICS_ENABLE_VIEW_BASED_MASKING Set this property to enable view-level masking. false
DATABRICKS_SQL_ANALYTICS_USE_HIVE_ACCESS_POLICIES Set this property to true if you want to use privacera_hive access policies across Databricks SQL Analytics. false
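
As a sketch, several of the properties above might be combined in a Privacera Manager vars file as follows. All values are the illustrative examples from the table or hypothetical placeholders, not recommended defaults:

```yaml
# Illustrative vars snippet for Databricks SQL PolicySync (placeholder values).
POLICYSYNC_ENABLE: "true"
DATABRICKS_SQL_ANALYTICS_ENABLE: "true"
DATABRICKS_SQL_ANALYTICS_JDBC_URL: "jdbc:spark://example.cloud.databricks.com:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/1.0/endpoints/1234567890;"
DATABRICKS_SQL_ANALYTICS_JDBC_DB: "default"
DATABRICKS_SQL_ANALYTICS_JDBC_USERNAME: "<jdbc-username>"
DATABRICKS_SQL_ANALYTICS_JDBC_PASSWORD: "<jdbc-password>"
DATABRICKS_SQL_ANALYTICS_HOST_URL: "https://example.cloud.databricks.com"
# Manage only the customer database, including all of its schemas and tables.
DATABRICKS_SQL_ANALYTICS_MANAGE_DATABASE_LIST: "customer"
DATABRICKS_SQL_ANALYTICS_MANAGE_SCHEMA_LIST: "customer.*"
DATABRICKS_SQL_ANALYTICS_GRANT_UPDATES: "true"
```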

Databricks#

Spark Plugin#


Property Name Description Example Values
DATABRICKS_HOST_URL Enter the URL where the Databricks environment is hosted. For Azure Databricks:
DATABRICKS_HOST_URL: "https://xdx-66506xxxxxxxx.2.azuredatabricks.net/?o=665066931xxxxxxx"

For AWS Databricks:
DATABRICKS_HOST_URL: "https://xxx-7xxxfaxx-xxxx.cloud.databricks.com"
DATABRICKS_TOKEN

Enter the token.

To generate the token,

1. Log in to your Databricks account.
2. Click the user profile icon in the upper right corner of your Databricks workspace.
3. Click User Settings.
4. Click the Generate New Token button.
5. Optionally enter a description (comment) and expiration period.
6. Click the Generate button.
7. Copy the generated token.
DATABRICKS_TOKEN: "xapid40xxxf65xxxxxxe1470eayyyyycdc06"
DATABRICKS_WORKSPACES_LIST

Add multiple Databricks workspaces to connect to Ranger.

  1. To add a single workspace, add the following default JSON in the text area to define the host URL and token of the Databricks workspace. The text area should not be left empty and should at least contain the default JSON.

    [
    {
        "alias": "DEFAULT",
        "databricks_host_url": "{{DATABRICKS_HOST_URL}}",
        "token": "{{DATABRICKS_TOKEN}}"
    }
    ]
    

    Note: Do not edit any of the values in the default JSON.

  2. To add two workspaces, use the following JSON.

    [
    {
        "alias": "DEFAULT",
        "databricks_host_url": "{{DATABRICKS_HOST_URL}}",
        "token": "{{DATABRICKS_TOKEN}}"
    },
    {
        "alias": "<workspace-2-alias>",
        "databricks_host_url": "<workspace-2-url>",
        "token": "<dbx-token-for-workspace-2>"
    }
    ]
    

Note: {{var}} is an Ansible variable. Such a variable reuses the value of a predefined variable. Hence, do not edit the databricks_host_url and token properties of the DEFAULT alias, as they are set by DATABRICKS_HOST_URL and DATABRICKS_TOKEN respectively.

DATABRICKS_ENABLE If set to 'true', Privacera Manager will create the Databricks cluster init script "ranger_enable.sh" at:

~/privacera/privacera-manager/output/databricks/ranger_enable.sh
"true"

"false"
DATABRICKS_MANAGE_INIT_SCRIPT

If set to 'true', Privacera Manager will upload the init script ('ranger_enable.sh') to the identified Databricks host.

If set to 'false', upload the following two files to the DBFS location. The files can be found at ~/privacera/privacera-manager/output/databricks.

  • privacera_spark_plugin_job.conf
  • privacera_spark_plugin.conf
"true"

"false"
DATABRICKS_SPARK_PLUGIN_AGENT_JAR Use the Java agent to assign a string of extra JVM options to pass to the Spark driver. -javaagent:/databricks/jars/privacera-agent.jar
DATABRICKS_SPARK_PRIVACERA_CUSTOM_CURRENT_USER_UDF_NAME Map logged-in user to Ranger user for row-filter policy. current_user()
DATABRICKS_SPARK_PRIVACERA_VIEW_LEVEL_MASKING_ROWFILTER_EXTENSION_ENABLE Property to enable masking, row-filter and data_admin access on view. false
DATABRICKS_JWT_OAUTH_ENABLE Enable JWT auth in Databricks plugin and Databricks Signed URL. TRUE
DATABRICKS_JWT_PUBLIC_KEY_FILE_NAME

Enter the filename for the public key. Ensure the name does not contain any spaces.

Note: Copy the public key into the config/custom-properties folder.

jwttoken.pub
DATABRICKS_JWT_ISSUER Enter the URL of the identity provider. Get it from the Prerequisites section. https://your-idp-domain.com
DATABRICKS_JWT_SUBJECT Subject of the JWT (the user) api-token
DATABRICKS_JWT_SECRET Property for the JWT secret. If the JWT has been signed with a shared secret, use this property to set the secret.  
DATABRICKS_JWT_USERKEY Define a unique userkey. client_id
DATABRICKS_JWT_GROUPKEY Define a unique group key. scope
DATABRICKS_JWT_PARSER_TYPE

Assign one of the following values:

  • PING_IDENTITY
  • KEYCLOAKS
PING_IDENTITY
DATABRICKS_SQL_CLUSTER_POLICY_SPARK_CONF

Configure Databricks Cluster policy.

Add the following JSON in the text area:

[
{
    "Note":"First spark conf",
    "key":"spark.hadoop.first.spark.test",
    "value":"test1"
},
{
    "Note":"Second spark conf",
    "key":"spark.hadoop.second.spark.test",
    "value":"test2"
}
]
DATABRICKS_CUSTOM_SPARK_CONFIG_FILE

Using this property, you can pass custom properties to the Spark configuration.

  1. Create a file with the filename databricks-spark.conf.

  2. Add all the custom properties you want to pass. For example, you can add the property, "spark.databricks.delta.formatCheck.enabled"="false" in the file.

  3. Browse and select the Spark custom file where you have defined all the custom properties.

DATABRICKS_POST_PLUGIN_COMMAND_LIST Note: This property is not part of the default YAML file, but can be added, if required.

Use this property, if you want to run a specific set of commands in the Databricks init script.
The following example will be added to the cluster init script to allow Athena JDBC via data access server.

DATABRICKS_POST_PLUGIN_COMMAND_LIST:

- sudo iptables -I OUTPUT 1 -p tcp -m tcp --dport 8181 -j ACCEPT

- sudo curl -k -u user:password {{PORTAL_URL}}/api/dataserver/cert?type=dataserver_jks -o /etc/ssl/certs/dataserver.jks

- sudo chmod 755 /etc/ssl/certs/dataserver.jks
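
Taken together, a minimal Spark plugin configuration might look like the following vars sketch; values are the illustrative examples from the table above, not production settings:

```yaml
# Illustrative vars snippet for the Databricks Spark plugin (placeholder values).
DATABRICKS_ENABLE: "true"
DATABRICKS_HOST_URL: "https://xxx-7xxxfaxx-xxxx.cloud.databricks.com"
DATABRICKS_TOKEN: "<databricks-personal-access-token>"
DATABRICKS_MANAGE_INIT_SCRIPT: "true"
DATABRICKS_SPARK_PLUGIN_AGENT_JAR: "-javaagent:/databricks/jars/privacera-agent.jar"
```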

Scala Plugin#


Property Description Example
DATABRICKS_SCALA_ENABLE

Set the property to enable/disable Databricks Scala. This is found under the Databricks Signed URL Configuration For Scala Clusters section.

DATASERVER_DATABRICKS_ALLOWED_URLS

Add a URL or comma-separated URLs.

Privacera Dataserver serves only those URLs mentioned in this property.

https://xxx-7xxxfaxx-xxxx.cloud.databricks.com
DATASERVER_AWS_STS_ROLE 

Add the instance profile ARN of the AWS role, which can access Delta Files in Databricks.

arn:aws:iam::111111111111:role/assume-role
DATABRICKS_MANAGE_INIT_SCRIPT

Set the init script.

If enabled, Privacera Manager will upload Init script ('ranger_enable.sh') to the identified Databricks Host.
If disabled, Privacera Manager will take no action regarding the Init script for the Databricks File System.

DATABRICKS_HOST_URL

Enter the URL where the Databricks environment is hosted.

For Azure Databricks:
DATABRICKS_HOST_URL: "https://xdx-66506xxxxxxxx.2.azuredatabricks.net/?o=665066931xxxxxxx"

For AWS Databricks:
DATABRICKS_HOST_URL: "https://xxx-7xxxfaxx-xxxx.cloud.databricks.com"
DATABRICKS_TOKEN

Enter the token.

To generate the token,

1. Log in to your Databricks account.
2. Click the user profile icon in the upper right corner of your Databricks workspace.
3. Click User Settings.
4. Click the Generate New Token button.
5. Optionally enter a description (comment) and expiration period.
6. Click the Generate button.
7. Copy the generated token.

xapid40xxxf65xxxxxxe1470eayyyyycdc06
DATABRICKS_SCALA_CLUSTER_POLICY_SPARK_CONF

Configure Databricks Cluster policy.

Add the following JSON in the text area:

[
{
    "Note":"First spark conf",
    "key":"spark.hadoop.first.spark.test",
    "value":"test1"
},
{
    "Note":"Second spark conf",
    "key":"spark.hadoop.second.spark.test",
    "value":"test2"
}
]
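
As a sketch, a Scala cluster setup might combine the properties above as follows; values are the illustrative examples from the table, not recommendations:

```yaml
# Illustrative vars snippet for the Scala plugin / Signed URL setup (placeholder values).
DATABRICKS_SCALA_ENABLE: "true"
DATASERVER_DATABRICKS_ALLOWED_URLS: "https://xxx-7xxxfaxx-xxxx.cloud.databricks.com"
DATASERVER_AWS_STS_ROLE: "arn:aws:iam::111111111111:role/assume-role"
DATABRICKS_MANAGE_INIT_SCRIPT: "true"
```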

Usersync#

LDAP#


Property Description Example
USERSYNC_SYNC_LDAP_URL LDAP server URL used for syncing users and groups.

 "ldap://dir.ldap.us:389" (when NonSSL)

or

"ldaps://dir.ldap.us:636" (when SSL)

USERSYNC_SYNC_LDAP_BIND_DN Bind DN used to connect to the LDAP server. CN=Bind User,OU=example,DC=ad,DC=example,DC=com
USERSYNC_SYNC_LDAP_BIND_PASSWORD Password for the bind DN.
USERSYNC_SYNC_LDAP_SEARCH_BASE Base DN from which users and groups are searched. OU=example,DC=ad,DC=example,DC=com
USERSYNC_SYNC_LDAP_USER_SEARCH_BASE  
USERSYNC_SYNC_LDAP_SSL_ENABLED Set this to true if SSL is enabled on the LDAP server. true
USERSYNC_SYNC_LDAP_SSL_PM_GEN_TS

Set this to true if you want Privacera Manager to generate the truststore certificate.

Set this to false if you want to manually provide the truststore certificate. To learn how to upload SSL certificates, click here.

true
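
Putting the table together, an SSL-enabled LDAP usersync might be sketched like this in the vars file; the bind password is a hypothetical placeholder:

```yaml
# Illustrative vars snippet for LDAP usersync over SSL (placeholder values).
USERSYNC_SYNC_LDAP_URL: "ldaps://dir.ldap.us:636"
USERSYNC_SYNC_LDAP_BIND_DN: "CN=Bind User,OU=example,DC=ad,DC=example,DC=com"
USERSYNC_SYNC_LDAP_BIND_PASSWORD: "<bind-password>"
USERSYNC_SYNC_LDAP_SEARCH_BASE: "OU=example,DC=ad,DC=example,DC=com"
USERSYNC_SYNC_LDAP_SSL_ENABLED: "true"
USERSYNC_SYNC_LDAP_SSL_PM_GEN_TS: "true"
```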

Azure Active Directory (AAD)#


Property Name Description Example
USERSYNC_AZUREAD_TENANT_ID To get the value for this property, go to Azure portal > Azure Active Directory > Properties > Tenant ID. 5a5cxxx-xxxx-xxxx-xxxx-c3172b33xxxx
USERSYNC_AZUREAD_CLIENT_ID Get the value by following the Prerequisites section above. 8a08xxxx-xxxx-xxxx-xxxx-6c0c95a0xxxx
USERSYNC_AZUREAD_CLIENT_SECRET Get the value by following the Prerequisites section above. ${CLIENT_SECRET}
USERSYNC_AZUREAD_DOMAINS To get the value for this property, go to Azure portal > Azure Active Directory > Domains. companydomain1.com,companydomain2.com
USERSYNC_AZUREAD_GROUPS To get the value for this property, go to Azure portal > Azure Active Directory > Groups. GROUP1,GROUP2,GROUP3
USERSYNC_ENABLE Set to true to enable usersync. true
USERSYNC_SOURCE

Source from which users/groups are synced. 

Values: unix, ldap, azuread 

azuread
USERSYNC_AZUREAD_USE_GROUP_LOOKUP_FIRST Set to true if you want to first sync all groups and then all the users within those groups. true
USERSYNC_SYNC_AZUREAD_USERNAME_RETRIVAL_FROM

Azure provides the user info in a JSON format.

Assign a JSON attribute that is unique. This would be the name of the user in Ranger.

userPrincipalName
USERSYNC_SYNC_AZUREAD_EMAIL_RETRIVAL_FROM

Azure provides the user info in a JSON format.

Set the email from the JSON attribute of the Azure user entity.

userPrincipalName
USERSYNC_SYNC_AZUREAD_GROUP_RETRIVAL_FROM

Azure provides the user info in a JSON format.

Use the JSON attribute to retrieve group information for the user.

displayName
SYNC_AZUREAD_USER_SERVICE_PRINCIPAL_ENABLED Set to true to sync Azure service principals to the Ranger user entity. false
SYNC_AZUREAD_USER_SERVICE_PRINCIPAL_USERNAME_RETRIVAL_FROM

Azure provides the service principal info in a JSON format.

Assign a JSON attribute that is unique. This would be the name of the user in Ranger.

appId
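
As a sketch, an Azure AD usersync configuration might combine the properties above as follows; the client secret is a hypothetical placeholder:

```yaml
# Illustrative vars snippet for Azure AD usersync (placeholder values).
USERSYNC_ENABLE: "true"
USERSYNC_SOURCE: "azuread"
USERSYNC_AZUREAD_TENANT_ID: "5a5cxxx-xxxx-xxxx-xxxx-c3172b33xxxx"
USERSYNC_AZUREAD_CLIENT_ID: "8a08xxxx-xxxx-xxxx-xxxx-6c0c95a0xxxx"
USERSYNC_AZUREAD_CLIENT_SECRET: "<client-secret>"
USERSYNC_AZUREAD_DOMAINS: "companydomain1.com,companydomain2.com"
USERSYNC_AZUREAD_GROUPS: "GROUP1,GROUP2"
USERSYNC_SYNC_AZUREAD_USERNAME_RETRIVAL_FROM: "userPrincipalName"
```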

Audit Fluentd#


Property Description Example
AUDIT_FLUENTD_AUDIT_DESTINATION 

Set the audit destination where the audits will be saved. If the value is set to s3, the audits are stored in AWS S3. For S3, the default time interval for publishing the audits is 3600s (1 hour).

Local storage should be used only for development and testing purposes. All the audits received are stored in the same container/pod. 

Value: local, s3, azure-blob, azure-adls 

s3
When the destination is local, edit the following property:
AUDIT_FLUENTD_LOCAL_FILE_TIME_INTERVAL This is the time interval after which the audits will be pushed to the local destination. 3600s
When the destination is s3, edit the following properties:
AUDIT_FLUENTD_S3_BUCKET 

Set the bucket name, if you set the audit destination above to S3.

Leave unchanged, if you set the audit destination to local. 

bucket_1
AUDIT_FLUENTD_S3_REGION 

Set the bucket region, if you set the audit destination above to S3.

Leave unchanged, if you set the audit destination to local. 

us-east-1
AUDIT_FLUENTD_S3_FILE_TIME_INTERVAL This is the time interval after which the audits will be pushed to the S3 destination. 3600s

AUDIT_FLUENTD_S3_ACCESS_KEY

AUDIT_FLUENTD_S3_SECRET_KEY 

Set the access and secret key, if you set the audit destination above to S3.

Leave unchanged, if you set the audit destination to local and are using AWS IAM Instance Role.

AUDIT_FLUENTD_S3_ACCESS_KEY: "AKIAIOSFODNN7EXAMPLE"

AUDIT_FLUENTD_S3_SECRET_KEY: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

AUDIT_FLUENTD_S3_BUCKET_ENCRYPTION_TYPE

Property to encrypt an S3 bucket. You can use the property, if you have set S3 as the audit destination in the property, AUDIT_FLUENTD_AUDIT_DESTINATION.

You can assign one of the following values as the encryption types:

  • SSE-S3
  • SSE-KMS
  • SSE-C
  • NONE

SSE-S3 and SSE-KMS are encryption types managed by AWS. You need to enable server-side encryption for the S3 bucket. For more information on how to enable the SSE-S3 or SSE-KMS encryption types, click here.

SSE-C is the custom encryption type, where the encryption key and its MD5 hash have to be generated separately.

NONE
AUDIT_FLUENTD_S3_BUCKET_ENCRYPTION_KEY

If you have set SSE-C encryption type in the AUDIT_FLUENTD_S3_BUCKET_ENCRYPTION_TYPE property, then the encryption key is mandatory. It is optional for SSE-KMS encryption type.

AUDIT_FLUENTD_S3_BUCKET_ENCRYPTION_KEY_MD5

If you have set SSE-C encryption type in the AUDIT_FLUENTD_S3_BUCKET_ENCRYPTION_TYPE property, then the MD5 encryption key is mandatory.

To get the MD5 hash for the encryption key, run the following command:

echo -n "<generated-key>" |  openssl dgst -md5 -binary | openssl enc -base64
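
As a sketch, the key and its MD5 hash could be produced in one pass. The MD5 step follows the command above; generating the key with `openssl rand` is an assumption for illustration, not a Privacera requirement:

```shell
# Generate an example base64-encoded key (illustrative; use your own generated key).
ENC_KEY=$(openssl rand -base64 32)
# Derive the MD5 hash of the key, as shown in the command above.
ENC_KEY_MD5=$(echo -n "$ENC_KEY" | openssl dgst -md5 -binary | openssl enc -base64)

echo "AUDIT_FLUENTD_S3_BUCKET_ENCRYPTION_KEY: \"$ENC_KEY\""
echo "AUDIT_FLUENTD_S3_BUCKET_ENCRYPTION_KEY_MD5: \"$ENC_KEY_MD5\""
```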

When the destination is azure-blob or azure-adls, edit the following properties:

AUDIT_FLUENTD_AZURE_STORAGE_ACCOUNT

AUDIT_FLUENTD_AZURE_CONTAINER

Set the storage account and the container, if you set the audit destination above to Azure Blob or Azure ADLS.

To know how to get the ADLS properties, click here.

Leave unchanged, if you set the audit destination to local.

Note: Currently, it supports Azure blob storage only.

AUDIT_FLUENTD_AZURE_STORAGE_ACCOUNT: "storage_account_1"

AUDIT_FLUENTD_AZURE_CONTAINER: "container_1"

AUDIT_FLUENTD_AZURE_FILE_TIME_INTERVAL This is the time interval after which the audits will be pushed to the Azure ADLS/Blob destination. 3600s
AUDIT_FLUENTD_AUTH_TYPE Select an authentication type from the dropdown list.

AUDIT_FLUENTD_AZURE_STORAGE_ACCOUNT_KEY

AUDIT_FLUENTD_AZURE_STORAGE_SAS_TOKEN 

Configure these properties, if you have selected SAS Key in the property, AUDIT_FLUENTD_AUTH_TYPE.

Set the storage account key and the SAS token, if you set the audit destination above to Azure Blob.

Leave unchanged, if you're using Azure's Managed Identity Service.

 

AUDIT_FLUENTD_AZURE_OAUTH_TENANT_ID

AUDIT_FLUENTD_AZURE_OAUTH_APP_ID

AUDIT_FLUENTD_AZURE_OAUTH_SECRET

Configure these properties, if you have selected OAUTH in the property, AUDIT_FLUENTD_AUTH_TYPE.

Set the OAuth tenant ID, application ID, and secret, if you set the audit destination above to Azure ADLS.

Leave unchanged, if you're using Azure's Managed Identity Service.

 

AUDIT_FLUENTD_AZURE_USER_MANAGED_IDENTITY_ENABLE

AUDIT_FLUENTD_AZURE_USER_MANAGED_IDENTITY

Configure these properties, if you have selected MSI (UserManaged) in the property, AUDIT_FLUENTD_AUTH_TYPE.
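
As a sketch, sending audits to an S3 bucket with an IAM instance role might be configured as follows; values are the illustrative examples from the table:

```yaml
# Illustrative vars snippet for audits to S3 via an IAM instance role (placeholder values).
AUDIT_FLUENTD_AUDIT_DESTINATION: "s3"
AUDIT_FLUENTD_S3_BUCKET: "bucket_1"
AUDIT_FLUENTD_S3_REGION: "us-east-1"
AUDIT_FLUENTD_S3_FILE_TIME_INTERVAL: "3600s"
AUDIT_FLUENTD_S3_BUCKET_ENCRYPTION_TYPE: "SSE-S3"
```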

Spark Standalone#


Property Description Example
SPARK_STANDALONE_ENABLE Property to enable generating the setup script and configs for the Spark standalone plugin installation. true
SPARK_ENV_TYPE

Set the environment type. It can be any user-defined type.

For example, if you're working in an environment that runs locally, you can set the type as local; for a production environment, set it as prod.

local
SPARK_HOME Home path of your Spark installation. ~/privacera/spark/spark-3.1.1-bin-hadoop3.2
SPARK_USER_HOME User home directory of your Spark installation. /home/ec2-user
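
A local Spark standalone setup could be sketched like this, using the example values from the table:

```yaml
# Illustrative vars snippet for a local Spark standalone plugin (placeholder values).
SPARK_STANDALONE_ENABLE: "true"
SPARK_ENV_TYPE: "local"
SPARK_HOME: "~/privacera/spark/spark-3.1.1-bin-hadoop3.2"
SPARK_USER_HOME: "/home/ec2-user"
```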

Trino Standalone#


Property Description Example
TRINO_STANDALONE_ENABLE Property to enable/disable Trino. true
TRINO_USER_HOME Property to set the path to the Trino home directory. /home/ec2-user
TRINO_INSTALL_DIR_NAME Property to set the path to the directory where Trino is installed. /etc/trino
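
A Trino standalone setup could be sketched like this, using the example values from the table:

```yaml
# Illustrative vars snippet for Trino standalone (placeholder values).
TRINO_STANDALONE_ENABLE: "true"
TRINO_USER_HOME: "/home/ec2-user"
TRINO_INSTALL_DIR_NAME: "/etc/trino"
```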


Last update: August 20, 2021