{% for release in sorted(releases, key=lambda release: release['semver'], reverse=True) %}
{% end %}
Remote settings
Database settings
Change server
Enabled clusters
{% if available_clusters_error is not None %}
Could not list available clusters. Reason: {{ available_clusters_error }}
{% else %}
{% end %}
Create database
Use this action to create the database after the Workspace's server or cluster has been changed.
Query API limits
To set a limit on a specific endpoint, use Endpoint Limits.
For Workspace rate limits, use Rate Limits or change the Workspace plan.
To limit concurrency in the CH cluster, enable the DISTRIBUTED_ENDPOINT_CONCURRENCY FF and use `max_concurrency_queries`. The number is global: to allow at most 10 concurrent queries, set it to 10.
Setting `max_concurrency_queries` to 0 disables the concurrency limit for the endpoint.
Query API
Value
Actions
{% for name, config in current_query_api_limits.items() %}
{% end %}
{% if available_query_api_limits %}
{% else %}
You cannot set new Query API limits.
{% end %}
Endpoint Limits
To set a limit on a new endpoint, you need to publish it first.
To limit concurrency in the CH cluster, enable the DISTRIBUTED_ENDPOINT_CONCURRENCY FF and use `max_concurrency_queries`. The number is global: to allow at most 10 concurrent queries, set it to 10.
Setting `max_concurrency_queries` to 0 disables the concurrency limit for the endpoint.
Setting max threads to 0 means the default value is used.
Setting backend hint to 1 disables the default sticky behaviour by forcing the endpoint's `backend_hint` to None.
Setting max rps to 0 disables the requests-per-second limit for the endpoint.
Do not use max_rps to limit concurrency: use max_concurrent_queries, or combine max_rps with max_execution_time set to 1s.
For max threads and backend hint, the priority order is: Workspace Limit > Query template setting > Endpoint Limit > Default setting.
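The priority order above can be sketched as a simple lookup chain. This is an illustrative Python snippet, not the real implementation; the dict arguments are hypothetical names:

```python
# Resolve a setting (e.g. max_threads) by walking the priority order:
# Workspace Limit > Query template setting > Endpoint Limit > Default setting.
def resolve_setting(name, workspace_limits, template_settings, endpoint_limits, defaults):
    for source in (workspace_limits, template_settings, endpoint_limits, defaults):
        if name in source:
            return source[name]
    return None

# The Workspace Limit (4) wins over the Endpoint Limit (8).
value = resolve_setting(
    "max_threads",
    workspace_limits={"max_threads": 4},
    template_settings={},
    endpoint_limits={"max_threads": 8},
    defaults={"max_threads": 0},
)
```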
Endpoint
Value
Actions
{% for name, config in current_endpoint_limits.items() %}
{% end %}
{% if available_endpoint_limits %}
{% else %}
You cannot set new endpoint limits: all endpoints already have all limits defined, or no endpoint has been published.
{% end %}
ClickHouse Limits
Name
Value
Actions
{% for name, value in current_ch_limits %}
{% end %}
{% if available_ch_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
HFI Configuration
Name
Value
Actions
Gatherer Configuration
{% if workspace['use_gatherer'] %}
Use of Gatherer for HFI and Kafka is enabled
{% else %}
Use of Gatherer for HFI and Kafka is disabled
{% end %}
{% if workspace['allow_gatherer_fallback'] %}
Allowing HFI to insert directly into the landing when no Gatherer is present is enabled
{% else %}
Allowing HFI to insert directly into the landing when no Gatherer is present is disabled
{% end %}
{% if workspace['gatherer_allow_s3_backup_on_user_errors'] %}
Allow S3 backups on user errors is enabled
{% else %}
Allow S3 backups on user errors is disabled
{% end %}
Name
Value
Actions
Gatherer flush time per table
{% for name, value in current_gatherer_flush_time_ds %}
{% end %}
ClickHouse configuration
{% for name, value in current_gatherer_ch_limits %}
{% end %}
{% if available_gatherer_ch_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
Multiwriter configuration
{% for name, value in current_gatherer_multiwriter_limits %}
{% end %}
{% if available_gatherer_multiwriter_limits %}
{% else %}
You cannot set new multiwriter limits: the user already has all limits defined.
{% end %}
{% if len(storage_policies) %}
S3 Configuration
Name
Value
Actions
{% end %}
Kafka Limits
Name
Value
Actions
{% for name, value in current_kafka_limits %}
{% end %}
{% if available_kafka_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
Populate Limits
Name
Value
Actions
{% for name, value in current_populate_limits %}
{% end %}
{% if available_populate_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
Copy Limits
Name
Value
Actions
{% for name, value in current_copy_limits %}
{% end %}
{% if available_copy_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
Branch Copy Limits
To override a branch copy limit, the limit must already exist as a copy limit. If you need to override a default value in a branch, add the limit in the Copy Limits section with the same value as the default.
These limits are applied automatically only on branch creation, not to branches that already exist. For existing branches, use the Cheriff page of the branch.
Name
Value
Actions
{% for name, value in current_copy_branch_limits %}
{% end %}
{% if available_copy_branch_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
Data Sinks Limits
Name
Value
Actions
{% for name, value in current_sinks_limits %}
{% end %}
{% if available_sinks_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
DynamoDB Limits
Name
Value
Actions
{% for name, value in current_dynamodb_limits %}
{% end %}
{% if available_dynamodb_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
Delete Limits
Name
Value
Actions
{% for name, value in current_delete_limits %}
{% end %}
{% if available_delete_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
Iterating Limits
Name
Value
Actions
{% for name, value in current_iterating_limits %}
{% end %}
{% if available_iterating_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
Rate Limits
Options
Count: Number of requests allowed per time period. Must be greater than zero.
Period: The length of the time window, in seconds. Must be greater than zero.
Max Burst: Number of requests allowed to exceed the rate in a single burst. Must be greater than or equal to zero. To allow some momentary flexibility, set this value higher than 0; think of it as consuming tokens from the future count. When in doubt, set the max burst value to match the count value.
Examples
count=6, period=60, burst=0: allow 6 requests per minute. With burst set to zero, you have to wait 10 seconds between requests; this is equivalent to count=1, period=10, burst=0, i.e. 1 request every 10 seconds.
count=5, period=60, burst=3: allow 5 requests per minute with momentary bursts of up to 3 requests. You do not have to wait 12 seconds between every pair of requests, since up to 3 consecutive requests are allowed.
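For intuition, the count/period/burst interaction can be approximated with a token bucket. This is an illustrative sketch, not the production rate limiter:

```python
# Token-bucket approximation of the rate check: tokens refill at count/period
# per second, and max_burst extra capacity lets requests momentarily exceed the rate.
class RateLimiter:
    def __init__(self, count, period, max_burst):
        self.rate = count / period        # tokens refilled per second
        self.capacity = 1 + max_burst     # with burst=0, only one token can be stored
        self.tokens = self.capacity
        self.last = 0.0

    def allow(self, now):
        # Refill tokens for the elapsed time, then try to spend one.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# count=6, period=60, burst=0: a second immediate request is rejected,
# but a request after the 10-second gap is allowed again.
rl = RateLimiter(count=6, period=60, max_burst=0)
results = [rl.allow(0.0), rl.allow(0.1), rl.allow(10.0)]  # [True, False, True]
```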
Name
Count
Period (seconds)
Max Burst
Actions
{% for name, rl_config in current_rate_limit_config %}
{% end %}
{% if available_rate_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
Import Limits
Name
Value
Actions
{% for name, value in current_import_limits %}
{% end %}
{% if available_import_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
CDK Limits
⚠️ Changing any of these limits only affects connections created from that moment forward
Name
Value
Actions
{% for name, value in current_cdk_limits %}
{% end %}
{% if available_cdk_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
Workspace Limits
Name
Value
Actions
{% for name, value in current_workspace_limits %}
{% end %}
{% if available_workspace_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
Release Limits
Name
Value
Actions
{% for name, value in current_release_limits %}
{% end %}
{% if available_release_limits %}
{% else %}
You cannot set new limits: the user already has all limits defined.
{% end %}
Feature Flags
Name
Status
Actions
{% for flag, value, is_override in workspace_feature_flags %}
{{ flag['name'] }}
{% raw flag['description'] %}
{% if value %}
Activated{% if is_override %}*{% end %}
{% else %}
Deactivated{% if is_override %}*{% end %}
{% end %}
{% if is_override %}
{% end %}
{% end %}
Postgres connector settings
Database
{{workspace['database']}}
User
user_{{workspace['database']}}
Database operations
After clicking "Create database", make sure to change the password
Change password
This runs an "ALTER ROLE" in the PostgreSQL instance. Once changed, the connector is ready and the user and password can be shared with the customer.
ClickHouse BI Connector
Mirror DataSources and Endpoints to a ClickHouse BI Connector machine. It might not account for all cases.
If you want to try, we'll need to provide you with the CH BI Server Address. Let us know in #tmp-bi-connector.
Database
{{workspace['database']}}
User
user_{{workspace['name']}}
Data Connectors
{% if data_connectors %}
{% for connector in data_connectors %}
Name
ID
Type
{{connector['name']}}
{{connector['id']}}
{{connector['service']}}
Connector settings
Linkers
{% if connector['linkers'] %}
{% end %}
{% for linker in connector['linkers'] %}
{% end %}
{% end %}
{% else %}
No settings
{% end %}
Kafka
External Datasources Integration
{% if workspace.cdk_gcp_service_account %}
Hard-delete GCP Service account
Panic button to delete the workspace's service account in case of a leak. WARNING: once used, all permissions granted by the user to the account will be lost, and all schedules in the workspace will lose access to BQ.
{% else %}
No GCP service account provisioned
{% end %}
Token restoration
Query Profiles
Name
Value
Actions
{% for name, value in profiles.items() %}
{% end %}
Tags
{% if tags_error %}
{{ tags_error }}
{% else %}
{% if len(tags) > 0 %}
{% try %}
Tag
Resources
{% for tag in tags %}
{{ tag.name }}
{% for resource in tag.resources %}
{{ resource.get('name', '') }} - {{ resource.get('id') }}
{% end %}
{% end %}
{% except %}
There was an exception while rendering the tags
{% end %}
{% else %}
No tags found
{% end %}
{% end %}
{% set resources, edges = graph %}
Tables and Materialized Views
Data Source
Table
Engine
Disk usage
Hosts
Clusters
{% for r in sorted(resources, key=lambda r: r.name) %}
{% if r.resource in ('Datasource', 'OrphanTable', 'MaterializedNode') %}