Citus Utility Functions - Citus 13.0.1 documentation
This section contains reference information for the user-defined functions provided by Citus. These functions provide distributed functionality beyond the standard SQL commands.
Table and Shard DDL
citus_schema_distribute
Converts an existing regular schema into a distributed schema. Distributed schemas are automatically associated with individual colocation groups, and tables created in them are automatically converted to colocated distributed tables without a shard key. The process of distributing the schema automatically assigns and moves it to an existing node in the cluster.
Arguments
schemaname: Name of the schema which needs to be distributed.
Example
This example distributes three schemas named tenant_a, tenant_b, and tenant_c.
SELECT citus_schema_distribute('tenant_a');
SELECT citus_schema_distribute('tenant_b');
SELECT citus_schema_distribute('tenant_c');
For more examples, see Microservices.
citus_schema_undistribute
Converts an existing distributed schema back into a regular schema. The process results in the tables and data being moved from the current node back to the coordinator node in the cluster.
Arguments
schemaname: Name of the schema which needs to be undistributed.
Example
This example converts three distributed schemas back into regular schemas.
SELECT citus_schema_undistribute('tenant_a');
SELECT citus_schema_undistribute('tenant_b');
SELECT citus_schema_undistribute('tenant_c');
For more examples, see Microservices.
citus_schema_move
This function moves a distributed schema from one node to another.
There are two ways to move a distributed schema: blocking or nonblocking. The blocking approach means that during the move all modifications to the tables in the schema are paused. The second way, which avoids blocking writes, relies on Postgres 10 logical replication.
Arguments
schema_id: Oid of the distributed schema to be moved. If you provide the name of the schema as a string literal, it is automatically cast to the oid.
target_node_name: DNS name of the node to which the distributed schema is to be moved (“target” node).
target_node_port: The port on the target worker node on which the database server is listening.
shard_transfer_mode: (Optional) Specify the method of replication, whether to use PostgreSQL logical replication or a cross-worker COPY command. The possible values are:
auto: Require replica identity if logical replication is possible, otherwise use legacy behaviour. This is the default value.
force_logical: Use logical replication even if not all tables in the schema have a replica identity. Any concurrent update/delete statements to those tables will fail during replication.
block_writes: Use COPY (blocking writes) for the tables in the schema lacking primary key or replica identity.
Example
SELECT citus_schema_move('schema-name', 'to_host', 5432);
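To avoid blocking writes during the move, the optional shard_transfer_mode argument can be supplied by name. A minimal sketch, assuming a schema named tenant_a and a placeholder target host:

SELECT citus_schema_move('tenant_a', 'worker-2.example.com', 5432,
                         shard_transfer_mode => 'force_logical');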
create_distributed_table
The create_distributed_table() function is used to define a distributed table and create its shards if it’s a hash-distributed table. This function takes in a table name, the distribution column and an optional distribution method and inserts appropriate metadata to mark the table as distributed. The function defaults to ‘hash’ distribution if no distribution method is specified. If the table is hash-distributed, the function also creates worker shards based on the shard count configuration value. If the table contains any rows, they are automatically distributed to worker nodes.
Arguments
table_name: Name of the table which needs to be distributed.
distribution_column: The column on which the table is to be distributed.
colocate_with: (Optional) include current table in the co-location group of another table. By default tables are co-located when they are distributed by columns of the same type with the same shard count.
If you want to break this colocation later, you can use update_distributed_table_colocation. Possible values for colocate_with are default, none to start a new co-location group, or the name of another table to co-locate with that table. (See Co-Locating Tables.)
Keep in mind that the default value of colocate_with does implicit co-location. As Table Co-Location explains, this can be a great thing when tables are related or will be joined. However, when two tables are unrelated but happen to use the same datatype for their distribution columns, accidentally co-locating them can decrease performance during shard rebalancing. The table shards will be moved together unnecessarily in a “cascade.”
If you want to break this implicit colocation, you can use update_distributed_table_colocation.
If a new distributed table is not related to other tables, it’s best to specify colocate_with => 'none'.
shard_count: (Optional) the number of shards to create for the new distributed table. When specifying shard_count you can’t specify a value of colocate_with other than none. To change the shard count of an existing table or colocation group, use the alter_distributed_table function.
Possible values for shard_count are between 1 and 64000. For guidance on choosing the optimal value, see Shard Count.
Example
This example informs the database that the github_events table should be distributed by hash on the repo_id column.
SELECT create_distributed_table('github_events', 'repo_id');

-- alternatively, to be more explicit:
SELECT create_distributed_table('github_events', 'repo_id',
                                colocate_with => 'github_repo');
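If the new table is unrelated to any existing table, the guidance above on colocate_with and shard_count can be combined into one call. A sketch with an illustrative shard count:

SELECT create_distributed_table('github_events', 'repo_id',
                                colocate_with => 'none',
                                shard_count   => 64);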
For more examples, see Creating and Modifying Distributed Objects (DDL).
truncate_local_data_after_distributing_table
Truncate all local rows after distributing a table, and prevent constraints from failing due to outdated local records. The truncation cascades to tables having a foreign key to the designated table. If the referring tables are not themselves distributed then truncation is forbidden until they are, to protect referential integrity:
ERROR: cannot truncate a table referenced in a foreign key constraint by a local table
Truncating local coordinator node table data is safe for distributed tables because their rows, if they have any, are copied to worker nodes during distribution.
Arguments
table_name: Name of the distributed table whose local counterpart on the coordinator node should be truncated.
Example
-- requires that argument is a distributed table
SELECT truncate_local_data_after_distributing_table('public.github_events');
undistribute_table
The undistribute_table() function undoes the action of create_distributed_table or create_reference_table. Undistributing moves all data from shards back into a local table on the coordinator node (assuming the data can fit), then deletes the shards.
Citus will not undistribute tables that have – or are referenced by – foreign keys, unless the cascade_via_foreign_keys argument is set to true. If this argument is false (or omitted), then you must manually drop the offending foreign key constraints before undistributing.
Arguments
table_name: Name of the distributed or reference table to undistribute.
cascade_via_foreign_keys: (Optional) When this argument is set to “true,” undistribute_table also undistributes all tables that are related to table_name through foreign keys. Use caution with this parameter, because it can potentially affect many tables.
Example
This example distributes a github_events table and then undistributes it.
-- first distribute the table
SELECT create_distributed_table('github_events', 'repo_id');

-- undo that and make it local again
SELECT undistribute_table('github_events');
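If the table were involved in foreign key relationships, a variant using the cascade_via_foreign_keys argument described above would undistribute the related tables as well; a sketch:

-- also undistribute any tables connected to github_events via foreign keys
SELECT undistribute_table('github_events', cascade_via_foreign_keys => true);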
alter_distributed_table
The alter_distributed_table() function can be used to change the distribution column, shard count or colocation properties of a distributed table.
Arguments
table_name: Name of the distributed table that will be altered.
distribution_column: (Optional) Name of the new distribution column.
shard_count: (Optional) The new shard count.
colocate_with: (Optional) The table that the current distributed table will be colocated with. Possible values are default, none to start a new colocation group, or the name of another table with which to colocate.
cascade_to_colocated: (Optional) When this argument is set to “true”, shard_count and colocate_with changes will also be applied to all of the tables that were previously colocated with the table, and the colocation will be preserved. If it is “false”, the current colocation of this table will be broken.
Example
-- change distribution column
SELECT alter_distributed_table('github_events', distribution_column:='event_id');

-- change shard count of all tables in colocation group
SELECT alter_distributed_table('github_events', shard_count:=6, cascade_to_colocated:=true);

-- change colocation
SELECT alter_distributed_table('github_events', colocate_with:='another_table');
alter_table_set_access_method
The alter_table_set_access_method() function changes the access method of a table (e.g. heap or columnar).
Arguments
table_name: Name of the table whose access method will change.
access_method: Name of the new access method.
Example
SELECT alter_table_set_access_method('github_events', 'columnar');
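To convert the table back to row-based storage, the same function can be called with the heap access method; a minimal sketch:

SELECT alter_table_set_access_method('github_events', 'heap');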
create_reference_table
The create_reference_table() function is used to define a small reference or dimension table. This function takes in a table name, and creates a distributed table with just one shard, replicated to every worker node.
Arguments
table_name: Name of the small dimension or reference table which needs to be distributed.
Example
This example informs the database that the nation table should be defined as a reference table
SELECT create_reference_table('nation');
update_distributed_table_colocation
The update_distributed_table_colocation() function is used to update the colocation of a distributed table. It can also be used to break the colocation of a distributed table. Citus implicitly colocates two tables if their distribution columns have the same type; this can be useful if the tables are related and will be joined. If tables A and B are colocated and table A gets rebalanced, table B will also be rebalanced. If table B does not have a replica identity, the rebalance will fail. Therefore, this function can be useful for breaking the implicit colocation in that case.
Note that this function does not move any data around physically.
Arguments
table_name: Name of the table whose colocation will be updated.
colocate_with: The table with which the table should be colocated.
If you want to break the colocation of a table, you should specify colocate_with => 'none'.
Example
This example updates the colocation of table A to match the colocation of table B.
SELECT update_distributed_table_colocation('A', colocate_with => 'B');
Assume that table A and table B are colocated (possibly implicitly). If you want to break the colocation:
SELECT update_distributed_table_colocation('A', colocate_with => 'none');
Now, assume that table A, table B, table C, and table D are colocated and you want to colocate table A and table B together, and table C and table D together:
SELECT update_distributed_table_colocation('C', colocate_with => 'none');
SELECT update_distributed_table_colocation('D', colocate_with => 'C');
If you have a hash distributed table named none and you want to update its colocation, you can do:
SELECT update_distributed_table_colocation('"none"', colocate_with => 'some_other_hash_distributed_table');
create_distributed_function
Propagates a function from the coordinator node to workers, and marks it for distributed execution. When a distributed function is called on the coordinator, Citus uses the value of the “distribution argument” to pick a worker node to run the function. Executing the function on workers increases parallelism, and can bring the code closer to data in shards for lower latency.
Note that the Postgres search path is not propagated from the coordinator to workers during distributed function execution, so distributed function code should fully qualify the names of database objects. Also, notices emitted by the functions will not be displayed to the user.
Arguments
function_name: Name of the function to be distributed. The name must include the function’s parameter types in parentheses, because multiple functions can have the same name in PostgreSQL. For instance, 'foo(int)' is different from 'foo(int, text)'.
distribution_arg_name: (Optional) The argument name by which to distribute. For convenience (or if the function arguments do not have names), a positional placeholder is allowed, such as '$1'. If this parameter is not specified, then the function named by function_name is merely created on the workers. If worker nodes are added in the future the function will automatically be created there too.
colocate_with: (Optional) When the distributed function reads or writes to a distributed table (or more generally Co-Locating Tables), be sure to name that table using the colocate_with parameter. This ensures that each invocation of the function runs on the worker node containing relevant shards.
Example
-- an example function which updates a hypothetical
-- event_responses table which itself is distributed by event_id
CREATE OR REPLACE FUNCTION
  register_for_event(p_event_id int, p_user_id int)
RETURNS void LANGUAGE plpgsql AS $fn$
BEGIN
  INSERT INTO event_responses VALUES ($1, $2, 'yes')
  ON CONFLICT (event_id, user_id)
  DO UPDATE SET response = EXCLUDED.response;
END;
$fn$;

-- distribute the function to workers, using the p_event_id argument
-- to determine which shard each invocation affects, and explicitly
-- colocating with event_responses which the function updates
SELECT create_distributed_function(
  'register_for_event(int, int)',
  'p_event_id',
  colocate_with := 'event_responses'
);
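Once distributed, the function is called from the coordinator like any other function, and Citus routes the call using the value of the distribution argument. A sketch with illustrative event and user ids:

SELECT register_for_event(7, 236);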
alter_columnar_table_set
The alter_columnar_table_set() function changes settings on a columnar table. Calling this function on a non-columnar table gives an error. All arguments except the table name are optional.
To view current options for all columnar tables, consult this table:
SELECT * FROM columnar.options;
The default values for columnar settings for newly-created tables can be overridden with the following GUCs, as shown in the example below:
columnar.compression
columnar.compression_level
columnar.stripe_row_count
columnar.chunk_row_count
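These GUCs can be changed like any other PostgreSQL setting; a sketch that overrides the default compression for tables created later in the session (the chosen method must be compiled into your build):

SET columnar.compression TO 'lz4';
-- confirm the session-level setting
SHOW columnar.compression;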
Arguments
table_name: Name of the columnar table.
chunk_row_count: (Optional) The maximum number of rows per chunk for newly-inserted data. Existing chunks of data will not be changed and may have more rows than this maximum value. The default value is 10000.
stripe_row_count: (Optional) The maximum number of rows per stripe for newly-inserted data. Existing stripes of data will not be changed and may have more rows than this maximum value. The default value is 150000.
compression: (Optional) [none|pglz|zstd|lz4|lz4hc] The compression type for newly-inserted data. Existing data will not be recompressed or decompressed. The default and generally suggested value is zstd (if support has been compiled in).
compression_level: (Optional) Valid settings are from 1 through 19. If the compression method does not support the level chosen, the closest level will be selected instead.
Example
SELECT alter_columnar_table_set(
  'my_columnar_table',
  compression => 'none',
  stripe_row_count => 10000);
create_time_partitions
The create_time_partitions() function creates partitions of a given interval to cover a given range of time.
Arguments
table_name: (regclass) table for which to create new partitions. The table must be partitioned on one column, of type date, timestamp, or timestamptz.
partition_interval: an interval of time, such as '2 hours', or '1 month', to use when setting ranges on new partitions.
end_at: (timestamptz) create partitions up to this time. The last partition will contain the point end_at, and no later partitions will be created.
start_from: (timestamptz, optional) pick the first partition so that it contains the point start_from. The default value is now().
Return Value
True if it needed to create new partitions, false if they all existed already.
Example
-- create a year's worth of monthly partitions
-- in table foo, starting from the current time
SELECT create_time_partitions(
  table_name         := 'foo',
  partition_interval := '1 month',
  end_at             := now() + '12 months'
);
drop_old_time_partitions
The drop_old_time_partitions() function removes all partitions whose intervals fall before a given timestamp. In addition to using this function, you might consider alter_old_partitions_set_access_method to compress the old partitions with columnar storage.
Arguments
table_name: (regclass) table for which to remove partitions. The table must be partitioned on one column, of type date, timestamp, or timestamptz.
older_than: (timestamptz) drop partitions whose upper range is less than or equal to older_than.
Example
-- drop partitions that are over a year old
CALL drop_old_time_partitions('foo', now() - interval '12 months');
alter_old_partitions_set_access_method
The alter_old_partitions_set_access_method() function changes the access method of all partitions whose range falls before a given timestamp. In a Timeseries Data use case, tables are often partitioned by time, and old partitions are compressed into read-only columnar storage.
Arguments
parent_table_name: (regclass) table for which to change partitions. The table must be partitioned on one column, of type date, timestamp, or timestamptz.
older_than: (timestamptz) change partitions whose upper range is less than or equal to older_than.
new_access_method: (name) either ‘heap’ for row-based storage, or ‘columnar’ for columnar storage.
Example
CALL alter_old_partitions_set_access_method(
  'foo', now() - interval '6 months', 'columnar'
);
Cluster Management And Repair Functions
citus_move_shard_placement
This function moves a given shard (and shards co-located with it) from one node to another. It is typically used indirectly during shard rebalancing rather than being called directly by a database administrator.
There are two ways to move the data: blocking or nonblocking. The blocking approach means that during the move all modifications to the shard are paused. The second way, which avoids blocking shard writes, relies on Postgres 10 logical replication.
After a successful move operation, shards in the source node get deleted. If the move fails at any point, this function throws an error and leaves the source and target nodes unchanged.
Arguments
shard_id: Id of the shard to be moved.
source_node_name: DNS name of the node on which the healthy shard placement is present (“source” node).
source_node_port: The port on the source worker node on which the database server is listening.
target_node_name: DNS name of the node to which the shard is to be moved (“target” node).
target_node_port: The port on the target worker node on which the database server is listening.
shard_transfer_mode: (Optional) Specify the method of replication, whether to use PostgreSQL logical replication or a cross-worker COPY command. The possible values are:
auto: Require replica identity if logical replication is possible, otherwise use legacy behaviour (e.g. for shard repair, PostgreSQL 9.6). This is the default value.
force_logical: Use logical replication even if the table doesn’t have a replica identity. Any concurrent update/delete statements to the table will fail during replication.
block_writes: Use COPY (blocking writes) for tables lacking primary key or replica identity.
Note
Citus Community edition supports all shard transfer modes as of version 11.0!
Example
SELECT citus_move_shard_placement(12345, 'from_host', 5432, 'to_host', 5432);
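A variant that sets the optional shard_transfer_mode explicitly might look like the following (host names are placeholders, as above):

SELECT citus_move_shard_placement(12345, 'from_host', 5432, 'to_host', 5432,
                                  shard_transfer_mode => 'block_writes');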
citus_rebalance_start
The citus_rebalance_start() function moves table shards to make them evenly distributed among the workers. It begins a background job to do the rebalancing, and returns immediately.
The rebalancing process first calculates the list of moves it needs to make in order to ensure that the cluster is balanced within the given threshold. Then, it moves shard placements one by one from the source node to the destination node and updates the corresponding shard metadata to reflect the move.
Every shard is assigned a cost when determining whether shards are “evenly distributed.” By default each shard has the same cost (a value of 1), so distributing to equalize the cost across workers is the same as equalizing the number of shards on each. This constant cost strategy is called “by_shard_count.”
The “by_shard_count” strategy is appropriate under these circumstances:
The shards are roughly the same size
The shards get roughly the same amount of traffic
Worker nodes are all the same size/type
Shards haven’t been pinned to particular workers
If any of these assumptions don’t hold, then rebalancing “by_shard_count” can result in a bad plan.
The default rebalancing strategy is “by_disk_size”. You can always customize the strategy using the rebalance_strategy parameter.
It’s advisable to call get_rebalance_table_shards_plan before running citus_rebalance_start, to see and verify the actions to be performed.
Arguments
threshold: (Optional) A float number between 0.0 and 1.0 which indicates the maximum difference ratio of node utilization from average utilization. For example, specifying 0.1 will cause the shard rebalancer to attempt to balance all nodes to hold the same number of shards ±10%. Specifically, the shard rebalancer will try to converge utilization of all worker nodes to the (1 - threshold) * average_utilization … (1 + threshold) * average_utilization range.
drain_only: (Optional) When true, move shards off worker nodes that have shouldhaveshards set to false in the Worker node table; move no other shards.
rebalance_strategy: (Optional) the name of a strategy in Rebalancer strategy table. If this argument is omitted, the function chooses the default strategy, as indicated in the table.
Example
The example below will attempt to rebalance shards within the default threshold.
SELECT citus_rebalance_start();
NOTICE:  Scheduling...
NOTICE:  Scheduled as job 1337.
DETAIL:  Rebalance scheduled as background job 1337.
HINT:  To monitor progress, run: SELECT details FROM citus_rebalance_status();
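To rebalance with a specific strategy instead of the default, the optional rebalance_strategy argument can be passed; a minimal sketch using one of the built-in strategies:

SELECT citus_rebalance_start(rebalance_strategy := 'by_disk_size');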
citus_rebalance_status
The citus_rebalance_start function returns immediately, while the rebalance continues as a background job. The citus_rebalance_status() function allows you to monitor the progress of this rebalance.
Example
To get general information about the rebalance, you can select all columns from the status. This shows the basic state of the job:
SELECT * FROM citus_rebalance_status();
 job_id |  state  | job_type  |         description          |          started_at           |          finished_at          | details
--------+---------+-----------+------------------------------+-------------------------------+-------------------------------+---------
      4 | running | rebalance | Rebalance colocation group 1 | 2022-08-09 21:57:27.833055+02 | 2022-08-09 21:57:27.833055+02 | { ... }
Rebalancer specifics live in the details column, in JSON format:
SELECT details FROM citus_rebalance_status();
{ "phase": "copy", "phase_index": 1, "phase_count": 3, "last_change":"2022-08-09 21:57:27", "colocations": { "1": { "shard_moves": 30, "shard_moved": 29, "last_move":"2022-08-09 21:57:27" }, "1337": { "shard_moves": 130, "shard_moved": 0 } } }
citus_rebalance_stop
This function cancels a rebalance in progress, if any.
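Example
The function takes no arguments; a minimal sketch:

-- cancel the background rebalance job, if one is running
SELECT citus_rebalance_stop();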
citus_rebalance_wait
This function blocks until a running rebalance is complete. If no rebalance is in progress when citus_rebalance_wait() is called, then the function returns immediately.
The function can be useful for scripts or benchmarking.
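Example
For instance, a benchmarking script might start a rebalance and block until it completes; a sketch:

SELECT citus_rebalance_start();
SELECT citus_rebalance_wait();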
rebalance_table_shards
Warning
The rebalance_table_shards() function is deprecated. As of Citus v11.2, use citus_rebalance_start instead.
get_rebalance_table_shards_plan
Output the planned shard movements of citus_rebalance_start without performing them. While it’s unlikely, get_rebalance_table_shards_plan can output a slightly different plan than what a citus_rebalance_start call with the same arguments will do. This could happen because they are not executed at the same time, so facts about the cluster – e.g. disk space – might differ between the calls.
Arguments
A superset of the arguments for citus_rebalance_start: relation, threshold, max_shard_moves, excluded_shard_list, and drain_only. See documentation of that function for the arguments’ meaning.
Return Value
Tuples containing these columns:
table_name: The table whose shards would move
shardid: The shard in question
shard_size: Size in bytes
sourcename: Hostname of the source node
sourceport: Port of the source node
targetname: Hostname of the destination node
targetport: Port of the destination node
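Example
A minimal sketch, leaving all optional arguments at their defaults, previews the planned moves without performing them:

SELECT * FROM get_rebalance_table_shards_plan();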
get_rebalance_progress
Note
Citus v11.2 introduces the citus_rebalance_status function, whose output is easier to understand than that of get_rebalance_progress.
Once a shard rebalance begins, the get_rebalance_progress() function lists the progress of every shard involved. It monitors the moves planned and executed by citus_rebalance_start().
Return Value
Tuples containing these columns:
sessionid: Postgres PID of the rebalance monitor
table_name: The table whose shards are moving
shardid: The shard in question
shard_size: Size of the shard in bytes
sourcename: Hostname of the source node
sourceport: Port of the source node
targetname: Hostname of the destination node
targetport: Port of the destination node
progress: 0 = waiting to be moved; 1 = moving; 2 = complete
source_shard_size: Size of the shard on the source node in bytes
target_shard_size: Size of the shard on the target node in bytes
Example
SELECT * FROM get_rebalance_progress();
┌───────────┬────────────┬─────────┬────────────┬───────────────┬────────────┬───────────────┬────────────┬──────────┬───────────────────┬───────────────────┐
│ sessionid │ table_name │ shardid │ shard_size │ sourcename    │ sourceport │ targetname    │ targetport │ progress │ source_shard_size │ target_shard_size │
├───────────┼────────────┼─────────┼────────────┼───────────────┼────────────┼───────────────┼────────────┼──────────┼───────────────────┼───────────────────┤
│      7083 │ foo        │  102008 │    1204224 │ n1.foobar.com │       5432 │ n4.foobar.com │       5432 │        0 │           1204224 │                 0 │
│      7083 │ foo        │  102009 │    1802240 │ n1.foobar.com │       5432 │ n4.foobar.com │       5432 │        0 │           1802240 │                 0 │
│      7083 │ foo        │  102018 │     614400 │ n2.foobar.com │       5432 │ n4.foobar.com │       5432 │        1 │            614400 │            354400 │
│      7083 │ foo        │  102019 │       8192 │ n3.foobar.com │       5432 │ n4.foobar.com │       5432 │        2 │                 0 │              8192 │
└───────────┴────────────┴─────────┴────────────┴───────────────┴────────────┴───────────────┴────────────┴──────────┴───────────────────┴───────────────────┘
citus_add_rebalance_strategy
Appends a row to the pg_dist_rebalance_strategy table.
Arguments
For more about these arguments, see the corresponding column values in Rebalancer strategy table.
name: identifier for the new strategy
shard_cost_function: identifies the function used to determine the “cost” of each shard
node_capacity_function: identifies the function to measure node capacity
shard_allowed_on_node_function: identifies the function which determines which shards can be placed on which nodes
default_threshold: a floating point threshold that tunes how precisely the cumulative shard cost should be balanced between nodes
minimum_threshold: (Optional) a safeguard column that holds the minimum value allowed for the threshold argument of citus_rebalance_start(). Its default value is 0.
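Example
A sketch that registers a hypothetical strategy assembled from the built-in cost, capacity, and placement functions listed in the Rebalancer strategy table (the strategy name and thresholds are illustrative):

SELECT citus_add_rebalance_strategy(
    'cost_by_disk_size_custom',           -- name (illustrative)
    'citus_shard_cost_by_disk_size',      -- shard_cost_function
    'citus_node_capacity_1',              -- node_capacity_function
    'citus_shard_allowed_on_node_true',   -- shard_allowed_on_node_function
    0.1,                                  -- default_threshold
    0.01                                  -- minimum_threshold
);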
citus_set_default_rebalance_strategy
Updates the Rebalancer strategy table, changing the strategy named by its argument to be the default chosen when rebalancing shards.
Arguments
name: the name of the strategy in pg_dist_rebalance_strategy
Example
SELECT citus_set_default_rebalance_strategy('by_disk_size');
citus_remote_connection_stats
The citus_remote_connection_stats() function shows the number of active connections to each remote node.
Example
SELECT * from citus_remote_connection_stats();
    hostname    | port | database_name | connection_count_to_node
----------------+------+---------------+--------------------------
 citus_worker_1 | 5432 | postgres      |                        3
(1 row)
citus_drain_node
The citus_drain_node() function moves shards off the designated node and onto other nodes that have shouldhaveshards set to true in the Worker node table. This function is designed to be called prior to removing a node from the cluster, i.e. before turning the node’s physical server off.
Arguments
nodename: The hostname of the node to be drained.
nodeport: The port number of the node to be drained.
shard_transfer_mode: (Optional) Specify the method of replication, whether to use PostgreSQL logical replication or a cross-worker COPY command. The possible values are:
auto: Require replica identity if logical replication is possible, otherwise use legacy behaviour (e.g. for shard repair, PostgreSQL 9.6). This is the default value.
force_logical: Use logical replication even if the table doesn’t have a replica identity. Any concurrent update/delete statements to the table will fail during replication.
block_writes: Use COPY (blocking writes) for tables lacking primary key or replica identity.
Note
Citus Community edition supports all shard transfer modes as of version 11.0!
rebalance_strategy: (Optional) the name of a strategy in Rebalancer strategy table. If this argument is omitted, the function chooses the default strategy, as indicated in the table.
Example
Here are the typical steps to remove a single node (for example ‘10.0.0.1’ on a standard PostgreSQL port):
1. Drain the node.
SELECT * from citus_drain_node('10.0.0.1', 5432);
2. Wait until the command finishes.
3. Remove the node.
When draining multiple nodes it’s recommended to use citus_rebalance_start instead. Doing so allows Citus to plan ahead and move shards the minimum number of times.
1. Run this for each node that you want to remove:
SELECT * FROM citus_set_node_property(node_hostname, node_port, 'shouldhaveshards', false);
2. Drain them all at once with citus_rebalance_start:
SELECT * FROM citus_rebalance_start(drain_only := true);
3. Wait until the draining rebalance finishes.
4. Remove the nodes.
isolate_tenant_to_new_shard
Note
Citus Community edition includes the isolate_tenant_to_new_shard function as of version 11.0!
This function creates a new shard to hold rows with a specific single value in the distribution column. It is especially handy for the multi-tenant Citus use case, where a large tenant can be placed alone on its own shard and ultimately its own physical node.
For a more in-depth discussion, see Tenant Isolation.
Arguments
table_name: The name of the table to get a new shard.
tenant_id: The value of the distribution column which will be assigned to the new shard.
cascade_option: (Optional) When set to “CASCADE,” also isolates the tenant’s rows from all tables in the current table’s Co-Locating Tables.
Return Value
shard_id: The function returns the unique id assigned to the newly created shard.
Examples
Create a new shard to hold the lineitems for tenant 135:
SELECT isolate_tenant_to_new_shard('lineitem', 135);
┌─────────────────────────────┐
│ isolate_tenant_to_new_shard │
├─────────────────────────────┤
│                      102240 │
└─────────────────────────────┘
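To also isolate tenant 135 in every table colocated with lineitem, the cascade_option described above can be added; a sketch:

SELECT isolate_tenant_to_new_shard('lineitem', 135, 'CASCADE');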
citus_create_restore_point
Temporarily blocks writes to the cluster, and creates a named restore point on all nodes. This function is similar to pg_create_restore_point, but applies to all nodes and makes sure the restore point is consistent across them. This function is well suited to doing point-in-time recovery, and cluster forking.
Arguments
name: The name of the restore point to create.
Return Value
coordinator_lsn: Log sequence number of the restore point in the coordinator node WAL.
Examples
select citus_create_restore_point('foo');
┌────────────────────────────┐
│ citus_create_restore_point │
├────────────────────────────┤
│ 0/1EA2808                  │
└────────────────────────────┘