8 changes: 4 additions & 4 deletions _partials/_timescaledb_supported_windows.md
@@ -1,4 +1,4 @@
| Operation system | Version |
|---------------------------------------------|------------|
| Microsoft Windows | 10, 11 |
| Microsoft Windows Server | 2019, 2020 |
| Operating system                            | Version          |
|---------------------------------------------|------------------|
| Microsoft Windows | 10, 11 |
| Microsoft Windows Server | 2019, 2022, 2025 |
64 changes: 40 additions & 24 deletions api/continuous-aggregates/create_materialized_view.md
@@ -12,10 +12,9 @@ products: [cloud, self_hosted, mst]

import Since2220 from "versionContent/_partials/_since_2_22_0.mdx";

# CREATE MATERIALIZED VIEW (Continuous Aggregate) <Tag type="community">Community</Tag>
# CREATE MATERIALIZED VIEW (continuous aggregate) <Tag type="community">Community</Tag>

The `CREATE MATERIALIZED VIEW` statement is used to create continuous
aggregates. To learn more, see the
The `CREATE MATERIALIZED VIEW` statement is used to create $CAGGs. To learn more, see the
[continuous aggregate how-to guides][cagg-how-tos].

The syntax is:
@@ -39,30 +38,46 @@ GROUP BY time_bucket( <const_value>, <partition_col_of_hypertable> ),
[HAVING ...]
```

The continuous aggregate view defaults to `WITH DATA`. This means that when the
The $CAGG view defaults to `WITH DATA`. This means that when the
view is created, it refreshes using all the current data in the underlying
hypertable or continuous aggregate. This occurs once when the view is created.
$HYPERTABLE or $CAGG. This occurs once when the view is created.
If you want the view to be refreshed regularly, you can use a refresh policy. If
you do not want the view to update when it is first created, use the
`WITH NO DATA` parameter. For more information, see
[`refresh_continuous_aggregate`][refresh-cagg].
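
For example, here is a minimal sketch of a $CAGG created with `WITH NO DATA` and
refreshed on a schedule instead. The `conditions` $HYPERTABLE, its columns, and the
policy offsets are illustrative assumptions, not part of this reference:

```sql
-- Create the continuous aggregate without materializing any data at creation time
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
  SELECT time_bucket('1 hour', time) AS bucket,
         device_id,
         avg(temperature) AS avg_temp
  FROM conditions
  GROUP BY time_bucket('1 hour', time), device_id
WITH NO DATA;

-- Keep it up to date with a refresh policy; the offsets shown are placeholders
SELECT add_continuous_aggregate_policy('conditions_hourly',
  start_offset => INTERVAL '1 day',
  end_offset => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');
```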

Continuous aggregates have some limitations of what types of queries they can
$CAGG_CAPs have some limitations on the types of queries they can
support. For more information, see the
[continuous aggregates section][cagg-how-tos].

$TIMESCALE_DB v2.17.1 and greater dramatically decrease the amount
of data written on a continuous aggregate in the presence of a small number of changes,
reduce the i/o cost of refreshing a continuous aggregate, and generate fewer Write-Ahead
Logs (WAL), set the`timescaledb.enable_merge_on_cagg_refresh`
configuration parameter to `TRUE`. This enables continuous aggregate
refresh to use merge instead of deleting old materialized data and re-inserting.
In $TIMESCALE_DB v2.17.0 and greater (with $PG 15+), you can dramatically decrease the amount
of data written to a $CAGG in the presence of a small number of changes,
reduce the I/O cost of refreshing a $CAGG, and generate fewer write-ahead
log (WAL) records by enabling the `timescaledb.enable_merge_on_cagg_refresh`
[GUC parameter][gucs]. This enables $CAGG
refresh to use `MERGE` instead of deleting old materialized data and re-inserting it.
This parameter only works for finalized $CAGGs
that don't have compression enabled. It is disabled by default.

For more settings for continuous aggregates, see [timescaledb_information.continuous_aggregates][info-views].
To enable this parameter for your session:

```sql
SET timescaledb.enable_merge_on_cagg_refresh = ON;
```

To enable it at the database level:

```sql
ALTER DATABASE your_database SET timescaledb.enable_merge_on_cagg_refresh = ON;
```
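
Because this is a standard $PG configuration parameter, you can check its current
value at any time:

```sql
SHOW timescaledb.enable_merge_on_cagg_refresh;
```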

For more information about GUC parameters, see the [configuration documentation][gucs].

For more settings for $CAGGs, see [timescaledb_information.continuous_aggregates][info-views].
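
For example, a quick way to review these settings for your existing $CAGGs is to query
that view directly. This sketch assumes the documented column names:

```sql
SELECT view_name, materialized_only, finalized
FROM timescaledb_information.continuous_aggregates;
```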

## Samples

Create a daily continuous aggregate view:
Create a daily $CAGG view:

```sql
CREATE MATERIALIZED VIEW continuous_aggregate_daily( timec, minl, sumt, sumh )
@@ -72,7 +87,7 @@ WITH (timescaledb.continuous) AS
GROUP BY time_bucket('1day', timec)
```

Add a thirty day continuous aggregate on top of the same raw hypertable:
Add a thirty-day $CAGG on top of the same raw $HYPERTABLE:

```sql
CREATE MATERIALIZED VIEW continuous_aggregate_thirty_day( timec, minl, sumt, sumh )
Expand All @@ -82,7 +97,7 @@ WITH (timescaledb.continuous) AS
GROUP BY time_bucket('30day', timec);
```

Add an hourly continuous aggregate on top of the same raw hypertable:
Add an hourly $CAGG on top of the same raw $HYPERTABLE:

```sql
CREATE MATERIALIZED VIEW continuous_aggregate_hourly( timec, minl, sumt, sumh )
@@ -96,26 +111,26 @@ WITH (timescaledb.continuous) AS

|Name|Type|Description|
|-|-|-|
|`<view_name>`|TEXT|Name (optionally schema-qualified) of continuous aggregate view to create|
|`<view_name>`|TEXT|Name (optionally schema-qualified) of $CAGG view to create|
|`<column_name>`|TEXT|Optional list of names to be used for columns of the view. If not given, the column names are calculated from the query|
|`WITH` clause|TEXT|Specifies options for the continuous aggregate view|
|`WITH` clause|TEXT|Specifies options for the $CAGG view|
|`<select_query>`|TEXT|A `SELECT` query that uses the specified syntax|

Required `WITH` clause options:

|Name|Type|Description|
|-|-|-|
|`timescaledb.continuous`|BOOLEAN|If `timescaledb.continuous` is not specified, this is a regular PostgresSQL materialized view|
|`timescaledb.continuous`|BOOLEAN|If `timescaledb.continuous` is not specified, this is a regular $PG materialized view|

Optional `WITH` clause options:

|Name|Type| Description |Default value|
|-|-|-|-|
|`timescaledb.chunk_interval`|INTERVAL| Set the chunk interval. The default value is 10x the original hypertable. |
|`timescaledb.create_group_indexes`|BOOLEAN| Create indexes on the continuous aggregate for columns in its `GROUP BY` clause. Indexes are in the form `(<GROUP_BY_COLUMN>, time_bucket)` |`TRUE`|
|`timescaledb.finalized`|BOOLEAN| In TimescaleDB 2.7 and above, use the new version of continuous aggregates, which stores finalized results for aggregate functions. Supports all aggregate functions, including ones that use `FILTER`, `ORDER BY`, and `DISTINCT` clauses. |`TRUE`|
|`timescaledb.materialized_only`|BOOLEAN| Return only materialized data when querying the continuous aggregate view |`TRUE`|
| `timescaledb.invalidate_using` | TEXT | <Since2220 />Set to `wal` to read changes from the WAL using logical decoding, then update the materialization invalidations for continuous aggregates using this information. This reduces the I/O and CPU needed to manage the hypertable invalidation log. Set to `trigger` to collect invalidations whenever there are inserts, updates, or deletes to a hypertable. This default behaviour uses more resources than `wal`. | `trigger` |
|`timescaledb.chunk_interval`|INTERVAL| Set the chunk interval |10x the chunk interval of the original $HYPERTABLE|
|`timescaledb.create_group_indexes`|BOOLEAN| Create indexes on the $CAGG for columns in its `GROUP BY` clause. Indexes are in the form `(<GROUP_BY_COLUMN>, time_bucket)` |`TRUE`|
|`timescaledb.finalized`|BOOLEAN| In $TIMESCALE_DB 2.7 and above, use the new version of $CAGGs, which stores finalized results for aggregate functions. Supports all aggregate functions, including ones that use `FILTER`, `ORDER BY`, and `DISTINCT` clauses. |`TRUE`|
|`timescaledb.materialized_only`|BOOLEAN| Return only materialized data when querying the $CAGG view |`TRUE`|
| `timescaledb.invalidate_using` | TEXT | <Since2220 />Set to `wal` to read changes from the WAL using logical decoding, then update the materialization invalidations for $CAGGs using this information. This reduces the I/O and CPU needed to manage the $HYPERTABLE invalidation log. Set to `trigger` to collect invalidations whenever there are inserts, updates, or deletes to a $HYPERTABLE. This default behaviour uses more resources than `wal`. | `trigger` |
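
As an illustration of how these options combine, the following hedged sketch creates a
real-time $CAGG with a custom chunk interval. The `conditions` $HYPERTABLE and its
columns are placeholders:

```sql
CREATE MATERIALIZED VIEW conditions_daily
WITH (
  timescaledb.continuous,
  timescaledb.materialized_only = false,  -- also return not-yet-materialized data when queried
  timescaledb.chunk_interval = '30 days'  -- override the default chunk interval
) AS
  SELECT time_bucket('1 day', time) AS day,
         min(temperature) AS min_temp,
         max(temperature) AS max_temp
  FROM conditions
  GROUP BY time_bucket('1 day', time);
```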

For more information, see the [real-time aggregates][real-time-aggregates] section.

@@ -125,3 +140,4 @@ For more information, see the [real-time aggregates][real-time-aggregates] secti
[real-time-aggregates]: /use-timescale/:currentVersion:/continuous-aggregates/real-time-aggregates/
[refresh-cagg]: /api/:currentVersion:/continuous-aggregates/refresh_continuous_aggregate/
[info-views]: /api/:currentVersion:/informational-views/continuous_aggregates/
[gucs]: /api/:currentVersion:/configuration/gucs/
2 changes: 0 additions & 2 deletions migrate/livesync-for-s3.md
@@ -37,8 +37,6 @@ The $S3_CONNECTOR continuously imports data from an Amazon S3 bucket into your d

**Note**: the connector currently syncs only existing and new files. It does not update or delete records in a $SERVICE_LONG based on updates or deletes in S3.

<EarlyAccessNoRelease />: this source S3 connector is not supported for production use. If you have any questions or feedback, talk to us in <a href="https://app.slack.com/client/T4GT3N2JK/C086NU9EZ88">#livesync in the Tiger Community</a>.

## Prerequisites

<PrereqCloud />
4 changes: 2 additions & 2 deletions use-timescale/data-tiering/about-data-tiering.md
@@ -35,6 +35,8 @@ $CLOUD_LONG high-performance storage comes in the following types:

Once you [enable tiered storage][manage-tiering], you can start moving rarely used data to the object tier. The object tier is based on AWS S3 and stores your data in the [Apache Parquet][parquet] format. Within a Parquet file, a set of rows is grouped together to form a row group. Within a row group, values for a single column across multiple rows are stored together. The original size of the data in your $SERVICE_SHORT, compressed or uncompressed, does not correspond directly to its size in S3. A compressed hypertable may even take more space in S3 than it does in $CLOUD_LONG.

<TieredStorageBilling />

<NotSupportedAzure />

Apache Parquet allows for more efficient scans across longer time periods, and $CLOUD_LONG uses other metadata and query optimizations to reduce the amount of data that needs to be fetched to satisfy a query, such as:
@@ -89,8 +91,6 @@ The object storage tier is more than an archiving solution. It is also:

By default, tiered data is not included when you query from a $SERVICE_LONG. To access tiered data, you [enable tiered reads][querying-tiered-data] for a query, a session, or even for all sessions. After you enable tiered reads, when you run regular SQL queries, a behind-the-scenes process transparently pulls data from wherever it's located: the standard high-performance storage tier, the object storage tier, or both. You can `JOIN` against tiered data, build views, and even define continuous aggregates on it. In fact, because the implementation of continuous aggregates also uses hypertables, they can be tiered to low-cost storage as well.
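
For instance, a minimal sketch of enabling tiered reads for the current session before
querying. The `timescaledb.enable_tiered_reads` setting name follows the tiered reads
documentation linked above, and the `metrics` $HYPERTABLE is a placeholder:

```sql
-- Enable tiered reads for this session only
SET timescaledb.enable_tiered_reads = true;

-- Queries now transparently read from both high-performance and object storage
SELECT time_bucket('1 day', time) AS day, count(*)
FROM metrics
WHERE time > now() - INTERVAL '2 years'
GROUP BY day
ORDER BY day;
```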

<TieredStorageBilling />

The low-cost storage tier comes with the following limitations:

- **Limited schema modifications**: some schema modifications are not allowed