
Merging upstream version 25.6.1.

Signed-off-by: Daniel Baumann <daniel@debian.org>
This commit is contained in:
Daniel Baumann 2025-02-13 21:43:00 +01:00
parent 78f79d1d22
commit 4a7feb3eaa
Signed by: daniel
GPG key ID: FBB4F0E80A80222F
69 changed files with 46817 additions and 45778 deletions

@@ -1,6 +1,45 @@
Changelog
=========
## [v25.6.0] - 2024-07-17
### :boom: BREAKING CHANGES
- due to [`89fc63c`](https://github.com/tobymao/sqlglot/commit/89fc63c5831dc5d63feff9e39fea1e90d65e9a09) - QUALIFY comes after WINDOW clause in queries *(PR [#3745](https://github.com/tobymao/sqlglot/pull/3745) by [@georgesittas](https://github.com/georgesittas))*:
QUALIFY comes after WINDOW clause in queries (#3745)
- due to [`a2a6efb`](https://github.com/tobymao/sqlglot/commit/a2a6efb45dc0f380747aa4afdaa19122389f3c28) - Canonicalize struct & array inline constructor *(PR [#3751](https://github.com/tobymao/sqlglot/pull/3751) by [@VaggelisD](https://github.com/VaggelisD))*:
Canonicalize struct & array inline constructor (#3751)
### :sparkles: New Features
- [`e9c4bbb`](https://github.com/tobymao/sqlglot/commit/e9c4bbbb0d0a03d1b1efaad9abe0068b3b7efa9d) - Support for ORDER BY ALL *(PR [#3756](https://github.com/tobymao/sqlglot/pull/3756) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *addresses issue [#3755](https://github.com/tobymao/sqlglot/issues/3755) opened by [@Hunterlige](https://github.com/Hunterlige)*
- [`4a843e6`](https://github.com/tobymao/sqlglot/commit/4a843e6cca7bcc0d9956fe975dbc77e67038f1b8) - **postgres**: Support FROM ROWS FROM (...) *(PR [#3753](https://github.com/tobymao/sqlglot/pull/3753) by [@VaggelisD](https://github.com/VaggelisD))*
- [`321051a`](https://github.com/tobymao/sqlglot/commit/321051aef30f11f2778444040a2078633e617144) - **presto, trino**: Add support for exp.TimestampAdd *(PR [#3765](https://github.com/tobymao/sqlglot/pull/3765) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *addresses issue [#3762](https://github.com/tobymao/sqlglot/issues/3762) opened by [@ddelzell](https://github.com/ddelzell)*
- [`82a1bb4`](https://github.com/tobymao/sqlglot/commit/82a1bb42856d628651bb5f1ef9aa8f440736c450) - Support for RPAD & LPAD functions *(PR [#3757](https://github.com/tobymao/sqlglot/pull/3757) by [@VaggelisD](https://github.com/VaggelisD))*
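The new RPAD & LPAD support above interacts with a generator flag added elsewhere in this diff (`PAD_FILL_PATTERN_IS_REQUIRED`, enabled for DuckDB, Hive, and MySQL). A minimal, hypothetical sketch of that idea — not sqlglot's actual implementation — is a renderer that defaults the fill pattern to a space only when the target dialect demands one:

```python
# Hedged sketch: normalize LPAD/RPAD calls for dialects that require an
# explicit fill pattern. `pad_sql` is an illustrative helper, not sqlglot API.

def pad_sql(func, args, fill_pattern_required):
    """Render an LPAD/RPAD call, defaulting the fill pattern to a space."""
    args = list(args)
    if fill_pattern_required and len(args) == 2:
        # the dialect insists on a third argument, so pad with spaces
        args.append("' '")
    return f"{func.upper()}({', '.join(args)})"

print(pad_sql("lpad", ["x", "5"], fill_pattern_required=True))
print(pad_sql("rpad", ["x", "5"], fill_pattern_required=False))
```

Dialects without the flag simply pass the two-argument form through unchanged.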
### :bug: Bug Fixes
- [`89fc63c`](https://github.com/tobymao/sqlglot/commit/89fc63c5831dc5d63feff9e39fea1e90d65e9a09) - **duckdb, clickhouse**: QUALIFY comes after WINDOW clause in queries *(PR [#3745](https://github.com/tobymao/sqlglot/pull/3745) by [@georgesittas](https://github.com/georgesittas))*
- :arrow_lower_right: *fixes issue [#3744](https://github.com/tobymao/sqlglot/issues/3744) opened by [@taylorbarstow](https://github.com/taylorbarstow)*
- [`15ca924`](https://github.com/tobymao/sqlglot/commit/15ca924ac6e8a72396a882c394856e466cae9ac3) - **optimizer**: Fix expansion of SELECT * REPLACE, RENAME *(PR [#3742](https://github.com/tobymao/sqlglot/pull/3742) by [@VaggelisD](https://github.com/VaggelisD))*
- [`0363fef`](https://github.com/tobymao/sqlglot/commit/0363fefd3ddd490ddddae47f7eb0192f0ff3cc5e) - attach comments to Commands *(PR [#3758](https://github.com/tobymao/sqlglot/pull/3758) by [@georgesittas](https://github.com/georgesittas))*
- [`a2a6efb`](https://github.com/tobymao/sqlglot/commit/a2a6efb45dc0f380747aa4afdaa19122389f3c28) - **bigquery**: Canonicalize struct & array inline constructor *(PR [#3751](https://github.com/tobymao/sqlglot/pull/3751) by [@VaggelisD](https://github.com/VaggelisD))*
- [`5df3f52`](https://github.com/tobymao/sqlglot/commit/5df3f5292488df6a8e21abf3b49086c823797e78) - Remove number matching from COLON placeholder parser *(PR [#3761](https://github.com/tobymao/sqlglot/pull/3761) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *fixes issue [#3759](https://github.com/tobymao/sqlglot/issues/3759) opened by [@egan8888](https://github.com/egan8888)*
- [`0606af6`](https://github.com/tobymao/sqlglot/commit/0606af66dba7c290fee65926dcb74baad82c84ac) - **duckdb**: Transpile UDFs from Databricks *(PR [#3768](https://github.com/tobymao/sqlglot/pull/3768) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *fixes issue [#3764](https://github.com/tobymao/sqlglot/issues/3764) opened by [@aersam](https://github.com/aersam)*
- [`dcc783a`](https://github.com/tobymao/sqlglot/commit/dcc783aad7c2e7184224e90fed7710eb08ddc76a) - **clickhouse**: Allow TokenType.SELECT as a Tuple field identifier *(PR [#3766](https://github.com/tobymao/sqlglot/pull/3766) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *fixes issue [#3763](https://github.com/tobymao/sqlglot/issues/3763) opened by [@cpcloud](https://github.com/cpcloud)*
- [`b42b7ac`](https://github.com/tobymao/sqlglot/commit/b42b7ac5bb1785a9028235c1557b9842ea1d7524) - extract from time/date *(commit by [@tobymao](https://github.com/tobymao))*
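The optimizer fix for expanding `SELECT * REPLACE, RENAME` above concerns BigQuery-style star modifiers. As a rough, self-contained sketch of the semantics being fixed (illustrative names only, not the optimizer's code): `REPLACE (expr AS col)` swaps a column's expression while keeping its name, and `RENAME (col AS alias)` keeps the expression but changes the output name.

```python
# Illustrative sketch of SELECT * REPLACE (...) RENAME (...) expansion over a
# known column list; `expand_star` is a hypothetical helper.

def expand_star(columns, replace=None, rename=None):
    replace = replace or {}  # {column: replacement expression}
    rename = rename or {}    # {column: new output alias}
    out = []
    for col in columns:
        expr = replace.get(col, col)
        alias = rename.get(col, col)
        # emit "expr AS alias" only when they differ, like a tidy generator would
        out.append(expr if expr == alias else f"{expr} AS {alias}")
    return out

print(expand_star(["a", "b"], replace={"a": "a + 1"}, rename={"b": "c"}))
```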
## [v25.5.1] - 2024-07-05
### :bug: Bug Fixes
- [`2bdde22`](https://github.com/tobymao/sqlglot/commit/2bdde2221b8017791ce4cc619abb2706464ca408) - **optimizer**: only qualify coalesced USING columns if they exist in table schemas *(PR [#3740](https://github.com/tobymao/sqlglot/pull/3740) by [@georgesittas](https://github.com/georgesittas))*
## [v25.5.0] - 2024-07-04
### :boom: BREAKING CHANGES
- due to [`8335ba1`](https://github.com/tobymao/sqlglot/commit/8335ba10e60c7c63881d7559a6f1fada11b0e55d) - preserve EXTRACT(date_part FROM datetime) calls *(PR [#3729](https://github.com/tobymao/sqlglot/pull/3729) by [@georgesittas](https://github.com/georgesittas))*:
@@ -4117,3 +4156,5 @@ Changelog
[v25.4.0]: https://github.com/tobymao/sqlglot/compare/v25.3.3...v25.4.0
[v25.4.1]: https://github.com/tobymao/sqlglot/compare/v25.4.0...v25.4.1
[v25.5.0]: https://github.com/tobymao/sqlglot/compare/v25.4.1...v25.5.0
[v25.5.1]: https://github.com/tobymao/sqlglot/compare/v25.5.0...v25.5.1
[v25.6.0]: https://github.com/tobymao/sqlglot/compare/v25.5.1...v25.6.0

@@ -76,8 +76,8 @@
</span><span id="L-12"><a href="#L-12"><span class="linenos">12</span></a><span class="n">__version_tuple__</span><span class="p">:</span> <span class="n">VERSION_TUPLE</span>
</span><span id="L-13"><a href="#L-13"><span class="linenos">13</span></a><span class="n">version_tuple</span><span class="p">:</span> <span class="n">VERSION_TUPLE</span>
</span><span id="L-14"><a href="#L-14"><span class="linenos">14</span></a>
-</span><span id="L-15"><a href="#L-15"><span class="linenos">15</span></a><span class="n">__version__</span> <span class="o">=</span> <span class="n">version</span> <span class="o">=</span> <span class="s1">&#39;25.5.0&#39;</span>
+</span><span id="L-15"><a href="#L-15"><span class="linenos">15</span></a><span class="n">__version__</span> <span class="o">=</span> <span class="n">version</span> <span class="o">=</span> <span class="s1">&#39;25.6.0&#39;</span>
-</span><span id="L-16"><a href="#L-16"><span class="linenos">16</span></a><span class="n">__version_tuple__</span> <span class="o">=</span> <span class="n">version_tuple</span> <span class="o">=</span> <span class="p">(</span><span class="mi">25</span><span class="p">,</span> <span class="mi">5</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
+</span><span id="L-16"><a href="#L-16"><span class="linenos">16</span></a><span class="n">__version_tuple__</span> <span class="o">=</span> <span class="n">version_tuple</span> <span class="o">=</span> <span class="p">(</span><span class="mi">25</span><span class="p">,</span> <span class="mi">6</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
</span></pre></div>
@@ -97,7 +97,7 @@
<section id="version">
<div class="attr variable">
<span class="name">version</span><span class="annotation">: str</span> =
-<span class="default_value">&#39;25.5.0&#39;</span>
+<span class="default_value">&#39;25.6.0&#39;</span>
</div>
@@ -109,7 +109,7 @@
<section id="version_tuple">
<div class="attr variable">
<span class="name">version_tuple</span><span class="annotation">: object</span> =
-<span class="default_value">(25, 5, 0)</span>
+<span class="default_value">(25, 6, 0)</span>
</div>

@@ -502,6 +502,7 @@
<dd id="RisingWave.PREFER_CTE_ALIAS_COLUMN" class="variable"><a href="dialect.html#Dialect.PREFER_CTE_ALIAS_COLUMN">PREFER_CTE_ALIAS_COLUMN</a></dd>
<dd id="RisingWave.FORCE_EARLY_ALIAS_REF_EXPANSION" class="variable"><a href="dialect.html#Dialect.FORCE_EARLY_ALIAS_REF_EXPANSION">FORCE_EARLY_ALIAS_REF_EXPANSION</a></dd>
<dd id="RisingWave.EXPAND_ALIAS_REFS_EARLY_ONLY_IN_GROUP_BY" class="variable"><a href="dialect.html#Dialect.EXPAND_ALIAS_REFS_EARLY_ONLY_IN_GROUP_BY">EXPAND_ALIAS_REFS_EARLY_ONLY_IN_GROUP_BY</a></dd>
<dd id="RisingWave.SUPPORTS_ORDER_BY_ALL" class="variable"><a href="dialect.html#Dialect.SUPPORTS_ORDER_BY_ALL">SUPPORTS_ORDER_BY_ALL</a></dd>
<dd id="RisingWave.DATE_PART_MAPPING" class="variable"><a href="dialect.html#Dialect.DATE_PART_MAPPING">DATE_PART_MAPPING</a></dd>
<dd id="RisingWave.get_or_raise" class="function"><a href="dialect.html#Dialect.get_or_raise">get_or_raise</a></dd>
<dd id="RisingWave.format_time" class="function"><a href="dialect.html#Dialect.format_time">format_time</a></dd>
@@ -645,7 +646,7 @@ Default: True</li>
<div id="RisingWave.Generator.AFTER_HAVING_MODIFIER_TRANSFORMS" class="classattr">
<div class="attr variable">
<span class="name">AFTER_HAVING_MODIFIER_TRANSFORMS</span> =
-<span class="default_value">{&#39;qualify&#39;: &lt;function Generator.&lt;lambda&gt;&gt;, &#39;windows&#39;: &lt;function Generator.&lt;lambda&gt;&gt;}</span>
+<span class="default_value">{&#39;windows&#39;: &lt;function Generator.&lt;lambda&gt;&gt;, &#39;qualify&#39;: &lt;function Generator.&lt;lambda&gt;&gt;}</span>
</div>
@@ -708,6 +709,7 @@ Default: True</li>
<dd id="RisingWave.Generator.HEX_FUNC" class="variable"><a href="../generator.html#Generator.HEX_FUNC">HEX_FUNC</a></dd>
<dd id="RisingWave.Generator.WITH_PROPERTIES_PREFIX" class="variable"><a href="../generator.html#Generator.WITH_PROPERTIES_PREFIX">WITH_PROPERTIES_PREFIX</a></dd>
<dd id="RisingWave.Generator.QUOTE_JSON_PATH" class="variable"><a href="../generator.html#Generator.QUOTE_JSON_PATH">QUOTE_JSON_PATH</a></dd>
<dd id="RisingWave.Generator.PAD_FILL_PATTERN_IS_REQUIRED" class="variable"><a href="../generator.html#Generator.PAD_FILL_PATTERN_IS_REQUIRED">PAD_FILL_PATTERN_IS_REQUIRED</a></dd>
<dd id="RisingWave.Generator.PARSE_JSON_NAME" class="variable"><a href="../generator.html#Generator.PARSE_JSON_NAME">PARSE_JSON_NAME</a></dd>
<dd id="RisingWave.Generator.TIME_PART_SINGULARS" class="variable"><a href="../generator.html#Generator.TIME_PART_SINGULARS">TIME_PART_SINGULARS</a></dd>
<dd id="RisingWave.Generator.TOKEN_MAPPING" class="variable"><a href="../generator.html#Generator.TOKEN_MAPPING">TOKEN_MAPPING</a></dd>
@@ -1063,6 +1065,7 @@ Default: True</li>
<dd id="RisingWave.Generator.strtodate_sql" class="function"><a href="../generator.html#Generator.strtodate_sql">strtodate_sql</a></dd>
<dd id="RisingWave.Generator.strtotime_sql" class="function"><a href="../generator.html#Generator.strtotime_sql">strtotime_sql</a></dd>
<dd id="RisingWave.Generator.changes_sql" class="function"><a href="../generator.html#Generator.changes_sql">changes_sql</a></dd>
<dd id="RisingWave.Generator.pad_sql" class="function"><a href="../generator.html#Generator.pad_sql">pad_sql</a></dd>
</div>
<div><dt><a href="postgres.html#Postgres.Generator">sqlglot.dialects.postgres.Postgres.Generator</a></dt>

@@ -1893,7 +1893,7 @@ belong to some totally-ordered set.</p>
<section id="DATE_UNITS">
<div class="attr variable">
<span class="name">DATE_UNITS</span> =
-<span class="default_value">{&#39;day&#39;, &#39;year&#39;, &#39;quarter&#39;, &#39;month&#39;, &#39;week&#39;, &#39;year_month&#39;}</span>
+<span class="default_value">{&#39;week&#39;, &#39;day&#39;, &#39;year_month&#39;, &#39;year&#39;, &#39;month&#39;, &#39;quarter&#39;}</span>
</div>

@@ -585,7 +585,7 @@
<div class="attr variable">
<span class="name">ALL_JSON_PATH_PARTS</span> =
<input id="ALL_JSON_PATH_PARTS-view-value" class="view-value-toggle-state" type="checkbox" aria-hidden="true" tabindex="-1">
-<label class="view-value-button pdoc-button" for="ALL_JSON_PATH_PARTS-view-value"></label><span class="default_value">{&lt;class &#39;<a href="expressions.html#JSONPathSelector">sqlglot.expressions.JSONPathSelector</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathSlice">sqlglot.expressions.JSONPathSlice</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathUnion">sqlglot.expressions.JSONPathUnion</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathScript">sqlglot.expressions.JSONPathScript</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathRoot">sqlglot.expressions.JSONPathRoot</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathRecursive">sqlglot.expressions.JSONPathRecursive</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathKey">sqlglot.expressions.JSONPathKey</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathWildcard">sqlglot.expressions.JSONPathWildcard</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathFilter">sqlglot.expressions.JSONPathFilter</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathSubscript">sqlglot.expressions.JSONPathSubscript</a>&#39;&gt;}</span>
+<label class="view-value-button pdoc-button" for="ALL_JSON_PATH_PARTS-view-value"></label><span class="default_value">{&lt;class &#39;<a href="expressions.html#JSONPathKey">sqlglot.expressions.JSONPathKey</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathWildcard">sqlglot.expressions.JSONPathWildcard</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathFilter">sqlglot.expressions.JSONPathFilter</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathUnion">sqlglot.expressions.JSONPathUnion</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathSubscript">sqlglot.expressions.JSONPathSubscript</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathSelector">sqlglot.expressions.JSONPathSelector</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathSlice">sqlglot.expressions.JSONPathSlice</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathScript">sqlglot.expressions.JSONPathScript</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathRoot">sqlglot.expressions.JSONPathRoot</a>&#39;&gt;, &lt;class &#39;<a href="expressions.html#JSONPathRecursive">sqlglot.expressions.JSONPathRecursive</a>&#39;&gt;}</span>
</div>

@@ -586,7 +586,7 @@ queries if it would result in multiple table selects in a single query:</p>
<div class="attr variable">
<span class="name">UNMERGABLE_ARGS</span> =
<input id="UNMERGABLE_ARGS-view-value" class="view-value-toggle-state" type="checkbox" aria-hidden="true" tabindex="-1">
-<label class="view-value-button pdoc-button" for="UNMERGABLE_ARGS-view-value"></label><span class="default_value">{&#39;kind&#39;, &#39;connect&#39;, &#39;having&#39;, &#39;prewhere&#39;, &#39;cluster&#39;, &#39;locks&#39;, &#39;distribute&#39;, &#39;limit&#39;, &#39;format&#39;, &#39;sort&#39;, &#39;laterals&#39;, &#39;into&#39;, &#39;qualify&#39;, &#39;group&#39;, &#39;offset&#39;, &#39;options&#39;, &#39;match&#39;, &#39;distinct&#39;, &#39;pivots&#39;, &#39;windows&#39;, &#39;sample&#39;, &#39;settings&#39;, &#39;with&#39;}</span>
+<label class="view-value-button pdoc-button" for="UNMERGABLE_ARGS-view-value"></label><span class="default_value">{&#39;offset&#39;, &#39;pivots&#39;, &#39;kind&#39;, &#39;group&#39;, &#39;sort&#39;, &#39;format&#39;, &#39;distribute&#39;, &#39;distinct&#39;, &#39;windows&#39;, &#39;limit&#39;, &#39;settings&#39;, &#39;match&#39;, &#39;prewhere&#39;, &#39;with&#39;, &#39;connect&#39;, &#39;having&#39;, &#39;cluster&#39;, &#39;sample&#39;, &#39;qualify&#39;, &#39;into&#39;, &#39;locks&#39;, &#39;laterals&#39;, &#39;options&#39;}</span>
</div>

@@ -3209,7 +3209,7 @@ prefix are statically known.</p>
<div class="attr variable">
<span class="name">DATETRUNC_COMPARISONS</span> =
<input id="DATETRUNC_COMPARISONS-view-value" class="view-value-toggle-state" type="checkbox" aria-hidden="true" tabindex="-1">
-<label class="view-value-button pdoc-button" for="DATETRUNC_COMPARISONS-view-value"></label><span class="default_value">{&lt;class &#39;<a href="../expressions.html#LT">sqlglot.expressions.LT</a>&#39;&gt;, &lt;class &#39;<a href="../expressions.html#EQ">sqlglot.expressions.EQ</a>&#39;&gt;, &lt;class &#39;<a href="../expressions.html#NEQ">sqlglot.expressions.NEQ</a>&#39;&gt;, &lt;class &#39;<a href="../expressions.html#In">sqlglot.expressions.In</a>&#39;&gt;, &lt;class &#39;<a href="../expressions.html#LTE">sqlglot.expressions.LTE</a>&#39;&gt;, &lt;class &#39;<a href="../expressions.html#GT">sqlglot.expressions.GT</a>&#39;&gt;, &lt;class &#39;<a href="../expressions.html#GTE">sqlglot.expressions.GTE</a>&#39;&gt;}</span>
+<label class="view-value-button pdoc-button" for="DATETRUNC_COMPARISONS-view-value"></label><span class="default_value">{&lt;class &#39;<a href="../expressions.html#NEQ">sqlglot.expressions.NEQ</a>&#39;&gt;, &lt;class &#39;<a href="../expressions.html#GTE">sqlglot.expressions.GTE</a>&#39;&gt;, &lt;class &#39;<a href="../expressions.html#EQ">sqlglot.expressions.EQ</a>&#39;&gt;, &lt;class &#39;<a href="../expressions.html#LTE">sqlglot.expressions.LTE</a>&#39;&gt;, &lt;class &#39;<a href="../expressions.html#GT">sqlglot.expressions.GT</a>&#39;&gt;, &lt;class &#39;<a href="../expressions.html#LT">sqlglot.expressions.LT</a>&#39;&gt;, &lt;class &#39;<a href="../expressions.html#In">sqlglot.expressions.In</a>&#39;&gt;}</span>
</div>
@@ -3289,7 +3289,7 @@ prefix are statically known.</p>
<section id="JOINS">
<div class="attr variable">
<span class="name">JOINS</span> =
-<span class="default_value">{(&#39;&#39;, &#39;INNER&#39;), (&#39;RIGHT&#39;, &#39;OUTER&#39;), (&#39;RIGHT&#39;, &#39;&#39;), (&#39;&#39;, &#39;&#39;)}</span>
+<span class="default_value">{(&#39;RIGHT&#39;, &#39;OUTER&#39;), (&#39;&#39;, &#39;INNER&#39;), (&#39;RIGHT&#39;, &#39;&#39;), (&#39;&#39;, &#39;&#39;)}</span>
</div>

@@ -8768,7 +8768,7 @@
<div class="attr variable">
<span class="name">COMMANDS</span> =
<input id="Tokenizer.COMMANDS-view-value" class="view-value-toggle-state" type="checkbox" aria-hidden="true" tabindex="-1">
-<label class="view-value-button pdoc-button" for="Tokenizer.COMMANDS-view-value"></label><span class="default_value">{&lt;<a href="#TokenType.COMMAND">TokenType.COMMAND</a>: &#39;COMMAND&#39;&gt;, &lt;<a href="#TokenType.FETCH">TokenType.FETCH</a>: &#39;FETCH&#39;&gt;, &lt;<a href="#TokenType.SHOW">TokenType.SHOW</a>: &#39;SHOW&#39;&gt;, &lt;<a href="#TokenType.EXECUTE">TokenType.EXECUTE</a>: &#39;EXECUTE&#39;&gt;}</span>
+<label class="view-value-button pdoc-button" for="Tokenizer.COMMANDS-view-value"></label><span class="default_value">{&lt;<a href="#TokenType.FETCH">TokenType.FETCH</a>: &#39;FETCH&#39;&gt;, &lt;<a href="#TokenType.SHOW">TokenType.SHOW</a>: &#39;SHOW&#39;&gt;, &lt;<a href="#TokenType.COMMAND">TokenType.COMMAND</a>: &#39;COMMAND&#39;&gt;, &lt;<a href="#TokenType.EXECUTE">TokenType.EXECUTE</a>: &#39;EXECUTE&#39;&gt;}</span>
</div>
@@ -8780,7 +8780,7 @@
<div id="Tokenizer.COMMAND_PREFIX_TOKENS" class="classattr">
<div class="attr variable">
<span class="name">COMMAND_PREFIX_TOKENS</span> =
-<span class="default_value">{&lt;<a href="#TokenType.BEGIN">TokenType.BEGIN</a>: &#39;BEGIN&#39;&gt;, &lt;<a href="#TokenType.SEMICOLON">TokenType.SEMICOLON</a>: &#39;SEMICOLON&#39;&gt;}</span>
+<span class="default_value">{&lt;<a href="#TokenType.SEMICOLON">TokenType.SEMICOLON</a>: &#39;SEMICOLON&#39;&gt;, &lt;<a href="#TokenType.BEGIN">TokenType.BEGIN</a>: &#39;BEGIN&#39;&gt;}</span>
</div>

@@ -735,6 +735,13 @@ class BigQuery(Dialect):
exp.VolatileProperty: exp.Properties.Location.UNSUPPORTED,
}
# WINDOW comes after QUALIFY
# https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#window_clause
AFTER_HAVING_MODIFIER_TRANSFORMS = {
"qualify": generator.Generator.AFTER_HAVING_MODIFIER_TRANSFORMS["qualify"],
"windows": generator.Generator.AFTER_HAVING_MODIFIER_TRANSFORMS["windows"],
}
# from: https://cloud.google.com/bigquery/docs/reference/standard-sql/lexical#reserved_keywords
RESERVED_KEYWORDS = {
"all",
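The BigQuery override above works because `AFTER_HAVING_MODIFIER_TRANSFORMS` is walked in dict insertion order when the generator emits post-HAVING clauses: re-inserting the same two transforms with `"qualify"` first is what puts WINDOW after QUALIFY. A minimal standalone sketch of that mechanism (illustrative names, not sqlglot's classes):

```python
# Sketch: dict insertion order drives clause emission order. The base mapping
# (after this release) emits WINDOW before QUALIFY; a BigQuery-style dialect
# re-keys the same transforms to flip the order.

BASE_TRANSFORMS = {
    "windows": lambda sql: f"WINDOW {sql}",   # emitted first by default
    "qualify": lambda sql: f"QUALIFY {sql}",  # emitted second
}

# reuse the identical transform functions, changing only insertion order
BIGQUERY_TRANSFORMS = {
    "qualify": BASE_TRANSFORMS["qualify"],
    "windows": BASE_TRANSFORMS["windows"],
}

assert list(BASE_TRANSFORMS) == ["windows", "qualify"]
assert list(BIGQUERY_TRANSFORMS) == ["qualify", "windows"]
```

Since Python 3.7 dicts preserve insertion order, so this reordering is guaranteed by the language.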

@@ -2,7 +2,7 @@ from __future__ import annotations
import typing as t

-from sqlglot import exp, generator, parser, tokens, transforms
+from sqlglot import exp, generator, parser, tokens
from sqlglot.dialects.dialect import (
    Dialect,
    arg_max_or_min_no_count,
@@ -332,6 +332,8 @@ class ClickHouse(Dialect):
TokenType.SET,
}
RESERVED_TOKENS = parser.Parser.RESERVED_TOKENS - {TokenType.SELECT}
AGG_FUNC_MAPPING = (
lambda functions, suffixes: {
f"{f}{sfx}": (f, sfx) for sfx in (suffixes + [""]) for f in functions
@@ -789,7 +791,6 @@ class ClickHouse(Dialect):
exp.Quantile: _quantile_sql,
exp.RegexpLike: lambda self, e: self.func("match", e.this, e.expression),
exp.Rand: rename_func("randCanonical"),
-exp.Select: transforms.preprocess([transforms.eliminate_qualify]),
exp.StartsWith: rename_func("startsWith"),
exp.StrPosition: lambda self, e: self.func(
"position", e.this, e.args.get("substr"), e.args.get("position")

@@ -356,6 +356,11 @@ class Dialect(metaclass=_Dialect):
EXPAND_ALIAS_REFS_EARLY_ONLY_IN_GROUP_BY = False
"""Whether alias reference expansion before qualification should only happen for the GROUP BY clause."""
SUPPORTS_ORDER_BY_ALL = False
"""
Whether ORDER BY ALL is supported (expands to all the selected columns) as in DuckDB, Spark3/Databricks
"""
# --- Autofilled ---
tokenizer_class = Tokenizer
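The new `SUPPORTS_ORDER_BY_ALL` flag above marks dialects where `ORDER BY ALL` sorts by every selected column in select-list order (DuckDB, Spark3/Databricks). A rough, self-contained sketch of what a transpiler targeting a dialect *without* the feature would have to do — expand `ALL` into the select list (hypothetical helper, not sqlglot's code):

```python
# Sketch: expand ORDER BY ALL into the concrete select-list columns for a
# target dialect that lacks the shorthand.

def expand_order_by_all(select_cols, order_by):
    """Return the effective ORDER BY keys, expanding a lone ALL."""
    if [key.upper() for key in order_by] == ["ALL"]:
        # ALL means: sort by every selected column, in select-list order
        return list(select_cols)
    return order_by

print(expand_order_by_all(["a", "b"], ["ALL"]))
print(expand_order_by_all(["a", "b"], ["b"]))
```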

@@ -116,17 +116,24 @@ def _build_make_timestamp(args: t.List) -> exp.Expression:
def _struct_sql(self: DuckDB.Generator, expression: exp.Struct) -> str:
    args: t.List[str] = []

-    for i, expr in enumerate(expression.expressions):
-        if isinstance(expr, exp.PropertyEQ):
-            key = expr.name
-            value = expr.expression
-        else:
-            key = f"_{i}"
-            value = expr
-
-        args.append(f"{self.sql(exp.Literal.string(key))}: {self.sql(value)}")
-
-    return f"{{{', '.join(args)}}}"
+    # BigQuery allows inline construction such as "STRUCT<a STRING, b INTEGER>('str', 1)" which is
+    # canonicalized to "ROW('str', 1) AS STRUCT(a TEXT, b INT)" in DuckDB
+    is_struct_cast = expression.find_ancestor(exp.Cast)
+
+    for i, expr in enumerate(expression.expressions):
+        is_property_eq = isinstance(expr, exp.PropertyEQ)
+        value = expr.expression if is_property_eq else expr
+
+        if is_struct_cast:
+            args.append(self.sql(value))
+        else:
+            key = expr.name if is_property_eq else f"_{i}"
+            args.append(f"{self.sql(exp.Literal.string(key))}: {self.sql(value)}")
+
+    csv_args = ", ".join(args)
+
+    return f"ROW({csv_args})" if is_struct_cast else f"{{{csv_args}}}"
def _datatype_sql(self: DuckDB.Generator, expression: exp.DataType) -> str:
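The `_struct_sql` change above branches on whether the struct sits inside a cast: inside one, DuckDB wants positional `ROW(...)` values (the names come from the cast type); outside, it wants a `{'key': value}` struct literal. A standalone sketch of just that branching, over plain data (hypothetical helper, not sqlglot's API):

```python
# Sketch of the ROW(...) vs {'key': value} branching. `pairs` is a list of
# (key_or_None, value_sql) tuples; `in_cast` stands in for finding a Cast
# ancestor in the expression tree.

def struct_sql(pairs, in_cast):
    if in_cast:
        # inside a cast the field names live in the type, so emit values only
        return "ROW(" + ", ".join(value for _, value in pairs) + ")"
    items = []
    for i, (key, value) in enumerate(pairs):
        # unnamed fields fall back to positional _0, _1, ... keys
        key = key if key is not None else "_%d" % i
        items.append("'%s': %s" % (key, value))
    return "{" + ", ".join(items) + "}"

print(struct_sql([("a", "'str'"), ("b", "1")], in_cast=True))
print(struct_sql([("a", "'str'"), ("b", "1")], in_cast=False))
```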
@@ -172,6 +179,7 @@ class DuckDB(Dialect):
SAFE_DIVISION = True
INDEX_OFFSET = 1
CONCAT_COALESCE = True
SUPPORTS_ORDER_BY_ALL = True
# https://duckdb.org/docs/sql/introduction.html#creating-a-new-table
NORMALIZATION_STRATEGY = NormalizationStrategy.CASE_INSENSITIVE
@@ -381,6 +389,7 @@ class DuckDB(Dialect):
SUPPORTS_TO_NUMBER = False
COPY_HAS_INTO_KEYWORD = False
STAR_EXCEPT = "EXCLUDE"
PAD_FILL_PATTERN_IS_REQUIRED = True
TRANSFORMS = {
**generator.Generator.TRANSFORMS,
@@ -448,6 +457,8 @@ class DuckDB(Dialect):
),
exp.RegexpLike: rename_func("REGEXP_MATCHES"),
exp.RegexpSplit: rename_func("STR_SPLIT_REGEX"),
exp.Return: lambda self, e: self.sql(e, "this"),
exp.ReturnsProperty: lambda self, e: "TABLE" if isinstance(e.this, exp.Schema) else "",
exp.Rand: rename_func("RANDOM"), exp.Rand: rename_func("RANDOM"),
exp.SafeDivide: no_safe_divide_sql, exp.SafeDivide: no_safe_divide_sql,
exp.Split: rename_func("STR_SPLIT"), exp.Split: rename_func("STR_SPLIT"),
@ -609,6 +620,7 @@ class DuckDB(Dialect):
# can be transpiled to DuckDB, so we explicitly override them accordingly # can be transpiled to DuckDB, so we explicitly override them accordingly
PROPERTIES_LOCATION[exp.LikeProperty] = exp.Properties.Location.POST_SCHEMA PROPERTIES_LOCATION[exp.LikeProperty] = exp.Properties.Location.POST_SCHEMA
PROPERTIES_LOCATION[exp.TemporaryProperty] = exp.Properties.Location.POST_CREATE PROPERTIES_LOCATION[exp.TemporaryProperty] = exp.Properties.Location.POST_CREATE
PROPERTIES_LOCATION[exp.ReturnsProperty] = exp.Properties.Location.POST_ALIAS
def strtotime_sql(self, expression: exp.StrToTime) -> str: def strtotime_sql(self, expression: exp.StrToTime) -> str:
if expression.args.get("safe"): if expression.args.get("safe"):
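Editor's note: the struct canonicalization in the hunk above — named `{'key': value}` fields outside a cast, positional `ROW(...)` values inside one — can be sketched as a standalone helper. `render_struct` below is a hypothetical stand-in for sqlglot's generator, not its real API.

```python
def render_struct(pairs, inside_cast):
    """Render a DuckDB struct literal.

    pairs: list of (key_or_None, rendered_value_sql) tuples.
    inside_cast: True when the struct sits under a CAST, i.e. the field
    names live in the target type, so values are emitted positionally.
    """
    if inside_cast:
        return "ROW(" + ", ".join(value for _, value in pairs) + ")"
    args = []
    for i, (key, value) in enumerate(pairs):
        name = key if key is not None else f"_{i}"  # anonymous fields get _0, _1, ...
        args.append(f"'{name}': {value}")
    return "{" + ", ".join(args) + "}"
```

Under this model, `CAST(STRUCT(1) AS ...)` renders its inner struct as `ROW(1)` while a bare struct keeps its keys.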


@@ -447,6 +447,7 @@ class Hive(Dialect):
         SUPPORTS_TO_NUMBER = False
         WITH_PROPERTIES_PREFIX = "TBLPROPERTIES"
         PARSE_JSON_NAME = None
+        PAD_FILL_PATTERN_IS_REQUIRED = True
 
         EXPRESSIONS_WITHOUT_NESTED_CTES = {
             exp.Insert,


@@ -690,6 +690,7 @@ class MySQL(Dialect):
         JSON_KEY_VALUE_PAIR_SEP = ","
         SUPPORTS_TO_NUMBER = False
         PARSE_JSON_NAME = None
+        PAD_FILL_PATTERN_IS_REQUIRED = True
 
         TRANSFORMS = {
             **generator.Generator.TRANSFORMS,


@@ -447,6 +447,9 @@ class Postgres(Dialect):
             return self.expression(exp.Extract, this=part, expression=value)
 
+        def _parse_unique_key(self) -> t.Optional[exp.Expression]:
+            return None
+
     class Generator(generator.Generator):
         SINGLE_STRING_INTERVAL = True
         RENAME_TABLE_WITH_DB = False


@@ -35,6 +35,8 @@ from sqlglot.helper import apply_index_offset, seq_get
 from sqlglot.tokens import TokenType
 from sqlglot.transforms import unqualify_columns
 
+DATE_ADD_OR_SUB = t.Union[exp.DateAdd, exp.TimestampAdd, exp.DateSub]
+
 
 def _explode_to_unnest_sql(self: Presto.Generator, expression: exp.Lateral) -> str:
     if isinstance(expression.this, exp.Explode):
@@ -223,6 +225,21 @@ def _build_to_char(args: t.List) -> exp.TimeToStr:
     return build_formatted_time(exp.TimeToStr, "teradata")(args)
 
 
+def _date_delta_sql(
+    name: str, negate_interval: bool = False
+) -> t.Callable[[Presto.Generator, DATE_ADD_OR_SUB], str]:
+    def _delta_sql(self: Presto.Generator, expression: DATE_ADD_OR_SUB) -> str:
+        interval = _to_int(expression.expression)
+        return self.func(
+            name,
+            unit_to_str(expression),
+            interval * (-1) if negate_interval else interval,
+            expression.this,
+        )
+
+    return _delta_sql
+
+
 class Presto(Dialect):
     INDEX_OFFSET = 1
     NULL_ORDERING = "nulls_are_last"
@@ -335,6 +352,7 @@ class Presto(Dialect):
         SUPPORTS_TO_NUMBER = False
         HEX_FUNC = "TO_HEX"
         PARSE_JSON_NAME = "JSON_PARSE"
+        PAD_FILL_PATTERN_IS_REQUIRED = True
 
         PROPERTIES_LOCATION = {
             **generator.Generator.PROPERTIES_LOCATION,
@@ -385,24 +403,14 @@ class Presto(Dialect):
             exp.BitwiseXor: lambda self, e: self.func("BITWISE_XOR", e.this, e.expression),
             exp.Cast: transforms.preprocess([transforms.epoch_cast_to_ts]),
             exp.CurrentTimestamp: lambda *_: "CURRENT_TIMESTAMP",
-            exp.DateAdd: lambda self, e: self.func(
-                "DATE_ADD",
-                unit_to_str(e),
-                _to_int(e.expression),
-                e.this,
-            ),
+            exp.DateAdd: _date_delta_sql("DATE_ADD"),
             exp.DateDiff: lambda self, e: self.func(
                 "DATE_DIFF", unit_to_str(e), e.expression, e.this
             ),
             exp.DateStrToDate: datestrtodate_sql,
             exp.DateToDi: lambda self,
            e: f"CAST(DATE_FORMAT({self.sql(e, 'this')}, {Presto.DATEINT_FORMAT}) AS INT)",
-            exp.DateSub: lambda self, e: self.func(
-                "DATE_ADD",
-                unit_to_str(e),
-                _to_int(e.expression * -1),
-                e.this,
-            ),
+            exp.DateSub: _date_delta_sql("DATE_ADD", negate_interval=True),
             exp.Decode: lambda self, e: encode_decode_sql(self, e, "FROM_UTF8"),
             exp.DiToDate: lambda self,
            e: f"CAST(DATE_PARSE(CAST({self.sql(e, 'this')} AS VARCHAR), {Presto.DATEINT_FORMAT}) AS DATE)",
@@ -451,6 +459,7 @@ class Presto(Dialect):
             exp.StructExtract: struct_extract_sql,
             exp.Table: transforms.preprocess([_unnest_sequence]),
             exp.Timestamp: no_timestamp_sql,
+            exp.TimestampAdd: _date_delta_sql("DATE_ADD"),
             exp.TimestampTrunc: timestamptrunc_sql(),
             exp.TimeStrToDate: timestrtotime_sql,
             exp.TimeStrToTime: timestrtotime_sql,
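Editor's note: the `_date_delta_sql` factory above lets one closure serve `DateAdd`, `DateSub` and the new `TimestampAdd`, since Presto expresses subtraction as `DATE_ADD` with a negated interval. A minimal string-building sketch of the same pattern (hypothetical helper, not sqlglot's generator):

```python
def date_delta_sql(name, negate_interval=False):
    """Return a renderer for DATE_ADD-style calls; negating the interval
    lets DATE_SUB reuse Presto's DATE_ADD function."""
    def delta_sql(unit, interval, this):
        if negate_interval:
            interval = -interval
        return f"{name}('{unit}', {interval}, {this})"
    return delta_sql

date_add = date_delta_sql("DATE_ADD")
date_sub = date_delta_sql("DATE_ADD", negate_interval=True)
```

The factory keeps the three transforms table entries to one line each instead of repeating a five-line lambda.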


@@ -90,6 +90,8 @@ def _dateadd_sql(self: Spark.Generator, expression: exp.TsOrDsAdd | exp.Timestam
 class Spark(Spark2):
+    SUPPORTS_ORDER_BY_ALL = True
+
     class Tokenizer(Spark2.Tokenizer):
         STRING_ESCAPES_ALLOWED_IN_RAW_STRINGS = False
@@ -129,6 +131,7 @@ class Spark(Spark2):
     class Generator(Spark2.Generator):
         SUPPORTS_TO_NUMBER = True
+        PAD_FILL_PATTERN_IS_REQUIRED = False
 
         TYPE_MAPPING = {
             **Spark2.Generator.TYPE_MAPPING,


@@ -1869,7 +1869,7 @@ class TitleColumnConstraint(ColumnConstraintKind):
 
 class UniqueColumnConstraint(ColumnConstraintKind):
-    arg_types = {"this": False, "index_type": False, "on_conflict": False}
+    arg_types = {"this": False, "index_type": False, "on_conflict": False, "nulls": False}
 
 
 class UppercaseColumnConstraint(ColumnConstraintKind):
@@ -2015,7 +2015,7 @@ class CopyParameter(Expression):
     arg_types = {"this": True, "expression": False, "expressions": False}
 
 
-class Copy(Expression):
+class Copy(DML):
     arg_types = {
         "this": True,
         "kind": True,
@@ -3043,6 +3043,7 @@ class Table(Expression):
         "only": False,
         "partition": False,
         "changes": False,
+        "rows_from": False,
     }
 
     @property
@@ -4797,6 +4798,11 @@ class List(Func):
     is_var_len_args = True
 
 
+# String pad, kind True -> LPAD, False -> RPAD
+class Pad(Func):
+    arg_types = {"this": True, "expression": True, "fill_pattern": False, "is_left": True}
+
+
 # https://docs.snowflake.com/en/sql-reference/functions/to_char
 # https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/TO_CHAR-number.html
 class ToChar(Func):


@@ -375,6 +375,9 @@ class Generator(metaclass=_Generator):
     # Whether to quote the generated expression of exp.JsonPath
     QUOTE_JSON_PATH = True
 
+    # Whether the text pattern/fill (3rd) parameter of RPAD()/LPAD() is optional (defaults to space)
+    PAD_FILL_PATTERN_IS_REQUIRED = False
+
     # The name to generate for the JSONPath expression. If `None`, only `this` will be generated
     PARSE_JSON_NAME: t.Optional[str] = "PARSE_JSON"
@@ -406,13 +409,13 @@ class Generator(metaclass=_Generator):
     AFTER_HAVING_MODIFIER_TRANSFORMS = {
         "cluster": lambda self, e: self.sql(e, "cluster"),
         "distribute": lambda self, e: self.sql(e, "distribute"),
-        "qualify": lambda self, e: self.sql(e, "qualify"),
         "sort": lambda self, e: self.sql(e, "sort"),
         "windows": lambda self, e: (
             self.seg("WINDOW ") + self.expressions(e, key="windows", flat=True)
             if e.args.get("windows")
             else ""
         ),
+        "qualify": lambda self, e: self.sql(e, "qualify"),
     }
 
     TOKEN_MAPPING: t.Dict[TokenType, str] = {}
@@ -512,6 +515,7 @@ class Generator(metaclass=_Generator):
     # Expressions whose comments are separated from them for better formatting
     WITH_SEPARATED_COMMENTS: t.Tuple[t.Type[exp.Expression], ...] = (
+        exp.Command,
         exp.Create,
         exp.Delete,
         exp.Drop,
@@ -957,7 +961,8 @@ class Generator(metaclass=_Generator):
         index_type = f" USING {index_type}" if index_type else ""
         on_conflict = self.sql(expression, "on_conflict")
         on_conflict = f" {on_conflict}" if on_conflict else ""
-        return f"UNIQUE{this}{index_type}{on_conflict}"
+        nulls_sql = " NULLS NOT DISTINCT" if expression.args.get("nulls") else ""
+        return f"UNIQUE{nulls_sql}{this}{index_type}{on_conflict}"
 
     def createable_sql(self, expression: exp.Create, locations: t.DefaultDict) -> str:
         return self.sql(expression, "this")
@@ -996,6 +1001,7 @@ class Generator(metaclass=_Generator):
                 expression_sql = f"{begin}{self.sep()}{expression_sql}{end}"
 
             if self.CREATE_FUNCTION_RETURN_AS or not isinstance(expression.expression, exp.Return):
+                postalias_props_sql = ""
                 if properties_locs.get(exp.Properties.Location.POST_ALIAS):
                     postalias_props_sql = self.properties(
                         exp.Properties(
@@ -1003,9 +1009,8 @@ class Generator(metaclass=_Generator):
                         ),
                         wrapped=False,
                     )
-                    expression_sql = f" AS {postalias_props_sql}{expression_sql}"
-                else:
-                    expression_sql = f" AS{expression_sql}"
+                postalias_props_sql = f" {postalias_props_sql}" if postalias_props_sql else ""
+                expression_sql = f" AS{postalias_props_sql}{expression_sql}"
 
         postindex_props_sql = ""
         if properties_locs.get(exp.Properties.Location.POST_INDEX):
@@ -1754,6 +1759,10 @@ class Generator(metaclass=_Generator):
         changes = self.sql(expression, "changes")
         changes = f" {changes}" if changes else ""
 
+        rows_from = self.expressions(expression, key="rows_from")
+        if rows_from:
+            table = f"ROWS FROM {self.wrap(rows_from)}"
+
         return f"{only}{table}{changes}{partition}{version}{file_format}{alias}{hints}{pivots}{joins}{laterals}{ordinality}"
 
     def tablesample_sql(
@@ -4043,3 +4052,12 @@ class Generator(metaclass=_Generator):
 
         end = f"{self.seg('')}{end}" if end else ""
         return f"CHANGES ({information}){at_before}{end}"
+
+    def pad_sql(self, expression: exp.Pad) -> str:
+        prefix = "L" if expression.args.get("is_left") else "R"
+
+        fill_pattern = self.sql(expression, "fill_pattern") or None
+        if not fill_pattern and self.PAD_FILL_PATTERN_IS_REQUIRED:
+            fill_pattern = "' '"
+
+        return self.func(f"{prefix}PAD", expression.this, expression.expression, fill_pattern)
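Editor's note: `pad_sql` plus the `PAD_FILL_PATTERN_IS_REQUIRED` flag let one generator method cover dialects where `LPAD(x, 5)` is legal and dialects (Presto, Hive, MySQL) that demand an explicit fill string. A standalone sketch of that logic (hypothetical function, not sqlglot's API):

```python
def pad_sql(this, length, fill_pattern=None, is_left=True, fill_required=False):
    """Render LPAD/RPAD, defaulting the fill pattern to a single space
    for dialects that make the third argument mandatory."""
    if fill_pattern is None and fill_required:
        fill_pattern = "' '"  # SQL string literal for one space
    args = [this, length] + ([fill_pattern] if fill_pattern is not None else [])
    return ("L" if is_left else "R") + "PAD(" + ", ".join(args) + ")"
```

So a two-argument `LPAD` from DuckDB gains the `' '` default when transpiled to a dialect with the flag set.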


@@ -184,7 +184,6 @@ class TypeAnnotator(metaclass=_TypeAnnotator):
         exp.Ceil,
         exp.DatetimeDiff,
         exp.DateDiff,
-        exp.Extract,
         exp.TimestampDiff,
         exp.TimeDiff,
         exp.DateToDi,
@@ -268,6 +267,7 @@ class TypeAnnotator(metaclass=_TypeAnnotator):
         exp.Div: lambda self, e: self._annotate_div(e),
         exp.Dot: lambda self, e: self._annotate_dot(e),
         exp.Explode: lambda self, e: self._annotate_explode(e),
+        exp.Extract: lambda self, e: self._annotate_extract(e),
         exp.Filter: lambda self, e: self._annotate_by_args(e, "this"),
         exp.GenerateDateArray: lambda self, e: self._annotate_with_type(
             e, exp.DataType.build("ARRAY<DATE>")
@@ -680,3 +680,14 @@ class TypeAnnotator(metaclass=_TypeAnnotator):
 
         self._set_type(expression, map_type)
         return expression
+
+    def _annotate_extract(self, expression: exp.Extract) -> exp.Extract:
+        self._annotate_args(expression)
+        part = expression.name
+        if part == "TIME":
+            self._set_type(expression, exp.DataType.Type.TIME)
+        elif part == "DATE":
+            self._set_type(expression, exp.DataType.Type.DATE)
+        else:
+            self._set_type(expression, exp.DataType.Type.INT)
+        return expression
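Editor's note: the hunk above moves `exp.Extract` out of the blanket integer-typed group because `EXTRACT` is not always integral — `EXTRACT(TIME FROM ts)` and `EXTRACT(DATE FROM ts)` yield temporal values. The branching reduces to a tiny lookup (hypothetical sketch, type names as plain strings):

```python
def extract_result_type(part):
    """Mirror _annotate_extract's branches: TIME/DATE parts return
    temporal types, every other part (YEAR, MONTH, DOW, ...) an integer."""
    part = part.upper()
    if part == "TIME":
        return "TIME"
    if part == "DATE":
        return "DATE"
    return "INT"
```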


@@ -513,7 +513,9 @@ def _expand_stars(
     new_selections = []
     except_columns: t.Dict[int, t.Set[str]] = {}
-    replace_columns: t.Dict[int, t.Dict[str, str]] = {}
+    replace_columns: t.Dict[int, t.Dict[str, exp.Alias]] = {}
+    rename_columns: t.Dict[int, t.Dict[str, str]] = {}
+
     coalesced_columns = set()
 
     dialect = resolver.schema.dialect
@@ -548,11 +550,13 @@ def _expand_stars(
             tables.extend(scope.selected_sources)
             _add_except_columns(expression, tables, except_columns)
             _add_replace_columns(expression, tables, replace_columns)
+            _add_rename_columns(expression, tables, rename_columns)
         elif expression.is_star:
             if not isinstance(expression, exp.Dot):
                 tables.append(expression.table)
                 _add_except_columns(expression.this, tables, except_columns)
                 _add_replace_columns(expression.this, tables, replace_columns)
+                _add_rename_columns(expression.this, tables, rename_columns)
             elif is_bigquery:
                 struct_fields = _expand_struct_stars(expression)
                 if struct_fields:
@@ -578,6 +582,8 @@ def _expand_stars(
 
             table_id = id(table)
             columns_to_exclude = except_columns.get(table_id) or set()
+            renamed_columns = rename_columns.get(table_id, {})
+            replaced_columns = replace_columns.get(table_id, {})
 
             if pivot:
                 if pivot_output_columns and pivot_exclude_columns:
@@ -606,10 +612,12 @@ def _expand_stars(
                         alias(exp.func("coalesce", *coalesce_args), alias=name, copy=False)
                     )
                 else:
-                    alias_ = replace_columns.get(table_id, {}).get(name, name)
-                    column = exp.column(name, table=table)
+                    alias_ = renamed_columns.get(name, name)
+                    selection_expr = replaced_columns.get(name) or exp.column(name, table=table)
                     new_selections.append(
-                        alias(column, alias_, copy=False) if alias_ != name else column
+                        alias(selection_expr, alias_, copy=False)
+                        if alias_ != name
+                        else selection_expr
                     )
 
     # Ensures we don't overwrite the initial selections with an empty list
@@ -631,15 +639,29 @@ def _expand_stars(
     except_columns[id(table)] = columns
 
 
+def _add_rename_columns(
+    expression: exp.Expression, tables, rename_columns: t.Dict[int, t.Dict[str, str]]
+) -> None:
+    rename = expression.args.get("rename")
+    if not rename:
+        return
+
+    columns = {e.this.name: e.alias for e in rename}
+
+    for table in tables:
+        rename_columns[id(table)] = columns
+
+
 def _add_replace_columns(
-    expression: exp.Expression, tables, replace_columns: t.Dict[int, t.Dict[str, str]]
+    expression: exp.Expression, tables, replace_columns: t.Dict[int, t.Dict[str, exp.Alias]]
 ) -> None:
     replace = expression.args.get("replace")
     if not replace:
         return
 
-    columns = {e.this.name: e.alias for e in replace}
+    columns = {e.alias: e for e in replace}
 
     for table in tables:
         replace_columns[id(table)] = columns
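Editor's note: the hunk above splits star-expansion bookkeeping into two maps — `REPLACE (expr AS col)` swaps the column's expression but keeps its name, while `RENAME (col AS new)` keeps the expression but changes the alias. A plain-dict sketch of the combined semantics (hypothetical helper working on column-name strings):

```python
def expand_star(columns, except_cols=(), replace=None, rename=None):
    """Expand SELECT * honoring EXCEPT/REPLACE/RENAME modifiers.

    replace: column name -> replacement SQL, re-aliased to the old name.
    rename: column name -> new alias for the untouched column.
    """
    replace = replace or {}
    rename = rename or {}
    out = []
    for name in columns:
        if name in except_cols:
            continue  # EXCEPT drops the column entirely
        expr = replace.get(name, name)
        alias = rename.get(name, name)
        out.append(expr if expr == alias else f"{expr} AS {alias}")
    return out
```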


@@ -284,6 +284,7 @@ class Scope:
                         or column.name not in named_selects
                     )
                 )
+                or (isinstance(ancestor, exp.Star) and not column.arg_key == "except")
             ):
                 self._columns.append(column)


@@ -108,6 +108,15 @@ def build_mod(args: t.List) -> exp.Mod:
     return exp.Mod(this=this, expression=expression)
 
 
+def build_pad(args: t.List, is_left: bool = True):
+    return exp.Pad(
+        this=seq_get(args, 0),
+        expression=seq_get(args, 1),
+        fill_pattern=seq_get(args, 2),
+        is_left=is_left,
+    )
+
+
 class _Parser(type):
     def __new__(cls, clsname, bases, attrs):
         klass = super().__new__(cls, clsname, bases, attrs)
@@ -159,7 +168,11 @@ class Parser(metaclass=_Parser):
         "LOG2": lambda args: exp.Log(this=exp.Literal.number(2), expression=seq_get(args, 0)),
         "LOG10": lambda args: exp.Log(this=exp.Literal.number(10), expression=seq_get(args, 0)),
         "LOWER": build_lower,
+        "LPAD": lambda args: build_pad(args),
+        "LEFTPAD": lambda args: build_pad(args),
         "MOD": build_mod,
+        "RPAD": lambda args: build_pad(args, is_left=False),
+        "RIGHTPAD": lambda args: build_pad(args, is_left=False),
         "SCOPE_RESOLUTION": lambda args: exp.ScopeResolution(expression=seq_get(args, 0))
         if len(args) != 2
         else exp.ScopeResolution(this=seq_get(args, 0), expression=seq_get(args, 1)),
@@ -777,7 +790,7 @@ class Parser(metaclass=_Parser):
         TokenType.PARAMETER: lambda self: self._parse_parameter(),
         TokenType.COLON: lambda self: (
             self.expression(exp.Placeholder, this=self._prev.text)
-            if self._match(TokenType.NUMBER) or self._match_set(self.ID_VAR_TOKENS)
+            if self._match_set(self.ID_VAR_TOKENS)
             else None
         ),
     }
@@ -1478,7 +1491,10 @@ class Parser(metaclass=_Parser):
     def _parse_command(self) -> exp.Command:
         self._warn_unsupported()
         return self.expression(
-            exp.Command, this=self._prev.text.upper(), expression=self._parse_string()
+            exp.Command,
+            comments=self._prev_comments,
+            this=self._prev.text.upper(),
+            expression=self._parse_string(),
         )
 
     def _try_parse(self, parse_method: t.Callable[[], T], retreat: bool = False) -> t.Optional[T]:
@@ -3375,11 +3391,17 @@ class Parser(metaclass=_Parser):
         bracket = parse_bracket and self._parse_bracket(None)
         bracket = self.expression(exp.Table, this=bracket) if bracket else None
 
+        rows_from = self._match_text_seq("ROWS", "FROM") and self._parse_wrapped_csv(
+            self._parse_table
+        )
+        rows_from = self.expression(exp.Table, rows_from=rows_from) if rows_from else None
+
         only = self._match(TokenType.ONLY)
 
         this = t.cast(
             exp.Expression,
             bracket
+            or rows_from
             or self._parse_bracket(
                 self._parse_table_parts(schema=schema, is_db_reference=is_db_reference)
             ),
@@ -3842,6 +3864,9 @@ class Parser(metaclass=_Parser):
         if not this:
             return None
 
+        if this.name.upper() == "ALL" and self.dialect.SUPPORTS_ORDER_BY_ALL:
+            this = exp.var("ALL")
+
         asc = self._match(TokenType.ASC)
         desc = self._match(TokenType.DESC) or (asc and False)
@@ -4252,6 +4277,13 @@ class Parser(metaclass=_Parser):
         index = self._index
         data_type = self._parse_types(check_func=True, allow_identifiers=False)
 
+        # parse_types() returns a Cast if we parsed BQ's inline constructor <type>(<values>) e.g.
+        # STRUCT<a INT, b STRING>(1, 'foo'), which is canonicalized to CAST(<values> AS <type>)
+        if isinstance(data_type, exp.Cast):
+            # This constructor can contain ops directly after it, for instance struct unnesting:
+            # STRUCT<a INT, b STRING>(1, 'foo').* --> CAST(STRUCT(1, 'foo') AS STRUCT<a INT, b STRING>).*
+            return self._parse_column_ops(data_type)
+
         if data_type:
             index2 = self._index
             this = self._parse_primary()
@@ -4471,9 +4503,14 @@ class Parser(metaclass=_Parser):
                 this=exp.DataType.Type[type_token.value],
                 expressions=expressions,
                 nested=nested,
-                values=values,
                 prefix=prefix,
             )
+
+            # Empty arrays/structs are allowed
+            if values is not None:
+                cls = exp.Struct if is_struct else exp.Array
+                this = exp.cast(cls(expressions=values), this, copy=False)
+
         elif expressions:
             this.set("expressions", expressions)
@@ -5142,11 +5179,15 @@ class Parser(metaclass=_Parser):
 
         return self.CONSTRAINT_PARSERS[constraint](self)
 
+    def _parse_unique_key(self) -> t.Optional[exp.Expression]:
+        return self._parse_id_var(any_token=False)
+
     def _parse_unique(self) -> exp.UniqueColumnConstraint:
         self._match_text_seq("KEY")
         return self.expression(
             exp.UniqueColumnConstraint,
+            nulls=self._match_text_seq("NULLS", "NOT", "DISTINCT"),
-            this=self._parse_schema(self._parse_id_var(any_token=False)),
+            this=self._parse_schema(self._parse_unique_key()),
             index_type=self._match(TokenType.USING) and self._advance_any() and self._prev.text,
             on_conflict=self._parse_on_conflict(),
         )
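Editor's note: the `_parse_unique` change above threads PostgreSQL 15's `NULLS NOT DISTINCT` through as a boolean arg, which the generator re-emits between `UNIQUE` and the column list. A toy parser/generator round trip over a token list (hypothetical sketch, not sqlglot's token machinery):

```python
def parse_unique(tokens):
    """Consume an optional NULLS NOT DISTINCT prefix before the columns."""
    nulls = tokens[:3] == ["NULLS", "NOT", "DISTINCT"]
    return {"nulls": nulls, "columns": tokens[3:] if nulls else tokens}

def unique_sql(constraint):
    nulls_sql = " NULLS NOT DISTINCT" if constraint["nulls"] else ""
    return "UNIQUE" + nulls_sql + " (" + ", ".join(constraint["columns"]) + ")"
```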


@@ -138,7 +138,6 @@ LANGUAGE js AS
         self.validate_identity("SELECT CAST(CURRENT_DATE AS STRING FORMAT 'DAY') AS current_day")
         self.validate_identity("SAFE_CAST(encrypted_value AS STRING FORMAT 'BASE64')")
         self.validate_identity("CAST(encrypted_value AS STRING FORMAT 'BASE64')")
-        self.validate_identity("CAST(STRUCT<a INT64>(1) AS STRUCT<a INT64>)")
         self.validate_identity("STRING_AGG(a)")
         self.validate_identity("STRING_AGG(a, ' & ')")
         self.validate_identity("STRING_AGG(DISTINCT a, ' & ')")
@@ -162,12 +161,9 @@ LANGUAGE js AS
         self.validate_identity("x <> ''")
         self.validate_identity("DATE_TRUNC(col, WEEK(MONDAY))")
         self.validate_identity("SELECT b'abc'")
-        self.validate_identity("""SELECT * FROM UNNEST(ARRAY<STRUCT<x INT64>>[])""")
         self.validate_identity("SELECT AS STRUCT 1 AS a, 2 AS b")
         self.validate_identity("SELECT DISTINCT AS STRUCT 1 AS a, 2 AS b")
         self.validate_identity("SELECT AS VALUE STRUCT(1 AS a, 2 AS b)")
-        self.validate_identity("SELECT STRUCT<ARRAY<STRING>>(['2023-01-17'])")
-        self.validate_identity("SELECT STRUCT<STRING>((SELECT a FROM b.c LIMIT 1)).*")
         self.validate_identity("SELECT * FROM q UNPIVOT(values FOR quarter IN (b, c))")
         self.validate_identity("""CREATE TABLE x (a STRUCT<values ARRAY<INT64>>)""")
         self.validate_identity("""CREATE TABLE x (a STRUCT<b STRING OPTIONS (description='b')>)""")
@@ -1427,6 +1423,12 @@ WHERE
             transpile("DATE_ADD(x, day)", read="bigquery")
 
     def test_warnings(self):
+        with self.assertLogs(parser_logger) as cm:
+            self.validate_identity(
+                "/* some comment */ DECLARE foo DATE DEFAULT DATE_SUB(current_date, INTERVAL 2 day)"
+            )
+            self.assertIn("contains unsupported syntax", cm.output[0])
+
         with self.assertLogs(helper_logger) as cm:
             self.validate_identity(
                 "WITH cte(c) AS (SELECT * FROM t) SELECT * FROM cte",
@@ -1607,7 +1609,7 @@ WHERE
             "SELECT * FROM GAP_FILL(TABLE device_data, ts_column => 'time', bucket_width => INTERVAL '1' MINUTE, value_columns => [('signal', 'null')], origin => CAST('2023-11-01 09:30:01' AS DATETIME)) ORDER BY time"
         )
         self.validate_identity(
-            "SELECT * FROM GAP_FILL(TABLE (SELECT * FROM UNNEST(ARRAY<STRUCT<device_id INT64, time DATETIME, signal INT64, state STRING>>[STRUCT(1, CAST('2023-11-01 09:34:01' AS DATETIME), 74, 'INACTIVE'), STRUCT(2, CAST('2023-11-01 09:36:00' AS DATETIME), 77, 'ACTIVE'), STRUCT(3, CAST('2023-11-01 09:37:00' AS DATETIME), 78, 'ACTIVE'), STRUCT(4, CAST('2023-11-01 09:38:01' AS DATETIME), 80, 'ACTIVE')])), ts_column => 'time', bucket_width => INTERVAL '1' MINUTE, value_columns => [('signal', 'linear')]) ORDER BY time"
+            "SELECT * FROM GAP_FILL(TABLE device_data, ts_column => 'time', bucket_width => INTERVAL '1' MINUTE, value_columns => [('signal', 'locf')]) ORDER BY time"
         )
 
     def test_models(self):
@@ -1757,3 +1759,49 @@ OPTIONS (
             "MOD((a + 1), b)",
             "MOD(a + 1, b)",
         )
+
+    def test_inline_constructor(self):
+        self.validate_identity(
+            """SELECT STRUCT<ARRAY<STRING>>(["2023-01-17"])""",
+            """SELECT CAST(STRUCT(['2023-01-17']) AS STRUCT<ARRAY<STRING>>)""",
+        )
+        self.validate_identity(
+            """SELECT STRUCT<STRING>((SELECT 'foo')).*""",
+            """SELECT CAST(STRUCT((SELECT 'foo')) AS STRUCT<STRING>).*""",
+        )
+
+        self.validate_all(
+            "SELECT ARRAY<INT>[1, 2, 3]",
+            write={
+                "bigquery": "SELECT CAST([1, 2, 3] AS ARRAY<INT64>)",
+                "duckdb": "SELECT CAST([1, 2, 3] AS INT[])",
+            },
+        )
+        self.validate_all(
+            "CAST(STRUCT<a INT64>(1) AS STRUCT<a INT64>)",
+            write={
+                "bigquery": "CAST(CAST(STRUCT(1) AS STRUCT<a INT64>) AS STRUCT<a INT64>)",
+                "duckdb": "CAST(CAST(ROW(1) AS STRUCT(a BIGINT)) AS STRUCT(a BIGINT))",
+            },
+        )
+        self.validate_all(
+            "SELECT * FROM UNNEST(ARRAY<STRUCT<x INT64>>[])",
+            write={
+                "bigquery": "SELECT * FROM UNNEST(CAST([] AS ARRAY<STRUCT<x INT64>>))",
+                "duckdb": "SELECT * FROM UNNEST(CAST([] AS STRUCT(x BIGINT)[]))",
+            },
+        )
+        self.validate_all(
+            "SELECT * FROM UNNEST(ARRAY<STRUCT<device_id INT64, time DATETIME, signal INT64, state STRING>>[STRUCT(1, DATETIME '2023-11-01 09:34:01', 74, 'INACTIVE'),STRUCT(4, DATETIME '2023-11-01 09:38:01', 80, 'ACTIVE')])",
+            write={
+                "bigquery": "SELECT * FROM UNNEST(CAST([STRUCT(1, CAST('2023-11-01 09:34:01' AS DATETIME), 74, 'INACTIVE'), STRUCT(4, CAST('2023-11-01 09:38:01' AS DATETIME), 80, 'ACTIVE')] AS ARRAY<STRUCT<device_id INT64, time DATETIME, signal INT64, state STRING>>))",
+                "duckdb": "SELECT * FROM UNNEST(CAST([ROW(1, CAST('2023-11-01 09:34:01' AS TIMESTAMP), 74, 'INACTIVE'), ROW(4, CAST('2023-11-01 09:38:01' AS TIMESTAMP), 80, 'ACTIVE')] AS STRUCT(device_id BIGINT, time TIMESTAMP, signal BIGINT, state TEXT)[]))",
+            },
+        )
+        self.validate_all(
+            "SELECT STRUCT<a INT64, b STRUCT<c STRING>>(1, STRUCT('c_str'))",
+            write={
+                "bigquery": "SELECT CAST(STRUCT(1, STRUCT('c_str')) AS STRUCT<a INT64, b STRUCT<c STRING>>)",
+                "duckdb": "SELECT CAST(ROW(1, ROW('c_str')) AS STRUCT(a BIGINT, b STRUCT(c TEXT)))",
+            },
+        )


@@ -83,6 +83,9 @@ class TestClickhouse(Validator):
         self.validate_identity("TRUNCATE TABLE t1 ON CLUSTER test_cluster")
         self.validate_identity("TRUNCATE DATABASE db")
         self.validate_identity("TRUNCATE DATABASE db ON CLUSTER test_cluster")
+        self.validate_identity(
+            "SELECT number, COUNT() OVER (PARTITION BY number % 3) AS partition_count FROM numbers(10) WINDOW window_name AS (PARTITION BY number) QUALIFY partition_count = 4 ORDER BY number"
+        )
         self.validate_identity(
             "SELECT id, quantileGK(100, 0.95)(reading) OVER (PARTITION BY id ORDER BY id RANGE BETWEEN 30000 PRECEDING AND CURRENT ROW) AS window FROM table"
         )
@@ -448,6 +451,10 @@ class TestClickhouse(Validator):
         self.validate_identity("ALTER TABLE visits REPLACE PARTITION ID '201901' FROM visits_tmp")
         self.validate_identity("ALTER TABLE visits ON CLUSTER test_cluster DROP COLUMN col1")
 
+        self.assertIsInstance(
+            parse_one("Tuple(select Int64)", into=exp.DataType, read="clickhouse"), exp.DataType
+        )
+
     def test_cte(self):
         self.validate_identity("WITH 'x' AS foo SELECT foo")
         self.validate_identity("WITH ['c'] AS field_names SELECT field_names")
@@ -545,6 +552,7 @@ class TestClickhouse(Validator):
         self.validate_identity(
             "CREATE TABLE foo (x UInt32) TTL time_column + INTERVAL '1' MONTH DELETE WHERE column = 'value'"
         )
+        self.validate_identity("CREATE TABLE named_tuples (a Tuple(select String, i Int64))")
         self.validate_all(
             """


@@ -98,6 +98,22 @@ class TestDatabricks(Validator):
             read="databricks",
         )
 
+        self.validate_all(
+            "CREATE OR REPLACE FUNCTION func(a BIGINT, b BIGINT) RETURNS TABLE (a INT) RETURN SELECT a",
+            write={
+                "databricks": "CREATE OR REPLACE FUNCTION func(a BIGINT, b BIGINT) RETURNS TABLE (a INT) RETURN SELECT a",
+                "duckdb": "CREATE OR REPLACE FUNCTION func(a, b) AS TABLE SELECT a",
+            },
+        )
+        self.validate_all(
+            "CREATE OR REPLACE FUNCTION func(a BIGINT, b BIGINT) RETURNS BIGINT RETURN a",
+            write={
+                "databricks": "CREATE OR REPLACE FUNCTION func(a BIGINT, b BIGINT) RETURNS BIGINT RETURN a",
+                "duckdb": "CREATE OR REPLACE FUNCTION func(a, b) AS a",
+            },
+        )
+
     # https://docs.databricks.com/sql/language-manual/functions/colonsign.html
     def test_json(self):
         self.validate_identity("SELECT c1:price, c1:price.foo, c1:price.bar[1]")


@@ -1164,6 +1164,13 @@ class TestDialect(Validator):
             },
         )
 
+        order_by_all_sql = "SELECT * FROM t ORDER BY ALL"
+        self.validate_identity(order_by_all_sql).find(exp.Ordered).this.assert_is(exp.Column)
+
+        for dialect in ("duckdb", "spark", "databricks"):
+            with self.subTest(f"Testing ORDER BY ALL in {dialect}"):
+                parse_one(order_by_all_sql, read=dialect).find(exp.Ordered).this.assert_is(exp.Var)
+
     def test_json(self):
         self.validate_all(
             """JSON_EXTRACT(x, '$["a b"]')""",
@@ -2267,7 +2274,7 @@ SELECT
         write={
             "duckdb": "SELECT * FROM t QUALIFY COUNT(*) OVER () > 1",
             "snowflake": "SELECT * FROM t QUALIFY COUNT(*) OVER () > 1",
-            "clickhouse": "SELECT * FROM (SELECT *, COUNT(*) OVER () AS _w FROM t) AS _t WHERE _w > 1",
+            "clickhouse": "SELECT * FROM t QUALIFY COUNT(*) OVER () > 1",
             "mysql": "SELECT * FROM (SELECT *, COUNT(*) OVER () AS _w FROM t) AS _t WHERE _w > 1",
             "oracle": "SELECT * FROM (SELECT *, COUNT(*) OVER () AS _w FROM t) _t WHERE _w > 1",
             "postgres": "SELECT * FROM (SELECT *, COUNT(*) OVER () AS _w FROM t) AS _t WHERE _w > 1",
@@ -2279,7 +2286,7 @@ SELECT
         write={
             "duckdb": 'SELECT "user id", some_id, 1 AS other_id, 2 AS "2 nd id" FROM t QUALIFY COUNT(*) OVER () > 1',
             "snowflake": 'SELECT "user id", some_id, 1 AS other_id, 2 AS "2 nd id" FROM t QUALIFY COUNT(*) OVER () > 1',
-            "clickhouse": 'SELECT "user id", some_id, other_id, "2 nd id" FROM (SELECT "user id", some_id, 1 AS other_id, 2 AS "2 nd id", COUNT(*) OVER () AS _w FROM t) AS _t WHERE _w > 1',
+            "clickhouse": 'SELECT "user id", some_id, 1 AS other_id, 2 AS "2 nd id" FROM t QUALIFY COUNT(*) OVER () > 1',
             "mysql": "SELECT `user id`, some_id, other_id, `2 nd id` FROM (SELECT `user id`, some_id, 1 AS other_id, 2 AS `2 nd id`, COUNT(*) OVER () AS _w FROM t) AS _t WHERE _w > 1",
             "oracle": 'SELECT "user id", some_id, other_id, "2 nd id" FROM (SELECT "user id", some_id, 1 AS other_id, 2 AS "2 nd id", COUNT(*) OVER () AS _w FROM t) _t WHERE _w > 1',
             "postgres": 'SELECT "user id", some_id, other_id, "2 nd id" FROM (SELECT "user id", some_id, 1 AS other_id, 2 AS "2 nd id", COUNT(*) OVER () AS _w FROM t) AS _t WHERE _w > 1',
@@ -2566,3 +2573,33 @@ FROM subquery2""",
             """SELECT partition.d FROM t PARTITION (d)""",
             """SELECT partition.d FROM t AS PARTITION(d)""",
         )
+
+    def test_string_functions(self):
+        for pad_func in ("LPAD", "RPAD"):
+            ch_alias = "LEFTPAD" if pad_func == "LPAD" else "RIGHTPAD"
+            for fill_pattern in ("", ", ' '"):
+                with self.subTest(f"Testing {pad_func}() with pattern {fill_pattern}"):
+                    self.validate_all(
+                        f"SELECT {pad_func}('bar', 5{fill_pattern})",
+                        read={
+                            "snowflake": f"SELECT {pad_func}('bar', 5{fill_pattern})",
+                            "databricks": f"SELECT {pad_func}('bar', 5{fill_pattern})",
+                            "spark": f"SELECT {pad_func}('bar', 5{fill_pattern})",
+                            "postgres": f"SELECT {pad_func}('bar', 5{fill_pattern})",
+                            "clickhouse": f"SELECT {ch_alias}('bar', 5{fill_pattern})",
+                        },
+                        write={
+                            "": f"SELECT {pad_func}('bar', 5{fill_pattern})",
+                            "spark": f"SELECT {pad_func}('bar', 5{fill_pattern})",
+                            "postgres": f"SELECT {pad_func}('bar', 5{fill_pattern})",
+                            "clickhouse": f"SELECT {pad_func}('bar', 5{fill_pattern})",
+                            "snowflake": f"SELECT {pad_func}('bar', 5{fill_pattern})",
+                            "databricks": f"SELECT {pad_func}('bar', 5{fill_pattern})",
+                            "duckdb": f"SELECT {pad_func}('bar', 5, ' ')",
+                            "mysql": f"SELECT {pad_func}('bar', 5, ' ')",
+                            "hive": f"SELECT {pad_func}('bar', 5, ' ')",
+                            "spark2": f"SELECT {pad_func}('bar', 5, ' ')",
+                            "presto": f"SELECT {pad_func}('bar', 5, ' ')",
+                            "trino": f"SELECT {pad_func}('bar', 5, ' ')",
+                        },
+                    )


@@ -361,6 +361,10 @@ class TestDuckDB(Validator):
         self.validate_identity(
             "SELECT * FROM (PIVOT Cities ON Year USING SUM(Population) GROUP BY Country) AS pivot_alias"
         )
+        self.validate_identity(
+            # QUALIFY comes after WINDOW
+            "SELECT schema_name, function_name, ROW_NUMBER() OVER my_window AS function_rank FROM DUCKDB_FUNCTIONS() WINDOW my_window AS (PARTITION BY schema_name ORDER BY function_name) QUALIFY ROW_NUMBER() OVER my_window < 3"
+        )
         self.validate_identity("DATE_SUB('YEAR', col, '2020-01-01')").assert_is(exp.Anonymous)
         self.validate_identity("DATESUB('YEAR', col, '2020-01-01')").assert_is(exp.Anonymous)
@@ -1038,11 +1042,11 @@ class TestDuckDB(Validator):
         )
         self.validate_identity(
             "CAST([[STRUCT_PACK(a := 1)]] AS STRUCT(a BIGINT)[][])",
-            "CAST([[{'a': 1}]] AS STRUCT(a BIGINT)[][])",
+            "CAST([[ROW(1)]] AS STRUCT(a BIGINT)[][])",
         )
         self.validate_identity(
             "CAST([STRUCT_PACK(a := 1)] AS STRUCT(a BIGINT)[])",
-            "CAST([{'a': 1}] AS STRUCT(a BIGINT)[])",
+            "CAST([ROW(1)] AS STRUCT(a BIGINT)[])",
         )
         self.validate_all(


@@ -977,6 +977,10 @@ class TestPostgres(Validator):
             },
         )
 
+        self.validate_identity("CREATE TABLE tbl (col INT UNIQUE NULLS NOT DISTINCT DEFAULT 9.99)")
+        self.validate_identity("CREATE TABLE tbl (col UUID UNIQUE DEFAULT GEN_RANDOM_UUID())")
+        self.validate_identity("CREATE TABLE tbl (col UUID, UNIQUE NULLS NOT DISTINCT (col))")
+
         with self.assertRaises(ParseError):
             transpile("CREATE TABLE products (price DECIMAL CHECK price > 0)", read="postgres")
         with self.assertRaises(ParseError):
@@ -1130,3 +1134,12 @@ CROSS JOIN JSON_ARRAY_ELEMENTS(CAST(boxcrate AS JSON)) AS x(tbox)
 CROSS JOIN JSON_ARRAY_ELEMENTS(CAST(JSON_EXTRACT_PATH(tbox, 'boxes') AS JSON)) AS y(boxes)"""
         self.validate_all(expected_postgres, read={"trino": trino_input}, pretty=True)
+
+    def test_rows_from(self):
+        self.validate_identity("""SELECT * FROM ROWS FROM (FUNC1(col1, col2))""")
+        self.validate_identity(
+            """SELECT * FROM ROWS FROM (FUNC1(col1) AS alias1("col1" TEXT), FUNC2(col2) AS alias2("col2" INT)) WITH ORDINALITY"""
+        )
+        self.validate_identity(
+            """SELECT * FROM table1, ROWS FROM (FUNC1(col1) AS alias1("col1" TEXT)) WITH ORDINALITY AS alias3("col3" INT, "col4" TEXT)"""
+        )


@@ -405,6 +405,14 @@ class TestPresto(Validator):
         )
         self.validate_identity("DATE_ADD('DAY', 1, y)")
 
+        self.validate_all(
+            "SELECT DATE_ADD('MINUTE', 30, col)",
+            write={
+                "presto": "SELECT DATE_ADD('MINUTE', 30, col)",
+                "trino": "SELECT DATE_ADD('MINUTE', 30, col)",
+            },
+        )
+
     def test_ddl(self):
         self.validate_all(
             "CREATE TABLE test WITH (FORMAT = 'PARQUET') AS SELECT 1",


@@ -684,6 +684,8 @@ TBLPROPERTIES (
             write={
                 "spark": "SELECT DATE_ADD(MONTH, 20, col)",
                 "databricks": "SELECT DATE_ADD(MONTH, 20, col)",
+                "presto": "SELECT DATE_ADD('MONTH', 20, col)",
+                "trino": "SELECT DATE_ADD('MONTH', 20, col)",
             },
         )


@@ -53,6 +53,7 @@ x >> 1 | 1 & 1 ^ 1
 x || y
 x[ : ]
 x[1 : ]
+x[ : 2]
 x[1 : 2]
 x[-4 : -1]
 1 - -1
@@ -777,14 +778,6 @@ ALTER TABLE table1 RENAME COLUMN IF EXISTS c1 TO c2
 SELECT div.a FROM test_table AS div
 WITH view AS (SELECT 1 AS x) SELECT * FROM view
 ARRAY<STRUCT<INT, DOUBLE, ARRAY<INT>>>
-ARRAY<INT>[1, 2, 3]
-ARRAY<INT>[]
-STRUCT<x VARCHAR(10)>
-STRUCT<x VARCHAR(10)>("bla")
-STRUCT<VARCHAR(10)>("bla")
-STRUCT<INT>(5)
-STRUCT<DATE>("2011-05-05")
-STRUCT<x INT, y TEXT>(1, t.str_col)
 STRUCT<int INT>
 SELECT CAST(NULL AS ARRAY<INT>) IS NULL AS array_is_null
 ALTER TABLE "schema"."tablename" ADD CONSTRAINT "CHK_Name" CHECK (NOT "IdDwh" IS NULL AND "IdDwh" <> (0))


@@ -55,3 +55,15 @@ STRUCT<a INT, b DOUBLE>;
 # dialect: presto
 ROW(1, 2.5, 'foo');
 STRUCT<INT, DOUBLE, VARCHAR>;
+
+# dialect: bigquery
+EXTRACT(date from x);
+DATE;
+
+# dialect: bigquery
+EXTRACT(time from x);
+TIME;
+
+# dialect: bigquery
+EXTRACT(day from x);
+INT;


@@ -385,14 +385,14 @@ WITH player AS (SELECT player.name, player.asset.info FROM players) SELECT * FRO
 WITH player AS (SELECT players.player.name AS name, players.player.asset.info AS info FROM players AS players) SELECT player.name AS name, player.info AS info FROM player AS player;
 
 --------------------------------------
--- Except and Replace
+-- Except, Replace, Rename
 --------------------------------------
 # execute: false
-SELECT * REPLACE(a AS d) FROM x;
+SELECT * RENAME(a AS d) FROM x;
 SELECT x.a AS d, x.b AS b FROM x AS x;
 
 # execute: false
-SELECT * EXCEPT(b) REPLACE(a AS d) FROM x;
+SELECT * EXCEPT(b) RENAME(a AS d) FROM x;
 SELECT x.a AS d FROM x AS x;
 
 SELECT x.* EXCEPT(a), y.* FROM x, y;
@@ -416,6 +416,30 @@ SELECT x.a AS a, x.b AS b, y.b AS b FROM x AS x LEFT JOIN x AS y ON x.a = y.a;
 SELECT COALESCE(CAST(t1.a AS VARCHAR), '') AS a, t2.* EXCEPT (a) FROM x AS t1, x AS t2;
 SELECT COALESCE(CAST(t1.a AS VARCHAR), '') AS a, t2.b AS b FROM x AS t1, x AS t2;
 
+# execute: false
+SELECT * REPLACE(2 AS a) FROM x;
+SELECT 2 AS a, x.b AS b FROM x AS x;
+
+# execute: false
+SELECT * EXCEPT (a, b) REPLACE (a AS a) FROM x;
+SELECT * EXCEPT (a, b) REPLACE (x.a AS a) FROM x AS x;
+
+# execute: false
+SELECT * REPLACE(COALESCE(b, a) AS a, a as b) FROM x;
+SELECT COALESCE(x.b, x.a) AS a, x.a AS b FROM x AS x;
+
+# execute: false
+SELECT * REPLACE(1 AS a) RENAME(b as alias_b) FROM x;
+SELECT 1 AS a, x.b AS alias_b FROM x AS x;
+
+# execute: false
+SELECT * EXCEPT(a) REPLACE(COALESCE(a, b) AS b) RENAME(b AS new_b) FROM x;
+SELECT COALESCE(x.a, x.b) AS new_b FROM x AS x;
+
+# execute: false
+SELECT * REPLACE(1 AS a, a AS b) RENAME(b AS new_b) FROM x;
+SELECT 1 AS a, x.a AS new_b FROM x AS x;
+
 --------------------------------------
 -- Using
 --------------------------------------


@@ -177,3 +177,6 @@ SELECT a.id, a_2.id FROM c.db1.a AS a JOIN c.db2.a AS a_2 ON a.id = a_2.id;
 SELECT cat.db1.a.id, db2.a.id FROM cat.db1.a JOIN db2.a ON cat.db1.a.id = db2.a.id;
 SELECT a.id, a_2.id FROM cat.db1.a AS a JOIN c.db2.a AS a_2 ON a.id = a_2.id;
+
+COPY INTO (SELECT * FROM x) TO 'data' WITH (FORMAT 'CSV');
+COPY INTO (SELECT * FROM c.db.x AS x) TO 'data' WITH (FORMAT 'CSV');