Merging upstream version 25.24.5.
Signed-off-by: Daniel Baumann <daniel@debian.org>
This commit is contained in:
parent f2b92bd29a, commit 1763c7a4ef
80 changed files with 61531 additions and 59444 deletions
CHANGELOG.md (88)
@@ -1,6 +1,89 @@
Changelog
=========

## [v25.24.4] - 2024-10-04

### :bug: Bug Fixes

- [`484df7d`](https://github.com/tobymao/sqlglot/commit/484df7d50df5cb314943e1810db18a7d7d5bb3eb) - tsql union with limit *(commit by [@tobymao](https://github.com/tobymao))*

## [v25.24.3] - 2024-10-03

### :sparkles: New Features

- [`25b18d2`](https://github.com/tobymao/sqlglot/commit/25b18d28e5ad7b3687e2848ff92a0a1fc17b06fa) - **trino**: Support JSON_QUERY *(PR [#4206](https://github.com/tobymao/sqlglot/pull/4206) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *addresses issue [#4200](https://github.com/tobymao/sqlglot/issues/4200) opened by [@Harmuth94](https://github.com/Harmuth94)*
- [`5781b45`](https://github.com/tobymao/sqlglot/commit/5781b455fa3ec495b65f3f3f4a959192389bd816) - **duckdb**: Add more Postgres operators *(PR [#4199](https://github.com/tobymao/sqlglot/pull/4199) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *addresses issue [#4189](https://github.com/tobymao/sqlglot/issues/4189) opened by [@rustyconover](https://github.com/rustyconover)*
- [`89c0703`](https://github.com/tobymao/sqlglot/commit/89c07039da402fb2ad77e00edb4f09079ecbb41d) - **bigquery**: Native math function annotations *(PR [#4201](https://github.com/tobymao/sqlglot/pull/4201) by [@VaggelisD](https://github.com/VaggelisD))*
- [`977d9e5`](https://github.com/tobymao/sqlglot/commit/977d9e5a854b58b4469be1af6aa14a5bf5a4b8c6) - allow supplying dialect in diff, conditionally copy ASTs *(PR [#4208](https://github.com/tobymao/sqlglot/pull/4208) by [@georgesittas](https://github.com/georgesittas))*
- :arrow_lower_right: *addresses issue [#4203](https://github.com/tobymao/sqlglot/issues/4203) opened by [@mkmoisen](https://github.com/mkmoisen)*

### :bug: Bug Fixes

- [`332c74b`](https://github.com/tobymao/sqlglot/commit/332c74b881487cd9ce711ca3bd065a8992872098) - attach comments to subquery predicates properly, fix comment case *(PR [#4207](https://github.com/tobymao/sqlglot/pull/4207) by [@georgesittas](https://github.com/georgesittas))*
- :arrow_lower_right: *fixes issue [#4205](https://github.com/tobymao/sqlglot/issues/4205) opened by [@mkmoisen](https://github.com/mkmoisen)*
- [`55da21d`](https://github.com/tobymao/sqlglot/commit/55da21dd043dfcbefa3653fe168eb9cae5dc5bf5) - Unexpected row deduplication using eliminate_full_outer_join *(PR [#4178](https://github.com/tobymao/sqlglot/pull/4178) by [@liaco](https://github.com/liaco))*

## [v25.24.2] - 2024-10-02

### :sparkles: New Features

- [`c8b7c1e`](https://github.com/tobymao/sqlglot/commit/c8b7c1ef7c6070a51638af18833c649a77e735cb) - **optimizer**: Fixture file for function annotations *(PR [#4182](https://github.com/tobymao/sqlglot/pull/4182) by [@VaggelisD](https://github.com/VaggelisD))*
- [`0adbbf7`](https://github.com/tobymao/sqlglot/commit/0adbbf7ad8f16700adc48c6757c07768199860d9) - **duckdb**: Parse ** and ^ operators as POW *(PR [#4193](https://github.com/tobymao/sqlglot/pull/4193) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *addresses issue [#4186](https://github.com/tobymao/sqlglot/issues/4186) opened by [@rustyconover](https://github.com/rustyconover)*
- [`4949906`](https://github.com/tobymao/sqlglot/commit/4949906e9dd0c3039a161e06ddb970f37067b88f) - **duckdb**: Parse ~~~ as GLOB *(PR [#4194](https://github.com/tobymao/sqlglot/pull/4194) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *addresses issue [#4187](https://github.com/tobymao/sqlglot/issues/4187) opened by [@rustyconover](https://github.com/rustyconover)*
- [`6ba2bb0`](https://github.com/tobymao/sqlglot/commit/6ba2bb03f973c30788508768c3ba716aa94b0299) - **oracle**: Add support for BULK COLLECT INTO *(PR [#4181](https://github.com/tobymao/sqlglot/pull/4181) by [@mkmoisen](https://github.com/mkmoisen))*
- [`0de59ce`](https://github.com/tobymao/sqlglot/commit/0de59cebe550b33ac34a92c1ded1d3f9b8f679c4) - mark `expressions` as unsupported in Into generator *(commit by [@georgesittas](https://github.com/georgesittas))*

### :bug: Bug Fixes

- [`475f7a3`](https://github.com/tobymao/sqlglot/commit/475f7a3c639c7b8c5f3af1b2e5fcce9174be39ec) - **redshift**: Add unsupported warnings for UNNEST *(PR [#4173](https://github.com/tobymao/sqlglot/pull/4173) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *fixes issue [#4169](https://github.com/tobymao/sqlglot/issues/4169) opened by [@bjabes](https://github.com/bjabes)*
- [`d38e023`](https://github.com/tobymao/sqlglot/commit/d38e023966c32b208fe5ae9843bbd716e2181521) - **spark**: Offset TRY_ELEMENT_AT by one *(PR [#4183](https://github.com/tobymao/sqlglot/pull/4183) by [@VaggelisD](https://github.com/VaggelisD))*
- [`3d1c643`](https://github.com/tobymao/sqlglot/commit/3d1c6430791dcce05f1a71f17311e294d9fc9d3d) - rename SHA function to SHA1 for DuckDB *(PR [#4191](https://github.com/tobymao/sqlglot/pull/4191) by [@rustyconover](https://github.com/rustyconover))*
- [`0388a51`](https://github.com/tobymao/sqlglot/commit/0388a519dba63636a9aac3e3272cdea0f0b8312d) - add support for UHUGEINT for duckdb *(PR [#4190](https://github.com/tobymao/sqlglot/pull/4190) by [@rustyconover](https://github.com/rustyconover))*
- :arrow_lower_right: *fixes issue [#4184](https://github.com/tobymao/sqlglot/issues/4184) opened by [@rustyconover](https://github.com/rustyconover)*
- [`9eba00d`](https://github.com/tobymao/sqlglot/commit/9eba00dca517efe7df171b09ed916af3ea5e350d) - **duckdb**: Parse ~~ as LIKE *(PR [#4195](https://github.com/tobymao/sqlglot/pull/4195) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *fixes issue [#4188](https://github.com/tobymao/sqlglot/issues/4188) opened by [@rustyconover](https://github.com/rustyconover)*
- [`6a65973`](https://github.com/tobymao/sqlglot/commit/6a659736f3a176e335c68fdd07d8265c3d0421dc) - expand UPDATABLE_EXPRESSION_TYPES to account for Identifier changes *(PR [#4197](https://github.com/tobymao/sqlglot/pull/4197) by [@georgesittas](https://github.com/georgesittas))*
- :arrow_lower_right: *fixes issue [#4192](https://github.com/tobymao/sqlglot/issues/4192) opened by [@mkmoisen](https://github.com/mkmoisen)*
- [`a6c28c6`](https://github.com/tobymao/sqlglot/commit/a6c28c63f4e44bb62ba8df30f1407c728eb215f2) - **sqlite**: generate StrPosition as INSTR *(PR [#4198](https://github.com/tobymao/sqlglot/pull/4198) by [@pruzko](https://github.com/pruzko))*
- :arrow_lower_right: *fixes issue [#4196](https://github.com/tobymao/sqlglot/issues/4196) opened by [@pruzko](https://github.com/pruzko)*
- [`5a123a5`](https://github.com/tobymao/sqlglot/commit/5a123a54ecd033c0a104e33476b17d816a09caac) - **oracle**: retreat properly when parsing BULK COLLECT INTO *(commit by [@georgesittas](https://github.com/georgesittas))*
- [`f935e42`](https://github.com/tobymao/sqlglot/commit/f935e42130724e032b294074f3b552f21e20bc57) - properly escape closing identifier delimiters *(PR [#4202](https://github.com/tobymao/sqlglot/pull/4202) by [@georgesittas](https://github.com/georgesittas))*

## [v25.24.1] - 2024-10-01

### :sparkles: New Features

- [`7af33a2`](https://github.com/tobymao/sqlglot/commit/7af33a2f74dd1300bcd45f1974b7fd28abe66b8e) - **spark**: Custom annotation for more string functions *(PR [#4156](https://github.com/tobymao/sqlglot/pull/4156) by [@VaggelisD](https://github.com/VaggelisD))*

### :bug: Bug Fixes

- [`81df4e1`](https://github.com/tobymao/sqlglot/commit/81df4e104ff3d60e3c23d3ac321e719b1f0962c0) - **athena**: Case sensitivity in CTAS property names *(PR [#4171](https://github.com/tobymao/sqlglot/pull/4171) by [@erindru](https://github.com/erindru))*
- [`0703152`](https://github.com/tobymao/sqlglot/commit/0703152a25afced183dc5efd5f62311a48545420) - **bigquery**: Do not generate null ordering on agg funcs *(PR [#4172](https://github.com/tobymao/sqlglot/pull/4172) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *fixes issue [#4170](https://github.com/tobymao/sqlglot/issues/4170) opened by [@yjabri](https://github.com/yjabri)*

## [v25.24.0] - 2024-09-26

### :boom: BREAKING CHANGES

- due to [`3ab6dfb`](https://github.com/tobymao/sqlglot/commit/3ab6dfb486f18d036bfac6a90d5f81b0ce7a91ea) - Generalize COLUMNS(...) APPLY *(PR [#4161](https://github.com/tobymao/sqlglot/pull/4161) by [@VaggelisD](https://github.com/VaggelisD))*:

Generalize COLUMNS(...) APPLY (#4161)

### :sparkles: New Features

- [`93cef30`](https://github.com/tobymao/sqlglot/commit/93cef30bc534a155bce06f35d441d20e5dd78cf6) - **postgres**: Support OVERLAY function *(PR [#4165](https://github.com/tobymao/sqlglot/pull/4165) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *addresses issue [#4159](https://github.com/tobymao/sqlglot/issues/4159) opened by [@s1101010110](https://github.com/s1101010110)*
- [`0a5444d`](https://github.com/tobymao/sqlglot/commit/0a5444dc822b7c53c008bc946eb3b54ca2147f3c) - expose a flag to automatically exclude Keep diff nodes *(PR [#4168](https://github.com/tobymao/sqlglot/pull/4168) by [@georgesittas](https://github.com/georgesittas))*

### :bug: Bug Fixes

- [`9c17264`](https://github.com/tobymao/sqlglot/commit/9c172643aa3f3f0ffcc2e62242b62ba9c6141925) - **hive**: Enclose exp.Split with \E *(PR [#4163](https://github.com/tobymao/sqlglot/pull/4163) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *fixes issue [#4158](https://github.com/tobymao/sqlglot/issues/4158) opened by [@cpcloud](https://github.com/cpcloud)*
- [`3ab6dfb`](https://github.com/tobymao/sqlglot/commit/3ab6dfb486f18d036bfac6a90d5f81b0ce7a91ea) - **clickhouse**: Generalize COLUMNS(...) APPLY *(PR [#4161](https://github.com/tobymao/sqlglot/pull/4161) by [@VaggelisD](https://github.com/VaggelisD))*
- :arrow_lower_right: *fixes issue [#4157](https://github.com/tobymao/sqlglot/issues/4157) opened by [@elchyn-cheliabiyeu](https://github.com/elchyn-cheliabiyeu)*

### :recycle: Refactors

- [`2540e50`](https://github.com/tobymao/sqlglot/commit/2540e50d2b0df12f940c68acc574e540d19546cf) - simplify check_deploy job *(commit by [@georgesittas](https://github.com/georgesittas))*

### :wrench: Chores

- [`f6d3bdd`](https://github.com/tobymao/sqlglot/commit/f6d3bdd740d0fe128d4d5dd99833a6f71c890ed3) - update supported dialect count (21 -> 23) *(commit by [@georgesittas](https://github.com/georgesittas))*

## [v25.23.2] - 2024-09-25

### :wrench: Chores

- [`eca05d3`](https://github.com/tobymao/sqlglot/commit/eca05d3b08645d7a984ee65b438282b35cb41960) - tweak should_deploy_rs script to avoid marking CI as failed *(commit by [@georgesittas](https://github.com/georgesittas))*

@@ -4881,3 +4964,8 @@ Changelog
[v25.23.0]: https://github.com/tobymao/sqlglot/compare/v25.20.2...v25.23.0
[v25.23.1]: https://github.com/tobymao/sqlglot/compare/v25.23.0...v25.23.1
[v25.23.2]: https://github.com/tobymao/sqlglot/compare/v25.23.1...v25.23.2
[v25.24.0]: https://github.com/tobymao/sqlglot/compare/v25.23.2...v25.24.0
[v25.24.1]: https://github.com/tobymao/sqlglot/compare/v25.24.0...v25.24.1
[v25.24.2]: https://github.com/tobymao/sqlglot/compare/v25.24.1...v25.24.2
[v25.24.3]: https://github.com/tobymao/sqlglot/compare/v25.24.2...v25.24.3
[v25.24.4]: https://github.com/tobymao/sqlglot/compare/v25.24.3...v25.24.4
File diff suppressed because one or more lines are too long (2 files)
|
@@ -76,8 +76,8 @@
|
|||
</span><span id="L-12"><a href="#L-12"><span class="linenos">12</span></a><span class="n">__version_tuple__</span><span class="p">:</span> <span class="n">VERSION_TUPLE</span>
|
||||
</span><span id="L-13"><a href="#L-13"><span class="linenos">13</span></a><span class="n">version_tuple</span><span class="p">:</span> <span class="n">VERSION_TUPLE</span>
|
||||
</span><span id="L-14"><a href="#L-14"><span class="linenos">14</span></a>
|
||||
</span><span id="L-15"><a href="#L-15"><span class="linenos">15</span></a><span class="n">__version__</span> <span class="o">=</span> <span class="n">version</span> <span class="o">=</span> <span class="s1">'25.23.2'</span>
|
||||
</span><span id="L-16"><a href="#L-16"><span class="linenos">16</span></a><span class="n">__version_tuple__</span> <span class="o">=</span> <span class="n">version_tuple</span> <span class="o">=</span> <span class="p">(</span><span class="mi">25</span><span class="p">,</span> <span class="mi">23</span><span class="p">,</span> <span class="mi">2</span><span class="p">)</span>
|
||||
</span><span id="L-15"><a href="#L-15"><span class="linenos">15</span></a><span class="n">__version__</span> <span class="o">=</span> <span class="n">version</span> <span class="o">=</span> <span class="s1">'25.24.4'</span>
|
||||
</span><span id="L-16"><a href="#L-16"><span class="linenos">16</span></a><span class="n">__version_tuple__</span> <span class="o">=</span> <span class="n">version_tuple</span> <span class="o">=</span> <span class="p">(</span><span class="mi">25</span><span class="p">,</span> <span class="mi">24</span><span class="p">,</span> <span class="mi">4</span><span class="p">)</span>
|
||||
</span></pre></div>
|
||||
|
||||
|
||||
|
@ -97,7 +97,7 @@
|
|||
<section id="version">
|
||||
<div class="attr variable">
|
||||
<span class="name">version</span><span class="annotation">: str</span> =
|
||||
<span class="default_value">'25.23.2'</span>
|
||||
<span class="default_value">'25.24.4'</span>
|
||||
|
||||
|
||||
</div>
|
||||
|
@ -109,7 +109,7 @@
|
|||
<section id="version_tuple">
|
||||
<div class="attr variable">
|
||||
<span class="name">version_tuple</span><span class="annotation">: object</span> =
|
||||
<span class="default_value">(25, 23, 2)</span>
|
||||
<span class="default_value">(25, 24, 4)</span>
|
||||
|
||||
|
||||
</div>
|
||||
|
|
File diff suppressed because one or more lines are too long (16 files)
|
@ -1102,6 +1102,8 @@ Default: True</li>
|
|||
<dd id="RisingWave.Generator.grant_sql" class="function"><a href="../generator.html#Generator.grant_sql">grant_sql</a></dd>
|
||||
<dd id="RisingWave.Generator.grantprivilege_sql" class="function"><a href="../generator.html#Generator.grantprivilege_sql">grantprivilege_sql</a></dd>
|
||||
<dd id="RisingWave.Generator.grantprincipal_sql" class="function"><a href="../generator.html#Generator.grantprincipal_sql">grantprincipal_sql</a></dd>
|
||||
<dd id="RisingWave.Generator.columns_sql" class="function"><a href="../generator.html#Generator.columns_sql">columns_sql</a></dd>
|
||||
<dd id="RisingWave.Generator.overlay_sql" class="function"><a href="../generator.html#Generator.overlay_sql">overlay_sql</a></dd>
|
||||
|
||||
</div>
|
||||
<div><dt><a href="postgres.html#Postgres.Generator">sqlglot.dialects.postgres.Postgres.Generator</a></dt>
|
||||
|
|
File diff suppressed because one or more lines are too long (9 files)
File diff suppressed because it is too large
File diff suppressed because one or more lines are too long (3 files)
|
@ -1895,7 +1895,7 @@ belong to some totally-ordered set.</p>
|
|||
<section id="DATE_UNITS">
|
||||
<div class="attr variable">
|
||||
<span class="name">DATE_UNITS</span> =
|
||||
<span class="default_value">{'week', 'year_month', 'day', 'quarter', 'year', 'month'}</span>
|
||||
<span class="default_value">{'quarter', 'week', 'year_month', 'month', 'year', 'day'}</span>
|
||||
|
||||
|
||||
</div>
|
||||
|
|
|
@ -586,7 +586,7 @@
|
|||
<div class="attr variable">
|
||||
<span class="name">ALL_JSON_PATH_PARTS</span> =
|
||||
<input id="ALL_JSON_PATH_PARTS-view-value" class="view-value-toggle-state" type="checkbox" aria-hidden="true" tabindex="-1">
|
||||
<label class="view-value-button pdoc-button" for="ALL_JSON_PATH_PARTS-view-value"></label><span class="default_value">{<class '<a href="expressions.html#JSONPathSubscript">sqlglot.expressions.JSONPathSubscript</a>'>, <class '<a href="expressions.html#JSONPathUnion">sqlglot.expressions.JSONPathUnion</a>'>, <class '<a href="expressions.html#JSONPathSelector">sqlglot.expressions.JSONPathSelector</a>'>, <class '<a href="expressions.html#JSONPathSlice">sqlglot.expressions.JSONPathSlice</a>'>, <class '<a href="expressions.html#JSONPathScript">sqlglot.expressions.JSONPathScript</a>'>, <class '<a href="expressions.html#JSONPathRoot">sqlglot.expressions.JSONPathRoot</a>'>, <class '<a href="expressions.html#JSONPathWildcard">sqlglot.expressions.JSONPathWildcard</a>'>, <class '<a href="expressions.html#JSONPathRecursive">sqlglot.expressions.JSONPathRecursive</a>'>, <class '<a href="expressions.html#JSONPathKey">sqlglot.expressions.JSONPathKey</a>'>, <class '<a href="expressions.html#JSONPathFilter">sqlglot.expressions.JSONPathFilter</a>'>}</span>
|
||||
<label class="view-value-button pdoc-button" for="ALL_JSON_PATH_PARTS-view-value"></label><span class="default_value">{<class '<a href="expressions.html#JSONPathFilter">sqlglot.expressions.JSONPathFilter</a>'>, <class '<a href="expressions.html#JSONPathUnion">sqlglot.expressions.JSONPathUnion</a>'>, <class '<a href="expressions.html#JSONPathSubscript">sqlglot.expressions.JSONPathSubscript</a>'>, <class '<a href="expressions.html#JSONPathSelector">sqlglot.expressions.JSONPathSelector</a>'>, <class '<a href="expressions.html#JSONPathSlice">sqlglot.expressions.JSONPathSlice</a>'>, <class '<a href="expressions.html#JSONPathScript">sqlglot.expressions.JSONPathScript</a>'>, <class '<a href="expressions.html#JSONPathRoot">sqlglot.expressions.JSONPathRoot</a>'>, <class '<a href="expressions.html#JSONPathWildcard">sqlglot.expressions.JSONPathWildcard</a>'>, <class '<a href="expressions.html#JSONPathRecursive">sqlglot.expressions.JSONPathRecursive</a>'>, <class '<a href="expressions.html#JSONPathKey">sqlglot.expressions.JSONPathKey</a>'>}</span>
|
||||
|
||||
|
||||
</div>
|
||||
|
|
File diff suppressed because one or more lines are too long
|
@ -586,7 +586,7 @@ queries if it would result in multiple table selects in a single query:</p>
|
|||
<div class="attr variable">
|
||||
<span class="name">UNMERGABLE_ARGS</span> =
|
||||
<input id="UNMERGABLE_ARGS-view-value" class="view-value-toggle-state" type="checkbox" aria-hidden="true" tabindex="-1">
|
||||
<label class="view-value-button pdoc-button" for="UNMERGABLE_ARGS-view-value"></label><span class="default_value">{'into', 'qualify', 'offset', 'having', 'locks', 'windows', 'distinct', 'settings', 'match', 'cluster', 'sample', 'limit', 'pivots', 'distribute', 'sort', 'format', 'connect', 'laterals', 'options', 'group', 'prewhere', 'with', 'kind'}</span>
|
||||
<label class="view-value-button pdoc-button" for="UNMERGABLE_ARGS-view-value"></label><span class="default_value">{'qualify', 'windows', 'with', 'kind', 'options', 'pivots', 'match', 'laterals', 'into', 'prewhere', 'settings', 'sample', 'limit', 'group', 'distribute', 'cluster', 'having', 'offset', 'connect', 'locks', 'format', 'sort', 'distinct'}</span>
|
||||
|
||||
|
||||
</div>
|
||||
|
|
|
@ -3218,7 +3218,7 @@ prefix are statically known.</p>
|
|||
<div class="attr variable">
|
||||
<span class="name">DATETRUNC_COMPARISONS</span> =
|
||||
<input id="DATETRUNC_COMPARISONS-view-value" class="view-value-toggle-state" type="checkbox" aria-hidden="true" tabindex="-1">
|
||||
<label class="view-value-button pdoc-button" for="DATETRUNC_COMPARISONS-view-value"></label><span class="default_value">{<class '<a href="../expressions.html#NEQ">sqlglot.expressions.NEQ</a>'>, <class '<a href="../expressions.html#LTE">sqlglot.expressions.LTE</a>'>, <class '<a href="../expressions.html#GTE">sqlglot.expressions.GTE</a>'>, <class '<a href="../expressions.html#LT">sqlglot.expressions.LT</a>'>, <class '<a href="../expressions.html#In">sqlglot.expressions.In</a>'>, <class '<a href="../expressions.html#EQ">sqlglot.expressions.EQ</a>'>, <class '<a href="../expressions.html#GT">sqlglot.expressions.GT</a>'>}</span>
|
||||
<label class="view-value-button pdoc-button" for="DATETRUNC_COMPARISONS-view-value"></label><span class="default_value">{<class '<a href="../expressions.html#NEQ">sqlglot.expressions.NEQ</a>'>, <class '<a href="../expressions.html#In">sqlglot.expressions.In</a>'>, <class '<a href="../expressions.html#LTE">sqlglot.expressions.LTE</a>'>, <class '<a href="../expressions.html#EQ">sqlglot.expressions.EQ</a>'>, <class '<a href="../expressions.html#GTE">sqlglot.expressions.GTE</a>'>, <class '<a href="../expressions.html#LT">sqlglot.expressions.LT</a>'>, <class '<a href="../expressions.html#GT">sqlglot.expressions.GT</a>'>}</span>
|
||||
|
||||
|
||||
</div>
|
||||
|
@ -3302,7 +3302,7 @@ prefix are statically known.</p>
|
|||
<section id="JOINS">
|
||||
<div class="attr variable">
|
||||
<span class="name">JOINS</span> =
|
||||
<span class="default_value">{('RIGHT', ''), ('', 'INNER'), ('RIGHT', 'OUTER'), ('', '')}</span>
|
||||
<span class="default_value">{('RIGHT', 'OUTER'), ('', 'INNER'), ('RIGHT', ''), ('', '')}</span>
|
||||
|
||||
|
||||
</div>
|
||||
|
|
docs/sqlglot/parser.html (28352)
File diff suppressed because one or more lines are too long (3 files)
File diff suppressed because it is too large
|
@@ -48,7 +48,7 @@ def _location_property_sql(self: Athena.Generator, e: exp.LocationProperty):
|
|||
),
|
||||
None,
|
||||
)
|
||||
if table_type_property and table_type_property.text("value") == "iceberg":
|
||||
if table_type_property and table_type_property.text("value").lower() == "iceberg":
|
||||
prop_name = "location"
|
||||
|
||||
return f"{prop_name}={self.sql(e, 'this')}"
|
||||
|
@ -132,6 +132,7 @@ class Athena(Trino):
|
|||
TRANSFORMS = {
|
||||
**Trino.Generator.TRANSFORMS,
|
||||
exp.FileFormatProperty: lambda self, e: f"format={self.sql(e, 'this')}",
|
||||
exp.PartitionedByProperty: lambda self, e: f"partitioned_by={self.sql(e, 'this')}",
|
||||
exp.LocationProperty: _location_property_sql,
|
||||
}
|
||||
|
||||
|
|
|
@ -33,6 +33,8 @@ from sqlglot.tokens import TokenType
|
|||
if t.TYPE_CHECKING:
|
||||
from sqlglot._typing import E, Lit
|
||||
|
||||
from sqlglot.optimizer.annotate_types import TypeAnnotator
|
||||
|
||||
logger = logging.getLogger("sqlglot")
|
||||
|
||||
|
||||
|
@ -247,6 +249,26 @@ def _str_to_datetime_sql(
|
|||
return self.func(f"PARSE_{dtype}", fmt, this, expression.args.get("zone"))
|
||||
|
||||
|
||||
def _annotate_math_functions(self: TypeAnnotator, expression: E) -> E:
|
||||
"""
|
||||
Many BigQuery math functions such as CEIL, FLOOR etc follow this return type convention:
|
||||
+---------+---------+---------+------------+---------+
|
||||
| INPUT | INT64 | NUMERIC | BIGNUMERIC | FLOAT64 |
|
||||
+---------+---------+---------+------------+---------+
|
||||
| OUTPUT | FLOAT64 | NUMERIC | BIGNUMERIC | FLOAT64 |
|
||||
+---------+---------+---------+------------+---------+
|
||||
"""
|
||||
self._annotate_args(expression)
|
||||
|
||||
this: exp.Expression = expression.this
|
||||
|
||||
self._set_type(
|
||||
expression,
|
||||
exp.DataType.Type.DOUBLE if this.is_type(*exp.DataType.INTEGER_TYPES) else this.type,
|
||||
)
|
||||
return expression
|
||||
|
||||
|
||||
class BigQuery(Dialect):
|
||||
WEEK_OFFSET = -1
|
||||
UNNEST_COLUMN_ONLY = True
|
||||
|
@@ -293,6 +315,15 @@ class BigQuery(Dialect):
|
|||
# All set operations require either a DISTINCT or ALL specifier
|
||||
SET_OP_DISTINCT_BY_DEFAULT = dict.fromkeys((exp.Except, exp.Intersect, exp.Union), None)
|
||||
|
||||
ANNOTATORS = {
|
||||
**Dialect.ANNOTATORS,
|
||||
**{
|
||||
expr_type: lambda self, e: _annotate_math_functions(self, e)
|
||||
for expr_type in (exp.Floor, exp.Ceil, exp.Log, exp.Ln, exp.Sqrt, exp.Exp, exp.Round)
|
||||
},
|
||||
exp.Sign: lambda self, e: self._annotate_by_args(e, "this"),
|
||||
}
|
||||
|
||||
def normalize_identifier(self, expression: E) -> E:
|
||||
if (
|
||||
isinstance(expression, exp.Identifier)
|
||||
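A minimal usage sketch of the new BigQuery math annotators, assuming `annotate_types` accepts a `dialect` argument in this release (the literal inputs below are illustrative only):

from sqlglot import parse_one
from sqlglot.optimizer.annotate_types import annotate_types

# Per the table in the docstring above, FLOOR over an INT64 input is typed FLOAT64.
floor_expr = annotate_types(parse_one("FLOOR(3)", read="bigquery"), dialect="bigquery")
print(floor_expr.type)  # expected: DOUBLE (BigQuery FLOAT64)

# SIGN is annotated from its argument, so an integer input should stay integer-typed.
sign_expr = annotate_types(parse_one("SIGN(3)", read="bigquery"), dialect="bigquery")
print(sign_expr.type)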
|
|
|
@ -577,7 +577,6 @@ class Dialect(metaclass=_Dialect):
|
|||
exp.DataType.Type.DOUBLE: {
|
||||
exp.ApproxQuantile,
|
||||
exp.Avg,
|
||||
exp.Div,
|
||||
exp.Exp,
|
||||
exp.Ln,
|
||||
exp.Log,
|
||||
|
@ -689,9 +688,10 @@ class Dialect(metaclass=_Dialect):
|
|||
exp.GenerateTimestampArray: lambda self, e: self._annotate_with_type(
|
||||
e, exp.DataType.build("ARRAY<TIMESTAMP>")
|
||||
),
|
||||
exp.Greatest: lambda self, e: self._annotate_by_args(e, "this", "expressions"),
|
||||
exp.If: lambda self, e: self._annotate_by_args(e, "true", "false"),
|
||||
exp.Interval: lambda self, e: self._annotate_with_type(e, exp.DataType.Type.INTERVAL),
|
||||
exp.Least: lambda self, e: self._annotate_by_args(e, "expressions"),
|
||||
exp.Least: lambda self, e: self._annotate_by_args(e, "this", "expressions"),
|
||||
exp.Literal: lambda self, e: self._annotate_literal(e),
|
||||
exp.Map: lambda self, e: self._annotate_map(e),
|
||||
exp.Max: lambda self, e: self._annotate_by_args(e, "this", "expressions"),
|
||||
|
@ -1052,7 +1052,10 @@ def property_sql(self: Generator, expression: exp.Property) -> str:
|
|||
|
||||
|
||||
def str_position_sql(
|
||||
self: Generator, expression: exp.StrPosition, generate_instance: bool = False
|
||||
self: Generator,
|
||||
expression: exp.StrPosition,
|
||||
generate_instance: bool = False,
|
||||
str_position_func_name: str = "STRPOS",
|
||||
) -> str:
|
||||
this = self.sql(expression, "this")
|
||||
substr = self.sql(expression, "substr")
|
||||
|
@ -1065,7 +1068,7 @@ def str_position_sql(
|
|||
this = self.func("SUBSTR", this, position)
|
||||
position_offset = f" + {position} - 1"
|
||||
|
||||
return self.func("STRPOS", this, substr, instance) + position_offset
|
||||
return self.func(str_position_func_name, this, substr, instance) + position_offset
|
||||
|
||||
|
||||
def struct_extract_sql(self: Generator, expression: exp.StructExtract) -> str:
|
||||
|
|
|
@ -38,6 +38,7 @@ from sqlglot.dialects.dialect import (
|
|||
)
|
||||
from sqlglot.helper import seq_get
|
||||
from sqlglot.tokens import TokenType
|
||||
from sqlglot.parser import binary_range_parser
|
||||
|
||||
DATETIME_DELTA = t.Union[
|
||||
exp.DateAdd, exp.TimeAdd, exp.DatetimeAdd, exp.TsOrDsAdd, exp.DateSub, exp.DatetimeSub
|
||||
|
@ -289,6 +290,10 @@ class DuckDB(Dialect):
|
|||
KEYWORDS = {
|
||||
**tokens.Tokenizer.KEYWORDS,
|
||||
"//": TokenType.DIV,
|
||||
"**": TokenType.DSTAR,
|
||||
"^@": TokenType.CARET_AT,
|
||||
"@>": TokenType.AT_GT,
|
||||
"<@": TokenType.LT_AT,
|
||||
"ATTACH": TokenType.COMMAND,
|
||||
"BINARY": TokenType.VARBINARY,
|
||||
"BITSTRING": TokenType.BIT,
|
||||
|
@@ -325,6 +330,19 @@ class DuckDB(Dialect):
|
|||
**parser.Parser.BITWISE,
|
||||
TokenType.TILDA: exp.RegexpLike,
|
||||
}
|
||||
BITWISE.pop(TokenType.CARET)
|
||||
|
||||
RANGE_PARSERS = {
|
||||
**parser.Parser.RANGE_PARSERS,
|
||||
TokenType.DAMP: binary_range_parser(exp.ArrayOverlaps),
|
||||
TokenType.CARET_AT: binary_range_parser(exp.StartsWith),
|
||||
}
|
||||
|
||||
EXPONENT = {
|
||||
**parser.Parser.EXPONENT,
|
||||
TokenType.CARET: exp.Pow,
|
||||
TokenType.DSTAR: exp.Pow,
|
||||
}
|
||||
|
||||
FUNCTIONS_WITH_ALIASED_ARGS = {*parser.Parser.FUNCTIONS_WITH_ALIASED_ARGS, "STRUCT_PACK"}
|
||||
|
||||
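A short sketch of the new DuckDB operator handling, based on the tokenizer and parser entries above (the exact generated SQL is not asserted here):

import sqlglot
from sqlglot import exp

# '**' and '^' both parse as exponentiation now that CARET moved from BITWISE to EXPONENT.
assert isinstance(sqlglot.parse_one("2 ** 3", read="duckdb"), exp.Pow)
assert isinstance(sqlglot.parse_one("2 ^ 3", read="duckdb"), exp.Pow)

# '^@' is parsed as a starts-with predicate via the new RANGE_PARSERS entry.
print(repr(sqlglot.parse_one("col ^@ 'pre'", read="duckdb")))  # exp.StartsWith

# Per the v25.24.2 changelog, '~~' maps to LIKE and '~~~' to GLOB for DuckDB as well.
print(sqlglot.transpile("a ~~ 'x%'", read="duckdb", write="duckdb")[0])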
|
@ -480,7 +498,6 @@ class DuckDB(Dialect):
|
|||
**generator.Generator.TRANSFORMS,
|
||||
exp.ApproxDistinct: approx_count_distinct_sql,
|
||||
exp.Array: inline_array_unless_query,
|
||||
exp.ArrayContainsAll: rename_func("ARRAY_HAS_ALL"),
|
||||
exp.ArrayFilter: rename_func("LIST_FILTER"),
|
||||
exp.ArraySize: rename_func("ARRAY_LENGTH"),
|
||||
exp.ArgMax: arg_max_or_min_no_count("ARG_MAX"),
|
||||
|
@ -549,6 +566,7 @@ class DuckDB(Dialect):
|
|||
exp.ReturnsProperty: lambda self, e: "TABLE" if isinstance(e.this, exp.Schema) else "",
|
||||
exp.Rand: rename_func("RANDOM"),
|
||||
exp.SafeDivide: no_safe_divide_sql,
|
||||
exp.SHA: rename_func("SHA1"),
|
||||
exp.SHA2: sha256_sql,
|
||||
exp.Split: rename_func("STR_SPLIT"),
|
||||
exp.SortArray: _sort_array_sql,
|
||||
|
|
|
@ -300,7 +300,6 @@ class MySQL(Dialect):
|
|||
"DAYOFMONTH": lambda args: exp.DayOfMonth(this=exp.TsOrDsToDate(this=seq_get(args, 0))),
|
||||
"DAYOFWEEK": lambda args: exp.DayOfWeek(this=exp.TsOrDsToDate(this=seq_get(args, 0))),
|
||||
"DAYOFYEAR": lambda args: exp.DayOfYear(this=exp.TsOrDsToDate(this=seq_get(args, 0))),
|
||||
"INSTR": lambda args: exp.StrPosition(substr=seq_get(args, 1), this=seq_get(args, 0)),
|
||||
"FROM_UNIXTIME": build_formatted_time(exp.UnixToTime, "mysql"),
|
||||
"ISNULL": isnull_to_is_null,
|
||||
"LOCATE": locate_to_strposition,
|
||||
|
|
|
@ -95,6 +95,7 @@ class Oracle(Dialect):
|
|||
"(+)": TokenType.JOIN_MARKER,
|
||||
"BINARY_DOUBLE": TokenType.DOUBLE,
|
||||
"BINARY_FLOAT": TokenType.FLOAT,
|
||||
"BULK COLLECT INTO": TokenType.BULK_COLLECT_INTO,
|
||||
"COLUMNS": TokenType.COLUMN,
|
||||
"MATCH_RECOGNIZE": TokenType.MATCH_RECOGNIZE,
|
||||
"MINUS": TokenType.EXCEPT,
|
||||
|
@ -241,6 +242,24 @@ class Oracle(Dialect):
|
|||
on_condition=self._parse_on_condition(),
|
||||
)
|
||||
|
||||
def _parse_into(self) -> t.Optional[exp.Into]:
|
||||
# https://docs.oracle.com/en/database/oracle/oracle-database/19/lnpls/SELECT-INTO-statement.html
|
||||
bulk_collect = self._match(TokenType.BULK_COLLECT_INTO)
|
||||
if not bulk_collect and not self._match(TokenType.INTO):
|
||||
return None
|
||||
|
||||
index = self._index
|
||||
|
||||
expressions = self._parse_expressions()
|
||||
if len(expressions) == 1:
|
||||
self._retreat(index)
|
||||
self._match(TokenType.TABLE)
|
||||
return self.expression(
|
||||
exp.Into, this=self._parse_table(schema=True), bulk_collect=bulk_collect
|
||||
)
|
||||
|
||||
return self.expression(exp.Into, bulk_collect=bulk_collect, expressions=expressions)
|
||||
|
||||
class Generator(generator.Generator):
|
||||
LOCKING_READS_SUPPORTED = True
|
||||
JOIN_HINTS = False
|
||||
|
@@ -344,3 +363,10 @@ class Oracle(Dialect):
|
|||
def coalesce_sql(self, expression: exp.Coalesce) -> str:
|
||||
func_name = "NVL" if expression.args.get("is_nvl") else "COALESCE"
|
||||
return rename_func(func_name)(self, expression)
|
||||
|
||||
def into_sql(self, expression: exp.Into) -> str:
|
||||
into = "INTO" if not expression.args.get("bulk_collect") else "BULK COLLECT INTO"
|
||||
if expression.this:
|
||||
return f"{self.seg(into)} {self.sql(expression, 'this')}"
|
||||
|
||||
return f"{self.seg(into)} {self.expressions(expression)}"
|
||||
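A rough round-trip sketch of the new BULK COLLECT INTO support; the table and variable names below are made up purely to exercise the syntax:

import sqlglot

sql = "SELECT x BULK COLLECT INTO l_rows FROM t"  # hypothetical identifiers
print(sqlglot.transpile(sql, read="oracle", write="oracle")[0])
# expected to preserve the BULK COLLECT INTO clause instead of failing to parse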
|
|
|
@ -286,9 +286,6 @@ class Postgres(Dialect):
|
|||
|
||||
KEYWORDS = {
|
||||
**tokens.Tokenizer.KEYWORDS,
|
||||
"~~": TokenType.LIKE,
|
||||
"~~*": TokenType.ILIKE,
|
||||
"~*": TokenType.IRLIKE,
|
||||
"~": TokenType.RLIKE,
|
||||
"@@": TokenType.DAT,
|
||||
"@>": TokenType.AT_GT,
|
||||
|
@ -386,12 +383,10 @@ class Postgres(Dialect):
|
|||
|
||||
RANGE_PARSERS = {
|
||||
**parser.Parser.RANGE_PARSERS,
|
||||
TokenType.AT_GT: binary_range_parser(exp.ArrayContainsAll),
|
||||
TokenType.DAMP: binary_range_parser(exp.ArrayOverlaps),
|
||||
TokenType.DAT: lambda self, this: self.expression(
|
||||
exp.MatchAgainst, this=self._parse_bitwise(), expressions=[this]
|
||||
),
|
||||
TokenType.LT_AT: binary_range_parser(exp.ArrayContainsAll, reverse_args=True),
|
||||
TokenType.OPERATOR: lambda self, this: self._parse_operator(this),
|
||||
}
|
||||
|
||||
|
@ -489,8 +484,6 @@ class Postgres(Dialect):
|
|||
**generator.Generator.TRANSFORMS,
|
||||
exp.AnyValue: any_value_to_max_sql,
|
||||
exp.ArrayConcat: lambda self, e: self.arrayconcat_sql(e, name="ARRAY_CAT"),
|
||||
exp.ArrayContainsAll: lambda self, e: self.binary(e, "@>"),
|
||||
exp.ArrayOverlaps: lambda self, e: self.binary(e, "&&"),
|
||||
exp.ArrayFilter: filter_array_using_unnest,
|
||||
exp.ArraySize: lambda self, e: self.func("ARRAY_LENGTH", e.this, e.expression or "1"),
|
||||
exp.BitwiseXor: lambda self, e: self.binary(e, "#"),
|
||||
|
|
|
@ -166,35 +166,6 @@ def _unix_to_time_sql(self: Presto.Generator, expression: exp.UnixToTime) -> str
|
|||
return f"FROM_UNIXTIME(CAST({timestamp} AS DOUBLE) / POW(10, {scale}))"
|
||||
|
||||
|
||||
def _jsonextract_sql(self: Presto.Generator, expression: exp.JSONExtract) -> str:
|
||||
is_json_extract = self.dialect.settings.get("variant_extract_is_json_extract", True)
|
||||
|
||||
# Generate JSON_EXTRACT unless the user has configured that a Snowflake / Databricks
|
||||
# VARIANT extract (e.g. col:x.y) should map to dot notation (i.e ROW access) in Presto/Trino
|
||||
if not expression.args.get("variant_extract") or is_json_extract:
|
||||
return self.func(
|
||||
"JSON_EXTRACT", expression.this, expression.expression, *expression.expressions
|
||||
)
|
||||
|
||||
this = self.sql(expression, "this")
|
||||
|
||||
# Convert the JSONPath extraction `JSON_EXTRACT(col, '$.x.y) to a ROW access col.x.y
|
||||
segments = []
|
||||
for path_key in expression.expression.expressions[1:]:
|
||||
if not isinstance(path_key, exp.JSONPathKey):
|
||||
# Cannot transpile subscripts, wildcards etc to dot notation
|
||||
self.unsupported(f"Cannot transpile JSONPath segment '{path_key}' to ROW access")
|
||||
continue
|
||||
key = path_key.this
|
||||
if not exp.SAFE_IDENTIFIER_RE.match(key):
|
||||
key = f'"{key}"'
|
||||
segments.append(f".{key}")
|
||||
|
||||
expr = "".join(segments)
|
||||
|
||||
return f"{this}{expr}"
|
||||
|
||||
|
||||
def _to_int(self: Presto.Generator, expression: exp.Expression) -> exp.Expression:
|
||||
if not expression.type:
|
||||
from sqlglot.optimizer.annotate_types import annotate_types
|
||||
|
@ -435,7 +406,7 @@ class Presto(Dialect):
|
|||
exp.If: if_sql(),
|
||||
exp.ILike: no_ilike_sql,
|
||||
exp.Initcap: _initcap_sql,
|
||||
exp.JSONExtract: _jsonextract_sql,
|
||||
exp.JSONExtract: lambda self, e: self.jsonextract_sql(e),
|
||||
exp.Last: _first_last_sql,
|
||||
exp.LastValue: _first_last_sql,
|
||||
exp.LastDay: lambda self, e: self.func("LAST_DAY_OF_MONTH", e.this),
|
||||
|
@@ -693,3 +664,33 @@ class Presto(Dialect):
|
|||
expression = t.cast(exp.Delete, expression.transform(unqualify_columns))
|
||||
|
||||
return super().delete_sql(expression)
|
||||
|
||||
def jsonextract_sql(self, expression: exp.JSONExtract) -> str:
|
||||
is_json_extract = self.dialect.settings.get("variant_extract_is_json_extract", True)
|
||||
|
||||
# Generate JSON_EXTRACT unless the user has configured that a Snowflake / Databricks
|
||||
# VARIANT extract (e.g. col:x.y) should map to dot notation (i.e ROW access) in Presto/Trino
|
||||
if not expression.args.get("variant_extract") or is_json_extract:
|
||||
return self.func(
|
||||
"JSON_EXTRACT", expression.this, expression.expression, *expression.expressions
|
||||
)
|
||||
|
||||
this = self.sql(expression, "this")
|
||||
|
||||
# Convert the JSONPath extraction `JSON_EXTRACT(col, '$.x.y) to a ROW access col.x.y
|
||||
segments = []
|
||||
for path_key in expression.expression.expressions[1:]:
|
||||
if not isinstance(path_key, exp.JSONPathKey):
|
||||
# Cannot transpile subscripts, wildcards etc to dot notation
|
||||
self.unsupported(
|
||||
f"Cannot transpile JSONPath segment '{path_key}' to ROW access"
|
||||
)
|
||||
continue
|
||||
key = path_key.this
|
||||
if not exp.SAFE_IDENTIFIER_RE.match(key):
|
||||
key = f'"{key}"'
|
||||
segments.append(f".{key}")
|
||||
|
||||
expr = "".join(segments)
|
||||
|
||||
return f"{this}{expr}"
|
||||
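A hedged sketch of the behaviour described in the comments above; it assumes dialect settings can be supplied with the "presto, key=value" string form, which may differ in your version:

import sqlglot

sql = "SELECT col:x.y FROM t"  # Snowflake VARIANT path access

# Default: the variant extract is generated as JSON_EXTRACT for Presto/Trino.
print(sqlglot.transpile(sql, read="snowflake", write="presto")[0])

# With the setting disabled, it should be generated as ROW (dot) access instead.
print(
    sqlglot.transpile(
        sql,
        read="snowflake",
        write="presto, variant_extract_is_json_extract=False",
    )[0]
)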
|
|
|
@ -134,6 +134,7 @@ class Redshift(Postgres):
|
|||
"TOP": TokenType.TOP,
|
||||
"UNLOAD": TokenType.COMMAND,
|
||||
"VARBYTE": TokenType.VARBINARY,
|
||||
"BINARY VARYING": TokenType.VARBINARY,
|
||||
}
|
||||
KEYWORDS.pop("VALUES")
|
||||
|
||||
|
@ -184,6 +185,7 @@ class Redshift(Postgres):
|
|||
exp.DateDiff: date_delta_sql("DATEDIFF"),
|
||||
exp.DistKeyProperty: lambda self, e: self.func("DISTKEY", e.this),
|
||||
exp.DistStyleProperty: lambda self, e: self.naked_property(e),
|
||||
exp.Explode: lambda self, e: self.explode_sql(e),
|
||||
exp.FromBase: rename_func("STRTOL"),
|
||||
exp.GeneratedAsIdentityColumnConstraint: generatedasidentitycolumnconstraint_sql,
|
||||
exp.JSONExtract: json_extract_segments("JSON_EXTRACT_PATH_TEXT"),
|
||||
|
@ -388,11 +390,16 @@ class Redshift(Postgres):
|
|||
args = expression.expressions
|
||||
num_args = len(args)
|
||||
|
||||
if num_args > 1:
|
||||
if num_args != 1:
|
||||
self.unsupported(f"Unsupported number of arguments in UNNEST: {num_args}")
|
||||
return ""
|
||||
|
||||
if isinstance(expression.find_ancestor(exp.From, exp.Join, exp.Select), exp.Select):
|
||||
self.unsupported("Unsupported UNNEST when not used in FROM/JOIN clauses")
|
||||
return ""
|
||||
|
||||
arg = self.sql(seq_get(args, 0))
|
||||
|
||||
alias = self.expressions(expression.args.get("alias"), key="columns", flat=True)
|
||||
return f"{arg} AS {alias}" if alias else arg
|
||||
|
||||
|
@ -434,3 +441,7 @@ class Redshift(Postgres):
|
|||
return super().array_sql(expression)
|
||||
|
||||
return rename_func("ARRAY")(self, expression)
|
||||
|
||||
def explode_sql(self, expression: exp.Explode) -> str:
|
||||
self.unsupported("Unsupported EXPLODE() function")
|
||||
return ""
|
||||
|
|
|
@ -113,7 +113,10 @@ class Spark(Spark2):
|
|||
"TIMESTAMP_LTZ": _build_as_cast("TIMESTAMP_LTZ"),
|
||||
"TIMESTAMP_NTZ": _build_as_cast("TIMESTAMP_NTZ"),
|
||||
"TRY_ELEMENT_AT": lambda args: exp.Bracket(
|
||||
this=seq_get(args, 0), expressions=ensure_list(seq_get(args, 1)), safe=True
|
||||
this=seq_get(args, 0),
|
||||
expressions=ensure_list(seq_get(args, 1)),
|
||||
offset=1,
|
||||
safe=True,
|
||||
),
|
||||
}
|
||||
|
||||
|
@@ -172,7 +175,7 @@ class Spark(Spark2):
|
|||
|
||||
def bracket_sql(self, expression: exp.Bracket) -> str:
|
||||
if expression.args.get("safe"):
|
||||
key = seq_get(self.bracket_offset_expressions(expression), 0)
|
||||
key = seq_get(self.bracket_offset_expressions(expression, index_offset=1), 0)
|
||||
return self.func("TRY_ELEMENT_AT", expression.this, key)
|
||||
|
||||
return super().bracket_sql(expression)
|
||||
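A small sketch of the fix: TRY_ELEMENT_AT is 1-based, and with offset=1 recorded on the Bracket node the index should no longer shift on a Spark round trip (expected output shown as a comment, not verified here):

import sqlglot

print(sqlglot.transpile("SELECT TRY_ELEMENT_AT(arr, 1)", read="spark", write="spark")[0])
# expected: SELECT TRY_ELEMENT_AT(arr, 1)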
|
|
|
@ -13,7 +13,7 @@ from sqlglot.dialects.dialect import (
|
|||
unit_to_str,
|
||||
)
|
||||
from sqlglot.dialects.hive import Hive
|
||||
from sqlglot.helper import seq_get
|
||||
from sqlglot.helper import seq_get, ensure_list
|
||||
from sqlglot.transforms import (
|
||||
preprocess,
|
||||
remove_unique_constraints,
|
||||
|
@ -21,6 +21,11 @@ from sqlglot.transforms import (
|
|||
move_schema_columns_to_partitioned_by,
|
||||
)
|
||||
|
||||
if t.TYPE_CHECKING:
|
||||
from sqlglot._typing import E
|
||||
|
||||
from sqlglot.optimizer.annotate_types import TypeAnnotator
|
||||
|
||||
|
||||
def _map_sql(self: Spark2.Generator, expression: exp.Map) -> str:
|
||||
keys = expression.args.get("keys")
|
||||
|
@ -110,10 +115,48 @@ def temporary_storage_provider(expression: exp.Expression) -> exp.Expression:
|
|||
return expression
|
||||
|
||||
|
||||
def _annotate_by_similar_args(
|
||||
self: TypeAnnotator, expression: E, *args: str, target_type: exp.DataType | exp.DataType.Type
|
||||
) -> E:
|
||||
"""
|
||||
Infers the type of the expression according to the following rules:
|
||||
- If all args are of the same type OR any arg is of target_type, the expr is inferred as such
|
||||
- If any arg is of UNKNOWN type and none of target_type, the expr is inferred as UNKNOWN
|
||||
"""
|
||||
self._annotate_args(expression)
|
||||
|
||||
expressions: t.List[exp.Expression] = []
|
||||
for arg in args:
|
||||
arg_expr = expression.args.get(arg)
|
||||
expressions.extend(expr for expr in ensure_list(arg_expr) if expr)
|
||||
|
||||
last_datatype = None
|
||||
|
||||
has_unknown = False
|
||||
for expr in expressions:
|
||||
if expr.is_type(exp.DataType.Type.UNKNOWN):
|
||||
has_unknown = True
|
||||
elif expr.is_type(target_type):
|
||||
has_unknown = False
|
||||
last_datatype = target_type
|
||||
break
|
||||
else:
|
||||
last_datatype = expr.type
|
||||
|
||||
self._set_type(expression, exp.DataType.Type.UNKNOWN if has_unknown else last_datatype)
|
||||
return expression
|
||||
|
||||
|
||||
class Spark2(Hive):
|
||||
ANNOTATORS = {
|
||||
**Hive.ANNOTATORS,
|
||||
exp.Substring: lambda self, e: self._annotate_by_args(e, "this"),
|
||||
exp.Concat: lambda self, e: _annotate_by_similar_args(
|
||||
self, e, "expressions", target_type=exp.DataType.Type.TEXT
|
||||
),
|
||||
exp.Pad: lambda self, e: _annotate_by_similar_args(
|
||||
self, e, "this", "fill_pattern", target_type=exp.DataType.Type.TEXT
|
||||
),
|
||||
}
|
||||
|
||||
class Parser(Hive.Parser):
|
||||
|
|
|
@ -15,6 +15,7 @@ from sqlglot.dialects.dialect import (
|
|||
no_tablesample_sql,
|
||||
no_trycast_sql,
|
||||
rename_func,
|
||||
str_position_sql,
|
||||
)
|
||||
from sqlglot.tokens import TokenType
|
||||
|
||||
|
@@ -186,6 +187,9 @@ class SQLite(Dialect):
|
|||
transforms.eliminate_semi_and_anti_joins,
|
||||
]
|
||||
),
|
||||
exp.StrPosition: lambda self, e: str_position_sql(
|
||||
self, e, str_position_func_name="INSTR"
|
||||
),
|
||||
exp.TableSample: no_tablesample_sql,
|
||||
exp.TimeStrToTime: lambda self, e: self.sql(e, "this"),
|
||||
exp.TimeToStr: lambda self, e: self.func("STRFTIME", e.args.get("format"), e.this),
|
||||
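A minimal sketch of the new StrPosition mapping; the expected output is an assumption based on the INSTR override above:

import sqlglot

print(sqlglot.transpile("SELECT STRPOS(col, 'x')", read="postgres", write="sqlite")[0])
# expected along the lines of: SELECT INSTR(col, 'x')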
|
@ -251,7 +255,7 @@ class SQLite(Dialect):
|
|||
elif unit == "NANOSECOND":
|
||||
sql = f"{sql} * 8640000000000.0"
|
||||
else:
|
||||
self.unsupported("DATEDIFF unsupported for '{unit}'.")
|
||||
self.unsupported(f"DATEDIFF unsupported for '{unit}'.")
|
||||
|
||||
return f"CAST({sql} AS INTEGER)"
|
||||
|
||||
|
|
|
@ -1,8 +1,9 @@
|
|||
from __future__ import annotations
|
||||
|
||||
from sqlglot import exp
|
||||
from sqlglot import exp, parser
|
||||
from sqlglot.dialects.dialect import merge_without_target_sql, trim_sql, timestrtotime_sql
|
||||
from sqlglot.dialects.presto import Presto
|
||||
from sqlglot.tokens import TokenType
|
||||
|
||||
|
||||
class Trino(Presto):
|
||||
|
@ -13,8 +14,30 @@ class Trino(Presto):
|
|||
FUNCTION_PARSERS = {
|
||||
**Presto.Parser.FUNCTION_PARSERS,
|
||||
"TRIM": lambda self: self._parse_trim(),
|
||||
"JSON_QUERY": lambda self: self._parse_json_query(),
|
||||
}
|
||||
|
||||
JSON_QUERY_OPTIONS: parser.OPTIONS_TYPE = {
|
||||
**dict.fromkeys(
|
||||
("WITH", "WITHOUT"),
|
||||
(
|
||||
("CONDITIONAL", "WRAPPER"),
|
||||
("CONDITIONAL", "ARRAY", "WRAPPED"),
|
||||
("UNCONDITIONAL", "WRAPPER"),
|
||||
("UNCONDITIONAL", "ARRAY", "WRAPPER"),
|
||||
),
|
||||
),
|
||||
}
|
||||
|
||||
def _parse_json_query(self):
|
||||
return self.expression(
|
||||
exp.JSONExtract,
|
||||
this=self._parse_bitwise(),
|
||||
expression=self._match(TokenType.COMMA) and self._parse_bitwise(),
|
||||
option=self._parse_var_from_options(self.JSON_QUERY_OPTIONS, raise_unmatched=False),
|
||||
json_query=True,
|
||||
)
|
||||
|
||||
class Generator(Presto.Generator):
|
||||
TRANSFORMS = {
|
||||
**Presto.Generator.TRANSFORMS,
|
||||
|
@ -23,6 +46,7 @@ class Trino(Presto):
|
|||
exp.Merge: merge_without_target_sql,
|
||||
exp.TimeStrToTime: lambda self, e: timestrtotime_sql(self, e, include_precision=True),
|
||||
exp.Trim: trim_sql,
|
||||
exp.JSONExtract: lambda self, e: self.jsonextract_sql(e),
|
||||
}
|
||||
|
||||
SUPPORTED_JSON_PATH_PARTS = {
|
||||
|
@@ -31,5 +55,15 @@ class Trino(Presto):
|
|||
exp.JSONPathSubscript,
|
||||
}
|
||||
|
||||
def jsonextract_sql(self, expression: exp.JSONExtract) -> str:
|
||||
if not expression.args.get("json_query"):
|
||||
return super().jsonextract_sql(expression)
|
||||
|
||||
json_path = self.sql(expression, "expression")
|
||||
option = self.sql(expression, "option")
|
||||
option = f" {option}" if option else ""
|
||||
|
||||
return self.func("JSON_QUERY", expression.this, json_path + option)
|
||||
|
||||
class Tokenizer(Presto.Tokenizer):
|
||||
HEX_STRINGS = [("X'", "'")]
|
||||
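A hedged usage sketch of the new JSON_QUERY support; the wrapper clause is taken from the JSON_QUERY_OPTIONS table above and the round-tripped SQL is not asserted:

import sqlglot

sql = "SELECT JSON_QUERY(col, 'lax $.x' WITH UNCONDITIONAL WRAPPER)"
print(sqlglot.transpile(sql, read="trino", write="trino")[0])
# parses into exp.JSONExtract with json_query=True and the option attached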
|
|
|
@ -14,6 +14,9 @@ from heapq import heappop, heappush
|
|||
from sqlglot import Dialect, expressions as exp
|
||||
from sqlglot.helper import ensure_list
|
||||
|
||||
if t.TYPE_CHECKING:
|
||||
from sqlglot.dialects.dialect import DialectType
|
||||
|
||||
|
||||
@dataclass(frozen=True)
|
||||
class Insert:
|
||||
|
@ -63,6 +66,7 @@ def diff(
|
|||
target: exp.Expression,
|
||||
matchings: t.List[t.Tuple[exp.Expression, exp.Expression]] | None = None,
|
||||
delta_only: bool = False,
|
||||
copy: bool = True,
|
||||
**kwargs: t.Any,
|
||||
) -> t.List[Edit]:
|
||||
"""
|
||||
|
@ -91,6 +95,9 @@ def diff(
|
|||
Note: expression references in this list must refer to the same node objects that are
|
||||
referenced in source / target trees.
|
||||
delta_only: excludes all `Keep` nodes from the diff.
|
||||
copy: whether to copy the input expressions.
|
||||
Note: if this is set to false, the caller must ensure that there are no shared references
|
||||
in the two ASTs, otherwise the diffing algorithm may produce unexpected behavior.
|
||||
kwargs: additional arguments to pass to the ChangeDistiller instance.
|
||||
|
||||
Returns:
|
||||
|
@ -110,8 +117,8 @@ def diff(
|
|||
if id(old_node) in matching_ids
|
||||
}
|
||||
|
||||
source_copy = source.copy()
|
||||
target_copy = target.copy()
|
||||
source_copy = source.copy() if copy else source
|
||||
target_copy = target.copy() if copy else target
|
||||
|
||||
node_mappings = {
|
||||
**compute_node_mappings(source, source_copy),
|
||||
|
@ -129,12 +136,14 @@ def diff(
|
|||
|
||||
# The expression types for which Update edits are allowed.
|
||||
UPDATABLE_EXPRESSION_TYPES = (
|
||||
exp.Alias,
|
||||
exp.Boolean,
|
||||
exp.Column,
|
||||
exp.DataType,
|
||||
exp.Lambda,
|
||||
exp.Literal,
|
||||
exp.Table,
|
||||
exp.Column,
|
||||
exp.Lambda,
|
||||
exp.Window,
|
||||
)
|
||||
|
||||
IGNORED_LEAF_EXPRESSION_TYPES = (exp.Identifier,)
|
||||
|
@@ -147,10 +156,10 @@ class ChangeDistiller:
|
|||
Chawathe et al. described in http://ilpubs.stanford.edu:8090/115/1/1995-46.pdf.
|
||||
"""
|
||||
|
||||
def __init__(self, f: float = 0.6, t: float = 0.6) -> None:
|
||||
def __init__(self, f: float = 0.6, t: float = 0.6, dialect: DialectType = None) -> None:
|
||||
self.f = f
|
||||
self.t = t
|
||||
self._sql_generator = Dialect().generator()
|
||||
self._sql_generator = Dialect.get_or_raise(dialect).generator()
|
||||
|
||||
def diff(
|
||||
self,
|
||||
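A rough sketch of the new diff knobs, assuming extra keyword arguments such as dialect are forwarded to ChangeDistiller as the docstring above states:

from sqlglot import parse_one
from sqlglot.diff import diff

source = parse_one("SELECT a FROM t")
target = parse_one("SELECT a, b FROM t")

# delta_only drops Keep edits, copy=False reuses the input ASTs instead of copying them,
# and dialect is passed through to the ChangeDistiller used for matching.
edits = diff(source, target, delta_only=True, copy=False, dialect="duckdb")
print(edits)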
|
|
|
@ -2106,7 +2106,13 @@ class PrimaryKey(Expression):
|
|||
# https://www.postgresql.org/docs/9.1/sql-selectinto.html
|
||||
# https://docs.aws.amazon.com/redshift/latest/dg/r_SELECT_INTO.html#r_SELECT_INTO-examples
|
||||
class Into(Expression):
|
||||
arg_types = {"this": True, "temporary": False, "unlogged": False}
|
||||
arg_types = {
|
||||
"this": False,
|
||||
"temporary": False,
|
||||
"unlogged": False,
|
||||
"bulk_collect": False,
|
||||
"expressions": False,
|
||||
}
|
||||
|
||||
|
||||
class From(Expression):
|
||||
|
@ -3284,6 +3290,200 @@ class Update(Expression):
|
|||
"limit": False,
|
||||
}
|
||||
|
||||
def table(
|
||||
self, expression: ExpOrStr, dialect: DialectType = None, copy: bool = True, **opts
|
||||
) -> Update:
|
||||
"""
|
||||
Set the table to update.
|
||||
|
||||
Example:
|
||||
>>> Update().table("my_table").set_("x = 1").sql()
|
||||
'UPDATE my_table SET x = 1'
|
||||
|
||||
Args:
|
||||
expression : the SQL code strings to parse.
|
||||
If a `Table` instance is passed, this is used as-is.
|
||||
If another `Expression` instance is passed, it will be wrapped in a `Table`.
|
||||
dialect: the dialect used to parse the input expression.
|
||||
copy: if `False`, modify this expression instance in-place.
|
||||
opts: other options to use to parse the input expressions.
|
||||
|
||||
Returns:
|
||||
The modified Update expression.
|
||||
"""
|
||||
return _apply_builder(
|
||||
expression=expression,
|
||||
instance=self,
|
||||
arg="this",
|
||||
into=Table,
|
||||
prefix=None,
|
||||
dialect=dialect,
|
||||
copy=copy,
|
||||
**opts,
|
||||
)
|
||||
|
||||
def set_(
|
||||
self,
|
||||
*expressions: ExpOrStr,
|
||||
append: bool = True,
|
||||
dialect: DialectType = None,
|
||||
copy: bool = True,
|
||||
**opts,
|
||||
) -> Update:
|
||||
"""
|
||||
Append to or set the SET expressions.
|
||||
|
||||
Example:
|
||||
>>> Update().table("my_table").set_("x = 1").sql()
|
||||
'UPDATE my_table SET x = 1'
|
||||
|
||||
Args:
|
||||
*expressions: the SQL code strings to parse.
|
||||
If `Expression` instance(s) are passed, they will be used as-is.
|
||||
Multiple expressions are combined with a comma.
|
||||
append: if `True`, add the new expressions to any existing SET expressions.
|
||||
Otherwise, this resets the expressions.
|
||||
dialect: the dialect used to parse the input expressions.
|
||||
copy: if `False`, modify this expression instance in-place.
|
||||
opts: other options to use to parse the input expressions.
|
||||
"""
|
||||
return _apply_list_builder(
|
||||
*expressions,
|
||||
instance=self,
|
||||
arg="expressions",
|
||||
append=append,
|
||||
into=Expression,
|
||||
prefix=None,
|
||||
dialect=dialect,
|
||||
copy=copy,
|
||||
**opts,
|
||||
)
|
||||
|
||||
def where(
|
||||
self,
|
||||
*expressions: t.Optional[ExpOrStr],
|
||||
append: bool = True,
|
||||
dialect: DialectType = None,
|
||||
copy: bool = True,
|
||||
**opts,
|
||||
) -> Select:
|
||||
"""
|
||||
Append to or set the WHERE expressions.
|
||||
|
||||
Example:
|
||||
>>> Update().table("tbl").set_("x = 1").where("x = 'a' OR x < 'b'").sql()
|
||||
"UPDATE tbl SET x = 1 WHERE x = 'a' OR x < 'b'"
|
||||
|
||||
Args:
|
||||
*expressions: the SQL code strings to parse.
|
||||
If an `Expression` instance is passed, it will be used as-is.
|
||||
Multiple expressions are combined with an AND operator.
|
||||
append: if `True`, AND the new expressions to any existing expression.
|
||||
Otherwise, this resets the expression.
|
||||
dialect: the dialect used to parse the input expressions.
|
||||
copy: if `False`, modify this expression instance in-place.
|
||||
opts: other options to use to parse the input expressions.
|
||||
|
||||
Returns:
|
||||
Select: the modified expression.
|
||||
"""
|
||||
return _apply_conjunction_builder(
|
||||
*expressions,
|
||||
instance=self,
|
||||
arg="where",
|
||||
append=append,
|
||||
into=Where,
|
||||
dialect=dialect,
|
||||
copy=copy,
|
||||
**opts,
|
||||
)
|
||||
|
||||
def from_(
|
||||
self,
|
||||
expression: t.Optional[ExpOrStr] = None,
|
||||
dialect: DialectType = None,
|
||||
copy: bool = True,
|
||||
**opts,
|
||||
) -> Update:
|
||||
"""
|
||||
Set the FROM expression.
|
||||
|
||||
Example:
|
||||
>>> Update().table("my_table").set_("x = 1").from_("baz").sql()
|
||||
'UPDATE my_table SET x = 1 FROM baz'
|
||||
|
||||
Args:
|
||||
expression : the SQL code strings to parse.
|
||||
If a `From` instance is passed, this is used as-is.
|
||||
If another `Expression` instance is passed, it will be wrapped in a `From`.
|
||||
If nothing is passed in then a from is not applied to the expression
|
||||
dialect: the dialect used to parse the input expression.
|
||||
copy: if `False`, modify this expression instance in-place.
|
||||
opts: other options to use to parse the input expressions.
|
||||
|
||||
Returns:
|
||||
The modified Update expression.
|
||||
"""
|
||||
if not expression:
|
||||
return maybe_copy(self, copy)
|
||||
|
||||
return _apply_builder(
|
||||
expression=expression,
|
||||
instance=self,
|
||||
arg="from",
|
||||
into=From,
|
||||
prefix="FROM",
|
||||
dialect=dialect,
|
||||
copy=copy,
|
||||
**opts,
|
||||
)
|
||||
|
||||
def with_(
|
||||
self,
|
||||
alias: ExpOrStr,
|
||||
as_: ExpOrStr,
|
||||
recursive: t.Optional[bool] = None,
|
||||
materialized: t.Optional[bool] = None,
|
||||
append: bool = True,
|
||||
dialect: DialectType = None,
|
||||
copy: bool = True,
|
||||
**opts,
|
||||
) -> Update:
|
||||
"""
|
||||
Append to or set the common table expressions.
|
||||
|
||||
Example:
|
||||
>>> Update().table("my_table").set_("x = 1").from_("baz").with_("baz", "SELECT id FROM foo").sql()
|
||||
'WITH baz AS (SELECT id FROM foo) UPDATE my_table SET x = 1 FROM baz'
|
||||
|
||||
Args:
|
||||
alias: the SQL code string to parse as the table name.
|
||||
If an `Expression` instance is passed, this is used as-is.
|
||||
as_: the SQL code string to parse as the table expression.
|
||||
If an `Expression` instance is passed, it will be used as-is.
|
||||
recursive: set the RECURSIVE part of the expression. Defaults to `False`.
|
||||
materialized: set the MATERIALIZED part of the expression.
|
||||
append: if `True`, add to any existing expressions.
|
||||
Otherwise, this resets the expressions.
|
||||
dialect: the dialect used to parse the input expression.
|
||||
copy: if `False`, modify this expression instance in-place.
|
||||
opts: other options to use to parse the input expressions.
|
||||
|
||||
Returns:
|
||||
The modified expression.
|
||||
"""
|
||||
return _apply_cte_builder(
|
||||
self,
|
||||
alias,
|
||||
as_,
|
||||
recursive=recursive,
|
||||
materialized=materialized,
|
||||
append=append,
|
||||
dialect=dialect,
|
||||
copy=copy,
|
||||
**opts,
|
||||
)
|
||||
|
||||
|
||||
class Values(UDTF):
|
||||
arg_types = {"expressions": True, "alias": False}
|
||||
|
@ -5732,6 +5932,8 @@ class JSONExtract(Binary, Func):
|
|||
"only_json_types": False,
|
||||
"expressions": False,
|
||||
"variant_extract": False,
|
||||
"json_query": False,
|
||||
"option": False,
|
||||
}
|
||||
_sql_names = ["JSON_EXTRACT"]
|
||||
is_var_len_args = True
|
@@ -6803,9 +7005,10 @@ def from_(expression: ExpOrStr, dialect: DialectType = None, **opts) -> Select:

def update(
    table: str | Table,
    properties: dict,
    properties: t.Optional[dict] = None,
    where: t.Optional[ExpOrStr] = None,
    from_: t.Optional[ExpOrStr] = None,
    with_: t.Optional[t.Dict[str, ExpOrStr]] = None,
    dialect: DialectType = None,
    **opts,
) -> Update:

@@ -6813,14 +7016,15 @@ def update(
    Creates an update statement.

    Example:
        >>> update("my_table", {"x": 1, "y": "2", "z": None}, from_="baz", where="id > 1").sql()
        "UPDATE my_table SET x = 1, y = '2', z = NULL FROM baz WHERE id > 1"
        >>> update("my_table", {"x": 1, "y": "2", "z": None}, from_="baz_cte", where="baz_cte.id > 1 and my_table.id = baz_cte.id", with_={"baz_cte": "SELECT id FROM foo"}).sql()
        "WITH baz_cte AS (SELECT id FROM foo) UPDATE my_table SET x = 1, y = '2', z = NULL FROM baz_cte WHERE baz_cte.id > 1 AND my_table.id = baz_cte.id"

    Args:
        *properties: dictionary of properties to set which are
        properties: dictionary of properties to SET which are
            auto converted to sql objects eg None -> NULL
        where: sql conditional parsed into a WHERE statement
        from_: sql statement parsed into a FROM statement
        with_: dictionary of CTE aliases / select statements to include in a WITH clause.
        dialect: the dialect used to parse the input expressions.
        **opts: other options to use to parse the input expressions.


@@ -6828,6 +7032,7 @@ def update(
        Update: the syntax tree for the UPDATE statement.
    """
    update_expr = Update(this=maybe_parse(table, into=Table, dialect=dialect))
    if properties:
        update_expr.set(
            "expressions",
            [

@@ -6847,6 +7052,15 @@ def update(
            "where",
            maybe_parse(where, into=Where, dialect=dialect, prefix="WHERE", **opts),
        )
    if with_:
        cte_list = [
            alias_(CTE(this=maybe_parse(qry, dialect=dialect, **opts)), alias, table=True)
            for alias, qry in with_.items()
        ]
        update_expr.set(
            "with",
            With(expressions=cte_list),
        )
    return update_expr
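Because `properties` is now optional, the SET clause can also be supplied afterwards through the builder API. A minimal sketch, mirroring the builder tests later in this diff:

from sqlglot import exp

# properties can be omitted entirely and filled in later with set_()
print(exp.update("my_table").set_("x = 1").sql())
# UPDATE my_table SET x = 1

print(exp.update("my_table").set_("x = 1").where("y = 2").sql())
# UPDATE my_table SET x = 1 WHERE y = 2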
@@ -114,6 +114,8 @@ class Generator(metaclass=_Generator):
        **JSON_PATH_PART_TRANSFORMS,
        exp.AllowedValuesProperty: lambda self,
        e: f"ALLOWED_VALUES {self.expressions(e, flat=True)}",
        exp.ArrayContainsAll: lambda self, e: self.binary(e, "@>"),
        exp.ArrayOverlaps: lambda self, e: self.binary(e, "&&"),
        exp.AutoRefreshProperty: lambda self, e: f"AUTO REFRESH {self.sql(e, 'this')}",
        exp.BackupProperty: lambda self, e: f"BACKUP {self.sql(e, 'this')}",
        exp.CaseSpecificColumnConstraint: lambda _,

@@ -203,7 +205,8 @@ class Generator(metaclass=_Generator):
    }

    # Whether null ordering is supported in order by
    # True: Full Support, None: No support, False: No support in window specifications
    # True: Full Support, None: No support, False: No support for certain cases
    # such as window specifications, aggregate functions etc
    NULL_ORDERING_SUPPORTED: t.Optional[bool] = True

    # Whether ignore nulls is inside the agg or outside.

@@ -671,9 +674,7 @@ class Generator(metaclass=_Generator):
        self._escaped_quote_end: str = (
            self.dialect.tokenizer_class.STRING_ESCAPES[0] + self.dialect.QUOTE_END
        )
        self._escaped_identifier_end: str = (
            self.dialect.tokenizer_class.IDENTIFIER_ESCAPES[0] + self.dialect.IDENTIFIER_END
        )
        self._escaped_identifier_end = self.dialect.IDENTIFIER_END * 2

        self._next_name = name_sequence("_t")

@@ -716,6 +717,16 @@ class Generator(metaclass=_Generator):

    def preprocess(self, expression: exp.Expression) -> exp.Expression:
        """Apply generic preprocessing transformations to a given expression."""
        expression = self._move_ctes_to_top_level(expression)

        if self.ENSURE_BOOLS:
            from sqlglot.transforms import ensure_bools

            expression = ensure_bools(expression)

        return expression

    def _move_ctes_to_top_level(self, expression: E) -> E:
        if (
            not expression.parent
            and type(expression) in self.EXPRESSIONS_WITHOUT_NESTED_CTES

@@ -724,12 +735,6 @@ class Generator(metaclass=_Generator):
            from sqlglot.transforms import move_ctes_to_top_level

            expression = move_ctes_to_top_level(expression)

        if self.ENSURE_BOOLS:
            from sqlglot.transforms import ensure_bools

            expression = ensure_bools(expression)

        return expression

    def unsupported(self, message: str) -> None:

@@ -1191,7 +1196,11 @@ class Generator(metaclass=_Generator):
        return f"WITH {recursive}{sql}"

    def cte_sql(self, expression: exp.CTE) -> str:
        alias = self.sql(expression, "alias")
        alias = expression.args.get("alias")
        if alias:
            alias.add_comments(expression.pop_comments())

        alias_sql = self.sql(expression, "alias")

        materialized = expression.args.get("materialized")
        if materialized is False:

@@ -1199,7 +1208,7 @@ class Generator(metaclass=_Generator):
        elif materialized:
            materialized = "MATERIALIZED "

        return f"{alias} AS {materialized or ''}{self.wrap(expression)}"
        return f"{alias_sql} AS {materialized or ''}{self.wrap(expression)}"

    def tablealias_sql(self, expression: exp.TableAlias) -> str:
        alias = self.sql(expression, "this")

@@ -1376,7 +1385,9 @@ class Generator(metaclass=_Generator):
        order = expression.args.get("order")

        if limit or order:
            select = exp.subquery(expression, "_l_0", copy=False).select("*", copy=False)
            select = self._move_ctes_to_top_level(
                exp.subquery(expression, "_l_0", copy=False).select("*", copy=False)
            )

            if limit:
                select = select.limit(limit.pop(), copy=False)

@@ -2037,6 +2048,7 @@ class Generator(metaclass=_Generator):
    def var_sql(self, expression: exp.Var) -> str:
        return self.sql(expression, "this")

    @unsupported_args("expressions")
    def into_sql(self, expression: exp.Into) -> str:
        temporary = " TEMPORARY" if expression.args.get("temporary") else ""
        unlogged = " UNLOGGED" if expression.args.get("unlogged") else ""

@@ -2345,6 +2357,18 @@ class Generator(metaclass=_Generator):
                    f"'{nulls_sort_change.strip()}' translation not supported in window functions"
                )
                nulls_sort_change = ""
            elif (
                self.NULL_ORDERING_SUPPORTED is False
                and (isinstance(expression.find_ancestor(exp.AggFunc, exp.Select), exp.AggFunc))
                and (
                    (asc and nulls_sort_change == " NULLS LAST")
                    or (desc and nulls_sort_change == " NULLS FIRST")
                )
            ):
                self.unsupported(
                    f"'{nulls_sort_change.strip()}' translation not supported for aggregate functions with {sort_order} sort order"
                )
                nulls_sort_change = ""
            elif self.NULL_ORDERING_SUPPORTED is None:
                if expression.this.is_int:
                    self.unsupported(

@@ -2497,6 +2521,11 @@ class Generator(metaclass=_Generator):
            self.sql(expression, "from", comment=False),
        )

        # If both the CTE and SELECT clauses have comments, generate the latter earlier
        if expression.args.get("with"):
            sql = self.maybe_comment(sql, expression)
            expression.pop_comments()

        sql = self.prepend_ctes(expression, sql)

        if not self.SUPPORTS_SELECT_INTO and into:

@@ -2644,11 +2673,13 @@ class Generator(metaclass=_Generator):
        high = self.sql(expression, "high")
        return f"{this} BETWEEN {low} AND {high}"

    def bracket_offset_expressions(self, expression: exp.Bracket) -> t.List[exp.Expression]:
    def bracket_offset_expressions(
        self, expression: exp.Bracket, index_offset: t.Optional[int] = None
    ) -> t.List[exp.Expression]:
        return apply_index_offset(
            expression.this,
            expression.expressions,
            self.dialect.INDEX_OFFSET - expression.args.get("offset", 0),
            (index_offset or self.dialect.INDEX_OFFSET) - expression.args.get("offset", 0),
        )

    def bracket_sql(self, expression: exp.Bracket) -> str:
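The new `preprocess`/`_move_ctes_to_top_level` split lets a limited set operation be wrapped in a subquery while its CTEs are still hoisted to the top level. A sketch of the observable behaviour for T-SQL, taken verbatim from the T-SQL test later in this diff:

import sqlglot

sql = "with x as (select 1) select * from x union select * from x order by 1 limit 0"
print(sqlglot.transpile(sql, read="tsql", write="tsql")[0])
# WITH x AS (SELECT 1 AS [1]) SELECT TOP 0 * FROM (SELECT * FROM x UNION SELECT * FROM x) AS _l_0 ORDER BY 1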
@@ -212,6 +212,7 @@ class Parser(metaclass=_Parser):
        ),
        "GLOB": lambda args: exp.Glob(this=seq_get(args, 1), expression=seq_get(args, 0)),
        "HEX": build_hex,
        "INSTR": lambda args: exp.StrPosition(this=seq_get(args, 0), substr=seq_get(args, 1)),
        "JSON_EXTRACT": build_extract_json_with_path(exp.JSONExtract),
        "JSON_EXTRACT_SCALAR": build_extract_json_with_path(exp.JSONExtractScalar),
        "JSON_EXTRACT_PATH_TEXT": build_extract_json_with_path(exp.JSONExtractScalar),

@@ -859,6 +860,7 @@ class Parser(metaclass=_Parser):
    }

    RANGE_PARSERS = {
        TokenType.AT_GT: binary_range_parser(exp.ArrayContainsAll),
        TokenType.BETWEEN: lambda self, this: self._parse_between(this),
        TokenType.GLOB: binary_range_parser(exp.Glob),
        TokenType.ILIKE: binary_range_parser(exp.ILike),

@@ -866,6 +868,7 @@ class Parser(metaclass=_Parser):
        TokenType.IRLIKE: binary_range_parser(exp.RegexpILike),
        TokenType.IS: lambda self, this: self._parse_is(this),
        TokenType.LIKE: binary_range_parser(exp.Like),
        TokenType.LT_AT: binary_range_parser(exp.ArrayContainsAll, reverse_args=True),
        TokenType.OVERLAPS: binary_range_parser(exp.Overlaps),
        TokenType.RLIKE: binary_range_parser(exp.RegexpLike),
        TokenType.SIMILAR_TO: binary_range_parser(exp.SimilarTo),

@@ -3027,15 +3030,20 @@ class Parser(metaclass=_Parser):
        comments = self._prev_comments
        recursive = self._match(TokenType.RECURSIVE)

        last_comments = None
        expressions = []
        while True:
            expressions.append(self._parse_cte())
            if last_comments:
                expressions[-1].add_comments(last_comments)

            if not self._match(TokenType.COMMA) and not self._match(TokenType.WITH):
                break
            else:
                self._match(TokenType.WITH)

            last_comments = self._prev_comments

        return self.expression(
            exp.With, comments=comments, expressions=expressions, recursive=recursive
        )

@@ -5075,6 +5083,7 @@ class Parser(metaclass=_Parser):
        elif isinstance(this, exp.Column) and not this.args.get("catalog"):
            this = self.expression(
                exp.Column,
                comments=this.comments,
                this=field,
                table=this.this,
                db=this.args.get("table"),

@@ -5226,7 +5235,9 @@ class Parser(metaclass=_Parser):
            subquery_predicate = self.SUBQUERY_PREDICATES.get(token_type)

            if subquery_predicate and self._curr.token_type in (TokenType.SELECT, TokenType.WITH):
                this = self.expression(subquery_predicate, this=self._parse_select())
                this = self.expression(
                    subquery_predicate, comments=comments, this=self._parse_select()
                )
                self._match_r_paren()
                return this
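Tracking `last_comments` between CTE definitions, together with the `cte_sql` change above, attaches comments to the CTE alias instead of dropping or misplacing them. A small sketch of the observable effect, taken from the transpile tests at the end of this diff:

import sqlglot

sql = "with a as /* comment */ ( select * from b) select * from a"
print(sqlglot.transpile(sql)[0])
# WITH a /* comment */ AS (SELECT * FROM b) SELECT * FROM a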
@@ -60,6 +60,7 @@ class TokenType(AutoName):
    PIPE_SLASH = auto()
    DPIPE_SLASH = auto()
    CARET = auto()
    CARET_AT = auto()
    TILDA = auto()
    ARROW = auto()
    DARROW = auto()

@@ -221,6 +222,7 @@ class TokenType(AutoName):
    AUTO_INCREMENT = auto()
    BEGIN = auto()
    BETWEEN = auto()
    BULK_COLLECT_INTO = auto()
    CACHE = auto()
    CASE = auto()
    CHARACTER_SET = auto()

@@ -649,6 +651,10 @@ class Tokenizer(metaclass=_Tokenizer):
        "<->": TokenType.LR_ARROW,
        "&&": TokenType.DAMP,
        "??": TokenType.DQMARK,
        "~~~": TokenType.GLOB,
        "~~": TokenType.LIKE,
        "~~*": TokenType.ILIKE,
        "~*": TokenType.IRLIKE,
        "ALL": TokenType.ALL,
        "ALWAYS": TokenType.ALWAYS,
        "AND": TokenType.AND,

@@ -818,6 +824,7 @@ class Tokenizer(metaclass=_Tokenizer):
        "SMALLINT": TokenType.SMALLINT,
        "INT128": TokenType.INT128,
        "HUGEINT": TokenType.INT128,
        "UHUGEINT": TokenType.UINT128,
        "INT2": TokenType.SMALLINT,
        "INTEGER": TokenType.INT,
        "INT": TokenType.INT,
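These new tokens back the DuckDB/Postgres operator and type support added in this release. A short sketch of how they surface in DuckDB, mirroring the DuckDB tests further down in this diff:

import sqlglot

for sql, expected in [
    ("a ** b", "POWER(a, b)"),
    ("a ~~~ b", "a GLOB b"),
    ("CAST(x AS UHUGEINT)", "CAST(x AS UINT128)"),
]:
    # Parse and regenerate in the DuckDB dialect; operators and the new
    # integer types are normalized as in the identity tests below.
    out = sqlglot.transpile(sql, read="duckdb", write="duckdb")[0]
    assert out == expected, (out, expected)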
@@ -8,6 +8,7 @@ from sqlglot.helper import find_new_name, name_sequence

if t.TYPE_CHECKING:
    from sqlglot._typing import E
    from sqlglot.generator import Generator


@@ -628,16 +629,28 @@ def eliminate_full_outer_join(expression: exp.Expression) -> exp.Expression:
        expression_copy = expression.copy()
        expression.set("limit", None)
        index, full_outer_join = full_outer_joins[0]
        full_outer_join.set("side", "left")
        expression_copy.args["joins"][index].set("side", "right")
        expression_copy.args.pop("with", None)  # remove CTEs from RIGHT side

        return exp.union(expression, expression_copy, copy=False)
        tables = (expression.args["from"].alias_or_name, full_outer_join.alias_or_name)
        join_conditions = full_outer_join.args.get("on") or exp.and_(
            *[
                exp.column(col, tables[0]).eq(exp.column(col, tables[1]))
                for col in full_outer_join.args.get("using")
            ]
        )

        full_outer_join.set("side", "left")
        anti_join_clause = exp.select("1").from_(expression.args["from"]).where(join_conditions)
        expression_copy.args["joins"][index].set("side", "right")
        expression_copy = expression_copy.where(exp.Exists(this=anti_join_clause).not_())
        expression_copy.args.pop("with", None)  # remove CTEs from RIGHT side
        expression.args.pop("order", None)  # remove order by from LEFT side

        return exp.union(expression, expression_copy, copy=False, distinct=False)

    return expression


def move_ctes_to_top_level(expression: exp.Expression) -> exp.Expression:
def move_ctes_to_top_level(expression: E) -> E:
    """
    Some dialects (e.g. Hive, T-SQL, Spark prior to version 3) only allow CTEs to be
    defined at the top-level, so for example queries like:
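With the anti-join predicate on the RIGHT side and `UNION ALL` instead of `UNION`, the rewrite no longer deduplicates rows that legitimately match on both sides. A sketch of the resulting SQL for a dialect without FULL OUTER JOIN support, mirroring the MySQL tests below:

import sqlglot

sql = "SELECT * FROM t1 FULL OUTER JOIN t2 ON t1.x = t2.x"
print(sqlglot.transpile(sql, read="postgres", write="mysql")[0])
# SELECT * FROM t1 LEFT OUTER JOIN t2 ON t1.x = t2.x UNION ALL SELECT * FROM t1 RIGHT OUTER JOIN t2 ON t1.x = t2.x WHERE NOT EXISTS(SELECT 1 FROM t1 WHERE t1.x = t2.x)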
@@ -62,8 +62,12 @@ class TestAthena(Validator):

        # CTAS goes to the Trino engine, where the table properties cant be encased in single quotes like they can for Hive
        # ref: https://docs.aws.amazon.com/athena/latest/ug/create-table-as.html#ctas-table-properties
        # They're also case sensitive and need to be lowercase, otherwise you get eg "Table properties [FORMAT] are not supported."
        self.validate_identity(
            "CREATE TABLE foo WITH (table_type='ICEBERG', external_location='s3://foo/') AS SELECT * FROM a"
            "CREATE TABLE foo WITH (table_type='ICEBERG', location='s3://foo/', format='orc', partitioning=ARRAY['bucket(id, 5)']) AS SELECT * FROM a"
        )
        self.validate_identity(
            "CREATE TABLE foo WITH (table_type='HIVE', external_location='s3://foo/', format='parquet', partitioned_by=ARRAY['ds']) AS SELECT * FROM a"
        )
        self.validate_identity(
            "CREATE TABLE foo AS WITH foo AS (SELECT a, b FROM bar) SELECT * FROM foo"
@@ -1985,3 +1985,17 @@ OPTIONS (
        self.validate_identity(
            "SELECT RANGE(CAST('2022-10-01 14:53:27 America/Los_Angeles' AS TIMESTAMP), CAST('2022-10-01 16:00:00 America/Los_Angeles' AS TIMESTAMP))"
        )

    def test_null_ordering(self):
        # Aggregate functions allow "NULLS FIRST" only with ascending order and
        # "NULLS LAST" only with descending
        for sort_order, null_order in (("ASC", "NULLS LAST"), ("DESC", "NULLS FIRST")):
            self.validate_all(
                f"SELECT color, ARRAY_AGG(id ORDER BY id {sort_order}) AS ids FROM colors GROUP BY 1",
                read={
                    "": f"SELECT color, ARRAY_AGG(id ORDER BY id {sort_order} {null_order}) AS ids FROM colors GROUP BY 1"
                },
                write={
                    "bigquery": f"SELECT color, ARRAY_AGG(id ORDER BY id {sort_order}) AS ids FROM colors GROUP BY 1",
                },
            )
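The same behaviour is easy to observe directly: for BigQuery, a NULLS clause that the aggregate cannot express is dropped (the generator reports it via `unsupported`, which logs a warning by default) rather than emitted. A minimal sketch, assuming the generic input SQL used in the test above:

import sqlglot

sql = "SELECT color, ARRAY_AGG(id ORDER BY id ASC NULLS LAST) AS ids FROM colors GROUP BY 1"
print(sqlglot.transpile(sql, write="bigquery")[0])
# SELECT color, ARRAY_AGG(id ORDER BY id ASC) AS ids FROM colors GROUP BY 1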
@@ -858,6 +858,28 @@ class TestDuckDB(Validator):
        self.validate_identity(
            "SELECT COALESCE(*COLUMNS(['a', 'b', 'c'])) AS result FROM (SELECT NULL AS a, 42 AS b, TRUE AS c)"
        )
        self.validate_all(
            "SELECT UNNEST(foo) AS x",
            write={
                "redshift": UnsupportedError,
            },
        )
        self.validate_identity("a ^ b", "POWER(a, b)")
        self.validate_identity("a ** b", "POWER(a, b)")
        self.validate_identity("a ~~~ b", "a GLOB b")
        self.validate_identity("a ~~ b", "a LIKE b")
        self.validate_identity("a @> b")
        self.validate_identity("a <@ b", "b @> a")
        self.validate_identity("a && b").assert_is(exp.ArrayOverlaps)
        self.validate_identity("a ^@ b", "STARTS_WITH(a, b)")
        self.validate_identity(
            "a !~~ b",
            "NOT a LIKE b",
        )
        self.validate_identity(
            "a !~~* b",
            "NOT a ILIKE b",
        )

    def test_array_index(self):
        with self.assertLogs(helper_logger) as cm:

@@ -967,6 +989,15 @@ class TestDuckDB(Validator):
                "spark": "DATE_FORMAT(x, 'yy-M-ss')",
            },
        )

        self.validate_all(
            "SHA1(x)",
            write={
                "duckdb": "SHA1(x)",
                "": "SHA(x)",
            },
        )

        self.validate_all(
            "STRFTIME(x, '%Y-%m-%d %H:%M:%S')",
            write={

@@ -1086,6 +1117,7 @@ class TestDuckDB(Validator):
        self.validate_identity("CAST(x AS INT16)", "CAST(x AS SMALLINT)")
        self.validate_identity("CAST(x AS NUMERIC(1, 2))", "CAST(x AS DECIMAL(1, 2))")
        self.validate_identity("CAST(x AS HUGEINT)", "CAST(x AS INT128)")
        self.validate_identity("CAST(x AS UHUGEINT)", "CAST(x AS UINT128)")
        self.validate_identity("CAST(x AS CHAR)", "CAST(x AS TEXT)")
        self.validate_identity("CAST(x AS BPCHAR)", "CAST(x AS TEXT)")
        self.validate_identity("CAST(x AS STRING)", "CAST(x AS TEXT)")
@@ -747,16 +747,28 @@ class TestMySQL(Validator):
            },
        )
        self.validate_all(
            "SELECT * FROM x LEFT JOIN y ON x.id = y.id UNION SELECT * FROM x RIGHT JOIN y ON x.id = y.id LIMIT 0",
            "SELECT * FROM x LEFT JOIN y ON x.id = y.id UNION ALL SELECT * FROM x RIGHT JOIN y ON x.id = y.id WHERE NOT EXISTS(SELECT 1 FROM x WHERE x.id = y.id) ORDER BY 1 LIMIT 0",
            read={
                "postgres": "SELECT * FROM x FULL JOIN y ON x.id = y.id LIMIT 0",
                "postgres": "SELECT * FROM x FULL JOIN y ON x.id = y.id ORDER BY 1 LIMIT 0",
            },
        )
        self.validate_all(
            # MySQL doesn't support FULL OUTER joins
            "WITH t1 AS (SELECT 1) SELECT * FROM t1 LEFT OUTER JOIN t2 ON t1.x = t2.x UNION SELECT * FROM t1 RIGHT OUTER JOIN t2 ON t1.x = t2.x",
            "SELECT * FROM t1 LEFT OUTER JOIN t2 ON t1.x = t2.x UNION ALL SELECT * FROM t1 RIGHT OUTER JOIN t2 ON t1.x = t2.x WHERE NOT EXISTS(SELECT 1 FROM t1 WHERE t1.x = t2.x)",
            read={
                "postgres": "WITH t1 AS (SELECT 1) SELECT * FROM t1 FULL OUTER JOIN t2 ON t1.x = t2.x",
                "postgres": "SELECT * FROM t1 FULL OUTER JOIN t2 ON t1.x = t2.x",
            },
        )
        self.validate_all(
            "SELECT * FROM t1 LEFT OUTER JOIN t2 USING (x) UNION ALL SELECT * FROM t1 RIGHT OUTER JOIN t2 USING (x) WHERE NOT EXISTS(SELECT 1 FROM t1 WHERE t1.x = t2.x)",
            read={
                "postgres": "SELECT * FROM t1 FULL OUTER JOIN t2 USING (x) ",
            },
        )
        self.validate_all(
            "SELECT * FROM t1 LEFT OUTER JOIN t2 USING (x, y) UNION ALL SELECT * FROM t1 RIGHT OUTER JOIN t2 USING (x, y) WHERE NOT EXISTS(SELECT 1 FROM t1 WHERE t1.x = t2.x AND t1.y = t2.y)",
            read={
                "postgres": "SELECT * FROM t1 FULL OUTER JOIN t2 USING (x, y) ",
            },
        )
        self.validate_all(
@@ -66,6 +66,15 @@ class TestOracle(Validator):
        self.validate_identity(
            "SELECT COUNT(1) INTO V_Temp FROM TABLE(CAST(somelist AS data_list)) WHERE col LIKE '%contact'"
        )
        self.validate_identity(
            "SELECT department_id INTO v_department_id FROM departments FETCH FIRST 1 ROWS ONLY"
        )
        self.validate_identity(
            "SELECT department_id BULK COLLECT INTO v_department_ids FROM departments"
        )
        self.validate_identity(
            "SELECT department_id, department_name BULK COLLECT INTO v_department_ids, v_department_names FROM departments"
        )
        self.validate_identity(
            "SELECT MIN(column_name) KEEP (DENSE_RANK FIRST ORDER BY column_name DESC) FROM table_name"
        )

@@ -102,6 +111,14 @@ class TestOracle(Validator):
            "SELECT * FROM t START WITH col CONNECT BY NOCYCLE PRIOR col1 = col2"
        )

        self.validate_all(
            "SELECT department_id, department_name INTO v_department_id, v_department_name FROM departments FETCH FIRST 1 ROWS ONLY",
            write={
                "oracle": "SELECT department_id, department_name INTO v_department_id, v_department_name FROM departments FETCH FIRST 1 ROWS ONLY",
                "postgres": UnsupportedError,
                "tsql": UnsupportedError,
            },
        )
        self.validate_all(
            "TRUNC(SYSDATE, 'YEAR')",
            write={
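Oracle's `BULK COLLECT INTO` is now tokenized and parsed, so PL/SQL-style selects round-trip. A minimal sketch mirroring the identity tests above:

import sqlglot

sql = "SELECT department_id BULK COLLECT INTO v_department_ids FROM departments"
print(sqlglot.parse_one(sql, read="oracle").sql(dialect="oracle"))
# SELECT department_id BULK COLLECT INTO v_department_ids FROM departments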
@@ -354,10 +354,10 @@ class TestPostgres(Validator):
        self.validate_all(
            "SELECT ARRAY[1, 2, 3] @> ARRAY[1, 2]",
            read={
                "duckdb": "SELECT ARRAY_HAS_ALL([1, 2, 3], [1, 2])",
                "duckdb": "SELECT [1, 2, 3] @> [1, 2]",
            },
            write={
                "duckdb": "SELECT ARRAY_HAS_ALL([1, 2, 3], [1, 2])",
                "duckdb": "SELECT [1, 2, 3] @> [1, 2]",
                "mysql": UnsupportedError,
                "postgres": "SELECT ARRAY[1, 2, 3] @> ARRAY[1, 2]",
            },

@@ -398,13 +398,6 @@ class TestPostgres(Validator):
                "postgres": "SELECT (data ->> 'en-US') AS acat FROM my_table",
            },
        )
        self.validate_all(
            "SELECT ARRAY[1, 2, 3] && ARRAY[1, 2]",
            write={
                "": "SELECT ARRAY_OVERLAPS(ARRAY(1, 2, 3), ARRAY(1, 2))",
                "postgres": "SELECT ARRAY[1, 2, 3] && ARRAY[1, 2]",
            },
        )
        self.validate_all(
            "SELECT JSON_EXTRACT_PATH_TEXT(x, k1, k2, k3) FROM t",
            read={

@@ -802,6 +795,7 @@ class TestPostgres(Validator):
        )
        self.validate_identity("SELECT OVERLAY(a PLACING b FROM 1)")
        self.validate_identity("SELECT OVERLAY(a PLACING b FROM 1 FOR 1)")
        self.validate_identity("ARRAY[1, 2, 3] && ARRAY[1, 2]").assert_is(exp.ArrayOverlaps)

    def test_ddl(self):
        # Checks that user-defined types are parsed into DataType instead of Identifier
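`@>` (contains-all) and `&&` (overlaps) are now parsed into dedicated expressions for both Postgres and DuckDB, so the operators are preserved instead of being rewritten into function calls. A minimal sketch mirroring the tests above:

import sqlglot

print(sqlglot.transpile("SELECT ARRAY[1, 2, 3] @> ARRAY[1, 2]", read="postgres", write="duckdb")[0])
# SELECT [1, 2, 3] @> [1, 2]

print(sqlglot.transpile("SELECT ARRAY[1, 2, 3] && ARRAY[1, 2]", read="postgres", write="postgres")[0])
# SELECT ARRAY[1, 2, 3] && ARRAY[1, 2]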
@@ -213,6 +213,12 @@ class TestRedshift(Validator):
                "redshift": "SELECT CAST('abc' AS VARBYTE)",
            },
        )
        self.validate_all(
            "CREATE TABLE a (b BINARY VARYING(10))",
            write={
                "redshift": "CREATE TABLE a (b VARBYTE(10))",
            },
        )
        self.validate_all(
            "SELECT 'abc'::CHARACTER",
            write={
@@ -2,7 +2,6 @@ from unittest import mock

from sqlglot import exp, parse_one
from sqlglot.dialects.dialect import Dialects
from sqlglot.helper import logger as helper_logger
from tests.dialects.test_dialect import Validator


@@ -294,17 +293,17 @@ TBLPROPERTIES (
            "SELECT STR_TO_MAP('a:1,b:2,c:3')",
            "SELECT STR_TO_MAP('a:1,b:2,c:3', ',', ':')",
        )

        with self.assertLogs(helper_logger):
            self.validate_all(
                "SELECT TRY_ELEMENT_AT(ARRAY(1, 2, 3), 2)",
                read={
                    "databricks": "SELECT TRY_ELEMENT_AT(ARRAY(1, 2, 3), 2)",
                    "presto": "SELECT ELEMENT_AT(ARRAY[1, 2, 3], 2)",
                },
                write={
                    "databricks": "SELECT TRY_ELEMENT_AT(ARRAY(1, 2, 3), 2)",
                    "duckdb": "SELECT ([1, 2, 3])[3]",
                    "spark": "SELECT TRY_ELEMENT_AT(ARRAY(1, 2, 3), 2)",
                    "duckdb": "SELECT ([1, 2, 3])[2]",
                    "presto": "SELECT ELEMENT_AT(ARRAY[1, 2, 3], 2)",
                },
            )
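The `index_offset` parameter added to `bracket_offset_expressions` fixes the off-by-one translation of element access between dialects. Spark's TRY_ELEMENT_AT and DuckDB list indexing are both 1-based, so the index should be preserved rather than bumped. A sketch mirroring the corrected expectation in the test above:

import sqlglot

print(sqlglot.transpile("SELECT TRY_ELEMENT_AT(ARRAY(1, 2, 3), 2)", read="spark", write="duckdb")[0])
# SELECT ([1, 2, 3])[2]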
@@ -26,6 +26,7 @@ class TestSQLite(Validator):
            """SELECT item AS "item", some AS "some" FROM data WHERE (item = 'value_1' COLLATE NOCASE) AND (some = 't' COLLATE NOCASE) ORDER BY item ASC LIMIT 1 OFFSET 0"""
        )
        self.validate_identity("SELECT * FROM GENERATE_SERIES(1, 5)")
        self.validate_identity("SELECT INSTR(haystack, needle)")

        self.validate_all("SELECT LIKE(y, x)", write={"sqlite": "SELECT x LIKE y"})
        self.validate_all("SELECT GLOB('*y*', 'xyz')", write={"sqlite": "SELECT 'xyz' GLOB '*y*'"})
@@ -4,6 +4,12 @@ from tests.dialects.test_dialect import Validator
class TestTrino(Validator):
    dialect = "trino"

    def test_trino(self):
        self.validate_identity("JSON_EXTRACT(content, json_path)")
        self.validate_identity("JSON_QUERY(content, 'lax $.HY.*')")
        self.validate_identity("JSON_QUERY(content, 'strict $.HY.*' WITH UNCONDITIONAL WRAPPER)")
        self.validate_identity("JSON_QUERY(content, 'strict $.HY.*' WITHOUT CONDITIONAL WRAPPER)")

    def test_trim(self):
        self.validate_identity("SELECT TRIM('!' FROM '!foo!')")
        self.validate_identity("SELECT TRIM(BOTH '$' FROM '$var$')")
@@ -8,6 +8,11 @@ class TestTSQL(Validator):
    dialect = "tsql"

    def test_tsql(self):
        self.validate_identity(
            "with x as (select 1) select * from x union select * from x order by 1 limit 0",
            "WITH x AS (SELECT 1 AS [1]) SELECT TOP 0 * FROM (SELECT * FROM x UNION SELECT * FROM x) AS _l_0 ORDER BY 1",
        )

        # https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms187879(v=sql.105)?redirectedfrom=MSDN
        # tsql allows .. which means use the default schema
        self.validate_identity("SELECT * FROM a..b")

@@ -46,6 +51,10 @@ class TestTSQL(Validator):
        self.validate_identity(
            "COPY INTO test_1 FROM 'path' WITH (FORMAT_NAME = test, FILE_TYPE = 'CSV', CREDENTIAL = (IDENTITY='Shared Access Signature', SECRET='token'), FIELDTERMINATOR = ';', ROWTERMINATOR = '0X0A', ENCODING = 'UTF8', DATEFORMAT = 'ymd', MAXERRORS = 10, ERRORFILE = 'errorsfolder', IDENTITY_INSERT = 'ON')"
        )
        self.validate_identity(
            'SELECT 1 AS "[x]"',
            "SELECT 1 AS [[x]]]",
        )
        self.assertEqual(
            annotate_types(self.validate_identity("SELECT 1 WHERE EXISTS(SELECT 1)")).sql("tsql"),
            "SELECT 1 WHERE EXISTS(SELECT 1)",
189 tests/fixtures/optimizer/annotate_functions.sql (vendored, new file)
@@ -0,0 +1,189 @@
--------------------------------------
-- Dialect
--------------------------------------
ABS(1);
INT;

ABS(1.5);
DOUBLE;

GREATEST(1, 2, 3);
INT;

GREATEST(1, 2.5, 3);
DOUBLE;

LEAST(1, 2, 3);
INT;

LEAST(1, 2.5, 3);
DOUBLE;

--------------------------------------
-- Spark2 / Spark3 / Databricks
--------------------------------------

# dialect: spark2, spark, databricks
SUBSTRING(tbl.str_col, 0, 0);
STRING;

# dialect: spark2, spark, databricks
SUBSTRING(tbl.bin_col, 0, 0);
BINARY;

# dialect: spark2, spark, databricks
CONCAT(tbl.bin_col, tbl.bin_col);
BINARY;

# dialect: spark2, spark, databricks
CONCAT(tbl.bin_col, tbl.str_col);
STRING;

# dialect: spark2, spark, databricks
CONCAT(tbl.str_col, tbl.bin_col);
STRING;

# dialect: spark2, spark, databricks
CONCAT(tbl.str_col, tbl.str_col);
STRING;

# dialect: spark2, spark, databricks
CONCAT(tbl.str_col, unknown);
STRING;

# dialect: spark2, spark, databricks
CONCAT(tbl.bin_col, unknown);
UNKNOWN;

# dialect: spark2, spark, databricks
CONCAT(unknown, unknown);
UNKNOWN;

# dialect: spark2, spark, databricks
LPAD(tbl.bin_col, 1, tbl.bin_col);
BINARY;

# dialect: spark2, spark, databricks
RPAD(tbl.bin_col, 1, tbl.bin_col);
BINARY;

# dialect: spark2, spark, databricks
LPAD(tbl.bin_col, 1, tbl.str_col);
STRING;

# dialect: spark2, spark, databricks
RPAD(tbl.bin_col, 1, tbl.str_col);
STRING;

# dialect: spark2, spark, databricks
LPAD(tbl.str_col, 1, tbl.bin_col);
STRING;

# dialect: spark2, spark, databricks
RPAD(tbl.str_col, 1, tbl.bin_col);
STRING;

# dialect: spark2, spark, databricks
LPAD(tbl.str_col, 1, tbl.str_col);
STRING;

# dialect: spark2, spark, databricks
RPAD(tbl.str_col, 1, tbl.str_col);
STRING;


--------------------------------------
-- BigQuery
--------------------------------------

# dialect: bigquery
SIGN(1);
INT;

# dialect: bigquery
SIGN(1.5);
DOUBLE;

# dialect: bigquery
CEIL(1);
DOUBLE;

# dialect: bigquery
CEIL(5.5);
DOUBLE;

# dialect: bigquery
CEIL(tbl.bignum_col);
BIGDECIMAL;

# dialect: bigquery
FLOOR(1);
DOUBLE;

# dialect: bigquery
FLOOR(5.5);
DOUBLE;

# dialect: bigquery
FLOOR(tbl.bignum_col);
BIGDECIMAL;

# dialect: bigquery
SQRT(1);
DOUBLE;

# dialect: bigquery
SQRT(5.5);
DOUBLE;

# dialect: bigquery
SQRT(tbl.bignum_col);
BIGDECIMAL;

# dialect: bigquery
LN(1);
DOUBLE;

# dialect: bigquery
LN(5.5);
DOUBLE;

# dialect: bigquery
LN(tbl.bignum_col);
BIGDECIMAL;

# dialect: bigquery
LOG(1);
DOUBLE;

# dialect: bigquery
LOG(5.5);
DOUBLE;

# dialect: bigquery
LOG(tbl.bignum_col);
BIGDECIMAL;

# dialect: bigquery
ROUND(1);
DOUBLE;

# dialect: bigquery
ROUND(5.5);
DOUBLE;

# dialect: bigquery
ROUND(tbl.bignum_col);
BIGDECIMAL;

# dialect: bigquery
EXP(1);
DOUBLE;

# dialect: bigquery
EXP(5.5);
DOUBLE;

# dialect: bigquery
EXP(tbl.bignum_col);
BIGDECIMAL;

31 tests/fixtures/pretty.sql (vendored)
@@ -418,3 +418,34 @@ INSERT FIRST
  SELECT
    salary
  FROM employees;

SELECT *
FROM foo
wHERE 1=1
AND
-- my comment
EXISTS (
  SELECT 1
  FROM bar
);
SELECT
  *
FROM foo
WHERE
  1 = 1 AND EXISTS(
    SELECT
      1
    FROM bar
  ) /* my comment */;

SELECT 1
FROM foo
WHERE 1=1
AND -- first comment
-- second comment
foo.a = 1;
SELECT
  1
FROM foo
WHERE
  1 = 1 AND /* first comment */ foo.a /* second comment */ = 1;
@@ -577,6 +577,36 @@ class TestBuild(unittest.TestCase):
                lambda: exp.update("tbl", {"x": 1}, from_="tbl2 cross join tbl3"),
                "UPDATE tbl SET x = 1 FROM tbl2 CROSS JOIN tbl3",
            ),
            (
                lambda: exp.update(
                    "my_table",
                    {"x": 1},
                    from_="baz",
                    where="my_table.id = baz.id",
                    with_={"baz": "SELECT id FROM foo UNION SELECT id FROM bar"},
                ),
                "WITH baz AS (SELECT id FROM foo UNION SELECT id FROM bar) UPDATE my_table SET x = 1 FROM baz WHERE my_table.id = baz.id",
            ),
            (
                lambda: exp.update("my_table").set_("x = 1"),
                "UPDATE my_table SET x = 1",
            ),
            (
                lambda: exp.update("my_table").set_("x = 1").where("y = 2"),
                "UPDATE my_table SET x = 1 WHERE y = 2",
            ),
            (
                lambda: exp.update("my_table").set_("a = 1").set_("b = 2"),
                "UPDATE my_table SET a = 1, b = 2",
            ),
            (
                lambda: exp.update("my_table")
                .set_("x = 1")
                .where("my_table.id = baz.id")
                .from_("baz")
                .with_("baz", "SELECT id FROM foo"),
                "WITH baz AS (SELECT id FROM foo) UPDATE my_table SET x = 1 FROM baz WHERE my_table.id = baz.id",
            ),
            (
                lambda: union("SELECT * FROM foo", "SELECT * FROM bla"),
                "SELECT * FROM foo UNION SELECT * FROM bla",
@@ -157,11 +157,20 @@ class TestDiff(unittest.TestCase):
        self._validate_delta_only(
            diff_delta_only(expr_src, expr_tgt),
            [
                Remove(parse_one("ROW_NUMBER()")),  # the Anonymous node
                Insert(parse_one("RANK()")),  # the Anonymous node
                Remove(parse_one("ROW_NUMBER()")),
                Insert(parse_one("RANK()")),
                Update(source=expr_src.selects[0], target=expr_tgt.selects[0]),
            ],
        )

        expr_src = parse_one("SELECT MAX(x) OVER (ORDER BY y) FROM z", "oracle")
        expr_tgt = parse_one("SELECT MAX(x) KEEP (DENSE_RANK LAST ORDER BY y) FROM z", "oracle")

        self._validate_delta_only(
            diff_delta_only(expr_src, expr_tgt),
            [Update(source=expr_src.selects[0], target=expr_tgt.selects[0])],
        )

    def test_pre_matchings(self):
        expr_src = parse_one("SELECT 1")
        expr_tgt = parse_one("SELECT 1, 2, 3, 4")

@@ -202,5 +211,34 @@ class TestDiff(unittest.TestCase):
            ],
        )

        expr_src = parse_one("SELECT 1 AS c1, 2 AS c2")
        expr_tgt = parse_one("SELECT 2 AS c1, 3 AS c2")

        self._validate_delta_only(
            diff_delta_only(expr_src, expr_tgt),
            [
                Remove(expression=exp.alias_(1, "c1")),
                Remove(expression=exp.Literal.number(1)),
                Insert(expression=exp.alias_(3, "c2")),
                Insert(expression=exp.Literal.number(3)),
                Update(source=exp.alias_(2, "c2"), target=exp.alias_(2, "c1")),
            ],
        )

    def test_dialect_aware_diff(self):
        from sqlglot.generator import logger

        with self.assertLogs(logger) as cm:
            # We want to assert there are no warnings, but the 'assertLogs' method does not support that.
            # Therefore, we are adding a dummy warning, and then we will assert it is the only warning.
            logger.warning("Dummy warning")

            expression = parse_one("SELECT foo FROM bar FOR UPDATE", dialect="oracle")
            self._validate_delta_only(
                diff_delta_only(expression, expression.copy(), dialect="oracle"), []
            )

            self.assertEqual(["WARNING:sqlglot:Dummy warning"], cm.output)

    def _validate_delta_only(self, actual_delta, expected_delta):
        self.assertEqual(set(actual_delta), set(expected_delta))
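Passing a dialect to the diff API matters when the SQL only parses or only generates cleanly for that dialect, as with Oracle's `FOR UPDATE` above. A minimal sketch, assuming the `dialect` keyword added in this release and the `delta_only` keyword on `sqlglot.diff.diff` (the test helper `diff_delta_only` presumably wraps the latter):

import sqlglot
from sqlglot.diff import diff

src = sqlglot.parse_one("SELECT foo FROM bar FOR UPDATE", dialect="oracle")
tgt = src.copy()

# An identical tree should produce an empty edit script, without generator warnings.
print(diff(src, tgt, delta_only=True, dialect="oracle"))  # []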
@@ -350,6 +350,7 @@ class TestExpressions(unittest.TestCase):
        )

        self.assertIsInstance(exp.func("instr", "x", "b", dialect="mysql"), exp.StrPosition)
        self.assertIsInstance(exp.func("instr", "x", "b", dialect="sqlite"), exp.StrPosition)
        self.assertIsInstance(exp.func("bla", 1, "foo"), exp.Anonymous)
        self.assertIsInstance(
            exp.func("cast", this=exp.Literal.number(5), to=exp.DataType.build("DOUBLE")),
@@ -54,6 +54,18 @@ def simplify(expression, **kwargs):
    return optimizer.simplify.simplify(expression, constant_propagation=True, **kwargs)


def annotate_functions(expression, **kwargs):
    from sqlglot.dialects import Dialect

    dialect = kwargs.get("dialect")
    schema = kwargs.get("schema")

    annotators = Dialect.get_or_raise(dialect).ANNOTATORS
    annotated = annotate_types(expression, annotators=annotators, schema=schema)

    return annotated.expressions[0]


class TestOptimizer(unittest.TestCase):
    maxDiff = None

@@ -787,6 +799,28 @@ FROM READ_CSV('tests/fixtures/optimizer/tpc-h/nation.csv.gz', 'delimiter', '|')
            with self.subTest(title):
                self.assertEqual(result.type.sql(), exp.DataType.build(expected).sql())

    def test_annotate_funcs(self):
        test_schema = {
            "tbl": {"bin_col": "BINARY", "str_col": "STRING", "bignum_col": "BIGNUMERIC"}
        }

        for i, (meta, sql, expected) in enumerate(
            load_sql_fixture_pairs("optimizer/annotate_functions.sql"), start=1
        ):
            title = meta.get("title") or f"{i}, {sql}"
            dialect = meta.get("dialect") or ""
            sql = f"SELECT {sql} FROM tbl"

            for dialect in dialect.split(", "):
                result = parse_and_optimize(
                    annotate_functions, sql, dialect, schema=test_schema, dialect=dialect
                )

                with self.subTest(title):
                    self.assertEqual(
                        result.type.sql(dialect), exp.DataType.build(expected).sql(dialect)
                    )

    def test_cast_type_annotation(self):
        expression = annotate_types(parse_one("CAST('2020-01-01' AS TIMESTAMPTZ(9))"))
        self.assertEqual(expression.type.this, exp.DataType.Type.TIMESTAMPTZ)

@@ -1377,26 +1411,3 @@ FROM READ_CSV('tests/fixtures/optimizer/tpc-h/nation.csv.gz', 'delimiter', '|')
        self.assertEqual(4, normalization_distance(gen_expr(2), max_=100))
        self.assertEqual(18, normalization_distance(gen_expr(3), max_=100))
        self.assertEqual(110, normalization_distance(gen_expr(10), max_=100))

    def test_custom_annotators(self):
        # In Spark hierarchy, SUBSTRING result type is dependent on input expr type
        for dialect in ("spark2", "spark", "databricks"):
            for expr_type_pair in (
                ("col", "STRING"),
                ("col", "BINARY"),
                ("'str_literal'", "STRING"),
                ("CAST('str_literal' AS BINARY)", "BINARY"),
            ):
                with self.subTest(
                    f"Testing {dialect}'s SUBSTRING() result type for {expr_type_pair}"
                ):
                    expr, type = expr_type_pair
                    ast = parse_one(f"SELECT substring({expr}, 2, 3) AS x FROM tbl", read=dialect)

                    subst_type = (
                        optimizer.optimize(ast, schema={"tbl": {"col": type}}, dialect=dialect)
                        .expressions[0]
                        .type
                    )

                    self.assertEqual(subst_type.sql(dialect), exp.DataType.build(type).sql(dialect))
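The fixture-driven test exercises each dialect's `ANNOTATORS`, so type inference can differ per dialect. A small sketch of annotating a BigQuery math function over a BIGNUMERIC column, mirroring the fixture entries and the test helper above (column and table names are taken from that test schema):

from sqlglot import parse_one
from sqlglot.dialects import Dialect
from sqlglot.optimizer.annotate_types import annotate_types

schema = {"tbl": {"bignum_col": "BIGNUMERIC"}}
ast = parse_one("SELECT FLOOR(tbl.bignum_col) FROM tbl", read="bigquery")

annotated = annotate_types(
    ast, schema=schema, annotators=Dialect.get_or_raise("bigquery").ANNOTATORS
)
# sqlglot's BIGDECIMAL type renders as BIGNUMERIC in the BigQuery dialect.
print(annotated.expressions[0].type.sql("bigquery"))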
@@ -563,7 +563,36 @@ FROM x""",
        )
        self.validate(
            """with a as /* comment */ ( select * from b) select * from a""",
            """WITH a AS (SELECT * FROM b) /* comment */ SELECT * FROM a""",
            """WITH a /* comment */ AS (SELECT * FROM b) SELECT * FROM a""",
        )
        self.validate(
            """
            -- comment at the top
            WITH
              -- comment for tbl1
              tbl1 AS (SELECT 1)
              -- comment for tbl2
              , tbl2 AS (SELECT 2)
              -- comment for tbl3
              , tbl3 AS (SELECT 3)
            -- comment for final select
            SELECT * FROM tbl1""",
            """/* comment at the top */
WITH tbl1 /* comment for tbl1 */ AS (
  SELECT
    1
), tbl2 /* comment for tbl2 */ AS (
  SELECT
    2
), tbl3 /* comment for tbl3 */ AS (
  SELECT
    3
)
/* comment for final select */
SELECT
  *
FROM tbl1""",
            pretty=True,
        )

    def test_types(self):