
Adding upstream version 25.18.0.

Signed-off-by: Daniel Baumann <daniel@debian.org>
Commit b35dbeb6b6 (parent 1341bc6477) by Daniel Baumann <daniel@debian.org>, 2025-02-13 21:52:44 +01:00
Signed by: daniel (GPG key ID: FBB4F0E80A80222F)
81 changed files with 34133 additions and 33517 deletions


@@ -1,6 +1,31 @@
 Changelog
 =========
+## [v25.17.0] - 2024-08-26
+### :boom: BREAKING CHANGES
+- due to [`0a9ba05`](https://github.com/tobymao/sqlglot/commit/0a9ba0536235e10aed02d4ff5e571e435a00febc) - 0 is falsey *(commit by [@tobymao](https://github.com/tobymao))*:
+  0 is falsey
+### :bug: Bug Fixes
+- [`42b725e`](https://github.com/tobymao/sqlglot/commit/42b725e4821a1426fe7c93f9fecbd4ec372accc9) - flaky test closes [#3961](https://github.com/tobymao/sqlglot/pull/3961) *(commit by [@tobymao](https://github.com/tobymao))*
+- [`cc29921`](https://github.com/tobymao/sqlglot/commit/cc299217f5d31a0406ba3c4778bb1ce581fe3f4a) - Parse LTRIM/RTRIM functions as positional exp.Trim *(PR [#3958](https://github.com/tobymao/sqlglot/pull/3958) by [@VaggelisD](https://github.com/VaggelisD))*
+  - :arrow_lower_right: *fixes issue [#3957](https://github.com/tobymao/sqlglot/issues/3957) opened by [@sleshJdev](https://github.com/sleshJdev)*
+- [`678e692`](https://github.com/tobymao/sqlglot/commit/678e6926fdbefb16efbbcaef9cd6c5ca284af54a) - make sample an arg of table, not a wrapper *(PR [#3963](https://github.com/tobymao/sqlglot/pull/3963) by [@barakalon](https://github.com/barakalon))*
+- [`0a9ba05`](https://github.com/tobymao/sqlglot/commit/0a9ba0536235e10aed02d4ff5e571e435a00febc) - 0 is falsey *(commit by [@tobymao](https://github.com/tobymao))*
+- [`c1ac987`](https://github.com/tobymao/sqlglot/commit/c1ac9872a6f77acd52546edbc9da53e350ebf080) - **starrocks**: exp.Array generation, exp.Unnest alias *(PR [#3964](https://github.com/tobymao/sqlglot/pull/3964) by [@VaggelisD](https://github.com/VaggelisD))*
+  - :arrow_lower_right: *fixes issue [#3962](https://github.com/tobymao/sqlglot/issues/3962) opened by [@hellozepp](https://github.com/hellozepp)*
+## [v25.16.1] - 2024-08-23
+### :bug: Bug Fixes
+- [`c4e5be7`](https://github.com/tobymao/sqlglot/commit/c4e5be7d3f4d7a9075d11dc56ece02774f32e749) - include dialect when parsing inside cast *(PR [#3960](https://github.com/tobymao/sqlglot/pull/3960) by [@eakmanrq](https://github.com/eakmanrq))*
+### :wrench: Chores
+- [`794dc4c`](https://github.com/tobymao/sqlglot/commit/794dc4cea3c4298c8986ade8e0fee88479851b34) - update readme to include onboarding doc *(commit by [@georgesittas](https://github.com/georgesittas))*
 ## [v25.16.0] - 2024-08-22
 ### :boom: BREAKING CHANGES
 - due to [`f68d155`](https://github.com/tobymao/sqlglot/commit/f68d155c38a79a6527685c37f8de8773ce790bca) - exp.Merge, for Trino and Postgres, dont strip the target alias from then WHEN MATCHED condition to prevent an ambiguous column error *(PR [#3940](https://github.com/tobymao/sqlglot/pull/3940) by [@erindru](https://github.com/erindru))*:

@@ -4491,3 +4516,5 @@ Changelog
 [v25.14.0]: https://github.com/tobymao/sqlglot/compare/v25.13.0...v25.14.0
 [v25.15.0]: https://github.com/tobymao/sqlglot/compare/v25.14.0...v25.15.0
 [v25.16.0]: https://github.com/tobymao/sqlglot/compare/v25.15.0...v25.16.0
+[v25.16.1]: https://github.com/tobymao/sqlglot/compare/v25.16.0...v25.16.1
+[v25.17.0]: https://github.com/tobymao/sqlglot/compare/v25.16.1...v25.17.0


@@ -10,7 +10,7 @@ Syntax [errors](#parser-errors) are highlighted and dialect incompatibilities ca
 Learn more about SQLGlot in the API [documentation](https://sqlglot.com/) and the expression tree [primer](https://github.com/tobymao/sqlglot/blob/main/posts/ast_primer.md).
-Contributions are very welcome in SQLGlot; read the [contribution guide](https://github.com/tobymao/sqlglot/blob/main/CONTRIBUTING.md) and the [onboarding document](https://github.com/tobymao/sqlglot/blob/main/posts/ast_primer.md) to get started!
+Contributions are very welcome in SQLGlot; read the [contribution guide](https://github.com/tobymao/sqlglot/blob/main/CONTRIBUTING.md) and the [onboarding document](https://github.com/tobymao/sqlglot/blob/main/posts/onboarding.md) to get started!
 ## Table of Contents

File diff suppressed because one or more lines are too long


@@ -107,7 +107,7 @@
Rendered README docs: the onboarding document link (posts/onboarding.md) is added next to the contribution guide, mirroring the README change above.


@ -76,8 +76,8 @@
</span><span id="L-12"><a href="#L-12"><span class="linenos">12</span></a><span class="n">__version_tuple__</span><span class="p">:</span> <span class="n">VERSION_TUPLE</span> </span><span id="L-12"><a href="#L-12"><span class="linenos">12</span></a><span class="n">__version_tuple__</span><span class="p">:</span> <span class="n">VERSION_TUPLE</span>
</span><span id="L-13"><a href="#L-13"><span class="linenos">13</span></a><span class="n">version_tuple</span><span class="p">:</span> <span class="n">VERSION_TUPLE</span> </span><span id="L-13"><a href="#L-13"><span class="linenos">13</span></a><span class="n">version_tuple</span><span class="p">:</span> <span class="n">VERSION_TUPLE</span>
</span><span id="L-14"><a href="#L-14"><span class="linenos">14</span></a> </span><span id="L-14"><a href="#L-14"><span class="linenos">14</span></a>
</span><span id="L-15"><a href="#L-15"><span class="linenos">15</span></a><span class="n">__version__</span> <span class="o">=</span> <span class="n">version</span> <span class="o">=</span> <span class="s1">&#39;25.16.0&#39;</span> </span><span id="L-15"><a href="#L-15"><span class="linenos">15</span></a><span class="n">__version__</span> <span class="o">=</span> <span class="n">version</span> <span class="o">=</span> <span class="s1">&#39;25.17.0&#39;</span>
</span><span id="L-16"><a href="#L-16"><span class="linenos">16</span></a><span class="n">__version_tuple__</span> <span class="o">=</span> <span class="n">version_tuple</span> <span class="o">=</span> <span class="p">(</span><span class="mi">25</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span> </span><span id="L-16"><a href="#L-16"><span class="linenos">16</span></a><span class="n">__version_tuple__</span> <span class="o">=</span> <span class="n">version_tuple</span> <span class="o">=</span> <span class="p">(</span><span class="mi">25</span><span class="p">,</span> <span class="mi">17</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
</span></pre></div> </span></pre></div>
@ -97,7 +97,7 @@
<section id="version"> <section id="version">
<div class="attr variable"> <div class="attr variable">
<span class="name">version</span><span class="annotation">: str</span> = <span class="name">version</span><span class="annotation">: str</span> =
<span class="default_value">&#39;25.16.0&#39;</span> <span class="default_value">&#39;25.17.0&#39;</span>
</div> </div>
@ -109,7 +109,7 @@
<section id="version_tuple"> <section id="version_tuple">
<div class="attr variable"> <div class="attr variable">
<span class="name">version_tuple</span><span class="annotation">: object</span> = <span class="name">version_tuple</span><span class="annotation">: object</span> =
<span class="default_value">(25, 16, 0)</span> <span class="default_value">(25, 17, 0)</span>
</div> </div>

25 file diffs suppressed because one or more lines are too long; 2 more suppressed because they are too large.


@@ -1895,7 +1895,7 @@ belong to some totally-ordered set.
Rendered docs: DATE_UNITS = {'year', 'quarter', 'month', 'week', 'day', 'year_month'} (only the set's display order changed).


@@ -586,7 +586,7 @@
Rendered docs: ALL_JSON_PATH_PARTS still contains the ten JSONPath* expression classes (JSONPathKey, JSONPathWildcard, JSONPathFilter, JSONPathUnion, JSONPathSubscript, JSONPathSelector, JSONPathSlice, JSONPathScript, JSONPathRoot, JSONPathRecursive); only the set's display order changed.

File diff suppressed because one or more lines are too long


@@ -586,7 +586,7 @@ queries if it would result in multiple table selects in a single query:
Rendered docs: UNMERGABLE_ARGS keeps the same 23 argument names ('with', 'kind', 'qualify', 'settings', 'distribute', 'pivots', 'match', 'offset', 'sample', 'sort', 'cluster', 'limit', 'group', 'prewhere', 'into', 'having', 'connect', 'format', 'distinct', 'options', 'locks', 'laterals', 'windows'); only the set's display order changed.


@@ -142,7 +142,7 @@ and @@ -289,7 +289,7 @@
Rendered docs for qualify_tables: both listings pick up the fix of the duplicated assignment, pivots = pivots = source.args.get("pivots") becoming pivots = source.args.get("pivots").

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -8834,7 +8834,7 @@ and @@ -8846,7 +8846,7 @@
Rendered docs: Tokenizer.COMMANDS = {TokenType.FETCH, TokenType.EXECUTE, TokenType.COMMAND, TokenType.SHOW, TokenType.RENAME} and Tokenizer.COMMAND_PREFIX_TOKENS = {TokenType.SEMICOLON, TokenType.BEGIN}; only the sets' display order changed.

File diff suppressed because it is too large


@@ -20,6 +20,7 @@ from sqlglot.dialects.dialect import (
     var_map_sql,
     timestamptrunc_sql,
     unit_to_var,
+    trim_sql,
 )
 from sqlglot.generator import Generator
 from sqlglot.helper import is_int, seq_get

@@ -875,6 +876,7 @@ class ClickHouse(Dialect):
             exp.SHA2: sha256_sql,
             exp.UnixToTime: _unix_to_time_sql,
             exp.TimestampTrunc: timestamptrunc_sql(zone=True),
+            exp.Trim: trim_sql,
             exp.Variance: rename_func("varSamp"),
             exp.SchemaCommentProperty: lambda self, e: self.naked_property(e),
             exp.Stddev: rename_func("stddevSamp"),

@@ -890,6 +892,7 @@ class ClickHouse(Dialect):
         # There's no list in docs, but it can be found in Clickhouse code
         # see `ClickHouse/src/Parsers/ParserCreate*.cpp`
         ON_CLUSTER_TARGETS = {
+            "SCHEMA",  # Transpiled CREATE SCHEMA may have OnCluster property set
             "DATABASE",
             "TABLE",
             "VIEW",


@@ -11,7 +11,7 @@ from sqlglot.generator import Generator
 from sqlglot.helper import AutoName, flatten, is_int, seq_get, subclasses
 from sqlglot.jsonpath import JSONPathTokenizer, parse as parse_json_path
 from sqlglot.parser import Parser
-from sqlglot.time import TIMEZONES, format_time
+from sqlglot.time import TIMEZONES, format_time, subsecond_precision
 from sqlglot.tokens import Token, Tokenizer, TokenType
 from sqlglot.trie import new_trie

@@ -1243,13 +1243,24 @@ def right_to_substring_sql(self: Generator, expression: exp.Left) -> str:
     )

-def timestrtotime_sql(self: Generator, expression: exp.TimeStrToTime) -> str:
-    datatype = (
+def timestrtotime_sql(
+    self: Generator,
+    expression: exp.TimeStrToTime,
+    include_precision: bool = False,
+) -> str:
+    datatype = exp.DataType.build(
         exp.DataType.Type.TIMESTAMPTZ
         if expression.args.get("zone")
         else exp.DataType.Type.TIMESTAMP
     )
+
+    if isinstance(expression.this, exp.Literal) and include_precision:
+        precision = subsecond_precision(expression.this.name)
+        if precision > 0:
+            datatype = exp.DataType.build(
+                datatype.this, expressions=[exp.DataTypeParam(this=exp.Literal.number(precision))]
+            )
+
     return self.sql(exp.cast(expression.this, datatype, dialect=self.dialect))

@@ -1295,7 +1306,7 @@ def trim_sql(self: Generator, expression: exp.Trim) -> str:
     collation = self.sql(expression, "collation")

     # Use TRIM/LTRIM/RTRIM syntax if the expression isn't database-specific
-    if not remove_chars and not collation:
+    if not remove_chars:
         return self.trim_sql(expression)

     trim_type = f"{trim_type} " if trim_type else ""
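For context, a small sketch of how the new subsecond_precision helper is meant to be used; the exact return values are assumptions inferred from the code above:

```python
from sqlglot.time import subsecond_precision

# Assumed behavior: inspect the fractional-second digits of a timestamp
# literal and return the precision to use for a parameterized TIMESTAMP(n).
print(subsecond_precision("2024-08-26 01:02:03.456"))  # assumed: 3
print(subsecond_precision("2024-08-26 01:02:03"))      # assumed: 0
```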


@@ -33,6 +33,7 @@ from sqlglot.dialects.dialect import (
     timestrtotime_sql,
     unit_to_var,
     unit_to_str,
+    sha256_sql,
 )
 from sqlglot.helper import seq_get
 from sqlglot.tokens import TokenType

@@ -41,6 +42,14 @@ DATETIME_DELTA = t.Union[
     exp.DateAdd, exp.TimeAdd, exp.DatetimeAdd, exp.TsOrDsAdd, exp.DateSub, exp.DatetimeSub
 ]
+
+WINDOW_FUNCS_WITH_IGNORE_NULLS = (
+    exp.FirstValue,
+    exp.LastValue,
+    exp.Lag,
+    exp.Lead,
+    exp.NthValue,
+)

 def _date_delta_sql(self: DuckDB.Generator, expression: DATETIME_DELTA) -> str:
     this = expression.this

@@ -376,6 +385,7 @@ class DuckDB(Dialect):
         }

         FUNCTIONS.pop("DATE_SUB")
+        FUNCTIONS.pop("GLOB")

         FUNCTION_PARSERS = parser.Parser.FUNCTION_PARSERS.copy()
         FUNCTION_PARSERS.pop("DECODE")

@@ -539,6 +549,7 @@ class DuckDB(Dialect):
             exp.ReturnsProperty: lambda self, e: "TABLE" if isinstance(e.this, exp.Schema) else "",
             exp.Rand: rename_func("RANDOM"),
             exp.SafeDivide: no_safe_divide_sql,
+            exp.SHA2: sha256_sql,
             exp.Split: rename_func("STR_SPLIT"),
             exp.SortArray: _sort_array_sql,
             exp.StrPosition: str_position_sql,

@@ -546,6 +557,7 @@ class DuckDB(Dialect):
                 "EPOCH", self.func("STRPTIME", e.this, self.format_time(e))
             ),
             exp.Struct: _struct_sql,
+            exp.Transform: rename_func("LIST_TRANSFORM"),
             exp.TimeAdd: _date_delta_sql,
             exp.Time: no_time_sql,
             exp.TimeDiff: _timediff_sql,

@@ -753,7 +765,6 @@ class DuckDB(Dialect):
         def tablesample_sql(
             self,
             expression: exp.TableSample,
-            sep: str = " AS ",
             tablesample_keyword: t.Optional[str] = None,
         ) -> str:
             if not isinstance(expression.parent, exp.Select):

@@ -769,9 +780,7 @@ class DuckDB(Dialect):
             )
             expression.set("method", exp.var("RESERVOIR"))

-            return super().tablesample_sql(
-                expression, sep=sep, tablesample_keyword=tablesample_keyword
-            )
+            return super().tablesample_sql(expression, tablesample_keyword=tablesample_keyword)

         def interval_sql(self, expression: exp.Interval) -> str:
             multiplier: t.Optional[int] = None

@@ -910,3 +919,11 @@ class DuckDB(Dialect):
                 return self.sql(select)

             return super().unnest_sql(expression)
+
+        def ignorenulls_sql(self, expression: exp.IgnoreNulls) -> str:
+            if isinstance(expression.this, WINDOW_FUNCS_WITH_IGNORE_NULLS):
+                # DuckDB should render IGNORE NULLS only for the general-purpose
+                # window functions that accept it e.g. FIRST_VALUE(... IGNORE NULLS) OVER (...)
+                return super().ignorenulls_sql(expression)
+            return self.sql(expression, "this")
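A quick sketch of the new IGNORE NULLS handling; the exact output is assumed:

```python
import sqlglot

# IGNORE NULLS survives for the window functions DuckDB accepts it on
# (FIRST_VALUE, LAST_VALUE, LAG, LEAD, NTH_VALUE) and is dropped elsewhere.
sql = "SELECT LAST_VALUE(x IGNORE NULLS) OVER (ORDER BY y) FROM t"
print(sqlglot.transpile(sql, read="bigquery", write="duckdb")[0])
```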


@@ -436,6 +436,14 @@ class Hive(Dialect):
             self._match(TokenType.R_BRACE)
             return self.expression(exp.Parameter, this=this, expression=expression)
+
+        def _to_prop_eq(self, expression: exp.Expression, index: int) -> exp.Expression:
+            if isinstance(expression, exp.Column):
+                key = expression.this
+            else:
+                key = exp.to_identifier(f"col{index + 1}")
+
+            return self.expression(exp.PropertyEQ, this=key, expression=expression)

     class Generator(generator.Generator):
         LIMIT_FETCH = "LIMIT"
         TABLESAMPLE_WITH_METHOD = False
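A sketch of the struct-field naming this hook implements; the output shape is assumed:

```python
import sqlglot

# Unaliased struct fields get positional names col1, col2, ... while a bare
# column keeps its own name as the key.
print(sqlglot.parse_one("SELECT STRUCT(a, 5)", read="hive").sql(dialect="hive"))
```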


@@ -24,6 +24,8 @@ from sqlglot.dialects.dialect import (
     rename_func,
     strposition_to_locate_sql,
     unit_to_var,
+    trim_sql,
+    timestrtotime_sql,
 )
 from sqlglot.helper import seq_get
 from sqlglot.tokens import TokenType
@@ -95,21 +97,6 @@ def _str_to_date_sql(
     return self.func("STR_TO_DATE", expression.this, self.format_time(expression))

-def _trim_sql(self: MySQL.Generator, expression: exp.Trim) -> str:
-    target = self.sql(expression, "this")
-    trim_type = self.sql(expression, "position")
-    remove_chars = self.sql(expression, "expression")
-
-    # Use TRIM/LTRIM/RTRIM syntax if the expression isn't mysql-specific
-    if not remove_chars:
-        return self.trim_sql(expression)
-
-    trim_type = f"{trim_type} " if trim_type else ""
-    remove_chars = f"{remove_chars} " if remove_chars else ""
-    from_part = "FROM " if trim_type or remove_chars else ""
-    return f"TRIM({trim_type}{remove_chars}{from_part}{target})"

 def _unix_to_time_sql(self: MySQL.Generator, expression: exp.UnixToTime) -> str:
     scale = expression.args.get("scale")
     timestamp = expression.this
@@ -348,6 +335,7 @@ class MySQL(Dialect):
             "VALUES": lambda self: self.expression(
                 exp.Anonymous, this="VALUES", expressions=[self._parse_id_var()]
             ),
+            "JSON_VALUE": lambda self: self._parse_json_value(),
         }

         STATEMENT_PARSERS = {
@@ -677,6 +665,33 @@ class MySQL(Dialect):
             return self.expression(exp.GroupConcat, this=this, separator=separator)
+
+        def _parse_json_value(self) -> exp.JSONValue:
+            def _parse_on_options() -> t.Optional[exp.Expression] | str:
+                if self._match_texts(("NULL", "ERROR")):
+                    value = self._prev.text.upper()
+                else:
+                    value = self._match(TokenType.DEFAULT) and self._parse_bitwise()
+
+                self._match_text_seq("ON")
+                self._match_texts(("EMPTY", "ERROR"))
+                return value
+
+            this = self._parse_bitwise()
+            self._match(TokenType.COMMA)
+            path = self._parse_bitwise()
+
+            returning = self._match(TokenType.RETURNING) and self._parse_type()
+
+            return self.expression(
+                exp.JSONValue,
+                this=this,
+                path=self.dialect.to_json_path(path),
+                returning=returning,
+                on_error=_parse_on_options(),
+                on_empty=_parse_on_options(),
+            )

     class Generator(generator.Generator):
         INTERVAL_ALLOWS_PLURAL_FORM = False
         LOCKING_READS_SUPPORTED = True
@@ -742,13 +757,15 @@ class MySQL(Dialect):
             ),
             exp.TimestampSub: date_add_interval_sql("DATE", "SUB"),
             exp.TimeStrToUnix: rename_func("UNIX_TIMESTAMP"),
-            exp.TimeStrToTime: lambda self, e: self.sql(
-                exp.cast(e.this, exp.DataType.Type.DATETIME, copy=True)
+            exp.TimeStrToTime: lambda self, e: timestrtotime_sql(
+                self,
+                e,
+                include_precision=not e.args.get("zone"),
             ),
             exp.TimeToStr: _remove_ts_or_ds_to_date(
                 lambda self, e: self.func("DATE_FORMAT", e.this, self.format_time(e))
             ),
-            exp.Trim: _trim_sql,
+            exp.Trim: trim_sql,
             exp.TryCast: no_trycast_sql,
             exp.TsOrDsAdd: date_add_sql("ADD"),
             exp.TsOrDsDiff: lambda self, e: self.func("DATEDIFF", e.this, e.expression),
@@ -1224,3 +1241,7 @@ class MySQL(Dialect):
             dt = expression.args.get("timestamp")
             return self.func("CONVERT_TZ", dt, from_tz, to_tz)
+
+        def attimezone_sql(self, expression: exp.AtTimeZone) -> str:
+            self.unsupported("AT TIME ZONE is not supported by MySQL")
+            return self.sql(expression.this)
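A sketch of the two new MySQL behaviors; outputs are assumed:

```python
import sqlglot

# JSON_VALUE round-trips through the new parser/generator pair.
sql = "SELECT JSON_VALUE(j, '$.a' RETURNING SIGNED)"
print(sqlglot.parse_one(sql, read="mysql").sql(dialect="mysql"))

# AT TIME ZONE has no MySQL equivalent: generation warns and emits the bare
# timestamp expression instead.
print(sqlglot.transpile("SELECT ts AT TIME ZONE 'UTC'", read="postgres", write="mysql")[0])
```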


@@ -33,6 +33,15 @@ def _build_timetostr_or_tochar(args: t.List) -> exp.TimeToStr | exp.ToChar:
     return exp.ToChar.from_arg_list(args)
+
+def _trim_sql(self: Oracle.Generator, expression: exp.Trim) -> str:
+    position = expression.args.get("position")
+
+    if position and position.upper() in ("LEADING", "TRAILING"):
+        return self.trim_sql(expression)
+
+    return trim_sql(self, expression)
+
 class Oracle(Dialect):
     ALIAS_POST_TABLESAMPLE = True
     LOCKING_READS_SUPPORTED = True

@@ -267,12 +276,12 @@ class Oracle(Dialect):
             exp.Subquery: lambda self, e: self.subquery_sql(e, sep=" "),
             exp.Substring: rename_func("SUBSTR"),
             exp.Table: lambda self, e: self.table_sql(e, sep=" "),
-            exp.TableSample: lambda self, e: self.tablesample_sql(e, sep=" "),
+            exp.TableSample: lambda self, e: self.tablesample_sql(e),
             exp.TemporaryProperty: lambda _, e: f"{e.name or 'GLOBAL'} TEMPORARY",
             exp.TimeToStr: lambda self, e: self.func("TO_CHAR", e.this, self.format_time(e)),
             exp.ToChar: lambda self, e: self.function_fallback_sql(e),
             exp.ToNumber: to_number_with_nls_param,
-            exp.Trim: trim_sql,
+            exp.Trim: _trim_sql,
             exp.UnixToTime: lambda self,
             e: f"TO_DATE('1970-01-01', 'YYYY-MM-DD') + ({self.sql(e, 'this')} / 86400)",
         }
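A sketch of the new Oracle trim dispatch; outputs are assumed:

```python
import sqlglot

# LEADING/TRAILING trims fall back to Oracle's LTRIM/RTRIM functions, while
# other trims keep the TRIM(... FROM ...) form via the shared helper.
print(sqlglot.transpile("SELECT TRIM(LEADING 'x' FROM col)", read="oracle", write="oracle")[0])
# assumed: SELECT LTRIM(col, 'x')
print(sqlglot.transpile("SELECT TRIM(BOTH 'x' FROM col)", read="oracle", write="oracle")[0])
# assumed: SELECT TRIM(BOTH 'x' FROM col)
```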


@@ -93,7 +93,9 @@ def _build_date_time_add(expr_type: t.Type[E]) -> t.Callable[[t.List], E]:

 # https://docs.snowflake.com/en/sql-reference/functions/div0
 def _build_if_from_div0(args: t.List) -> exp.If:
-    cond = exp.EQ(this=seq_get(args, 1), expression=exp.Literal.number(0))
+    cond = exp.EQ(this=seq_get(args, 1), expression=exp.Literal.number(0)).and_(
+        exp.Is(this=seq_get(args, 0), expression=exp.null()).not_()
+    )
     true = exp.Literal.number(0)
     false = exp.Div(this=seq_get(args, 0), expression=seq_get(args, 1))
     return exp.If(this=cond, true=true, false=false)
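A sketch of the DIV0 change; the output shape is assumed. A NULL numerator now propagates NULL instead of collapsing to 0:

```python
import sqlglot

print(sqlglot.transpile("SELECT DIV0(a, b) FROM t", read="snowflake", write="duckdb")[0])
# assumed shape: SELECT CASE WHEN b = 0 AND NOT a IS NULL THEN 0 ELSE a / b END FROM t
```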


@@ -1,5 +1,7 @@
 from __future__ import annotations
+
+import typing as t

 from sqlglot import exp
 from sqlglot.dialects.dialect import (
     approx_count_distinct_sql,

@@ -7,6 +9,7 @@ from sqlglot.dialects.dialect import (
     build_timestamp_trunc,
     rename_func,
     unit_to_str,
+    inline_array_sql,
 )
 from sqlglot.dialects.mysql import MySQL
 from sqlglot.helper import seq_get

@@ -26,6 +29,19 @@ class StarRocks(MySQL):
             "REGEXP": exp.RegexpLike.from_arg_list,
         }
+
+        def _parse_unnest(self, with_alias: bool = True) -> t.Optional[exp.Unnest]:
+            unnest = super()._parse_unnest(with_alias=with_alias)
+
+            if unnest:
+                alias = unnest.args.get("alias")
+
+                if alias and not alias.args.get("columns"):
+                    # Starrocks defaults to naming the UNNEST column as "unnest"
+                    # if it's not otherwise specified
+                    alias.set("columns", [exp.to_identifier("unnest")])
+
+            return unnest

     class Generator(MySQL.Generator):
         CAST_MAPPING = {}

@@ -38,6 +54,7 @@ class StarRocks(MySQL):
         TRANSFORMS = {
             **MySQL.Generator.TRANSFORMS,
+            exp.Array: inline_array_sql,
             exp.ApproxDistinct: approx_count_distinct_sql,
             exp.DateDiff: lambda self, e: self.func(
                 "DATE_DIFF", unit_to_str(e), e.this, e.expression


@@ -278,7 +278,6 @@ class Teradata(Dialect):
         def tablesample_sql(
             self,
             expression: exp.TableSample,
-            sep: str = " AS ",
             tablesample_keyword: t.Optional[str] = None,
         ) -> str:
             return f"{self.sql(expression, 'this')} SAMPLE {self.expressions(expression)}"


@@ -1,7 +1,7 @@
 from __future__ import annotations

 from sqlglot import exp
-from sqlglot.dialects.dialect import merge_without_target_sql, trim_sql
+from sqlglot.dialects.dialect import merge_without_target_sql, trim_sql, timestrtotime_sql
 from sqlglot.dialects.presto import Presto

@@ -21,6 +21,7 @@ class Trino(Presto):
             exp.ArraySum: lambda self,
             e: f"REDUCE({self.sql(e, 'this')}, 0, (acc, x) -> acc + x, acc -> acc)",
             exp.Merge: merge_without_target_sql,
+            exp.TimeStrToTime: lambda self, e: timestrtotime_sql(self, e, include_precision=True),
             exp.Trim: trim_sql,
         }


@@ -351,12 +351,13 @@ def _timestrtotime_sql(self: TSQL.Generator, expression: exp.TimeStrToTime):

 class TSQL(Dialect):
+    NORMALIZATION_STRATEGY = NormalizationStrategy.CASE_INSENSITIVE
+    TIME_FORMAT = "'yyyy-mm-dd hh:mm:ss'"
     SUPPORTS_SEMI_ANTI_JOIN = False
     LOG_BASE_FIRST = False
     TYPED_DIVISION = True
     CONCAT_COALESCE = True
-    NORMALIZATION_STRATEGY = NormalizationStrategy.CASE_INSENSITIVE
-    TIME_FORMAT = "'yyyy-mm-dd hh:mm:ss'"

     TIME_MAPPING = {
         "year": "%Y",

@@ -395,7 +396,7 @@ class TSQL(Dialect):
         "HH": "%H",
         "H": "%-H",
         "h": "%-I",
-        "S": "%f",
+        "ffffff": "%f",
         "yyyy": "%Y",
         "yy": "%y",
     }

@@ -983,7 +984,9 @@ class TSQL(Dialect):
             return super().setitem_sql(expression)

         def boolean_sql(self, expression: exp.Boolean) -> str:
-            if type(expression.parent) in BIT_TYPES:
+            if type(expression.parent) in BIT_TYPES or isinstance(
+                expression.find_ancestor(exp.Values, exp.Select), exp.Values
+            ):
                 return "1" if expression.this else "0"
             return "(1 = 1)" if expression.this else "(1 = 0)"


@@ -2172,6 +2172,7 @@ class Insert(DDL, DML):
         "stored": False,
         "partition": False,
         "settings": False,
+        "source": False,
     }

     def with_(

@@ -2280,6 +2281,18 @@ class Group(Expression):
     }
+
+class Cube(Expression):
+    arg_types = {"expressions": False}
+
+
+class Rollup(Expression):
+    arg_types = {"expressions": False}
+
+
+class GroupingSets(Expression):
+    arg_types = {"expressions": True}
+

 class Lambda(Expression):
     arg_types = {"this": True, "expressions": True}

@@ -3074,6 +3087,7 @@ class Table(Expression):
         "partition": False,
         "changes": False,
         "rows_from": False,
+        "sample": False,
     }

     @property

@@ -3846,7 +3860,6 @@ class Subquery(DerivedTable, Query):
 class TableSample(Expression):
     arg_types = {
-        "this": False,
         "expressions": False,
         "method": False,
         "bucket_numerator": False,

@@ -5441,6 +5454,11 @@ class IsInf(Func):
     _sql_names = ["IS_INF", "ISINF"]
+
+# https://www.postgresql.org/docs/current/functions-json.html
+class JSON(Expression):
+    arg_types = {"this": False, "with": False, "unique": False}
+

 class JSONPath(Expression):
     arg_types = {"expressions": True}

@@ -5553,6 +5571,17 @@ class JSONSchema(Expression):
     arg_types = {"expressions": True}
+
+# https://dev.mysql.com/doc/refman/8.4/en/json-search-functions.html#function_json-value
+class JSONValue(Expression):
+    arg_types = {
+        "this": True,
+        "path": True,
+        "returning": False,
+        "on_empty": False,
+        "on_error": False,
+    }
+

 # https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/JSON_TABLE.html
 class JSONTable(Func):
     arg_types = {


@@ -1669,7 +1669,10 @@ class Generator(metaclass=_Generator):
         settings = self.sql(expression, "settings")
         settings = f" {settings}" if settings else ""

-        sql = f"INSERT{hint}{alternative}{ignore}{this}{stored}{by_name}{exists}{partition_by}{settings}{where}{expression_sql}"
+        source = self.sql(expression, "source")
+        source = f"TABLE {source}" if source else ""
+        sql = f"INSERT{hint}{alternative}{ignore}{this}{stored}{by_name}{exists}{partition_by}{settings}{where}{expression_sql}{source}"
         return self.prepend_ctes(expression, sql)

     def intersect_sql(self, expression: exp.Intersect) -> str:
@@ -1764,6 +1767,15 @@ class Generator(metaclass=_Generator):
         version = f" {version}" if version else ""
         alias = self.sql(expression, "alias")
         alias = f"{sep}{alias}" if alias else ""
+
+        sample = self.sql(expression, "sample")
+        if self.dialect.ALIAS_POST_TABLESAMPLE:
+            sample_pre_alias = sample
+            sample_post_alias = ""
+        else:
+            sample_pre_alias = ""
+            sample_post_alias = sample
+
         hints = self.expressions(expression, key="hints", sep=" ")
         hints = f" {hints}" if hints and self.TABLE_HINTS else ""
         pivots = self.expressions(expression, key="pivots", sep="", flat=True)
@@ -1794,23 +1806,13 @@ class Generator(metaclass=_Generator):
         if rows_from:
             table = f"ROWS FROM {self.wrap(rows_from)}"

-        return f"{only}{table}{changes}{partition}{version}{file_format}{alias}{hints}{pivots}{joins}{laterals}{ordinality}"
+        return f"{only}{table}{changes}{partition}{version}{file_format}{sample_pre_alias}{alias}{hints}{pivots}{sample_post_alias}{joins}{laterals}{ordinality}"

     def tablesample_sql(
         self,
         expression: exp.TableSample,
-        sep: str = " AS ",
         tablesample_keyword: t.Optional[str] = None,
     ) -> str:
-        if self.dialect.ALIAS_POST_TABLESAMPLE and expression.this and expression.this.alias:
-            table = expression.this.copy()
-            table.set("alias", None)
-            this = self.sql(table)
-            alias = f"{sep}{self.sql(expression.this, 'alias')}"
-        else:
-            this = self.sql(expression, "this")
-            alias = ""
         method = self.sql(expression, "method")
         method = f"{method} " if method and self.TABLESAMPLE_WITH_METHOD else ""
         numerator = self.sql(expression, "bucket_numerator")

@@ -1833,9 +1835,7 @@ class Generator(metaclass=_Generator):
         if self.TABLESAMPLE_REQUIRES_PARENS:
             expr = f"({expr})"

-        return (
-            f"{this} {tablesample_keyword or self.TABLESAMPLE_KEYWORDS} {method}{expr}{seed}{alias}"
-        )
+        return f" {tablesample_keyword or self.TABLESAMPLE_KEYWORDS} {method}{expr}{seed}"

     def pivot_sql(self, expression: exp.Pivot) -> str:
         expressions = self.expressions(expression, flat=True)
@@ -1946,6 +1946,18 @@ class Generator(metaclass=_Generator):
     def from_sql(self, expression: exp.From) -> str:
         return f"{self.seg('FROM')} {self.sql(expression, 'this')}"
+
+    def groupingsets_sql(self, expression: exp.GroupingSets) -> str:
+        grouping_sets = self.expressions(expression, indent=False)
+        return f"GROUPING SETS {self.wrap(grouping_sets)}"
+
+    def rollup_sql(self, expression: exp.Rollup) -> str:
+        expressions = self.expressions(expression, indent=False)
+        return f"ROLLUP {self.wrap(expressions)}" if expressions else "WITH ROLLUP"
+
+    def cube_sql(self, expression: exp.Cube) -> str:
+        expressions = self.expressions(expression, indent=False)
+        return f"CUBE {self.wrap(expressions)}" if expressions else "WITH CUBE"
+
     def group_sql(self, expression: exp.Group) -> str:
         group_by_all = expression.args.get("all")
         if group_by_all is True:
@@ -1957,34 +1969,23 @@ class Generator(metaclass=_Generator):
         group_by = self.op_expressions(f"GROUP BY{modifier}", expression)

-        grouping_sets = self.expressions(expression, key="grouping_sets", indent=False)
-        grouping_sets = (
-            f"{self.seg('GROUPING SETS')} {self.wrap(grouping_sets)}" if grouping_sets else ""
-        )
-
-        cube = expression.args.get("cube", [])
-        if seq_get(cube, 0) is True:
-            return f"{group_by}{self.seg('WITH CUBE')}"
-        else:
-            cube_sql = self.expressions(expression, key="cube", indent=False)
-            cube_sql = f"{self.seg('CUBE')} {self.wrap(cube_sql)}" if cube_sql else ""
-
-        rollup = expression.args.get("rollup", [])
-        if seq_get(rollup, 0) is True:
-            return f"{group_by}{self.seg('WITH ROLLUP')}"
-        else:
-            rollup_sql = self.expressions(expression, key="rollup", indent=False)
-            rollup_sql = f"{self.seg('ROLLUP')} {self.wrap(rollup_sql)}" if rollup_sql else ""
+        grouping_sets = self.expressions(expression, key="grouping_sets")
+        cube = self.expressions(expression, key="cube")
+        rollup = self.expressions(expression, key="rollup")

         groupings = csv(
-            grouping_sets,
-            cube_sql,
-            rollup_sql,
+            self.seg(grouping_sets) if grouping_sets else "",
+            self.seg(cube) if cube else "",
+            self.seg(rollup) if rollup else "",
             self.seg("WITH TOTALS") if expression.args.get("totals") else "",
             sep=self.GROUPINGS_SEP,
         )

-        if expression.args.get("expressions") and groupings:
+        if (
+            expression.expressions
+            and groupings
+            and groupings.strip() not in ("WITH CUBE", "WITH ROLLUP")
+        ):
             group_by = f"{group_by}{self.GROUPINGS_SEP}"

         return f"{group_by}{groupings}"
@@ -2446,6 +2447,13 @@ class Generator(metaclass=_Generator):
     def subquery_sql(self, expression: exp.Subquery, sep: str = " AS ") -> str:
         alias = self.sql(expression, "alias")
         alias = f"{sep}{alias}" if alias else ""
+
+        sample = self.sql(expression, "sample")
+        if self.dialect.ALIAS_POST_TABLESAMPLE and sample:
+            alias = f"{sample}{alias}"
+            # Set to None so it's not generated again by self.query_modifiers()
+            expression.set("sample", None)
+
         pivots = self.expressions(expression, key="pivots", sep="", flat=True)
         sql = self.query_modifiers(expression, self.wrap(expression), alias, pivots)
         return self.prepend_ctes(expression, sql)
@@ -2648,11 +2656,13 @@ class Generator(metaclass=_Generator):
         trim_type = self.sql(expression, "position")

         if trim_type == "LEADING":
-            return self.func("LTRIM", expression.this)
+            func_name = "LTRIM"
         elif trim_type == "TRAILING":
-            return self.func("RTRIM", expression.this)
+            func_name = "RTRIM"
         else:
-            return self.func("TRIM", expression.this, expression.expression)
+            func_name = "TRIM"
+
+        return self.func(func_name, expression.this, expression.expression)

     def convert_concat_args(self, expression: exp.Concat | exp.ConcatWs) -> t.List[exp.Expression]:
         args = expression.expressions
@@ -2889,7 +2899,12 @@ class Generator(metaclass=_Generator):
         return f"REFERENCES {this}{expressions}{options}"

     def anonymous_sql(self, expression: exp.Anonymous) -> str:
-        return self.func(self.sql(expression, "this"), *expression.expressions)
+        # We don't normalize qualified functions such as a.b.foo(), because they can be case-sensitive
+        parent = expression.parent
+        is_qualified = isinstance(parent, exp.Dot) and expression is parent.expression
+        return self.func(
+            self.sql(expression, "this"), *expression.expressions, normalize=not is_qualified
+        )

     def paren_sql(self, expression: exp.Paren) -> str:
         sql = self.seg(self.indent(self.sql(expression, "this")), sep="")
@@ -3398,8 +3413,10 @@ class Generator(metaclass=_Generator):
         *args: t.Optional[exp.Expression | str],
         prefix: str = "(",
         suffix: str = ")",
+        normalize: bool = True,
     ) -> str:
-        return f"{self.normalize_func(name)}{prefix}{self.format_args(*args)}{suffix}"
+        name = self.normalize_func(name) if normalize else name
+        return f"{name}{prefix}{self.format_args(*args)}{suffix}"

     def format_args(self, *args: t.Optional[str | exp.Expression]) -> str:
         arg_sqls = tuple(
@ -4137,3 +4154,36 @@ class Generator(metaclass=_Generator):
expr = exp.AtTimeZone(this=timestamp, zone=target_tz) expr = exp.AtTimeZone(this=timestamp, zone=target_tz)
return self.sql(expr) return self.sql(expr)
def json_sql(self, expression: exp.JSON) -> str:
this = self.sql(expression, "this")
this = f" {this}" if this else ""
_with = expression.args.get("with")
if _with is None:
with_sql = ""
elif not _with:
with_sql = " WITHOUT"
else:
with_sql = " WITH"
unique_sql = " UNIQUE KEYS" if expression.args.get("unique") else ""
return f"JSON{this}{with_sql}{unique_sql}"
def jsonvalue_sql(self, expression: exp.JSONValue) -> str:
def _generate_on_options(arg: t.Any) -> str:
return arg if isinstance(arg, str) else f"DEFAULT {self.sql(arg)}"
path = self.sql(expression, "path")
returning = self.sql(expression, "returning")
returning = f" RETURNING {returning}" if returning else ""
on_empty = expression.args.get("on_empty")
on_empty = f" {_generate_on_options(on_empty)} ON EMPTY" if on_empty else ""
on_error = expression.args.get("on_error")
on_error = f" {_generate_on_options(on_error)} ON ERROR" if on_error else ""
return self.func("JSON_VALUE", expression.this, f"{path}{returning}{on_empty}{on_error}")
@@ -84,7 +84,7 @@ def qualify_tables(
     for name, source in scope.sources.items():
         if isinstance(source, exp.Table):
-            pivots = pivots = source.args.get("pivots")
+            pivots = source.args.get("pivots")
             if not source.alias:
                 # Don't add the pivot's alias to the pivoted table, use the table's name instead
                 if pivots and pivots[0].alias == name:
@@ -267,13 +267,11 @@ def flatten(expression):
 def simplify_connectors(expression, root=True):
     def _simplify_connectors(expression, left, right):
-        if left == right:
-            if isinstance(expression, exp.Xor):
-                return exp.false()
-            return left
         if isinstance(expression, exp.And):
             if is_false(left) or is_false(right):
                 return exp.false()
+            if is_zero(left) or is_zero(right):
+                return exp.false()
             if is_null(left) or is_null(right):
                 return exp.null()
             if always_true(left) and always_true(right):
@@ -286,12 +284,10 @@ def simplify_connectors(expression, root=True):
         elif isinstance(expression, exp.Or):
             if always_true(left) or always_true(right):
                 return exp.true()
-            if is_false(left) and is_false(right):
-                return exp.false()
             if (
                 (is_null(left) and is_null(right))
-                or (is_null(left) and is_false(right))
-                or (is_false(left) and is_null(right))
+                or (is_null(left) and always_false(right))
+                or (always_false(left) and is_null(right))
             ):
                 return exp.null()
             if is_false(left):
@@ -299,6 +295,9 @@ def simplify_connectors(expression, root=True):
             if is_false(right):
                 return left
             return _simplify_comparison(expression, left, right, or_=True)
+        elif isinstance(expression, exp.Xor):
+            if left == right:
+                return exp.false()

     if isinstance(expression, exp.Connector):
         return _flat_simplify(expression, _simplify_connectors, root)
@@ -1108,13 +1107,17 @@ def remove_where_true(expression):
 def always_true(expression):
-    return (isinstance(expression, exp.Boolean) and expression.this) or isinstance(
-        expression, exp.Literal
-    )
+    return (isinstance(expression, exp.Boolean) and expression.this) or (
+        isinstance(expression, exp.Literal) and not is_zero(expression)
+    )


 def always_false(expression):
-    return is_false(expression) or is_null(expression)
+    return is_false(expression) or is_null(expression) or is_zero(expression)
+
+
+def is_zero(expression):
+    return isinstance(expression, exp.Literal) and expression.to_py() == 0


 def is_complement(a, b):
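
Note: the net effect of treating the literal 0 as falsey shows up directly in the simplifier, and the new fixture cases below pin it down. A minimal sketch:

>>> from sqlglot import parse_one
>>> from sqlglot.optimizer.simplify import simplify
>>> simplify(parse_one("0 AND NULL")).sql()
'FALSE'
>>> simplify(parse_one("0 OR NULL")).sql()
'NULL'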
@@ -140,6 +140,14 @@ def build_convert_timezone(
     return exp.ConvertTimezone.from_arg_list(args)


+def build_trim(args: t.List, is_left: bool = True):
+    return exp.Trim(
+        this=seq_get(args, 0),
+        expression=seq_get(args, 1),
+        position="LEADING" if is_left else "TRAILING",
+    )
+
+
 class _Parser(type):
     def __new__(cls, clsname, bases, attrs):
         klass = super().__new__(cls, clsname, bases, attrs)
@@ -200,9 +208,11 @@ class Parser(metaclass=_Parser):
         "LOWER": build_lower,
         "LPAD": lambda args: build_pad(args),
         "LEFTPAD": lambda args: build_pad(args),
+        "LTRIM": lambda args: build_trim(args),
         "MOD": build_mod,
-        "RPAD": lambda args: build_pad(args, is_left=False),
         "RIGHTPAD": lambda args: build_pad(args, is_left=False),
+        "RPAD": lambda args: build_pad(args, is_left=False),
+        "RTRIM": lambda args: build_trim(args, is_left=False),
         "SCOPE_RESOLUTION": lambda args: exp.ScopeResolution(expression=seq_get(args, 0))
         if len(args) != 2
         else exp.ScopeResolution(this=seq_get(args, 0), expression=seq_get(args, 1)),
@@ -1242,6 +1252,8 @@ class Parser(metaclass=_Parser):
     COPY_INTO_VARLEN_OPTIONS = {"FILE_FORMAT", "COPY_OPTIONS", "FORMAT_OPTIONS", "CREDENTIAL"}

+    IS_JSON_PREDICATE_KIND = {"VALUE", "SCALAR", "ARRAY", "OBJECT"}
+
     STRICT_CAST = True
     PREFIXED_PIVOT_COLUMNS = False
@@ -2557,6 +2569,7 @@ class Parser(metaclass=_Parser):
             overwrite=overwrite,
             alternative=alternative,
             ignore=ignore,
+            source=self._match(TokenType.TABLE) and self._parse_table(),
         )

     def _parse_kill(self) -> exp.Kill:
@@ -2973,6 +2986,7 @@ class Parser(metaclass=_Parser):
             this=this,
             pivots=self._parse_pivots(),
             alias=self._parse_table_alias() if parse_alias else None,
+            sample=self._parse_table_sample(),
         )

     def _implicit_unnests_to_explicit(self, this: E) -> E:
@@ -3543,7 +3557,7 @@ class Parser(metaclass=_Parser):
             this.set("version", version)

         if self.dialect.ALIAS_POST_TABLESAMPLE:
-            table_sample = self._parse_table_sample()
+            this.set("sample", self._parse_table_sample())

         alias = self._parse_table_alias(alias_tokens=alias_tokens or self.TABLE_ALIAS_TOKENS)
         if alias:
@@ -3560,11 +3574,7 @@ class Parser(metaclass=_Parser):
             this.set("pivots", self._parse_pivots())

         if not self.dialect.ALIAS_POST_TABLESAMPLE:
-            table_sample = self._parse_table_sample()
-
-            if table_sample:
-                table_sample.set("this", this)
-                this = table_sample
+            this.set("sample", self._parse_table_sample())

         if joins:
             for join in self._parse_joins():
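
Note: with these call sites changed, a TABLESAMPLE is now stored in the table's own "sample" arg instead of wrapping the table node. A minimal sketch of the resulting tree shape, assuming the default dialect and the existing exp.TableSample node type:

>>> from sqlglot import exp, parse_one
>>> table = parse_one("SELECT * FROM t TABLESAMPLE SYSTEM (10 PERCENT)").find(exp.Table)
>>> isinstance(table.args.get("sample"), exp.TableSample)
True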
@@ -3907,48 +3917,50 @@ class Parser(metaclass=_Parser):
             elements["all"] = False

         while True:
-            expressions = self._parse_csv(
-                lambda: None
-                if self._match_set((TokenType.CUBE, TokenType.ROLLUP), advance=False)
-                else self._parse_assignment()
-            )
-            if expressions:
-                elements["expressions"].extend(expressions)
+            index = self._index
+
+            elements["expressions"].extend(
+                self._parse_csv(
+                    lambda: None
+                    if self._match_set((TokenType.CUBE, TokenType.ROLLUP), advance=False)
+                    else self._parse_assignment()
+                )
+            )

-            grouping_sets = self._parse_grouping_sets()
-            if grouping_sets:
-                elements["grouping_sets"].extend(grouping_sets)
-
-            rollup = None
-            cube = None
-            totals = None
-
-            index = self._index
-            with_ = self._match(TokenType.WITH)
+            before_with_index = self._index
+            with_prefix = self._match(TokenType.WITH)
+
             if self._match(TokenType.ROLLUP):
-                rollup = with_ or self._parse_wrapped_csv(self._parse_column)
-                elements["rollup"].extend(ensure_list(rollup))
-
-            if self._match(TokenType.CUBE):
-                cube = with_ or self._parse_wrapped_csv(self._parse_column)
-                elements["cube"].extend(ensure_list(cube))
-
-            if self._match_text_seq("TOTALS"):
-                totals = True
+                elements["rollup"].append(
+                    self._parse_cube_or_rollup(exp.Rollup, with_prefix=with_prefix)
+                )
+            elif self._match(TokenType.CUBE):
+                elements["cube"].append(
+                    self._parse_cube_or_rollup(exp.Cube, with_prefix=with_prefix)
+                )
+            elif self._match(TokenType.GROUPING_SETS):
+                elements["grouping_sets"].append(
+                    self.expression(
+                        exp.GroupingSets,
+                        expressions=self._parse_wrapped_csv(self._parse_grouping_set),
+                    )
+                )
+            elif self._match_text_seq("TOTALS"):
                 elements["totals"] = True  # type: ignore

-            if not (grouping_sets or rollup or cube or totals):
-                if with_:
-                    self._retreat(index)
+            if before_with_index <= self._index <= before_with_index + 1:
+                self._retreat(before_with_index)
+                break
+
+            if index == self._index:
                 break

         return self.expression(exp.Group, **elements)  # type: ignore

-    def _parse_grouping_sets(self) -> t.Optional[t.List[exp.Expression]]:
-        if not self._match(TokenType.GROUPING_SETS):
-            return None
-
-        return self._parse_wrapped_csv(self._parse_grouping_set)
+    def _parse_cube_or_rollup(self, kind: t.Type[E], with_prefix: bool = False) -> E:
+        return self.expression(
+            kind, expressions=[] if with_prefix else self._parse_wrapped_csv(self._parse_column)
+        )

     def _parse_grouping_set(self) -> t.Optional[exp.Expression]:
         if self._match(TokenType.L_PAREN):
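
Note: the rewritten loop collects each CUBE/ROLLUP/GROUPING SETS clause into its own expression instead of merging them, so repeated clauses survive a round trip. A minimal sketch, mirroring the Presto and DuckDB tests below:

>>> import sqlglot
>>> sqlglot.transpile("SELECT a FROM t GROUP BY a, ROLLUP (b), ROLLUP (c), ROLLUP (d)", read="presto", write="presto")
['SELECT a FROM t GROUP BY a, ROLLUP (b), ROLLUP (c), ROLLUP (d)']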
@@ -4282,7 +4294,23 @@ class Parser(metaclass=_Parser):
             klass = exp.NullSafeEQ if negate else exp.NullSafeNEQ
             return self.expression(klass, this=this, expression=self._parse_bitwise())

-        expression = self._parse_null() or self._parse_boolean()
+        if self._match(TokenType.JSON):
+            kind = self._match_texts(self.IS_JSON_PREDICATE_KIND) and self._prev.text.upper()
+
+            if self._match_text_seq("WITH"):
+                _with = True
+            elif self._match_text_seq("WITHOUT"):
+                _with = False
+            else:
+                _with = None
+
+            unique = self._match(TokenType.UNIQUE)
+            self._match_text_seq("KEYS")
+            expression: t.Optional[exp.Expression] = self.expression(
+                exp.JSON, **{"this": kind, "with": _with, "unique": unique}
+            )
+        else:
+            expression = self._parse_primary() or self._parse_null()
+
         if not expression:
             self._retreat(index)
             return None
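
Note: this is the parsing half of the IS JSON predicate; together with json_sql above it lets the Postgres forms round-trip, as the new Postgres identity tests below show. A minimal sketch (a shortened variant of the tested query, which should round-trip the same way):

>>> import sqlglot
>>> sql = 'SELECT js IS JSON OBJECT AS "object?" FROM t'
>>> sqlglot.transpile(sql, read="postgres", write="postgres")[0] == sql
True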
@@ -5087,10 +5115,13 @@ class Parser(metaclass=_Parser):
         self._match_r_paren(this)
         return self._parse_window(this)

+    def _to_prop_eq(self, expression: exp.Expression, index: int) -> exp.Expression:
+        return expression
+
     def _kv_to_prop_eq(self, expressions: t.List[exp.Expression]) -> t.List[exp.Expression]:
         transformed = []

-        for e in expressions:
+        for index, e in enumerate(expressions):
             if isinstance(e, self.KEY_VALUE_DEFINITIONS):
                 if isinstance(e, exp.Alias):
                     e = self.expression(exp.PropertyEQ, this=e.args.get("alias"), expression=e.this)
@@ -5102,6 +5133,8 @@ class Parser(metaclass=_Parser):

                 if isinstance(e.this, exp.Column):
                     e.this.replace(e.this.this)
+            else:
+                e = self._to_prop_eq(e, index)

             transformed.append(e)
@@ -1,4 +1,5 @@
 import typing as t
+import datetime

 # The generic time format is based on python time.strftime.
 # https://docs.python.org/3/library/time.html#time.strftime
@@ -661,3 +662,26 @@ TIMEZONES = {
         "Zulu",
     )
 }
+
+
+def subsecond_precision(timestamp_literal: str) -> int:
+    """
+    Given an ISO-8601 timestamp literal, eg '2023-01-01 12:13:14.123456+00:00'
+    figure out its subsecond precision so we can construct types like DATETIME(6)
+
+    Note that in practice, this is either 3 or 6 digits (3 = millisecond precision, 6 = microsecond precision)
+     - 6 is the maximum because strftime's '%f' formats to microseconds and almost every database supports microsecond precision in timestamps
+     - Except Presto/Trino which in most cases only supports millisecond precision but will still honour '%f' and format to microseconds (replacing the remaining 3 digits with 0's)
+     - Python prior to 3.11 only supports 0, 3 or 6 digits in a timestamp literal. Any other amounts will throw a 'ValueError: Invalid isoformat string:' error
+    """
+    try:
+        parsed = datetime.datetime.fromisoformat(timestamp_literal)
+        subsecond_digit_count = len(str(parsed.microsecond).rstrip("0"))
+        precision = 0
+        if subsecond_digit_count > 3:
+            precision = 6
+        elif subsecond_digit_count > 0:
+            precision = 3
+        return precision
+    except ValueError:
+        return 0
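
Note: the helper only ever reports 0, 3 or 6, matching the precisions the MySQL DATETIME(n) cast needs; the new test_time.py cases below pin the edge cases, including unparseable input. A minimal sketch:

>>> from sqlglot.time import subsecond_precision
>>> subsecond_precision("2023-01-01 12:13:14.123456+00:00")
6
>>> subsecond_precision("2023-01-01 12:13:14.123+00:00")
3
>>> subsecond_precision("garbage")
0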
@@ -317,10 +317,14 @@ def unnest_to_explode(expression: exp.Expression) -> exp.Expression:
             )

     for join in expression.args.get("joins") or []:
-        unnest = join.this
+        join_expr = join.this
+
+        is_lateral = isinstance(join_expr, exp.Lateral)
+
+        unnest = join_expr.this if is_lateral else join_expr

         if isinstance(unnest, exp.Unnest):
-            alias = unnest.args.get("alias")
+            alias = join_expr.args.get("alias") if is_lateral else unnest.args.get("alias")
             udtf = exp.Posexplode if unnest.args.get("offset") else exp.Explode

             expression.args["joins"].remove(join)
@@ -370,7 +370,7 @@ LANGUAGE js AS
             },
             write={
                 "bigquery": "SELECT SUM(x IGNORE NULLS) AS x",
-                "duckdb": "SELECT SUM(x IGNORE NULLS) AS x",
+                "duckdb": "SELECT SUM(x) AS x",
                 "postgres": "SELECT SUM(x) IGNORE NULLS AS x",
                 "spark": "SELECT SUM(x) IGNORE NULLS AS x",
                 "snowflake": "SELECT SUM(x) IGNORE NULLS AS x",
@@ -405,7 +405,7 @@ LANGUAGE js AS
             "SELECT ARRAY_AGG(DISTINCT x IGNORE NULLS ORDER BY a, b DESC LIMIT 10) AS x",
             write={
                 "bigquery": "SELECT ARRAY_AGG(DISTINCT x IGNORE NULLS ORDER BY a, b DESC LIMIT 10) AS x",
-                "duckdb": "SELECT ARRAY_AGG(DISTINCT x IGNORE NULLS ORDER BY a NULLS FIRST, b DESC LIMIT 10) AS x",
+                "duckdb": "SELECT ARRAY_AGG(DISTINCT x ORDER BY a NULLS FIRST, b DESC LIMIT 10) AS x",
                 "spark": "SELECT COLLECT_LIST(DISTINCT x ORDER BY a, b DESC LIMIT 10) IGNORE NULLS AS x",
             },
         )
@@ -413,7 +413,7 @@ LANGUAGE js AS
             "SELECT ARRAY_AGG(DISTINCT x IGNORE NULLS ORDER BY a, b DESC LIMIT 1, 10) AS x",
             write={
                 "bigquery": "SELECT ARRAY_AGG(DISTINCT x IGNORE NULLS ORDER BY a, b DESC LIMIT 1, 10) AS x",
-                "duckdb": "SELECT ARRAY_AGG(DISTINCT x IGNORE NULLS ORDER BY a NULLS FIRST, b DESC LIMIT 1, 10) AS x",
+                "duckdb": "SELECT ARRAY_AGG(DISTINCT x ORDER BY a NULLS FIRST, b DESC LIMIT 1, 10) AS x",
                 "spark": "SELECT COLLECT_LIST(DISTINCT x ORDER BY a, b DESC LIMIT 1, 10) IGNORE NULLS AS x",
             },
         )
@@ -814,6 +814,7 @@ LANGUAGE js AS
                 "presto": "SHA256(x)",
                 "redshift": "SHA2(x, 256)",
                 "trino": "SHA256(x)",
+                "duckdb": "SHA256(x)",
             },
         )
         self.validate_all(
@@ -492,6 +492,8 @@ class TestClickhouse(Validator):
                 "postgres": "INSERT INTO t (col1, col2) VALUES ('abcd', 1234)",
             },
         )
+        self.validate_identity("SELECT TRIM(TRAILING ')' FROM '( Hello, world! )')")
+        self.validate_identity("SELECT TRIM(LEADING '(' FROM '( Hello, world! )')")

     def test_clickhouse_values(self):
         values = exp.select("*").from_(
@@ -620,6 +622,14 @@ class TestClickhouse(Validator):
         )
         self.assertEqual(create_with_cluster.sql("clickhouse"), "CREATE DATABASE foo ON CLUSTER c")

+        # Transpiled CREATE SCHEMA may have OnCluster property set
+        create_with_cluster = exp.Create(
+            this=db_table_expr,
+            kind="SCHEMA",
+            properties=exp.Properties(expressions=[exp.OnCluster(this=exp.to_identifier("c"))]),
+        )
+        self.assertEqual(create_with_cluster.sql("clickhouse"), "CREATE DATABASE foo ON CLUSTER c")
+
         ctas_with_comment = exp.Create(
             this=exp.table_("foo"),
             kind="TABLE",
@@ -655,6 +655,22 @@ class TestDialect(Validator):
                 "doris": "CAST('2020-01-01' AS DATETIME)",
             },
         )
+        self.validate_all(
+            "TIME_STR_TO_TIME('2020-01-01 12:13:14.123456+00:00')",
+            write={
+                "mysql": "CAST('2020-01-01 12:13:14.123456+00:00' AS DATETIME(6))",
+                "trino": "CAST('2020-01-01 12:13:14.123456+00:00' AS TIMESTAMP(6))",
+                "presto": "CAST('2020-01-01 12:13:14.123456+00:00' AS TIMESTAMP)",
+            },
+        )
+        self.validate_all(
+            "TIME_STR_TO_TIME('2020-01-01 12:13:14.123-08:00', 'America/Los_Angeles')",
+            write={
+                "mysql": "TIMESTAMP('2020-01-01 12:13:14.123-08:00')",
+                "trino": "CAST('2020-01-01 12:13:14.123-08:00' AS TIMESTAMP(3) WITH TIME ZONE)",
+                "presto": "CAST('2020-01-01 12:13:14.123-08:00' AS TIMESTAMP WITH TIME ZONE)",
+            },
+        )
         self.validate_all(
             "TIME_STR_TO_TIME('2020-01-01 12:13:14-08:00', 'America/Los_Angeles')",
             write={
@@ -662,7 +678,7 @@ class TestDialect(Validator):
                 "databricks": "CAST('2020-01-01 12:13:14-08:00' AS TIMESTAMP)",
                 "duckdb": "CAST('2020-01-01 12:13:14-08:00' AS TIMESTAMPTZ)",
                 "tsql": "CAST('2020-01-01 12:13:14-08:00' AS DATETIMEOFFSET) AT TIME ZONE 'UTC'",
-                "mysql": "CAST('2020-01-01 12:13:14-08:00' AS DATETIME)",
+                "mysql": "TIMESTAMP('2020-01-01 12:13:14-08:00')",
                 "postgres": "CAST('2020-01-01 12:13:14-08:00' AS TIMESTAMPTZ)",
                 "redshift": "CAST('2020-01-01 12:13:14-08:00' AS TIMESTAMP WITH TIME ZONE)",
                 "snowflake": "CAST('2020-01-01 12:13:14-08:00' AS TIMESTAMPTZ)",
@@ -683,7 +699,7 @@ class TestDialect(Validator):
                 "databricks": "CAST(col AS TIMESTAMP)",
                 "duckdb": "CAST(col AS TIMESTAMPTZ)",
                 "tsql": "CAST(col AS DATETIMEOFFSET) AT TIME ZONE 'UTC'",
-                "mysql": "CAST(col AS DATETIME)",
+                "mysql": "TIMESTAMP(col)",
                 "postgres": "CAST(col AS TIMESTAMPTZ)",
                 "redshift": "CAST(col AS TIMESTAMP WITH TIME ZONE)",
                 "snowflake": "CAST(col AS TIMESTAMPTZ)",
@@ -722,6 +738,13 @@ class TestDialect(Validator):
                 "doris": "DATE_FORMAT(x, '%Y-%m-%d')",
             },
         )
+        self.validate_all(
+            "TIME_TO_STR(a, '%Y-%m-%d %H:%M:%S.%f')",
+            write={
+                "redshift": "TO_CHAR(a, 'YYYY-MM-DD HH24:MI:SS.US')",
+                "tsql": "FORMAT(a, 'yyyy-MM-dd HH:mm:ss.ffffff')",
+            },
+        )
         self.validate_all(
             "TIME_TO_TIME_STR(x)",
             write={
@@ -231,10 +231,6 @@ class TestDuckDB(Validator):
             },
         )

-        self.validate_identity("INSERT INTO x BY NAME SELECT 1 AS y")
-        self.validate_identity("SELECT 1 AS x UNION ALL BY NAME SELECT 2 AS x")
-        self.validate_identity("SELECT SUM(x) FILTER (x = 1)", "SELECT SUM(x) FILTER(WHERE x = 1)")
-
         # https://github.com/duckdb/duckdb/releases/tag/v0.8.0
         self.assertEqual(
             parse_one("a / b", read="duckdb").assert_is(exp.Div).sql(dialect="duckdb"), "a / b"
@@ -243,6 +239,10 @@ class TestDuckDB(Validator):
             parse_one("a // b", read="duckdb").assert_is(exp.IntDiv).sql(dialect="duckdb"), "a // b"
         )

+        self.validate_identity("INSERT INTO x BY NAME SELECT 1 AS y")
+        self.validate_identity("SELECT 1 AS x UNION ALL BY NAME SELECT 2 AS x")
+        self.validate_identity("SELECT SUM(x) FILTER (x = 1)", "SELECT SUM(x) FILTER(WHERE x = 1)")
+        self.validate_identity("SELECT * FROM GLOB(x)")
         self.validate_identity("SELECT MAP(['key1', 'key2', 'key3'], [10, 20, 30])")
         self.validate_identity("SELECT MAP {'x': 1}")
         self.validate_identity("SELECT (MAP {'x': 1})['x']")
@@ -277,6 +277,15 @@ class TestDuckDB(Validator):
         self.validate_identity("SUMMARIZE tbl").assert_is(exp.Summarize)
         self.validate_identity("SUMMARIZE SELECT * FROM tbl").assert_is(exp.Summarize)
         self.validate_identity("CREATE TABLE tbl_summary AS SELECT * FROM (SUMMARIZE tbl)")
+        self.validate_identity(
+            "SELECT species, island, COUNT(*) FROM t GROUP BY GROUPING SETS (species), GROUPING SETS (island)"
+        )
+        self.validate_identity(
+            "SELECT species, island, COUNT(*) FROM t GROUP BY CUBE (species), CUBE (island)"
+        )
+        self.validate_identity(
+            "SELECT species, island, COUNT(*) FROM t GROUP BY ROLLUP (species), ROLLUP (island)"
+        )
         self.validate_identity(
             "SUMMARIZE TABLE 'https://blobs.duckdb.org/data/Star_Trek-Season_1.csv'"
         ).assert_is(exp.Summarize)
@@ -999,6 +1008,13 @@ class TestDuckDB(Validator):
                 "duckdb": "SELECT * FROM example TABLESAMPLE RESERVOIR (3 ROWS) REPEATABLE (82)",
             },
         )
+        self.validate_all(
+            "SELECT * FROM (SELECT * FROM t) AS t1 TABLESAMPLE (1 ROWS), (SELECT * FROM t) AS t2 TABLESAMPLE (2 ROWS)",
+            write={
+                "duckdb": "SELECT * FROM (SELECT * FROM t) AS t1 TABLESAMPLE RESERVOIR (1 ROWS), (SELECT * FROM t) AS t2 TABLESAMPLE RESERVOIR (2 ROWS)",
+                "spark": "SELECT * FROM (SELECT * FROM t) TABLESAMPLE (1 ROWS) AS t1, (SELECT * FROM t) TABLESAMPLE (2 ROWS) AS t2",
+            },
+        )

     def test_array(self):
         self.validate_identity("ARRAY(SELECT id FROM t)")
@@ -1256,3 +1272,20 @@ class TestDuckDB(Validator):
             read={"bigquery": "SELECT @foo"},
             write={"bigquery": "SELECT @foo", "duckdb": "SELECT $foo"},
         )
+
+    def test_ignore_nulls(self):
+        # Note that DuckDB differentiates window functions (e.g. LEAD, LAG) from aggregate functions (e.g. SUM)
+        from sqlglot.dialects.duckdb import WINDOW_FUNCS_WITH_IGNORE_NULLS
+
+        agg_funcs = (exp.Sum, exp.Max, exp.Min)
+
+        for func_type in WINDOW_FUNCS_WITH_IGNORE_NULLS + agg_funcs:
+            func = func_type(this=exp.to_identifier("col"))
+            ignore_null = exp.IgnoreNulls(this=func)
+            windowed_ignore_null = exp.Window(this=ignore_null)
+
+            if func_type in WINDOW_FUNCS_WITH_IGNORE_NULLS:
+                self.assertIn("IGNORE NULLS", windowed_ignore_null.sql("duckdb"))
+            else:
+                self.assertEqual(ignore_null.sql("duckdb"), func.sql("duckdb"))
+                self.assertNotIn("IGNORE NULLS", windowed_ignore_null.sql("duckdb"))
@@ -1,3 +1,6 @@
+import unittest
+import sys
+
 from sqlglot import expressions as exp
 from sqlglot.dialects.mysql import MySQL
 from tests.dialects.test_dialect import Validator
@@ -637,6 +640,53 @@ class TestMySQL(Validator):
             },
         )

+        # No timezone, make sure DATETIME captures the correct precision
+        self.validate_identity(
+            "SELECT TIME_STR_TO_TIME('2023-01-01 13:14:15.123456+00:00')",
+            write_sql="SELECT CAST('2023-01-01 13:14:15.123456+00:00' AS DATETIME(6))",
+        )
+        self.validate_identity(
+            "SELECT TIME_STR_TO_TIME('2023-01-01 13:14:15.123+00:00')",
+            write_sql="SELECT CAST('2023-01-01 13:14:15.123+00:00' AS DATETIME(3))",
+        )
+        self.validate_identity(
+            "SELECT TIME_STR_TO_TIME('2023-01-01 13:14:15+00:00')",
+            write_sql="SELECT CAST('2023-01-01 13:14:15+00:00' AS DATETIME)",
+        )
+
+        # With timezone, make sure the TIMESTAMP constructor is used
+        # also TIMESTAMP doesnt have the subsecond precision truncation issue that DATETIME does so we dont need to TIMESTAMP(6)
+        self.validate_identity(
+            "SELECT TIME_STR_TO_TIME('2023-01-01 13:14:15-08:00', 'America/Los_Angeles')",
+            write_sql="SELECT TIMESTAMP('2023-01-01 13:14:15-08:00')",
+        )
+        self.validate_identity(
+            "SELECT TIME_STR_TO_TIME('2023-01-01 13:14:15-08:00', 'America/Los_Angeles')",
+            write_sql="SELECT TIMESTAMP('2023-01-01 13:14:15-08:00')",
+        )
+
+    @unittest.skipUnless(
+        sys.version_info >= (3, 11),
+        "Python 3.11 relaxed datetime.fromisoformat() parsing with regards to microseconds",
+    )
+    def test_mysql_time_python311(self):
+        self.validate_identity(
+            "SELECT TIME_STR_TO_TIME('2023-01-01 13:14:15.12345+00:00')",
+            write_sql="SELECT CAST('2023-01-01 13:14:15.12345+00:00' AS DATETIME(6))",
+        )
+        self.validate_identity(
+            "SELECT TIME_STR_TO_TIME('2023-01-01 13:14:15.1234+00:00')",
+            write_sql="SELECT CAST('2023-01-01 13:14:15.1234+00:00' AS DATETIME(6))",
+        )
+        self.validate_identity(
+            "SELECT TIME_STR_TO_TIME('2023-01-01 13:14:15.12+00:00')",
+            write_sql="SELECT CAST('2023-01-01 13:14:15.12+00:00' AS DATETIME(3))",
+        )
+        self.validate_identity(
+            "SELECT TIME_STR_TO_TIME('2023-01-01 13:14:15.1+00:00')",
+            write_sql="SELECT CAST('2023-01-01 13:14:15.1+00:00' AS DATETIME(3))",
+        )
+
     def test_mysql(self):
         self.validate_all(
             "SELECT CONCAT('11', '22')",
@@ -1192,3 +1242,24 @@ COMMENT='客户账户表'"""
                 "mysql": f"DATE_ADD('0000-01-01 00:00:00', INTERVAL (TIMESTAMPDIFF({unit}, '0000-01-01 00:00:00', CAST('2001-02-16 20:38:40' AS DATETIME))) {unit})",
             },
         )
+
+    def test_at_time_zone(self):
+        with self.assertLogs() as cm:
+            # Check AT TIME ZONE doesnt discard the column name and also raises a warning
+            self.validate_identity(
+                "SELECT foo AT TIME ZONE 'UTC'",
+                write_sql="SELECT foo",
+            )
+            assert "AT TIME ZONE is not supported" in cm.output[0]
+
+    def test_json_value(self):
+        json_doc = """'{"item": "shoes", "price": "49.95"}'"""
+        self.validate_identity(f"""SELECT JSON_VALUE({json_doc}, '$.price')""")
+        self.validate_identity(
+            f"""SELECT JSON_VALUE({json_doc}, '$.price' RETURNING DECIMAL(4, 2))"""
+        )
+
+        for on_option in ("NULL", "ERROR", "DEFAULT 1"):
+            self.validate_identity(
+                f"""SELECT JSON_VALUE({json_doc}, '$.price' RETURNING DECIMAL(4, 2) {on_option} ON EMPTY {on_option} ON ERROR) AS price"""
+            )
@@ -251,7 +251,6 @@ class TestOracle(Validator):
             """SELECT * FROM t ORDER BY a ASC NULLS LAST, b ASC NULLS FIRST, c DESC NULLS LAST, d DESC NULLS FIRST""",
             """SELECT * FROM t ORDER BY a ASC, b ASC NULLS FIRST, c DESC NULLS LAST, d DESC""",
         )
-
         self.validate_all(
             "NVL(NULL, 1)",
             write={
@@ -260,6 +259,29 @@ class TestOracle(Validator):
                 "clickhouse": "COALESCE(NULL, 1)",
             },
         )
+        self.validate_all(
+            "LTRIM('Hello World', 'H')",
+            write={
+                "": "LTRIM('Hello World', 'H')",
+                "oracle": "LTRIM('Hello World', 'H')",
+                "clickhouse": "TRIM(LEADING 'H' FROM 'Hello World')",
+            },
+        )
+        self.validate_all(
+            "RTRIM('Hello World', 'd')",
+            write={
+                "": "RTRIM('Hello World', 'd')",
+                "oracle": "RTRIM('Hello World', 'd')",
+                "clickhouse": "TRIM(TRAILING 'd' FROM 'Hello World')",
+            },
+        )
+        self.validate_all(
+            "TRIM(BOTH 'h' FROM 'Hello World')",
+            write={
+                "oracle": "TRIM(BOTH 'h' FROM 'Hello World')",
+                "clickhouse": "TRIM(BOTH 'h' FROM 'Hello World')",
+            },
+        )

     def test_join_marker(self):
         self.validate_identity("SELECT e1.x, e2.x FROM e e1, e e2 WHERE e1.y (+) = e2.y")
@@ -347,7 +369,7 @@ FROM warehouses, XMLTABLE(
 FROM XMLTABLE(
   'ROWSET/ROW'
   PASSING
-    dbms_xmlgen.GETXMLTYPE('SELECT table_name, column_name, data_default FROM user_tab_columns')
+    dbms_xmlgen.getxmltype('SELECT table_name, column_name, data_default FROM user_tab_columns')
   COLUMNS
     table_name VARCHAR2(128) PATH '*[1]',
     column_name VARCHAR2(128) PATH '*[2]',
@@ -675,13 +675,6 @@ class TestPostgres(Validator):
                 "presto": "RTRIM(' XXX ')",
             },
         )
-        self.validate_all(
-            """'{"a":1,"b":2}'::json->'b'""",
-            write={
-                "postgres": """CAST('{"a":1,"b":2}' AS JSON) -> 'b'""",
-                "redshift": """JSON_EXTRACT_PATH_TEXT('{"a":1,"b":2}', 'b')""",
-            },
-        )
         self.validate_all(
             "TRIM(BOTH 'as' FROM 'as string as')",
             write={
@@ -689,6 +682,21 @@ class TestPostgres(Validator):
                 "spark": "TRIM(BOTH 'as' FROM 'as string as')",
             },
         )
+        self.validate_identity(
+            """SELECT TRIM(LEADING ' XXX ' COLLATE "de_DE")""",
+            """SELECT LTRIM(' XXX ' COLLATE "de_DE")""",
+        )
+        self.validate_identity(
+            """SELECT TRIM(TRAILING ' XXX ' COLLATE "de_DE")""",
+            """SELECT RTRIM(' XXX ' COLLATE "de_DE")""",
+        )
+        self.validate_all(
+            """'{"a":1,"b":2}'::json->'b'""",
+            write={
+                "postgres": """CAST('{"a":1,"b":2}' AS JSON) -> 'b'""",
+                "redshift": """JSON_EXTRACT_PATH_TEXT('{"a":1,"b":2}', 'b')""",
+            },
+        )
         self.validate_all(
             """merge into x as x using (select id) as y on a = b WHEN matched then update set X."A" = y.b""",
             write={
@@ -780,6 +788,13 @@ class TestPostgres(Validator):
             },
         )

+        self.validate_identity(
+            'SELECT js, js IS JSON AS "json?", js IS JSON VALUE AS "scalar?", js IS JSON SCALAR AS "scalar?", js IS JSON OBJECT AS "object?", js IS JSON ARRAY AS "array?" FROM t'
+        )
+        self.validate_identity(
+            'SELECT js, js IS JSON ARRAY WITH UNIQUE KEYS AS "array w. UK?", js IS JSON ARRAY WITHOUT UNIQUE KEYS AS "array w/o UK?", js IS JSON ARRAY UNIQUE KEYS AS "array w UK 2?" FROM t'
+        )
+
     def test_ddl(self):
         # Checks that user-defined types are parsed into DataType instead of Identifier
         self.parse_one("CREATE TABLE t (a udt)").this.expressions[0].args["kind"].assert_is(
@@ -634,6 +634,7 @@ class TestPresto(Validator):
             },
         )

+        self.validate_identity("SELECT a FROM t GROUP BY a, ROLLUP (b), ROLLUP (c), ROLLUP (d)")
         self.validate_identity("SELECT a FROM test TABLESAMPLE BERNOULLI (50)")
         self.validate_identity("SELECT a FROM test TABLESAMPLE SYSTEM (75)")
         self.validate_identity("string_agg(x, ',')", "ARRAY_JOIN(ARRAY_AGG(x), ',')")
@@ -715,9 +716,6 @@ class TestPresto(Validator):
         )
         self.validate_all(
             "SELECT ROW(1, 2)",
-            read={
-                "spark": "SELECT STRUCT(1, 2)",
-            },
             write={
                 "presto": "SELECT ROW(1, 2)",
                 "spark": "SELECT STRUCT(1, 2)",
@@ -835,12 +833,6 @@ class TestPresto(Validator):
                 "trino": "ARRAY_AGG(x ORDER BY y DESC)",
             },
         )
-        self.validate_all(
-            "SELECT a FROM t GROUP BY a, ROLLUP(b), ROLLUP(c), ROLLUP(d)",
-            write={
-                "presto": "SELECT a FROM t GROUP BY a, ROLLUP (b, c, d)",
-            },
-        )
         self.validate_all(
             'SELECT a."b" FROM "foo"',
             write={
@@ -598,12 +598,12 @@ WHERE
         self.validate_all(
             "DIV0(foo, bar)",
             write={
-                "snowflake": "IFF(bar = 0, 0, foo / bar)",
-                "sqlite": "IIF(bar = 0, 0, CAST(foo AS REAL) / bar)",
-                "presto": "IF(bar = 0, 0, CAST(foo AS DOUBLE) / bar)",
-                "spark": "IF(bar = 0, 0, foo / bar)",
-                "hive": "IF(bar = 0, 0, foo / bar)",
-                "duckdb": "CASE WHEN bar = 0 THEN 0 ELSE foo / bar END",
+                "snowflake": "IFF(bar = 0 AND NOT foo IS NULL, 0, foo / bar)",
+                "sqlite": "IIF(bar = 0 AND NOT foo IS NULL, 0, CAST(foo AS REAL) / bar)",
+                "presto": "IF(bar = 0 AND NOT foo IS NULL, 0, CAST(foo AS DOUBLE) / bar)",
+                "spark": "IF(bar = 0 AND NOT foo IS NULL, 0, foo / bar)",
+                "hive": "IF(bar = 0 AND NOT foo IS NULL, 0, foo / bar)",
+                "duckdb": "CASE WHEN bar = 0 AND NOT foo IS NULL THEN 0 ELSE foo / bar END",
             },
         )
         self.validate_all(
@@ -10,6 +10,7 @@ class TestSpark(Validator):
     dialect = "spark"

     def test_ddl(self):
+        self.validate_identity("INSERT OVERWRITE TABLE db1.tb1 TABLE db2.tb2")
         self.validate_identity("CREATE TABLE foo AS WITH t AS (SELECT 1 AS col) SELECT col FROM t")
         self.validate_identity("CREATE TEMPORARY VIEW test AS SELECT 1")
         self.validate_identity("CREATE TABLE foo (col VARCHAR(50))")
@@ -484,7 +485,7 @@ TBLPROPERTIES (
         )
         self.validate_all(
             "SELECT CAST(STRUCT('fooo') AS STRUCT<a: VARCHAR(2)>)",
-            write={"spark": "SELECT CAST(STRUCT('fooo') AS STRUCT<a: STRING>)"},
+            write={"spark": "SELECT CAST(STRUCT('fooo' AS col1) AS STRUCT<a: STRING>)"},
         )
         self.validate_all(
             "SELECT CAST(123456 AS VARCHAR(3))",
@@ -710,6 +711,29 @@ TBLPROPERTIES (
         )
         self.validate_identity("DESCRIBE schema.test PARTITION(ds = '2024-01-01')")

+        self.validate_all(
+            "SELECT ANY_VALUE(col, true), FIRST(col, true), FIRST_VALUE(col, true) OVER ()",
+            write={
+                "duckdb": "SELECT ANY_VALUE(col), FIRST(col), FIRST_VALUE(col IGNORE NULLS) OVER ()"
+            },
+        )
+        self.validate_all(
+            "SELECT STRUCT(1, 2)",
+            write={
+                "spark": "SELECT STRUCT(1 AS col1, 2 AS col2)",
+                "presto": "SELECT CAST(ROW(1, 2) AS ROW(col1 INTEGER, col2 INTEGER))",
+                "duckdb": "SELECT {'col1': 1, 'col2': 2}",
+            },
+        )
+        self.validate_all(
+            "SELECT STRUCT(x, 1, y AS col3, STRUCT(5)) FROM t",
+            write={
+                "spark": "SELECT STRUCT(x AS x, 1 AS col2, y AS col3, STRUCT(5 AS col1) AS col4) FROM t",
+                "duckdb": "SELECT {'x': x, 'col2': 1, 'col3': y, 'col4': {'col1': 5}} FROM t",
+            },
+        )
+
     def test_bool_or(self):
         self.validate_all(
             "SELECT a, LOGICAL_OR(b) FROM table GROUP BY a",
@@ -91,6 +91,10 @@ class TestSQLite(Validator):
             read={"snowflake": "LEAST(x, y, z)"},
             write={"snowflake": "LEAST(x, y, z)"},
         )
+        self.validate_identity(
+            "SELECT * FROM station WHERE city IS NOT ''",
+            "SELECT * FROM station WHERE NOT city IS ''",
+        )

     def test_strftime(self):
         self.validate_identity("SELECT STRFTIME('%Y/%m/%d', 'now')")
@@ -7,6 +7,7 @@ class TestStarrocks(Validator):
     def test_identity(self):
         self.validate_identity("SELECT CAST(`a`.`b` AS INT) FROM foo")
         self.validate_identity("SELECT APPROX_COUNT_DISTINCT(a) FROM x")
+        self.validate_identity("SELECT [1, 2, 3]")

     def test_time(self):
         self.validate_identity("TIMESTAMP('2022-01-01')")
@@ -28,3 +29,24 @@ class TestStarrocks(Validator):
                 "mysql": "SELECT REGEXP_LIKE(abc, '%foo%')",
             },
         )
+
+    def test_unnest(self):
+        self.validate_identity(
+            "SELECT student, score, t.unnest FROM tests CROSS JOIN LATERAL UNNEST(scores) AS t",
+            "SELECT student, score, t.unnest FROM tests CROSS JOIN LATERAL UNNEST(scores) AS t(unnest)",
+        )
+
+        lateral_explode_sqls = [
+            "SELECT id, t.col FROM tbl, UNNEST(scores) AS t(col)",
+            "SELECT id, t.col FROM tbl CROSS JOIN LATERAL UNNEST(scores) AS t(col)",
+        ]
+
+        for sql in lateral_explode_sqls:
+            with self.subTest(f"Testing Starrocks roundtrip & transpilation of: {sql}"):
+                self.validate_all(
+                    sql,
+                    write={
+                        "starrocks": sql,
+                        "spark": "SELECT id, t.col FROM tbl LATERAL VIEW EXPLODE(scores) t AS col",
+                    },
+                )
@@ -31,6 +31,9 @@ class TestTSQL(Validator):
         self.validate_identity("1 AND true", "1 <> 0 AND (1 = 1)")
         self.validate_identity("CAST(x AS int) OR y", "CAST(x AS INTEGER) <> 0 OR y <> 0")
         self.validate_identity("TRUNCATE TABLE t1 WITH (PARTITIONS(1, 2 TO 5, 10 TO 20, 84))")
+        self.validate_identity(
+            "SELECT TOP 10 s.RECORDID, n.c.value('(/*:FORM_ROOT/*:SOME_TAG)[1]', 'float') AS SOME_TAG_VALUE FROM source_table.dbo.source_data AS s(nolock) CROSS APPLY FormContent.nodes('/*:FORM_ROOT') AS N(C)"
+        )
         self.validate_identity(
             "CREATE CLUSTERED INDEX [IX_OfficeTagDetail_TagDetailID] ON [dbo].[OfficeTagDetail]([TagDetailID] ASC)"
         )
@@ -412,6 +415,12 @@ class TestTSQL(Validator):
             },
         )

+        # Check that TRUE and FALSE dont get expanded to (1=1) or (1=0) when used in a VALUES expression
+        self.validate_identity(
+            "SELECT val FROM (VALUES ((TRUE), (FALSE), (NULL))) AS t(val)",
+            write_sql="SELECT val FROM (VALUES ((1), (0), (NULL))) AS t(val)",
+        )
+
     def test_option(self):
         possible_options = [
             "HASH GROUP",
@@ -1577,8 +1586,8 @@ WHERE
         self.validate_all(
             "SELECT t.x, y.z FROM x OUTER APPLY a.b.tvfTest(t.x)y(z)",
             write={
-                "spark": "SELECT t.x, y.z FROM x LEFT JOIN LATERAL a.b.TVFTEST(t.x) AS y(z)",
-                "tsql": "SELECT t.x, y.z FROM x OUTER APPLY a.b.TVFTEST(t.x) AS y(z)",
+                "spark": "SELECT t.x, y.z FROM x LEFT JOIN LATERAL a.b.tvfTest(t.x) AS y(z)",
+                "tsql": "SELECT t.x, y.z FROM x OUTER APPLY a.b.tvfTest(t.x) AS y(z)",
             },
         )
@@ -632,6 +632,7 @@ SELECT x.a + 1 AS i, x.a + 1 + 1 AS j, x.a + 1 + 1 + 1 AS k FROM x AS x;
 # title: noop - reference comes before alias
 # execute: false
+# validate_qualify_columns: false
 SELECT i + 1 AS j, x.a + 1 AS i FROM x;
 SELECT i + 1 AS j, x.a + 1 AS i FROM x AS x;
@@ -704,3 +705,6 @@ SELECT _q_0.a AS a, _q_0.b AS b, _q_1.b AS b, _q_1.c AS c FROM ((SELECT x.a AS a
 SELECT b FROM ((SELECT a FROM x) INNER JOIN y ON a = b);
 SELECT y.b AS b FROM ((SELECT x.a AS a FROM x AS x) AS _q_0 INNER JOIN y AS y ON _q_0.a = y.b);
+
+SELECT a, c FROM x TABLESAMPLE SYSTEM (10 ROWS) CROSS JOIN y TABLESAMPLE SYSTEM (10 ROWS);
+SELECT x.a AS a, y.c AS c FROM x AS x TABLESAMPLE SYSTEM (10 ROWS) CROSS JOIN y AS y TABLESAMPLE SYSTEM (10 ROWS);
@@ -180,3 +180,7 @@ SELECT a.id, a_2.id FROM cat.db1.a AS a JOIN c.db2.a AS a_2 ON a.id = a_2.id;
 COPY INTO (SELECT * FROM x) TO 'data' WITH (FORMAT 'CSV');
 COPY INTO (SELECT * FROM c.db.x AS x) TO 'data' WITH (FORMAT 'CSV');
+
+# title: tablesample
+SELECT 1 FROM x TABLESAMPLE SYSTEM (10 PERCENT) CROSS JOIN y TABLESAMPLE SYSTEM (10 PERCENT);
+SELECT 1 FROM c.db.x AS x TABLESAMPLE SYSTEM (10 PERCENT) CROSS JOIN c.db.y AS y TABLESAMPLE SYSTEM (10 PERCENT);
@@ -85,6 +85,27 @@ NULL;
 NULL = NULL;
 NULL;

+1 AND 0;
+FALSE;
+
+0 AND 1;
+FALSE;
+
+0 OR 1;
+TRUE;
+
+0 OR NULL;
+NULL;
+
+NULL OR 0;
+NULL;
+
+0 AND NULL;
+FALSE;
+
+NULL AND 0;
+FALSE;
+
 -- Can't optimize this because different engines do different things
 -- mysql converts to 0 and 1 but tsql does true and false
 NULL <=> NULL;
@@ -852,10 +852,10 @@ SELECT
   SUM("x"."profit") AS "profit"
 FROM "x" AS "x"
 GROUP BY
-ROLLUP (
-  "x"."channel",
-  "x"."id"
-)
+  ROLLUP (
+    "x"."channel",
+    "x"."id"
+  )
 ORDER BY
   "channel",
   "id"
@@ -2294,12 +2294,12 @@ SELECT
   SUM("y"."number_sales") AS "_col_5"
 FROM "y" AS "y"
 GROUP BY
-ROLLUP (
-  "y"."channel",
-  "y"."i_brand_id",
-  "y"."i_class_id",
-  "y"."i_category_id"
-)
+  ROLLUP (
+    "y"."channel",
+    "y"."i_brand_id",
+    "y"."i_class_id",
+    "y"."i_category_id"
+  )
 ORDER BY
   "channel",
   "i_brand_id",
@@ -2608,12 +2608,12 @@ JOIN "customer_address" AS "customer_address"
   ON "customer"."c_current_addr_sk" = "customer_address"."ca_address_sk"
   AND "customer_address"."ca_state" IN ('KS', 'IA', 'AL', 'UT', 'VA', 'NC', 'TX')
 GROUP BY
-ROLLUP (
-  "item"."i_item_id",
-  "customer_address"."ca_country",
-  "customer_address"."ca_state",
-  "customer_address"."ca_county"
-)
+  ROLLUP (
+    "item"."i_item_id",
+    "customer_address"."ca_country",
+    "customer_address"."ca_state",
+    "customer_address"."ca_county"
+  )
 ORDER BY
   "ca_country",
   "ca_state",
@@ -2876,12 +2876,12 @@ JOIN "item" AS "item"
 JOIN "warehouse" AS "warehouse"
   ON "inventory"."inv_warehouse_sk" = "warehouse"."w_warehouse_sk"
 GROUP BY
-ROLLUP (
-  "item"."i_product_name",
-  "item"."i_brand",
-  "item"."i_class",
-  "item"."i_category"
-)
+  ROLLUP (
+    "item"."i_product_name",
+    "item"."i_brand",
+    "item"."i_class",
+    "item"."i_category"
+  )
 ORDER BY
   "qoh",
   "i_product_name",
@@ -3347,10 +3347,10 @@ JOIN "store" AS "store"
   ON "store"."s_state" IN ('TN', 'TN', 'TN', 'TN', 'TN', 'TN')
   AND "store"."s_store_sk" = "store_sales"."ss_store_sk"
 GROUP BY
-ROLLUP (
-  "item"."i_item_id",
-  "store"."s_state"
-)
+  ROLLUP (
+    "item"."i_item_id",
+    "store"."s_state"
+  )
 ORDER BY
   "i_item_id",
   "s_state"
@@ -4463,10 +4463,10 @@ JOIN "store" AS "store"
   ON "store"."s_state" IN ('TN', 'TN', 'TN', 'TN', 'TN', 'TN', 'TN', 'TN')
   AND "store"."s_store_sk" = "store_sales"."ss_store_sk"
 GROUP BY
-ROLLUP (
-  "item"."i_category",
-  "item"."i_class"
-)
+  ROLLUP (
+    "item"."i_category",
+    "item"."i_class"
+  )
 ORDER BY
   "lochierarchy" DESC,
   CASE WHEN "lochierarchy" = 0 THEN "i_category" END,
@@ -9396,10 +9396,10 @@ LEFT JOIN "_u_0" AS "_u_0"
 WHERE
   NOT "_u_0"."s_state" IS NULL
 GROUP BY
-ROLLUP (
-  "store"."s_state",
-  "store"."s_county"
-)
+  ROLLUP (
+    "store"."s_state",
+    "store"."s_county"
+  )
 ORDER BY
   "lochierarchy" DESC,
   CASE WHEN "lochierarchy" = 0 THEN "s_state" END,
@@ -10433,10 +10433,10 @@ SELECT
   SUM("x"."profit") AS "profit"
 FROM "x" AS "x"
 GROUP BY
-ROLLUP (
-  "x"."channel",
-  "x"."id"
-)
+  ROLLUP (
+    "x"."channel",
+    "x"."id"
+  )
 ORDER BY
   "channel",
   "id"
@@ -10937,10 +10937,10 @@ SELECT
   SUM("x"."profit") AS "profit"
 FROM "x" AS "x"
 GROUP BY
-ROLLUP (
-  "x"."channel",
-  "x"."id"
-)
+  ROLLUP (
+    "x"."channel",
+    "x"."id"
+  )
 ORDER BY
   "channel",
   "id"
@@ -11539,10 +11539,10 @@ JOIN "date_dim" AS "d1"
 JOIN "item" AS "item"
   ON "item"."i_item_sk" = "web_sales"."ws_item_sk"
 GROUP BY
-ROLLUP (
-  "item"."i_category",
-  "item"."i_class"
-)
+  ROLLUP (
+    "item"."i_category",
+    "item"."i_class"
+  )
 ORDER BY
   "lochierarchy" DESC,
   CASE WHEN "lochierarchy" = 0 THEN "i_category" END,
@@ -1,4 +1,3 @@
-import time
 import unittest

 from sqlglot import exp, parse_one
@@ -46,8 +45,5 @@ class TestGenerator(unittest.TestCase):
         assert parse_one("X as 1").sql(identify="safe") == 'X AS "1"'

     def test_generate_nested_binary(self):
-        sql = "SELECT 'foo'" + (" || 'foo'" * 500)
-
-        now = time.time()
-        self.assertEqual(parse_one(sql).sql(), sql)
-        self.assertLessEqual(time.time() - now, 0.1)
+        sql = "SELECT 'foo'" + (" || 'foo'" * 1000)
+        self.assertEqual(parse_one(sql).sql(copy=False), sql)
@@ -27,11 +27,11 @@ def parse_and_optimize(func, sql, read_dialect, **kwargs):
     return func(parse_one(sql, read=read_dialect), **kwargs)


-def qualify_columns(expression, **kwargs):
+def qualify_columns(expression, validate_qualify_columns=True, **kwargs):
     expression = optimizer.qualify.qualify(
         expression,
         infer_schema=True,
-        validate_qualify_columns=False,
+        validate_qualify_columns=validate_qualify_columns,
         identify=False,
         **kwargs,
     )
@@ -135,11 +135,17 @@ class TestOptimizer(unittest.TestCase):
                     continue

                 dialect = meta.get("dialect")
                 leave_tables_isolated = meta.get("leave_tables_isolated")
+                validate_qualify_columns = meta.get("validate_qualify_columns")

                 func_kwargs = {**kwargs}
                 if leave_tables_isolated is not None:
                     func_kwargs["leave_tables_isolated"] = string_to_bool(leave_tables_isolated)
+                if validate_qualify_columns is not None:
+                    func_kwargs["validate_qualify_columns"] = string_to_bool(
+                        validate_qualify_columns
+                    )

                 if set_dialect and dialect:
                     func_kwargs["dialect"] = dialect
@@ -1,6 +1,7 @@
 import unittest
+import sys

-from sqlglot.time import format_time
+from sqlglot.time import format_time, subsecond_precision


 class TestTime(unittest.TestCase):
@@ -12,3 +13,22 @@ class TestTime(unittest.TestCase):
         self.assertEqual(format_time("aa", mapping), "c")
         self.assertEqual(format_time("aaada", mapping), "cbdb")
         self.assertEqual(format_time("da", mapping), "db")
+
+    def test_subsecond_precision(self):
+        self.assertEqual(6, subsecond_precision("2023-01-01 12:13:14.123456+00:00"))
+        self.assertEqual(3, subsecond_precision("2023-01-01 12:13:14.123+00:00"))
+        self.assertEqual(0, subsecond_precision("2023-01-01 12:13:14+00:00"))
+        self.assertEqual(0, subsecond_precision("2023-01-01 12:13:14"))
+        self.assertEqual(0, subsecond_precision("garbage"))
+
+    @unittest.skipUnless(
+        sys.version_info >= (3, 11),
+        "Python 3.11 relaxed datetime.fromisoformat() parsing with regards to microseconds",
+    )
+    def test_subsecond_precision_python311(self):
+        # ref: https://docs.python.org/3/whatsnew/3.11.html#datetime
+        self.assertEqual(6, subsecond_precision("2023-01-01 12:13:14.123456789+00:00"))
+        self.assertEqual(6, subsecond_precision("2023-01-01 12:13:14.12345+00:00"))
+        self.assertEqual(6, subsecond_precision("2023-01-01 12:13:14.1234+00:00"))
+        self.assertEqual(3, subsecond_precision("2023-01-01 12:13:14.12+00:00"))
+        self.assertEqual(3, subsecond_precision("2023-01-01 12:13:14.1+00:00"))