
Merging upstream version 20.4.0.

Signed-off-by: Daniel Baumann <daniel@debian.org>
Daniel Baumann 2025-02-13 21:18:34 +01:00
parent 1194ddb03c
commit e4ef535ed9
Signed by: daniel
GPG key ID: FBB4F0E80A80222F
20 changed files with 562 additions and 52 deletions

.github/workflows/python-publish.yml

```diff
@@ -8,9 +8,6 @@ on:
 permissions:
   contents: read
 
-env:
-  python_interpreters: 3.7 3.8 3.9 3.10 3.11
-
 jobs:
   build-rs:
     strategy:
@@ -29,6 +26,9 @@ jobs:
           - os: windows
             target: i686
             python-architecture: x86
+        exclude:
+          - os: windows
+            target: aarch64
     runs-on: ${{ (matrix.os == 'linux' && 'ubuntu') || matrix.os }}-latest
     steps:
       - uses: actions/checkout@v3
@@ -37,36 +37,34 @@ jobs:
           python-version: '3.10'
           architecture: ${{ matrix.python-architecture || 'x64' }}
       - name: Build wheels
-        working-directory: ./sqlglotrs
         uses: PyO3/maturin-action@v1
         with:
           target: ${{ matrix.target }}
-          args: --release --out dist --interpreter $python_interpreters
+          args: --release --out dist --interpreter 3.7 3.8 3.9 3.10 3.11 3.12
           sccache: 'true'
           manylinux: auto
-      - name: Upload wheels
-        working-directory: ./sqlglotrs
+          working-directory: ./sqlglotrs
+      - name: Upload wheels
         uses: actions/upload-artifact@v3
         with:
           name: wheels
-          path: dist
+          path: sqlglotrs/dist
 
   sdist-rs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
       - name: Build sdist
-        working-directory: ./sqlglotrs
         uses: PyO3/maturin-action@v1
         with:
           command: sdist
           args: --out dist
-      - name: Upload sdist
-        working-directory: ./sqlglotrs
+          working-directory: ./sqlglotrs
+      - name: Upload sdist
        uses: actions/upload-artifact@v3
         with:
           name: wheels
-          path: dist
+          path: sqlglotrs/dist
 
   deploy-rs:
     runs-on: ubuntu-latest
@@ -76,7 +74,6 @@ jobs:
         with:
           name: wheels
       - name: Publish to PyPI
-        working-directory: ./sqlglotrs
         uses: PyO3/maturin-action@v1
         env:
           MATURIN_PYPI_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
```

README.md

````diff
@@ -8,7 +8,7 @@ You can easily [customize](#custom-dialects) the parser, [analyze](#metadata) qu
 Syntax [errors](#parser-errors) are highlighted and dialect incompatibilities can warn or raise depending on configurations. However, it should be noted that SQL validation is not SQLGlot's goal, so some syntax errors may go unnoticed.
 
-Learn more about the SQLGlot API in the [documentation](https://sqlglot.com/).
+Learn more about SQLGlot in the API [documentation](https://sqlglot.com/) and the expression tree [primer](https://github.com/tobymao/sqlglot/blob/main/posts/ast_primer.md).
 
 Contributions are very welcome in SQLGlot; read the [contribution guide](https://github.com/tobymao/sqlglot/blob/main/CONTRIBUTING.md) to get started!
@@ -176,6 +176,8 @@ for table in parse_one("SELECT * FROM x JOIN y JOIN z").find_all(exp.Table):
     print(table.name)
 ```
 
+Read the [ast primer](https://github.com/tobymao/sqlglot/blob/main/posts/ast_primer.md) to learn more about SQLGlot's internals.
+
 ### Parser Errors
 
 When the parser detects an error in the syntax, it raises a ParseError:
````

posts/ast_primer.md (new file, +405 lines)
# A Primer on SQLGlot's Abstract Syntax Tree
SQLGlot is a powerful tool for analyzing and transforming SQL, but the learning curve can be intimidating.
This post is intended to familiarize newbies with SQLGlot's abstract syntax trees, how to traverse them, and how to mutate them.
## The tree
SQLGlot parses SQL into an abstract syntax tree (AST).
```python
from sqlglot import parse_one
ast = parse_one("SELECT a FROM (SELECT a FROM x) AS x")
```
An AST is a data structure that represents a SQL statement. The best way to glean the structure of a particular AST is Python's built-in `repr` function:
```python
repr(ast)
# (SELECT expressions:
# (COLUMN this:
# (IDENTIFIER this: a, quoted: False)), from:
# (FROM this:
# (SUBQUERY this:
# (SELECT expressions:
# (COLUMN this:
# (IDENTIFIER this: a, quoted: False)), from:
# (FROM this:
# (TABLE this:
# (IDENTIFIER this: x, quoted: False)))), alias:
# (TABLEALIAS this:
# (IDENTIFIER this: x, quoted: False)))))
```
This is a textual representation of the internal data structure. Here's a breakdown of some of its components:
```
Expression type child key
| /
(SELECT expressions:
(COLUMN this: ---------------------------------- COLUMN is a child node of SELECT
(IDENTIFIER this: a, quoted: False)), from: -- "from:" is another child key of SELECT
(FROM this: ------------------------------------- FROM is also a child node of SELECT
...
```
## Nodes of the tree
The nodes in this tree are instances of `sqlglot.Expression`. Nodes reference their children in `args` and their parent in `parent`:
```python
ast.args
# {
# "expressions": [(COLUMN this: ...)],
# "from": (FROM this: ...),
# ...
# }
ast.args["expressions"][0]
# (COLUMN this: ...)
ast.args["expressions"][0].args["this"]
# (IDENTIFIER this: ...)
ast.args["from"]
# (FROM this: ...)
assert ast.args["expressions"][0].args["this"].parent.parent is ast
```
Children can either be:
1. An Expression instance
2. A list of Expression instances
3. Another Python object, such as str or bool. This will always be a leaf node in the tree.
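Here's a quick sketch showing all three kinds of children in one tree:
```python
select = parse_one("SELECT a, b FROM x")

select.args["expressions"]  # 1. a list of Expression instances (the projections)
select.args["from"]         # 2. a single Expression instance (the FROM clause)

ident = select.args["expressions"][0].args["this"]  # the Identifier for column `a`
ident.args["this"]    # 3. 'a' -- a plain str leaf
ident.args["quoted"]  # 3. False -- a plain bool leaf
```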
Navigating this tree requires an understanding of the different Expression types. The best way to browse Expression types is directly in the code at [expressions.py](../sqlglot/expressions.py). Let's look at a simplified version of one Expression type:
```python
class Column(Expression):
arg_types = {
"this": True,
"table": False,
...
}
```
`Column` subclasses `Expression`.
`arg_types` is a class attribute that specifies the possible children. The `args` keys of an Expression instance correspond to the `arg_types` keys of its class. The values of the `arg_types` dict are `True` if the key is required.
There are some common `arg_types` keys:
- "this": This is typically used for the primary child. In `Column`, "this" is the identifier for the column's name.
- "expression": This is typically used for the secondary child
- "expressions": This is typically used for a primary list of children
There aren't strict rules for when these keys are used, but they help with some of the convenience methods available on all Expression types:
- `Expression.this`: shorthand for `self.args.get("this")`
- `Expression.expression`: similarly, shorthand for the expression arg
- `Expression.expressions`: similarly, shorthand for the expressions list arg
- `Expression.name`: text name for whatever `this` is
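For example, on a column with a table qualifier (a small sketch):
```python
col = parse_one("SELECT tbl.a FROM tbl").args["expressions"][0]

col.this               # (IDENTIFIER this: a, quoted: False)
col.name               # 'a'
col.args.get("table")  # (IDENTIFIER this: tbl, quoted: False)
```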
`arg_types` doesn't specify the possible Expression types of children. This can be a challenge when you are writing code to traverse a particular AST and you don't know what to expect. A common trick is to parse an example query and print out the `repr`.
You can traverse an AST using just args, but there are some higher-order functions for programmatic traversal.
> [!NOTE]
> SQLGlot can parse and generate SQL for many different dialects. However, there is only a single set of Expression types for all dialects. We like to say that the AST can represent the _superset_ of all dialects.
>
> Sometimes, SQLGlot will parse SQL from a dialect into Expression types you didn't expect:
>
> ```python
> ast = parse_one("SELECT NOW()", dialect="postgres")
>
> repr(ast)
> # (SELECT expressions:
> # (CURRENTTIMESTAMP )) -- CURRENTTIMESTAMP, not NOW()
> ```
>
> This is because SQLGlot tries to converge dialects on a standard AST. This means you can often write one piece of code that handles multiple dialects.
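>
> Because the ASTs converge, cross-dialect transpilation is often a one-liner. A quick sketch:
>
> ```python
> from sqlglot import transpile
>
> transpile("SELECT NOW()", read="postgres", write="duckdb")
> # ['SELECT CURRENT_TIMESTAMP']
> ```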
## Traversing the AST
Analyzing a SQL statement requires traversing this data structure. There are a few ways to do this:
### Args
If you know the structure of an AST, you can use `Expression.args` just like above. However, this can be very limited if you're dealing with arbitrary SQL.
### Walk methods
The walk methods of `Expression` (`find`, `find_all`, and `walk`) are the simplest way to analyze an AST.
`find` and `find_all` search an AST for specific Expression types:
```python
from sqlglot import exp
ast.find(exp.Select)
# (SELECT expressions:
# (COLUMN this:
# (IDENTIFIER this: a, quoted: False)), from:
# ...
list(ast.find_all(exp.Select))
# [(SELECT expressions:
# (COLUMN this:
# (IDENTIFIER this: a, quoted: False)), from:
# ...
```
Both `find` and `find_all` are built on `walk`, which gives finer grained control:
```python
for (
node, # the current AST node
parent, # parent of the current AST node (this will be None for the root node)
key # The 'key' of this node in its parent's args
) in ast.walk():
...
```
> [!WARNING]
> Here's a common pitfall of the walk methods:
> ```python
> ast.find_all(exp.Table)
> ```
> At first glance, this seems like a great way to find all tables in a query. However, `Table` instances are not always tables in your database. Here's an example where this fails:
> ```python
> ast = parse_one("""
> WITH x AS (
> SELECT a FROM y
> )
> SELECT a FROM x
> """)
>
> # This is NOT a good way to find all tables in the query!
> for table in ast.find_all(exp.Table):
> print(table)
>
> # x -- this is a common table expression, NOT an actual table
> # y
> ```
>
> For programmatic traversal of ASTs that requires deeper semantic understanding of a query, you need "scope".
### Scope
Scope is a traversal module that handles more semantic context of SQL queries. It's harder to use than the `walk` methods but is more powerful:
```python
from sqlglot.optimizer.scope import build_scope
ast = parse_one("""
WITH x AS (
SELECT a FROM y
)
SELECT a FROM x
""")
root = build_scope(ast)
for scope in root.traverse():
print(scope)
# Scope<SELECT a FROM y>
# Scope<WITH x AS (SELECT a FROM y) SELECT a FROM x>
```
Let's use this for a better way to find all tables in a query:
```python
tables = [
source
# Traverse the Scope tree, not the AST
for scope in root.traverse()
# `selected_sources` contains sources that have been selected in this scope, e.g. in a FROM or JOIN clause.
# `alias` is the name of this source in this particular scope.
# `node` is the AST node instance
# if the selected source is a subquery (including common table expressions),
# then `source` will be the Scope instance for that subquery.
# if the selected source is a table,
# then `source` will be a Table instance.
for alias, (node, source) in scope.selected_sources.items()
if isinstance(source, exp.Table)
]
for table in tables:
print(table)
# y -- Success!
```
`build_scope` returns an instance of the `Scope` class. `Scope` has numerous methods for inspecting a query. The best way to browse these methods is directly in the code at [scope.py](../sqlglot/optimizer/scope.py). You can also look for examples of how Scope is used throughout SQLGlot's [optimizer](../sqlglot/optimizer) module.
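For instance, every scope exposes the columns and tables that appear directly in it. A small sketch (attribute names are best double-checked in `scope.py`):
```python
scope = build_scope(parse_one("SELECT a FROM x WHERE b > 1"))

scope.columns  # the Column nodes referenced in this scope: a, b
scope.tables   # the Table nodes referenced in this scope: x
```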
Many methods of Scope depend on a fully qualified SQL expression. For example, let's say we want to trace the lineage of columns in this query:
```python
ast = parse_one("""
SELECT
a,
c
FROM (
SELECT
a,
b
FROM x
) AS x
JOIN (
SELECT
b,
c
FROM y
) AS y
ON x.b = y.b
""")
```
Just looking at the outer query, it's not obvious that column `a` comes from table `x` without looking at the columns of the subqueries.
We can use the [qualify](../sqlglot/optimizer/qualify.py) function to prefix all columns in an AST with their table name like so:
```python
from sqlglot.optimizer.qualify import qualify
qualify(ast)
# SELECT
# x.a AS a,
# y.c AS c
# FROM (
# SELECT
# x.a AS a,
# x.b AS b
# FROM x AS x
# ) AS x
# JOIN (
# SELECT
# y.b AS b,
# y.c AS c
# FROM y AS y
# ) AS y
# ON x.b = y.b
```
Now we can trace a column to its source. Here's how we might find the table or subquery for all columns in a qualified AST:
```python
from sqlglot.optimizer.scope import find_all_in_scope
root = build_scope(ast)
# `find_all_in_scope` is similar to `Expression.find_all`, except it doesn't traverse into subqueries
for column in find_all_in_scope(root.expression, exp.Column):
print(f"{column} => {root.sources[column.table]}")
# x.a => Scope<SELECT x.a AS a, x.b AS b FROM x AS x>
# y.c => Scope<SELECT y.b AS b, y.c AS c FROM y AS y>
# x.b => Scope<SELECT x.a AS a, x.b AS b FROM x AS x>
# y.b => Scope<SELECT y.b AS b, y.c AS c FROM y AS y>
```
For a complete example of tracing column lineage, check out the [lineage](../sqlglot/lineage.py) module.
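Here's a taste of it (a rough sketch; see the module itself for the full API):
```python
from sqlglot.lineage import lineage

# Trace the outer column `a` back through the subquery to its source
node = lineage("a", "SELECT a FROM (SELECT a FROM x) AS x")

for n in node.walk():
    print(n.name, "<-", n.source.sql())
```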
> [!NOTE]
> Some queries require the database schema for disambiguation. For example:
>
> ```sql
> SELECT a FROM x CROSS JOIN y
> ```
>
> Column `a` might come from table `x` or `y`. In these cases, you must pass the `schema` into `qualify`.
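>
> A sketch of passing a schema (the toy schema here is made up) so `qualify` can resolve the ambiguity:
>
> ```python
> from sqlglot import parse_one
> from sqlglot.optimizer.qualify import qualify
>
> ast = parse_one("SELECT a FROM x CROSS JOIN y")
>
> # The schema maps table -> column -> type
> qualified = qualify(ast, schema={"x": {"a": "INT"}, "y": {"b": "INT"}})
>
> print(qualified)
> # SELECT x.a AS a FROM x AS x CROSS JOIN y AS y
> ```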
## Mutating the tree
You can also modify an AST or build one from scratch. There are a few ways to do this.
### High-level builder methods
SQLGlot has methods for programmatically building up expressions similar to how you might in an ORM:
```python
ast = (
exp
.select("a", "b")
.from_("x")
.where("b < 4")
.limit(10)
)
```
> [!WARNING]
> High-level builder methods will attempt to parse string arguments into Expressions. This can be very convenient, but make sure to keep in mind the dialect of the string. If it's written in a specific dialect, you need to set the `dialect` argument.
>
> You can avoid parsing by passing Expressions as arguments, e.g. `.where(exp.column("b") < 4)` instead of `.where("b < 4")`
These methods can be used on any AST, including ones you've parsed:
```python
ast = parse_one("""
SELECT * FROM (SELECT a, b FROM x)
""")
# To modify the AST in-place, set `copy=False`
ast.args["from"].this.this.select("c", copy=False)
print(ast)
# SELECT * FROM (SELECT a, b, c FROM x)
```
The best place to browse all the available high-level builder methods and their parameters is, as always, directly in the code at [expressions.py](../sqlglot/expressions.py).
### Low-level builder methods
High-level builder methods don't account for all possible expressions you might want to build. In the case where a particular high-level method is missing, use the low-level methods. Here are some examples:
```python
node = ast.args["from"].this.this
# These all do the same thing:
# high-level
node.select("c", copy=False)
# low-level
node.set("expressions", node.expressions + [exp.column("c")])
node.append("expressions", exp.column("c"))
node.replace(node.copy().select("c"))
```
> [!NOTE]
> In general, you should use `Expression.set` and `Expression.append` instead of mutating `Expression.args` directly. `set` and `append` take care to update node references like `parent`.
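>
> ```python
> # A quick check of that bookkeeping: `append` wires up `parent` for us
> col = exp.column("d")
> node.append("expressions", col)
> assert col.parent is node
> ```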
You can also instantiate AST nodes directly:
```python
col = exp.Column(
this=exp.to_identifier("c")
)
node.append("expressions", col)
```
> [!WARNING]
> Because SQLGlot doesn't verify the types of args, it's easy to instantiate an invalid AST node that won't generate SQL properly. Take extra care to inspect the expected types of a node using the methods described above.
### Transform
The `Expression.transform` method applies a function to all nodes in an AST in depth-first, pre-order.
```python
def transformer(node):
if isinstance(node, exp.Column) and node.name == "a":
# Return a new node to replace `node`
return exp.func("FUN", node)
# Or return `node` to do nothing and continue traversing the tree
return node
print(parse_one("SELECT a, b FROM x").transform(transformer))
# SELECT FUN(a), b FROM x
```
> [!WARNING]
> As with the walk methods, `transform` doesn't manage scope. For safely transforming the columns and tables in complex expressions, you should probably use Scope.
## Summed up
SQLGlot parses SQL statements into an abstract syntax tree (AST) where nodes are instances of `sqlglot.Expression`.
There are 3 ways to traverse an AST:
1. **args** - use this when you know the exact structure of the AST you're dealing with.
2. **walk methods** - this is the easiest way. Use this for simple cases.
3. **scope** - this is the hardest way. Use this for more complex cases that must handle the semantic context of a query.
There are 3 ways to mutate an AST:
1. **high-level builder methods** - use this when you know the exact structure of the AST you're dealing with.
2. **low-level builder methods** - use this only when high-level builder methods don't exist for what you're trying to build.
3. **transform** - use this for simple transformations on arbitrary statements.
And, of course, these mechanisms can be mixed and matched. For example, maybe you need to use scope to traverse an arbitrary AST and the high-level builder methods to mutate it in-place.
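Here's a sketch of that combination, pushing a filter into every SELECT that reads directly from table `x`:
```python
from sqlglot import exp, parse_one
from sqlglot.optimizer.scope import build_scope

ast = parse_one("SELECT a FROM (SELECT a FROM x) AS sub")

for scope in build_scope(ast).traverse():
    for alias, (node, source) in scope.selected_sources.items():
        # Only mutate scopes that select from the actual table `x`
        if isinstance(source, exp.Table) and source.name == "x":
            scope.expression.where("a > 0", copy=False)

print(ast)
# SELECT a FROM (SELECT a FROM x WHERE a > 0) AS sub
```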
Still need help? [Get in touch!](../README.md#get-in-touch)

sqlglot/dataframe/sql/functions.py

```diff
@@ -418,11 +418,11 @@ def percentile_approx(
 def rand(seed: t.Optional[ColumnOrLiteral] = None) -> Column:
-    return Column.invoke_anonymous_function(seed, "RAND")
+    return Column.invoke_expression_over_column(seed, expression.Rand)
 
 
 def randn(seed: t.Optional[ColumnOrLiteral] = None) -> Column:
-    return Column.invoke_anonymous_function(seed, "RANDN")
+    return Column.invoke_expression_over_column(seed, expression.Randn)
 
 
 def round(col: ColumnOrName, scale: t.Optional[int] = None) -> Column:
```

sqlglot/dialects/bigquery.py

```diff
@@ -727,6 +727,7 @@ class BigQuery(Dialect):
         def eq_sql(self, expression: exp.EQ) -> str:
             # Operands of = cannot be NULL in BigQuery
             if isinstance(expression.left, exp.Null) or isinstance(expression.right, exp.Null):
-                return "NULL"
+                if not isinstance(expression.parent, exp.Update):
+                    return "NULL"
 
             return self.binary(expression, "=")
```

sqlglot/dialects/clickhouse.py

```diff
@@ -105,6 +105,7 @@ class ClickHouse(Dialect):
             ),
             "MAP": parse_var_map,
             "MATCH": exp.RegexpLike.from_arg_list,
+            "RANDCANONICAL": exp.Rand.from_arg_list,
             "UNIQ": exp.ApproxDistinct.from_arg_list,
             "XOR": lambda args: exp.Xor(expressions=args),
         }
@@ -142,9 +143,10 @@ class ClickHouse(Dialect):
         TABLE_ALIAS_TOKENS = parser.Parser.TABLE_ALIAS_TOKENS - {
             TokenType.ANY,
-            TokenType.SETTINGS,
-            TokenType.FORMAT,
             TokenType.ARRAY,
+            TokenType.FINAL,
+            TokenType.FORMAT,
+            TokenType.SETTINGS,
         }
 
         LOG_DEFAULTS_TO_LN = True
@@ -397,6 +399,7 @@ class ClickHouse(Dialect):
             exp.Pivot: no_pivot_sql,
             exp.Quantile: _quantile_sql,
             exp.RegexpLike: lambda self, e: f"match({self.format_args(e.this, e.expression)})",
+            exp.Rand: rename_func("randCanonical"),
             exp.StartsWith: rename_func("startsWith"),
             exp.StrPosition: lambda self, e: f"position({self.format_args(e.this, e.args.get('substr'), e.args.get('position'))})",
             exp.VarMap: lambda self, e: _lower_func(var_map_sql(self, e)),
```

sqlglot/dialects/duckdb.py

```diff
@@ -352,6 +352,7 @@ class DuckDB(Dialect):
             ),
             exp.RegexpLike: rename_func("REGEXP_MATCHES"),
             exp.RegexpSplit: rename_func("STR_SPLIT_REGEX"),
+            exp.Rand: rename_func("RANDOM"),
             exp.SafeDivide: no_safe_divide_sql,
             exp.Split: rename_func("STR_SPLIT"),
             exp.SortArray: _sort_array_sql,
```

sqlglot/dialects/postgres.py

```diff
@@ -445,6 +445,7 @@ class Postgres(Dialect):
             ),
             exp.Pivot: no_pivot_sql,
             exp.Pow: lambda self, e: self.binary(e, "^"),
+            exp.Rand: rename_func("RANDOM"),
             exp.RegexpLike: lambda self, e: self.binary(e, "~"),
             exp.RegexpILike: lambda self, e: self.binary(e, "~*"),
             exp.Select: transforms.preprocess(
```

sqlglot/dialects/snowflake.py

```diff
@@ -558,6 +558,7 @@ class Snowflake(Dialect):
                 [transforms.add_within_group_for_percentiles]
             ),
             exp.RegexpILike: _regexpilike_sql,
+            exp.Rand: rename_func("RANDOM"),
             exp.Select: transforms.preprocess(
                 [
                     transforms.eliminate_distinct_on,
```

sqlglot/dialects/sqlite.py

```diff
@@ -127,6 +127,7 @@ class SQLite(Dialect):
             exp.LogicalOr: rename_func("MAX"),
             exp.LogicalAnd: rename_func("MIN"),
             exp.Pivot: no_pivot_sql,
+            exp.Rand: rename_func("RANDOM"),
             exp.Select: transforms.preprocess(
                 [
                     transforms.eliminate_distinct_on,
```

sqlglot/expressions.py

```diff
@@ -4988,6 +4988,15 @@ class ApproxQuantile(Quantile):
     arg_types = {"this": True, "quantile": True, "accuracy": False, "weight": False}
 
 
+class Rand(Func):
+    _sql_names = ["RAND", "RANDOM"]
+    arg_types = {"this": False}
+
+
+class Randn(Func):
+    arg_types = {"this": False}
+
+
 class RangeN(Func):
     arg_types = {"this": True, "expressions": True, "each": False}
@@ -6475,7 +6484,7 @@ def table_name(table: Table | str, dialect: DialectType = None, identify: bool =
         raise ValueError(f"Cannot parse {table}")
 
     return ".".join(
-        part.sql(dialect=dialect, identify=True)
+        part.sql(dialect=dialect, identify=True, copy=False)
         if identify or not SAFE_IDENTIFIER_RE.match(part.name)
         else part.name
         for part in table.parts
```

sqlglot/generator.py

```diff
@@ -1,6 +1,7 @@
 from __future__ import annotations
 
 import logging
+import re
 import typing as t
 from collections import defaultdict
 from functools import reduce
@@ -17,6 +18,8 @@ if t.TYPE_CHECKING:
 
 logger = logging.getLogger("sqlglot")
 
+ESCAPED_UNICODE_RE = re.compile(r"\\(\d+)")
+
 
 class Generator:
     """
@@ -917,11 +920,19 @@ class Generator:
     def unicodestring_sql(self, expression: exp.UnicodeString) -> str:
         this = self.sql(expression, "this")
+        escape = expression.args.get("escape")
+
         if self.dialect.UNICODE_START:
-            escape = self.sql(expression, "escape")
-            escape = f" UESCAPE {escape}" if escape else ""
+            escape = f" UESCAPE {self.sql(escape)}" if escape else ""
             return f"{self.dialect.UNICODE_START}{this}{self.dialect.UNICODE_END}{escape}"
-        return this
+
+        if escape:
+            pattern = re.compile(rf"{escape.name}(\d+)")
+        else:
+            pattern = ESCAPED_UNICODE_RE
+
+        this = pattern.sub(r"\\u\1", this)
+        return f"{self.dialect.QUOTE_START}{this}{self.dialect.QUOTE_END}"
 
     def rawstring_sql(self, expression: exp.RawString) -> str:
         string = self.escape_str(expression.this.replace("\\", "\\\\"))
```

sqlglot/optimizer/simplify.py

```diff
@@ -49,32 +49,32 @@ def simplify(
     dialect = Dialect.get_or_raise(dialect)
 
-    def _simplify(expression, root=True):
-        if expression.meta.get(FINAL):
-            return expression
-
-        # group by expressions cannot be simplified, for example
-        # select x + 1 + 1 FROM y GROUP BY x + 1 + 1
-        # the projection must exactly match the group by key
-        for group in expression.find_all(exp.Group):
-            select = group.parent
-            assert select
-            groups = set(group.expressions)
-            group.meta[FINAL] = True
-
-            for e in select.expressions:
-                for node, *_ in e.walk():
-                    if node in groups:
-                        e.meta[FINAL] = True
-                        break
-
-            having = select.args.get("having")
-            if having:
-                for node, *_ in having.walk():
-                    if node in groups:
-                        having.meta[FINAL] = True
-                        break
+    # group by expressions cannot be simplified, for example
+    # select x + 1 + 1 FROM y GROUP BY x + 1 + 1
+    # the projection must exactly match the group by key
+    group = expression.args.get("group")
+
+    if group and hasattr(expression, "selects"):
+        groups = set(group.expressions)
+        group.meta[FINAL] = True
+
+        for e in expression.selects:
+            for node, *_ in e.walk():
+                if node in groups:
+                    e.meta[FINAL] = True
+                    break
+
+        having = expression.args.get("having")
+        if having:
+            for node, *_ in having.walk():
+                if node in groups:
+                    having.meta[FINAL] = True
+                    break
+
+    def _simplify(expression, root=True):
+        if expression.meta.get(FINAL):
+            return expression
 
         # Pre-order transformations
         node = expression
         node = rewrite_between(node)
@@ -266,6 +266,8 @@ INVERSE_COMPARISONS: t.Dict[t.Type[exp.Expression], t.Type[exp.Expression]] = {
     exp.GTE: exp.LTE,
 }
 
+NONDETERMINISTIC = (exp.Rand, exp.Randn)
+
 
 def _simplify_comparison(expression, left, right, or_=False):
     if isinstance(left, COMPARISONS) and isinstance(right, COMPARISONS):
@@ -276,7 +278,7 @@ def _simplify_comparison(expression, left, right, or_=False):
         rargs = {rl, rr}
 
         matching = largs & rargs
-        columns = {m for m in matching if isinstance(m, exp.Column)}
+        columns = {m for m in matching if not _is_constant(m) and not m.find(*NONDETERMINISTIC)}
 
         if matching and columns:
             try:
@@ -292,6 +294,11 @@ def _simplify_comparison(expression, left, right, or_=False):
                 l = l.name
                 r = r.name
             else:
+                l = extract_date(l)
+                if not l:
+                    return None
+                r = extract_date(r)
+                if not r:
                     return None
 
         for (a, av), (b, bv) in itertools.permutations(((left, l), (right, r))):
```

sqlglot/parser.py

```diff
@@ -305,6 +305,7 @@ class Parser(metaclass=_Parser):
         TokenType.FALSE,
         TokenType.FIRST,
         TokenType.FILTER,
+        TokenType.FINAL,
         TokenType.FORMAT,
         TokenType.FULL,
         TokenType.IS,
```

sqlglotrs (Rust tokenizer bindings)

```diff
@@ -71,7 +71,18 @@ impl Token {
 impl Token {
     #[pyo3(name = "__repr__")]
     fn python_repr(&self) -> PyResult<String> {
-        Ok(format!("{:?}", self))
+        Python::with_gil(|py| {
+            Ok(format!(
+                "<Token token_type: {}, text: {}, line: {}, col: {}, start: {}, end: {}, comments: {}>",
+                self.token_type_py.as_ref(py).repr()?,
+                self.text.as_ref(py).repr()?,
+                self.line,
+                self.col,
+                self.start,
+                self.end,
+                self.comments.as_ref(py).repr()?,
+            ))
+        })
     }
 }
```

tests/dialects/test_bigquery.py

```diff
@@ -206,6 +206,7 @@ class TestBigQuery(Validator):
                 "duckdb": "SELECT * FROM t WHERE EXISTS(SELECT * FROM UNNEST(nums) AS _t(x) WHERE x > 1)",
             },
         )
+        self.validate_identity("UPDATE x SET y = NULL")
         self.validate_all(
             "NULL",
             read={
```

tests/dialects/test_clickhouse.py

```diff
@@ -47,6 +47,7 @@ class TestClickhouse(Validator):
         self.validate_identity("SELECT INTERVAL t.days day")
         self.validate_identity("SELECT match('abc', '([a-z]+)')")
         self.validate_identity("dictGet(x, 'y')")
+        self.validate_identity("SELECT * FROM final")
         self.validate_identity("SELECT * FROM x FINAL")
         self.validate_identity("SELECT * FROM x AS y FINAL")
         self.validate_identity("'a' IN mapKeys(map('a', 1, 'b', 2))")
```

tests/dialects/test_dialect.py

```diff
@@ -2056,3 +2056,40 @@ SELECT
         self.assertEqual(expression.sql(dialect="mysql"), expected_sql)
         self.assertEqual(expression.sql(dialect="tsql"), expected_sql)
+
+    def test_random(self):
+        self.validate_all(
+            "RAND()",
+            write={
+                "bigquery": "RAND()",
+                "clickhouse": "randCanonical()",
+                "databricks": "RAND()",
+                "doris": "RAND()",
+                "drill": "RAND()",
+                "duckdb": "RANDOM()",
+                "hive": "RAND()",
+                "mysql": "RAND()",
+                "oracle": "RAND()",
+                "postgres": "RANDOM()",
+                "presto": "RAND()",
+                "spark": "RAND()",
+                "sqlite": "RANDOM()",
+                "tsql": "RAND()",
+            },
+            read={
+                "bigquery": "RAND()",
+                "clickhouse": "randCanonical()",
+                "databricks": "RAND()",
+                "doris": "RAND()",
+                "drill": "RAND()",
+                "duckdb": "RANDOM()",
+                "hive": "RAND()",
+                "mysql": "RAND()",
+                "oracle": "RAND()",
+                "postgres": "RANDOM()",
+                "presto": "RAND()",
+                "spark": "RAND()",
+                "sqlite": "RANDOM()",
+                "tsql": "RAND()",
+            },
+        )
```

tests/dialects/test_presto.py

```diff
@@ -546,13 +546,21 @@ class TestPresto(Validator):
     def test_unicode_string(self):
         for prefix in ("u&", "U&"):
-            self.validate_identity(
+            self.validate_all(
                 f"{prefix}'Hello winter \\2603 !'",
-                "U&'Hello winter \\2603 !'",
+                write={
+                    "presto": "U&'Hello winter \\2603 !'",
+                    "snowflake": "'Hello winter \\u2603 !'",
+                    "spark": "'Hello winter \\u2603 !'",
+                },
             )
-            self.validate_identity(
+            self.validate_all(
                 f"{prefix}'Hello winter #2603 !' UESCAPE '#'",
-                "U&'Hello winter #2603 !' UESCAPE '#'",
+                write={
+                    "presto": "U&'Hello winter #2603 !' UESCAPE '#'",
+                    "snowflake": "'Hello winter \\u2603 !'",
+                    "spark": "'Hello winter \\u2603 !'",
+                },
             )
 
     def test_presto(self):
```
def test_presto(self): def test_presto(self):

tests/fixtures/optimizer/simplify.sql

```diff
@@ -696,6 +696,18 @@ x <> 1;
 NOT 1 <> x;
 x = 1;
 
+x > CAST('2024-01-01' AS DATE) OR x > CAST('2023-12-31' AS DATE);
+x > CAST('2023-12-31' AS DATE);
+
+CAST(x AS DATE) > CAST('2024-01-01' AS DATE) OR CAST(x AS DATE) > CAST('2023-12-31' AS DATE);
+CAST(x AS DATE) > CAST('2023-12-31' AS DATE);
+
+FUN() > 0 OR FUN() > 1;
+FUN() > 0;
+
+RAND() > 0 OR RAND() > 1;
+RAND() > 0 OR RAND() > 1;
+
 --------------------------------------
 -- COALESCE
 --------------------------------------
@@ -835,7 +847,7 @@ DATE_TRUNC('quarter', x) = CAST('2021-01-02' AS DATE);
 DATE_TRUNC('quarter', x) = CAST('2021-01-02' AS DATE);
 
 DATE_TRUNC('year', x) <> CAST('2021-01-01' AS DATE);
-x < CAST('2021-01-01' AS DATE) AND x >= CAST('2022-01-01' AS DATE);
+FALSE;
 
 -- Always true, except for nulls
 DATE_TRUNC('year', x) <> CAST('2021-01-02' AS DATE);
```