
Merging upstream version 2.2.0.

Signed-off-by: Daniel Baumann <daniel@debian.org>
This commit is contained in:
Daniel Baumann 2025-02-07 00:48:48 +01:00
parent ab1302c465
commit 95bca6b33d
Signed by: daniel
GPG key ID: FBB4F0E80A80222F
42 changed files with 1085 additions and 840 deletions

.git-blame-ignore-revs Normal file

@ -0,0 +1,2 @@
# Black all the code.
33e8b461b6ddb717859dde664b71209ce69c119a


@ -7,3 +7,5 @@
<!--- We appreciate your help and want to give you credit. Please take a moment to put an `x` in the boxes below as you complete them. -->
- [ ] I've added this contribution to the `CHANGELOG`.
- [ ] I've added my name to the `AUTHORS` file (or it's already there).
- [ ] I installed pre-commit hooks (`pip install pre-commit && pre-commit install`), and ran `black` on my code.
- [x] Please squash merge this pull request (uncheck if you'd like us to merge as multiple commits)

.pre-commit-config.yaml Normal file

@ -0,0 +1,6 @@
repos:
- repo: https://github.com/psf/black
rev: stable
hooks:
- id: black
language_version: python3.7


@ -5,7 +5,7 @@ install: ./.travis/install.sh
script:
- source ~/.venv/bin/activate
- tox
- if [[ "$TOXENV" == "py37" ]]; then black --check cli_helpers tests ; else echo "Skipping black for $TOXENV"; fi
matrix:
include:
- os: linux


@ -8,7 +8,7 @@ if [[ "$(uname -s)" == 'Darwin' ]]; then
git clone --depth 1 https://github.com/pyenv/pyenv ~/.pyenv
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv init --path)"
case "${TOXENV}" in
py36)
@ -22,4 +22,4 @@ fi
pip install virtualenv
python -m virtualenv ~/.venv
source ~/.venv/bin/activate
pip install tox
pip install -r requirements-dev.txt -U --upgrade-strategy only-if-needed


@ -21,6 +21,8 @@ This project receives help from these awesome contributors:
- laixintao
- Georgy Frolov
- Michał Górny
- Waldir Pimenta
- Mel Dafert
Thanks
------


@ -1,13 +1,23 @@
Changelog
=========
Version 2.2.0
-------------
(released on 2021-08-27)
* Remove dependency on terminaltables
* Add psql_unicode table format
* Add minimal table format
* Fix pip2 installing py3-only versions
* Format unprintable bytes (eg 0x00, 0x01) as hex
Version 2.1.0
-------------
(released on 2020-07-29)
* Speed up ouput styling of tables.
* Speed up output styling of tables.
Version 2.0.1
-------------
@ -40,6 +50,7 @@ Version 1.2.0
(released on 2019-04-05)
* Fix issue with writing non-ASCII characters to config files.
* Run tests on Python 3.7.
* Use twine check during packaging tests.
* Rename old tsv format to csv-tab (because it adds quotes), introduce new tsv output adapter.


@ -24,9 +24,9 @@ Ready to contribute? Here's how to set up CLI Helpers for local development.
$ pip install virtualenv
$ virtualenv cli_helpers_dev
We've just created a virtual environment that we'll use to install all the dependencies
and tools we need to work on CLI Helpers. Whenever you want to work on CLI Helpers, you
need to activate the virtual environment::
We've just created a virtual environment called ``cli_helpers_dev``
that we'll use to install all the dependencies and tools we need to work on CLI Helpers.
Whenever you want to work on CLI Helpers, you need to activate the virtual environment::
$ source cli_helpers_dev/bin/activate
@ -34,7 +34,7 @@ Ready to contribute? Here's how to set up CLI Helpers for local development.
$ deactivate
5. Install the dependencies and development tools::
5. From within the virtual environment, install the dependencies and development tools::
$ pip install -r requirements-dev.txt
$ pip install --editable .
@ -43,11 +43,14 @@ Ready to contribute? Here's how to set up CLI Helpers for local development.
$ git checkout -b <name-of-bugfix-or-feature> master
7. While you work on your bugfix or feature, be sure to pull the latest changes from ``upstream``. This ensures that your local codebase is up-to-date::
7. While you work on your bugfix or feature, be sure to pull the latest changes from ``upstream``.
This ensures that your local codebase is up-to-date::
$ git pull upstream master
8. When your work is ready for the CLI Helpers team to review it, push your branch to your fork::
8. When your work is ready for the CLI Helpers team to review it,
make sure to add an entry to CHANGELOG file, and add your name to the AUTHORS file.
Then, push your branch to your fork::
$ git push origin <name-of-bugfix-or-feature>
@ -77,18 +80,31 @@ You can also measure CLI Helper's test coverage by running::
Coding Style
------------
CLI Helpers requires code submissions to adhere to
`PEP 8 <https://www.python.org/dev/peps/pep-0008/>`_.
It's easy to check the style of your code, just run::
When you submit a PR, the changeset is checked for pep8 compliance using
`black <https://github.com/psf/black>`_. If you see a build failing because
of these checks, install ``black`` and apply style fixes:
$ pep8radius master
::
If you see any PEP 8 style issues, you can automatically fix them by running::
$ pip install black
$ black .
$ pep8radius master --in-place
Then commit and push the fixes.
Be sure to commit and push any PEP 8 fixes.
To enforce ``black`` applied on every commit, we also suggest installing ``pre-commit`` and
using the ``pre-commit`` hooks available in this repo:
::
$ pip install pre-commit
$ pre-commit install
Git blame
---------
Use ``git blame my_file.py --ignore-revs-file .git-blame-ignore-revs`` to exclude irrelevant commits
(specifically Black) from ``git blame``. For more information,
see `here <https://github.com/psf/black#migrating-your-code-style-without-ruining-git-blame>`_.
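As a convenience beyond the command shown above (an optional suggestion, not part of the documented workflow), Git 2.23+ can be told once per repository to always consult the ignore file, so a plain ``git blame`` skips the Black commit automatically:

```shell
# Point blame at the ignore file for this repository
# (blame.ignoreRevsFile is supported in Git 2.23 and later).
git config blame.ignoreRevsFile .git-blame-ignore-revs

# Plain blame now excludes the commits listed in the file.
git blame cli_helpers/config.py
```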
Documentation
-------------


@ -6,3 +6,4 @@ recursive-include docs *.rst
recursive-include docs Makefile
recursive-include tests *.py
include tests/config_data/*
exclude .pre-commit-config.yaml .git-blame-ignore-revs


@ -1 +1 @@
__version__ = '2.1.0'
__version__ = "2.2.0"


@ -5,8 +5,8 @@ from decimal import Decimal
import sys
PY2 = sys.version_info[0] == 2
WIN = sys.platform.startswith('win')
MAC = sys.platform == 'darwin'
WIN = sys.platform.startswith("win")
MAC = sys.platform == "darwin"
if PY2:


@ -16,11 +16,13 @@ logger = logging.getLogger(__name__)
class ConfigError(Exception):
"""Base class for exceptions in this module."""
pass
class DefaultConfigValidationError(ConfigError):
"""Indicates the default config file did not validate correctly."""
pass
@ -40,11 +42,19 @@ class Config(UserDict, object):
file.
"""
def __init__(self, app_name, app_author, filename, default=None,
validate=False, write_default=False, additional_dirs=()):
def __init__(
self,
app_name,
app_author,
filename,
default=None,
validate=False,
write_default=False,
additional_dirs=(),
):
super(Config, self).__init__()
#: The :class:`ConfigObj` instance.
self.data = ConfigObj()
self.data = ConfigObj(encoding="utf8")
self.default = {}
self.default_file = self.default_config = None
@ -64,15 +74,19 @@ class Config(UserDict, object):
elif default is not None:
raise TypeError(
'"default" must be a dict or {}, not {}'.format(
text_type.__name__, type(default)))
text_type.__name__, type(default)
)
)
if self.write_default and not self.default_file:
raise ValueError('Cannot use "write_default" without specifying '
'a default file.')
raise ValueError(
'Cannot use "write_default" without specifying ' "a default file."
)
if self.validate and not self.default_file:
raise ValueError('Cannot use "validate" without specifying a '
'default file.')
raise ValueError(
'Cannot use "validate" without specifying a ' "default file."
)
def read_default_config(self):
"""Read the default config file.
@ -81,11 +95,18 @@ class Config(UserDict, object):
the *default* file.
"""
if self.validate:
self.default_config = ConfigObj(configspec=self.default_file,
list_values=False, _inspec=True,
encoding='utf8')
valid = self.default_config.validate(Validator(), copy=True,
preserve_errors=True)
self.default_config = ConfigObj(
configspec=self.default_file,
list_values=False,
_inspec=True,
encoding="utf8",
)
# ConfigObj does not set the encoding on the configspec.
self.default_config.configspec.encoding = "utf8"
valid = self.default_config.validate(
Validator(), copy=True, preserve_errors=True
)
if valid is not True:
for name, section in valid.items():
if section is True:
@ -93,8 +114,8 @@ class Config(UserDict, object):
for key, value in section.items():
if isinstance(value, ValidateError):
raise DefaultConfigValidationError(
'section [{}], key "{}": {}'.format(
name, key, value))
'section [{}], key "{}": {}'.format(name, key, value)
)
elif self.default_file:
self.default_config, _ = self.read_config_file(self.default_file)
@ -113,13 +134,15 @@ class Config(UserDict, object):
def user_config_file(self):
"""Get the absolute path to the user config file."""
return os.path.join(
get_user_config_dir(self.app_name, self.app_author),
self.filename)
get_user_config_dir(self.app_name, self.app_author), self.filename
)
def system_config_files(self):
"""Get a list of absolute paths to the system config files."""
return [os.path.join(f, self.filename) for f in get_system_config_dirs(
self.app_name, self.app_author)]
return [
os.path.join(f, self.filename)
for f in get_system_config_dirs(self.app_name, self.app_author)
]
def additional_files(self):
"""Get a list of absolute paths to the additional config files."""
@ -127,8 +150,11 @@ class Config(UserDict, object):
def all_config_files(self):
"""Get a list of absolute paths to all the config files."""
return (self.additional_files() + self.system_config_files() +
[self.user_config_file()])
return (
self.additional_files()
+ self.system_config_files()
+ [self.user_config_file()]
)
def write_default_config(self, overwrite=False):
"""Write the default config to the user's config file.
@ -139,7 +165,7 @@ class Config(UserDict, object):
if not overwrite and os.path.exists(destination):
return
with io.open(destination, mode='wb') as f:
with io.open(destination, mode="wb") as f:
self.default_config.write(f)
def write(self, outfile=None, section=None):
@ -149,7 +175,7 @@ class Config(UserDict, object):
:param None/str section: The config section to write, or :data:`None`
to write the entire config.
"""
with io.open(outfile or self.user_config_file(), 'wb') as f:
with io.open(outfile or self.user_config_file(), "wb") as f:
self.data.write(outfile=f, section=section)
def read_config_file(self, f):
@ -159,18 +185,21 @@ class Config(UserDict, object):
"""
configspec = self.default_file if self.validate else None
try:
config = ConfigObj(infile=f, configspec=configspec,
interpolation=False, encoding='utf8')
config = ConfigObj(
infile=f, configspec=configspec, interpolation=False, encoding="utf8"
)
# ConfigObj does not set the encoding on the configspec.
if config.configspec is not None:
config.configspec.encoding = "utf8"
except ConfigObjError as e:
logger.warning(
'Unable to parse line {} of config file {}'.format(
e.line_number, f))
"Unable to parse line {} of config file {}".format(e.line_number, f)
)
config = e.config
valid = True
if self.validate:
valid = config.validate(Validator(), preserve_errors=True,
copy=True)
valid = config.validate(Validator(), preserve_errors=True, copy=True)
if bool(config):
self.config_filenames.append(config.filename)
@ -220,15 +249,17 @@ def get_user_config_dir(app_name, app_author, roaming=True, force_xdg=True):
"""
if WIN:
key = 'APPDATA' if roaming else 'LOCALAPPDATA'
folder = os.path.expanduser(os.environ.get(key, '~'))
key = "APPDATA" if roaming else "LOCALAPPDATA"
folder = os.path.expanduser(os.environ.get(key, "~"))
return os.path.join(folder, app_author, app_name)
if MAC and not force_xdg:
return os.path.join(os.path.expanduser(
'~/Library/Application Support'), app_name)
return os.path.join(
os.path.expanduser(os.environ.get('XDG_CONFIG_HOME', '~/.config')),
_pathify(app_name))
os.path.expanduser("~/Library/Application Support"), app_name
)
return os.path.join(
os.path.expanduser(os.environ.get("XDG_CONFIG_HOME", "~/.config")),
_pathify(app_name),
)
def get_system_config_dirs(app_name, app_author, force_xdg=True):
@ -256,15 +287,15 @@ def get_system_config_dirs(app_name, app_author, force_xdg=True):
"""
if WIN:
folder = os.environ.get('PROGRAMDATA')
folder = os.environ.get("PROGRAMDATA")
return [os.path.join(folder, app_author, app_name)]
if MAC and not force_xdg:
return [os.path.join('/Library/Application Support', app_name)]
dirs = os.environ.get('XDG_CONFIG_DIRS', '/etc/xdg')
return [os.path.join("/Library/Application Support", app_name)]
dirs = os.environ.get("XDG_CONFIG_DIRS", "/etc/xdg")
paths = [os.path.expanduser(x) for x in dirs.split(os.pathsep)]
return [os.path.join(d, _pathify(app_name)) for d in paths]
def _pathify(s):
"""Convert spaces to hyphens and lowercase a string."""
return '-'.join(s.split()).lower()
return "-".join(s.split()).lower()


@ -10,4 +10,4 @@ When formatting data, you'll primarily use the
from .output_formatter import format_output, TabularOutputFormatter
__all__ = ['format_output', 'TabularOutputFormatter']
__all__ = ["format_output", "TabularOutputFormatter"]


@ -8,7 +8,7 @@ from cli_helpers.compat import csv, StringIO
from cli_helpers.utils import filter_dict_by_key
from .preprocessors import bytes_to_string, override_missing_value
supported_formats = ('csv', 'csv-tab')
supported_formats = ("csv", "csv-tab")
preprocessors = (override_missing_value, bytes_to_string)
@ -23,18 +23,26 @@ class linewriter(object):
self.line = d
def adapter(data, headers, table_format='csv', **kwargs):
def adapter(data, headers, table_format="csv", **kwargs):
"""Wrap the formatting inside a function for TabularOutputFormatter."""
keys = ('dialect', 'delimiter', 'doublequote', 'escapechar',
'quotechar', 'quoting', 'skipinitialspace', 'strict')
if table_format == 'csv':
delimiter = ','
elif table_format == 'csv-tab':
delimiter = '\t'
keys = (
"dialect",
"delimiter",
"doublequote",
"escapechar",
"quotechar",
"quoting",
"skipinitialspace",
"strict",
)
if table_format == "csv":
delimiter = ","
elif table_format == "csv-tab":
delimiter = "\t"
else:
raise ValueError('Invalid table_format specified.')
raise ValueError("Invalid table_format specified.")
ckwargs = {'delimiter': delimiter, 'lineterminator': ''}
ckwargs = {"delimiter": delimiter, "lineterminator": ""}
ckwargs.update(filter_dict_by_key(kwargs, keys))
l = linewriter()
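The adapter above drives Python's `csv` module with an empty `lineterminator` so each row can be yielded as a bare line. A minimal stdlib sketch of the same idea (the helper name here is illustrative, not the module's own):

```python
import csv
import io

def format_row(row, delimiter=","):
    """Format one row the way the adapter does: csv-quote fields as needed,
    join them with the delimiter, and emit no trailing line terminator."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=delimiter, lineterminator="")
    writer.writerow(row)
    return buf.getvalue()

print(format_row(["a", "b,c"]))        # quoting kicks in for the embedded comma
print(format_row(["a", "b"], "\t"))    # csv-tab style
```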


@ -4,16 +4,25 @@
from __future__ import unicode_literals
from collections import namedtuple
from cli_helpers.compat import (text_type, binary_type, int_types, float_types,
zip_longest)
from cli_helpers.compat import (
text_type,
binary_type,
int_types,
float_types,
zip_longest,
)
from cli_helpers.utils import unique_items
from . import (delimited_output_adapter, vertical_table_adapter,
tabulate_adapter, terminaltables_adapter, tsv_output_adapter)
from . import (
delimited_output_adapter,
vertical_table_adapter,
tabulate_adapter,
tsv_output_adapter,
)
from decimal import Decimal
import itertools
MISSING_VALUE = '<null>'
MISSING_VALUE = "<null>"
MAX_FIELD_WIDTH = 500
TYPES = {
@ -23,12 +32,12 @@ TYPES = {
float: 3,
Decimal: 3,
binary_type: 4,
text_type: 5
text_type: 5,
}
OutputFormatHandler = namedtuple(
'OutputFormatHandler',
'format_name preprocessors formatter formatter_args')
"OutputFormatHandler", "format_name preprocessors formatter formatter_args"
)
class TabularOutputFormatter(object):
@ -96,8 +105,7 @@ class TabularOutputFormatter(object):
if format_name in self.supported_formats:
self._format_name = format_name
else:
raise ValueError('unrecognized format_name "{}"'.format(
format_name))
raise ValueError('unrecognized format_name "{}"'.format(format_name))
@property
def supported_formats(self):
@ -105,8 +113,9 @@ class TabularOutputFormatter(object):
return tuple(self._output_formats.keys())
@classmethod
def register_new_formatter(cls, format_name, handler, preprocessors=(),
kwargs=None):
def register_new_formatter(
cls, format_name, handler, preprocessors=(), kwargs=None
):
"""Register a new output formatter.
:param str format_name: The name of the format.
@ -117,10 +126,18 @@ class TabularOutputFormatter(object):
"""
cls._output_formats[format_name] = OutputFormatHandler(
format_name, preprocessors, handler, kwargs or {})
format_name, preprocessors, handler, kwargs or {}
)
def format_output(self, data, headers, format_name=None,
preprocessors=(), column_types=None, **kwargs):
def format_output(
self,
data,
headers,
format_name=None,
preprocessors=(),
column_types=None,
**kwargs
):
r"""Format the headers and data using a specific formatter.
*format_name* must be a supported formatter (see
@ -142,15 +159,13 @@ class TabularOutputFormatter(object):
if format_name not in self.supported_formats:
raise ValueError('unrecognized format "{}"'.format(format_name))
(_, _preprocessors, formatter,
fkwargs) = self._output_formats[format_name]
(_, _preprocessors, formatter, fkwargs) = self._output_formats[format_name]
fkwargs.update(kwargs)
if column_types is None:
data = list(data)
column_types = self._get_column_types(data)
for f in unique_items(preprocessors + _preprocessors):
data, headers = f(data, headers, column_types=column_types,
**fkwargs)
data, headers = f(data, headers, column_types=column_types, **fkwargs)
return formatter(list(data), headers, column_types=column_types, **fkwargs)
def _get_column_types(self, data):
@ -197,32 +212,44 @@ def format_output(data, headers, format_name, **kwargs):
for vertical_format in vertical_table_adapter.supported_formats:
TabularOutputFormatter.register_new_formatter(
vertical_format, vertical_table_adapter.adapter,
vertical_format,
vertical_table_adapter.adapter,
vertical_table_adapter.preprocessors,
{'table_format': vertical_format, 'missing_value': MISSING_VALUE, 'max_field_width': None})
{
"table_format": vertical_format,
"missing_value": MISSING_VALUE,
"max_field_width": None,
},
)
for delimited_format in delimited_output_adapter.supported_formats:
TabularOutputFormatter.register_new_formatter(
delimited_format, delimited_output_adapter.adapter,
delimited_format,
delimited_output_adapter.adapter,
delimited_output_adapter.preprocessors,
{'table_format': delimited_format, 'missing_value': '', 'max_field_width': None})
{
"table_format": delimited_format,
"missing_value": "",
"max_field_width": None,
},
)
for tabulate_format in tabulate_adapter.supported_formats:
TabularOutputFormatter.register_new_formatter(
tabulate_format, tabulate_adapter.adapter,
tabulate_adapter.preprocessors +
(tabulate_adapter.style_output_table(tabulate_format),),
{'table_format': tabulate_format, 'missing_value': MISSING_VALUE, 'max_field_width': MAX_FIELD_WIDTH})
for terminaltables_format in terminaltables_adapter.supported_formats:
TabularOutputFormatter.register_new_formatter(
terminaltables_format, terminaltables_adapter.adapter,
terminaltables_adapter.preprocessors +
(terminaltables_adapter.style_output_table(terminaltables_format),),
{'table_format': terminaltables_format, 'missing_value': MISSING_VALUE, 'max_field_width': MAX_FIELD_WIDTH})
tabulate_format,
tabulate_adapter.adapter,
tabulate_adapter.get_preprocessors(tabulate_format),
{
"table_format": tabulate_format,
"missing_value": MISSING_VALUE,
"max_field_width": MAX_FIELD_WIDTH,
},
),
for tsv_format in tsv_output_adapter.supported_formats:
TabularOutputFormatter.register_new_formatter(
tsv_format, tsv_output_adapter.adapter,
tsv_format,
tsv_output_adapter.adapter,
tsv_output_adapter.preprocessors,
{'table_format': tsv_format, 'missing_value': '', 'max_field_width': None})
{"table_format": tsv_format, "missing_value": "", "max_field_width": None},
)


@ -7,7 +7,9 @@ from cli_helpers import utils
from cli_helpers.compat import text_type, int_types, float_types, HAS_PYGMENTS
def truncate_string(data, headers, max_field_width=None, skip_multiline_string=True, **_):
def truncate_string(
data, headers, max_field_width=None, skip_multiline_string=True, **_
):
"""Truncate very long strings. Only needed for tabular
representation, because trying to tabulate very long data
is problematic in terms of performance, and does not make any
@ -19,8 +21,19 @@ def truncate_string(data, headers, max_field_width=None, skip_multiline_string=T
:return: The processed data and headers.
:rtype: tuple
"""
return (([utils.truncate_string(v, max_field_width, skip_multiline_string) for v in row] for row in data),
[utils.truncate_string(h, max_field_width, skip_multiline_string) for h in headers])
return (
(
[
utils.truncate_string(v, max_field_width, skip_multiline_string)
for v in row
]
for row in data
),
[
utils.truncate_string(h, max_field_width, skip_multiline_string)
for h in headers
],
)
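A rough standalone sketch of the truncation being applied per cell (the real `utils.truncate_string` may differ in details such as the marker and multiline handling, so treat this as an approximation):

```python
def truncate(value, max_width=None, skip_multiline=True):
    """Clip long single-line strings to max_width, marking the cut."""
    if max_width is None or not isinstance(value, str):
        return value  # nothing to do for None widths or non-strings
    if skip_multiline and "\n" in value:
        return value  # leave multiline cells alone
    if len(value) <= max_width:
        return value
    return value[:max_width] + "..."

print(len(truncate("x" * 600, 500)))  # → 503
```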
def convert_to_string(data, headers, **_):
@ -35,13 +48,20 @@ def convert_to_string(data, headers, **_):
:rtype: tuple
"""
return (([utils.to_string(v) for v in row] for row in data),
[utils.to_string(h) for h in headers])
return (
([utils.to_string(v) for v in row] for row in data),
[utils.to_string(h) for h in headers],
)
def override_missing_value(data, headers, style=None,
def override_missing_value(
data,
headers,
style=None,
missing_value_token="Token.Output.Null",
missing_value='', **_):
missing_value="",
**_
):
"""Override missing values in the *data* with *missing_value*.
A missing value is any value that is :data:`None`.
@ -55,12 +75,15 @@ def override_missing_value(data, headers, style=None,
:rtype: tuple
"""
def fields():
for row in data:
processed = []
for field in row:
if field is None and style and HAS_PYGMENTS:
styled = utils.style_field(missing_value_token, missing_value, style)
styled = utils.style_field(
missing_value_token, missing_value, style
)
processed.append(styled)
elif field is None:
processed.append(missing_value)
@ -71,7 +94,7 @@ def override_missing_value(data, headers, style=None,
return (fields(), headers)
def override_tab_value(data, headers, new_value=' ', **_):
def override_tab_value(data, headers, new_value=" ", **_):
"""Override tab values in the *data* with *new_value*.
:param iterable data: An :term:`iterable` (e.g. list) of rows.
@ -81,9 +104,13 @@ def override_tab_value(data, headers, new_value=' ', **_):
:rtype: tuple
"""
return (([v.replace('\t', new_value) if isinstance(v, text_type) else v
for v in row] for row in data),
headers)
return (
(
[v.replace("\t", new_value) if isinstance(v, text_type) else v for v in row]
for row in data
),
headers,
)
def escape_newlines(data, headers, **_):
@ -121,8 +148,10 @@ def bytes_to_string(data, headers, **_):
:rtype: tuple
"""
return (([utils.bytes_to_string(v) for v in row] for row in data),
[utils.bytes_to_string(h) for h in headers])
return (
([utils.bytes_to_string(v) for v in row] for row in data),
[utils.bytes_to_string(h) for h in headers],
)
def align_decimals(data, headers, column_types=(), **_):
@ -204,17 +233,26 @@ def quote_whitespaces(data, headers, quotestyle="'", **_):
for row in data:
result = []
for i, v in enumerate(row):
quotation = quotestyle if quote[i] else ''
result.append('{quotestyle}{value}{quotestyle}'.format(
quotestyle=quotation, value=v))
quotation = quotestyle if quote[i] else ""
result.append(
"{quotestyle}{value}{quotestyle}".format(
quotestyle=quotation, value=v
)
)
yield result
return results(data), headers
def style_output(data, headers, style=None,
header_token='Token.Output.Header',
odd_row_token='Token.Output.OddRow',
even_row_token='Token.Output.EvenRow', **_):
def style_output(
data,
headers,
style=None,
header_token="Token.Output.Header",
odd_row_token="Token.Output.OddRow",
even_row_token="Token.Output.EvenRow",
**_
):
"""Style the *data* and *headers* (e.g. bold, italic, and colors)
.. NOTE::
@ -253,19 +291,32 @@ def style_output(data, headers, style=None,
"""
from cli_helpers.utils import filter_style_table
relevant_styles = filter_style_table(style, header_token, odd_row_token, even_row_token)
relevant_styles = filter_style_table(
style, header_token, odd_row_token, even_row_token
)
if style and HAS_PYGMENTS:
if relevant_styles.get(header_token):
headers = [utils.style_field(header_token, header, style) for header in headers]
headers = [
utils.style_field(header_token, header, style) for header in headers
]
if relevant_styles.get(odd_row_token) or relevant_styles.get(even_row_token):
data = ([utils.style_field(odd_row_token if i % 2 else even_row_token, f, style)
for f in r] for i, r in enumerate(data, 1))
data = (
[
utils.style_field(
odd_row_token if i % 2 else even_row_token, f, style
)
for f in r
]
for i, r in enumerate(data, 1)
)
return iter(data), headers
def format_numbers(data, headers, column_types=(), integer_format=None,
float_format=None, **_):
def format_numbers(
data, headers, column_types=(), integer_format=None, float_format=None, **_
):
"""Format numbers according to a format specification.
This uses Python's format specification to format numbers of the following
@ -296,5 +347,7 @@ def format_numbers(data, headers, column_types=(), integer_format=None,
return format(field, float_format)
return field
data = ([_format_number(v, column_types[i]) for i, v in enumerate(row)] for row in data)
data = (
[_format_number(v, column_types[i]) for i, v in enumerate(row)] for row in data
)
return data, headers
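The `integer_format` and `float_format` arguments above are plain Python format specifications, so their effect can be previewed with the built-in `format`:

```python
# Format specs as they would be passed for integer_format / float_format.
print(format(1234567, ",d"))   # → 1,234,567
print(format(3.14159, ".2f"))  # → 3.14
```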


@ -4,24 +4,110 @@
from __future__ import unicode_literals
from cli_helpers.utils import filter_dict_by_key
from cli_helpers.compat import (Terminal256Formatter, StringIO)
from .preprocessors import (convert_to_string, truncate_string, override_missing_value,
style_output, HAS_PYGMENTS)
from cli_helpers.compat import Terminal256Formatter, StringIO
from .preprocessors import (
convert_to_string,
truncate_string,
override_missing_value,
style_output,
HAS_PYGMENTS,
escape_newlines,
)
import tabulate
supported_markup_formats = ('mediawiki', 'html', 'latex', 'latex_booktabs',
'textile', 'moinmoin', 'jira')
supported_table_formats = ('plain', 'simple', 'grid', 'fancy_grid', 'pipe',
'orgtbl', 'psql', 'rst')
tabulate.MIN_PADDING = 0
tabulate._table_formats["psql_unicode"] = tabulate.TableFormat(
lineabove=tabulate.Line("", "", "", ""),
linebelowheader=tabulate.Line("", "", "", ""),
linebetweenrows=None,
linebelow=tabulate.Line("", "", "", ""),
headerrow=tabulate.DataRow("", "", ""),
datarow=tabulate.DataRow("", "", ""),
padding=1,
with_header_hide=None,
)
tabulate._table_formats["double"] = tabulate.TableFormat(
lineabove=tabulate.Line("", "", "", ""),
linebelowheader=tabulate.Line("", "", "", ""),
linebetweenrows=None,
linebelow=tabulate.Line("", "", "", ""),
headerrow=tabulate.DataRow("", "", ""),
datarow=tabulate.DataRow("", "", ""),
padding=1,
with_header_hide=None,
)
tabulate._table_formats["ascii"] = tabulate.TableFormat(
lineabove=tabulate.Line("+", "-", "+", "+"),
linebelowheader=tabulate.Line("+", "-", "+", "+"),
linebetweenrows=None,
linebelow=tabulate.Line("+", "-", "+", "+"),
headerrow=tabulate.DataRow("|", "|", "|"),
datarow=tabulate.DataRow("|", "|", "|"),
padding=1,
with_header_hide=None,
)
# "minimal" is the same as "plain", but without headers
tabulate._table_formats["minimal"] = tabulate._table_formats["plain"]
supported_markup_formats = (
"mediawiki",
"html",
"latex",
"latex_booktabs",
"textile",
"moinmoin",
"jira",
)
supported_table_formats = (
"ascii",
"plain",
"simple",
"minimal",
"grid",
"fancy_grid",
"pipe",
"orgtbl",
"psql",
"psql_unicode",
"rst",
"github",
"double",
)
supported_formats = supported_markup_formats + supported_table_formats
preprocessors = (override_missing_value, convert_to_string, truncate_string, style_output)
default_kwargs = {"ascii": {"numalign": "left"}}
headless_formats = ("minimal",)
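The new `ascii` entry registered above describes a classic `+---+` bordered table. A self-contained sketch of the shape that format renders (independent of tabulate, for illustration only — it ignores alignment options like the `numalign` default set above):

```python
def render_ascii(headers, rows):
    """Render a +---+ bordered table like the registered "ascii" format."""
    columns = list(zip(*([list(headers)] + [list(r) for r in rows])))
    widths = [max(len(str(cell)) for cell in col) for col in columns]
    rule = "+" + "+".join("-" * (w + 2) for w in widths) + "+"

    def line(cells):
        padded = (" {} ".format(str(c).ljust(w)) for c, w in zip(cells, widths))
        return "|" + "|".join(padded) + "|"

    body = [line(r) for r in rows]
    return "\n".join([rule, line(headers), rule] + body + [rule])

print(render_ascii(["x", "y"], [["a", "1"]]))
```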
def get_preprocessors(format_name):
common_formatters = (
override_missing_value,
convert_to_string,
truncate_string,
style_output,
)
if tabulate.multiline_formats.get(format_name):
return common_formatters + (style_output_table(format_name),)
else:
return common_formatters + (escape_newlines, style_output_table(format_name))
def style_output_table(format_name=""):
def style_output(data, headers, style=None,
table_separator_token='Token.Output.TableSeparator', **_):
def style_output(
data,
headers,
style=None,
table_separator_token="Token.Output.TableSeparator",
**_
):
"""Style the *table* a(e.g. bold, italic, and colors)
.. NOTE::
@ -71,24 +157,28 @@ def style_output_table(format_name=""):
if not elt:
return elt
if elt.__class__ == tabulate.Line:
return tabulate.Line(*(style_field(table_separator_token, val) for val in elt))
return tabulate.Line(
*(style_field(table_separator_token, val) for val in elt)
)
if elt.__class__ == tabulate.DataRow:
return tabulate.DataRow(*(style_field(table_separator_token, val) for val in elt))
return tabulate.DataRow(
*(style_field(table_separator_token, val) for val in elt)
)
return elt
srcfmt = tabulate._table_formats[format_name]
newfmt = tabulate.TableFormat(
*(addColorInElt(val) for val in srcfmt))
newfmt = tabulate.TableFormat(*(addColorInElt(val) for val in srcfmt))
tabulate._table_formats[format_name] = newfmt
return iter(data), headers
return style_output
def adapter(data, headers, table_format=None, preserve_whitespace=False,
**kwargs):
def adapter(data, headers, table_format=None, preserve_whitespace=False, **kwargs):
"""Wrap tabulate inside a function for TabularOutputFormatter."""
keys = ('floatfmt', 'numalign', 'stralign', 'showindex', 'disable_numparse')
tkwargs = {'tablefmt': table_format}
keys = ("floatfmt", "numalign", "stralign", "showindex", "disable_numparse")
tkwargs = {"tablefmt": table_format}
tkwargs.update(filter_dict_by_key(kwargs, keys))
if table_format in supported_markup_formats:
@ -96,4 +186,7 @@ def adapter(data, headers, table_format=None, preserve_whitespace=False,
tabulate.PRESERVE_WHITESPACE = preserve_whitespace
return iter(tabulate.tabulate(data, headers, **tkwargs).split('\n'))
tkwargs.update(default_kwargs.get(table_format, {}))
if table_format in headless_formats:
headers = []
return iter(tabulate.tabulate(data, headers, **tkwargs).split("\n"))


@ -1,97 +0,0 @@
# -*- coding: utf-8 -*-
"""Format adapter for the terminaltables module."""
from __future__ import unicode_literals
import terminaltables
import itertools
from cli_helpers.utils import filter_dict_by_key
from cli_helpers.compat import (Terminal256Formatter, StringIO)
from .preprocessors import (convert_to_string, truncate_string, override_missing_value,
style_output, HAS_PYGMENTS,
override_tab_value, escape_newlines)
supported_formats = ('ascii', 'double', 'github')
preprocessors = (
override_missing_value, convert_to_string, override_tab_value,
truncate_string, style_output, escape_newlines
)
table_format_handler = {
'ascii': terminaltables.AsciiTable,
'double': terminaltables.DoubleTable,
'github': terminaltables.GithubFlavoredMarkdownTable,
}
def style_output_table(format_name=""):
def style_output(data, headers, style=None,
table_separator_token='Token.Output.TableSeparator', **_):
"""Style the *table* (e.g. bold, italic, and colors)
.. NOTE::
This requires the `Pygments <http://pygments.org/>`_ library to
be installed. You can install it with CLI Helpers as an extra::
$ pip install cli_helpers[styles]
Example usage::
from cli_helpers.tabular_output import terminaltables_adapter
from pygments.style import Style
from pygments.token import Token
class YourStyle(Style):
default_style = ""
styles = {
Token.Output.TableSeparator: '#ansigray'
}
headers = ('First Name', 'Last Name')
data = [['Fred', 'Roberts'], ['George', 'Smith']]
style_output_table = terminaltables_adapter.style_output_table('psql')
style_output_table(data, headers, style=CliStyle)
output = terminaltables_adapter.adapter(data, headers, style=YourStyle)
:param iterable data: An :term:`iterable` (e.g. list) of rows.
:param iterable headers: The column headers.
:param str/pygments.style.Style style: A Pygments style. You can `create
your own styles <https://pygments.org/docs/styles#creating-own-styles>`_.
:param str table_separator_token: The token type to be used for the table separator.
:return: data and headers.
:rtype: tuple
"""
if style and HAS_PYGMENTS and format_name in supported_formats:
formatter = Terminal256Formatter(style=style)
def style_field(token, field):
"""Get the styled text for a *field* using *token* type."""
s = StringIO()
formatter.format(((token, field),), s)
return s.getvalue()
clss = table_format_handler[format_name]
for char in [char for char in terminaltables.base_table.BaseTable.__dict__ if char.startswith("CHAR_")]:
setattr(clss, char, style_field(
table_separator_token, getattr(clss, char)))
return iter(data), headers
return style_output
def adapter(data, headers, table_format=None, **kwargs):
"""Wrap terminaltables inside a function for TabularOutputFormatter."""
keys = ('title', )
table = table_format_handler[table_format]
t = table([headers] + list(data), **filter_dict_by_key(kwargs, keys))
dimensions = terminaltables.width_and_alignment.max_dimensions(
t.table_data,
t.padding_left,
t.padding_right)[:3]
for r in t.gen_table(*dimensions):
yield u''.join(r)

View file

@@ -7,10 +7,11 @@ from .preprocessors import bytes_to_string, override_missing_value, convert_to_s
from itertools import chain
from cli_helpers.utils import replace
supported_formats = ('tsv',)
supported_formats = ("tsv",)
preprocessors = (override_missing_value, bytes_to_string, convert_to_string)
def adapter(data, headers, **kwargs):
"""Wrap the formatting inside a function for TabularOutputFormatter."""
for row in chain((headers,), data):
yield "\t".join((replace(r, (('\n', r'\n'), ('\t', r'\t'))) for r in row))
yield "\t".join((replace(r, (("\n", r"\n"), ("\t", r"\t"))) for r in row))
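The reformatted TSV adapter above escapes embedded control characters before joining cells with real tabs; a minimal standalone sketch of the same logic (using a simplified stand-in for `cli_helpers.utils.replace`, which is an assumption about its behavior, not the packaged implementation):

```python
from itertools import chain

def replace(s, pairs):
    # Simplified stand-in for cli_helpers.utils.replace.
    for old, new in pairs:
        s = s.replace(old, new)
    return s

def tsv_adapter(data, headers):
    # Escape embedded newlines/tabs as the two-character sequences
    # "\n"/"\t", then join each row with literal tab characters.
    for row in chain((headers,), data):
        yield "\t".join(replace(r, (("\n", r"\n"), ("\t", r"\t"))) for r in row)
```

So a cell that contains a literal tab comes out as the two characters `\t`, keeping the column separators unambiguous.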

View file

@@ -4,10 +4,9 @@
from __future__ import unicode_literals
from cli_helpers.utils import filter_dict_by_key
from .preprocessors import (convert_to_string, override_missing_value,
style_output)
from .preprocessors import convert_to_string, override_missing_value, style_output
supported_formats = ('vertical', )
supported_formats = ("vertical",)
preprocessors = (override_missing_value, convert_to_string, style_output)
@@ -21,17 +20,19 @@ def _get_separator(num, sep_title, sep_character, sep_length):
title = sep_title.format(n=num + 1)
return "{left_divider}[ {title} ]{right_divider}\n".format(
left_divider=left_divider, right_divider=right_divider, title=title)
left_divider=left_divider, right_divider=right_divider, title=title
)
def _format_row(headers, row):
"""Format a row."""
formatted_row = [' | '.join(field) for field in zip(headers, row)]
return '\n'.join(formatted_row)
formatted_row = [" | ".join(field) for field in zip(headers, row)]
return "\n".join(formatted_row)
def vertical_table(data, headers, sep_title='{n}. row', sep_character='*',
sep_length=27):
def vertical_table(
data, headers, sep_title="{n}. row", sep_character="*", sep_length=27
):
"""Format *data* and *headers* as a vertical table.
The values in *data* and *headers* must be strings.
@@ -62,5 +63,5 @@ def vertical_table(data, headers, sep_title='{n}. row', sep_character='*',
def adapter(data, headers, **kwargs):
"""Wrap vertical table in a function for TabularOutputFormatter."""
keys = ('sep_title', 'sep_character', 'sep_length')
keys = ("sep_title", "sep_character", "sep_length")
return vertical_table(data, headers, **filter_dict_by_key(kwargs, keys))
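The vertical format renders one `header | value` line per column under a numbered divider; a rough standalone sketch of the idea (the divider layout is approximated here and is not the library's exact output):

```python
def vertical_table(data, headers, sep_title="{n}. row", sep_character="*", sep_length=27):
    # Approximate sketch: a numbered divider per row, then one
    # "header | value" line per column.
    lines = []
    for num, row in enumerate(data):
        title = sep_title.format(n=num + 1)
        half = sep_character * (sep_length // 2)
        lines.append("{0}[ {1} ]{0}".format(half, title))
        lines.append("\n".join(" | ".join(pair) for pair in zip(headers, row)))
    return "\n".join(lines)
```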

View file

@@ -7,6 +7,7 @@ from functools import lru_cache
from typing import Dict
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from pygments.style import StyleMeta
@@ -20,10 +21,16 @@ def bytes_to_string(b):
"""
if isinstance(b, binary_type):
needs_hex = False
try:
return b.decode('utf8')
result = b.decode("utf8")
needs_hex = not result.isprintable()
except UnicodeDecodeError:
return '0x' + binascii.hexlify(b).decode('ascii')
needs_hex = True
if needs_hex:
return "0x" + binascii.hexlify(b).decode("ascii")
else:
return result
return b
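The revised `bytes_to_string` above now hex-encodes byte strings that decode successfully but are not printable, in addition to undecodable ones; a self-contained sketch of the new behavior:

```python
import binascii

def bytes_to_string(b):
    # Decode UTF-8, but fall back to a hex representation when the
    # bytes are undecodable or decode to non-printable text.
    if isinstance(b, bytes):
        needs_hex = False
        try:
            result = b.decode("utf8")
            needs_hex = not result.isprintable()
        except UnicodeDecodeError:
            needs_hex = True
        if needs_hex:
            return "0x" + binascii.hexlify(b).decode("ascii")
        return result
    return b
```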
@@ -37,16 +44,20 @@ def to_string(value):
def truncate_string(value, max_width=None, skip_multiline_string=True):
"""Truncate string values."""
if skip_multiline_string and isinstance(value, text_type) and '\n' in value:
if skip_multiline_string and isinstance(value, text_type) and "\n" in value:
return value
elif isinstance(value, text_type) and max_width is not None and len(value) > max_width:
elif (
isinstance(value, text_type)
and max_width is not None
and len(value) > max_width
):
return value[: max_width - 3] + "..."
return value
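`truncate_string`, reformatted above, leaves multi-line values alone and shortens overlong single-line values with a `...` suffix; as a standalone sketch (Python 3, so `str` stands in for `text_type`):

```python
def truncate_string(value, max_width=None, skip_multiline_string=True):
    # Multi-line strings pass through untouched; single-line strings
    # longer than max_width are cut and suffixed with "...".
    if skip_multiline_string and isinstance(value, str) and "\n" in value:
        return value
    elif isinstance(value, str) and max_width is not None and len(value) > max_width:
        return value[: max_width - 3] + "..."
    return value
```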
def intlen(n):
"""Find the length of the integer part of a number *n*."""
pos = n.find('.')
pos = n.find(".")
return len(n) if pos < 0 else pos
@@ -61,12 +72,12 @@ def unique_items(seq):
return [x for x in seq if not (x in seen or seen.add(x))]
_ansi_re = re.compile('\033\\[((?:\\d|;)*)([a-zA-Z])')
_ansi_re = re.compile("\033\\[((?:\\d|;)*)([a-zA-Z])")
def strip_ansi(value):
"""Strip the ANSI escape sequences from a string."""
return _ansi_re.sub('', value)
return _ansi_re.sub("", value)
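Only the quoting of `_ansi_re` changes above; the pattern itself is unchanged and strips CSI escape sequences like so:

```python
import re

# Matches ANSI CSI sequences such as "\x1b[38;5;10m" or "\x1b[39m".
_ansi_re = re.compile("\033\\[((?:\\d|;)*)([a-zA-Z])")

def strip_ansi(value):
    """Strip the ANSI escape sequences from a string."""
    return _ansi_re.sub("", value)
```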
def replace(s, replace):
@@ -98,9 +109,8 @@ def filter_style_table(style: "StyleMeta", *relevant_styles: str) -> Dict:
'Token.Output.OddRow': "",
}
"""
_styles_iter = ((str(key), val) for key, val in getattr(style, 'styles', {}).items())
_relevant_styles_iter = filter(
lambda tpl: tpl[0] in relevant_styles,
_styles_iter
_styles_iter = (
(str(key), val) for key, val in getattr(style, "styles", {}).items()
)
_relevant_styles_iter = filter(lambda tpl: tpl[0] in relevant_styles, _styles_iter)
return {key: val for key, val in _relevant_styles_iter}
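`filter_style_table` keeps only the requested token styles from a Pygments style class; a minimal sketch with a stand-in style object (`FakeStyle` is hypothetical, so Pygments is not required to run it):

```python
def filter_style_table(style, *relevant_styles):
    # Keep only the style entries whose (stringified) token is requested.
    styles_iter = ((str(key), val) for key, val in getattr(style, "styles", {}).items())
    return {key: val for key, val in styles_iter if key in relevant_styles}

class FakeStyle:
    # Stand-in for a pygments.style.Style subclass.
    styles = {"Token.Output.OddRow": "#fff", "Token.Output.EvenRow": "", "Token.Other": "#000"}
```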

View file

@@ -19,8 +19,10 @@
#
import ast
from collections import OrderedDict
# import os
import re
# import sys
# sys.path.insert(0, os.path.abspath('.'))
@@ -34,22 +36,18 @@ import re
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.viewcode'
]
extensions = ["sphinx.ext.autodoc", "sphinx.ext.intersphinx", "sphinx.ext.viewcode"]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
templates_path = ["_templates"]
html_sidebars = {
'**': [
'about.html',
'navigation.html',
'relations.html',
'searchbox.html',
'donate.html',
"**": [
"about.html",
"navigation.html",
"relations.html",
"searchbox.html",
"donate.html",
]
}
@@ -57,25 +55,26 @@ html_sidebars = {
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
source_suffix = ".rst"
# The master toctree document.
master_doc = 'index'
master_doc = "index"
# General information about the project.
project = 'CLI Helpers'
author = 'dbcli'
description = 'Python helpers for common CLI tasks'
copyright = '2017, dbcli'
project = "CLI Helpers"
author = "dbcli"
description = "Python helpers for common CLI tasks"
copyright = "2017, dbcli"
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
_version_re = re.compile(r'__version__\s+=\s+(.*)')
with open('../../cli_helpers/__init__.py', 'rb') as f:
version = str(ast.literal_eval(_version_re.search(
f.read().decode('utf-8')).group(1)))
_version_re = re.compile(r"__version__\s+=\s+(.*)")
with open("../../cli_helpers/__init__.py", "rb") as f:
version = str(
ast.literal_eval(_version_re.search(f.read().decode("utf-8")).group(1))
)
# The full version, including alpha/beta/rc tags.
release = version
@@ -93,7 +92,7 @@ language = None
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
pygments_style = "sphinx"
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
@@ -104,40 +103,42 @@ todo_include_todos = False
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'
html_theme = "alabaster"
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
nav_links = OrderedDict((
('CLI Helpers at GitHub', 'https://github.com/dbcli/cli_helpers'),
('CLI Helpers at PyPI', 'https://pypi.org/project/cli_helpers'),
('Issue Tracker', 'https://github.com/dbcli/cli_helpers/issues')
))
nav_links = OrderedDict(
(
("CLI Helpers at GitHub", "https://github.com/dbcli/cli_helpers"),
("CLI Helpers at PyPI", "https://pypi.org/project/cli_helpers"),
("Issue Tracker", "https://github.com/dbcli/cli_helpers/issues"),
)
)
html_theme_options = {
'description': description,
'github_user': 'dbcli',
'github_repo': 'cli_helpers',
'github_banner': False,
'github_button': False,
'github_type': 'watch',
'github_count': False,
'extra_nav_links': nav_links
"description": description,
"github_user": "dbcli",
"github_repo": "cli_helpers",
"github_banner": False,
"github_button": False,
"github_type": "watch",
"github_count": False,
"extra_nav_links": nav_links,
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_static_path = ["_static"]
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'CLIHelpersdoc'
htmlhelp_basename = "CLIHelpersdoc"
# -- Options for LaTeX output ---------------------------------------------
@@ -146,15 +147,12 @@ latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
@@ -164,8 +162,7 @@ latex_elements = {
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'CLIHelpers.tex', 'CLI Helpers Documentation',
'dbcli', 'manual'),
(master_doc, "CLIHelpers.tex", "CLI Helpers Documentation", "dbcli", "manual"),
]
@@ -173,10 +170,7 @@ latex_documents = [
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'clihelpers', 'CLI Helpers Documentation',
[author], 1)
]
man_pages = [(master_doc, "clihelpers", "CLI Helpers Documentation", [author], 1)]
# -- Options for Texinfo output -------------------------------------------
@@ -185,16 +179,24 @@ man_pages = [
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'CLIHelpers', 'CLI Helpers Documentation',
author, 'CLIHelpers', description,
'Miscellaneous'),
(
master_doc,
"CLIHelpers",
"CLI Helpers Documentation",
author,
"CLIHelpers",
description,
"Miscellaneous",
),
]
intersphinx_mapping = {
'python': ('https://docs.python.org/3', None),
'py2': ('https://docs.python.org/2', None),
'pymysql': ('https://pymysql.readthedocs.io/en/latest/', None),
'numpy': ('https://docs.scipy.org/doc/numpy', None),
'configobj': ('https://configobj.readthedocs.io/en/latest', None)
"python": ("https://docs.python.org/3", None),
"py2": ("https://docs.python.org/2", None),
"pymysql": ("https://pymysql.readthedocs.io/en/latest/", None),
"numpy": ("https://docs.scipy.org/doc/numpy", None),
"configobj": ("https://configobj.readthedocs.io/en/latest", None),
}
linkcheck_ignore = ["https://github.com/psf/black.*"]

View file

@@ -50,7 +50,7 @@ Let's get a list of all the supported format names::
>>> from cli_helpers.tabular_output import TabularOutputFormatter
>>> formatter = TabularOutputFormatter()
>>> formatter.supported_formats
('vertical', 'csv', 'tsv', 'mediawiki', 'html', 'latex', 'latex_booktabs', 'textile', 'moinmoin', 'jira', 'plain', 'simple', 'grid', 'fancy_grid', 'pipe', 'orgtbl', 'psql', 'rst', 'ascii', 'double', 'github')
('vertical', 'csv', 'tsv', 'mediawiki', 'html', 'latex', 'latex_booktabs', 'textile', 'moinmoin', 'jira', 'plain', 'minimal', 'simple', 'grid', 'fancy_grid', 'pipe', 'orgtbl', 'psql', 'psql_unicode', 'rst', 'ascii', 'double', 'github')
You can format your data in any of those supported formats. Let's take the
same data from our first example and put it in the ``fancy_grid`` format::

View file

@@ -1,7 +1,7 @@
autopep8==1.3.3
codecov==2.0.9
coverage==4.3.4
pep8radius
black>=20.8b1
Pygments>=2.4.0
pytest==3.0.7
pytest-cov==2.4.0

View file

@@ -8,11 +8,12 @@ import sys
from setuptools import find_packages, setup
_version_re = re.compile(r'__version__\s+=\s+(.*)')
_version_re = re.compile(r"__version__\s+=\s+(.*)")
with open('cli_helpers/__init__.py', 'rb') as f:
version = str(ast.literal_eval(_version_re.search(
f.read().decode('utf-8')).group(1)))
with open("cli_helpers/__init__.py", "rb") as f:
version = str(
ast.literal_eval(_version_re.search(f.read().decode("utf-8")).group(1))
)
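Both `docs/source/conf.py` and `setup.py` read the package version with the same regex-plus-`literal_eval` pattern; in isolation (the `source` string here stands in for the contents of `cli_helpers/__init__.py`):

```python
import ast
import re

_version_re = re.compile(r"__version__\s+=\s+(.*)")

# Stand-in for the file contents read from cli_helpers/__init__.py.
source = '__version__ = "2.2.0"\n'
# literal_eval safely turns the quoted right-hand side into a str.
version = str(ast.literal_eval(_version_re.search(source).group(1)))
```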
def open_file(filename):
@@ -21,42 +22,37 @@ def open_file(filename):
return f.read()
readme = open_file('README.rst')
if sys.version_info[0] == 2:
py2_reqs = ['backports.csv >= 1.0.0']
else:
py2_reqs = []
readme = open_file("README.rst")
setup(
name='cli_helpers',
author='dbcli',
author_email='thomas@roten.us',
name="cli_helpers",
author="dbcli",
author_email="thomas@roten.us",
version=version,
url='https://github.com/dbcli/cli_helpers',
packages=find_packages(exclude=['docs', 'tests', 'tests.tabular_output']),
url="https://github.com/dbcli/cli_helpers",
packages=find_packages(exclude=["docs", "tests", "tests.tabular_output"]),
include_package_data=True,
description='Helpers for building command-line apps',
description="Helpers for building command-line apps",
long_description=readme,
long_description_content_type='text/x-rst',
long_description_content_type="text/x-rst",
install_requires=[
'configobj >= 5.0.5',
'tabulate[widechars] >= 0.8.2',
'terminaltables >= 3.0.0',
] + py2_reqs,
"configobj >= 5.0.5",
"tabulate[widechars] >= 0.8.2",
],
extras_require={
'styles': ['Pygments >= 1.6'],
"styles": ["Pygments >= 1.6"],
},
python_requires=">=3.6",
classifiers=[
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Operating System :: Unix',
'Programming Language :: Python',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Topic :: Software Development',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Terminals :: Terminal Emulators/X Terminals',
]
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: Unix",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Terminals :: Terminal Emulators/X Terminals",
],
)

View file

@@ -13,7 +13,7 @@ class BaseCommand(Command, object):
user_options = []
default_cmd_options = ('verbose', 'quiet', 'dry_run')
default_cmd_options = ("verbose", "quiet", "dry_run")
def __init__(self, *args, **kwargs):
super(BaseCommand, self).__init__(*args, **kwargs)
@@ -40,54 +40,58 @@ class BaseCommand(Command, object):
def apply_options(self, cmd, options=()):
"""Apply command-line options."""
for option in (self.default_cmd_options + options):
cmd = self.apply_option(cmd, option,
active=getattr(self, option, False))
for option in self.default_cmd_options + options:
cmd = self.apply_option(cmd, option, active=getattr(self, option, False))
return cmd
def apply_option(self, cmd, option, active=True):
"""Apply a command-line option."""
return re.sub(r'{{{}\:(?P<option>[^}}]*)}}'.format(option),
r'\g<option>' if active else '', cmd)
return re.sub(
r"{{{}\:(?P<option>[^}}]*)}}".format(option),
r"\g<option>" if active else "",
cmd,
)
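The `apply_option` regex above expands `{option:payload}` placeholders in a command template, keeping the payload when the option is active and dropping it otherwise; as a standalone function:

```python
import re

def apply_option(cmd, option, active=True):
    # Expand "{option:payload}" in the template: keep the payload when
    # the option is active, otherwise remove the whole placeholder.
    return re.sub(
        r"{{{}\:(?P<option>[^}}]*)}}".format(option),
        r"\g<option>" if active else "",
        cmd,
    )
```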
class lint(BaseCommand):
"""A PEP 8 lint command that optionally fixes violations."""
description = 'check code against PEP 8 (and fix violations)'
description = "check code against PEP 8 (and fix violations)"
user_options = [
('branch=', 'b', 'branch or revision to compare against (e.g. master)'),
('fix', 'f', 'fix the violations in place')
("branch=", "b", "branch or revision to compare against (e.g. master)"),
("fix", "f", "fix the violations in place"),
]
def initialize_options(self):
"""Set the default options."""
self.branch = 'master'
self.branch = "master"
self.fix = False
super(lint, self).initialize_options()
def run(self):
"""Run the linter."""
cmd = 'pep8radius {branch} {{fix: --in-place}}{{verbose: -vv}}'
cmd = "black ."
cmd = cmd.format(branch=self.branch)
self.call_and_exit(self.apply_options(cmd, ('fix', )))
self.call_and_exit(self.apply_options(cmd, ("fix",)))
class test(BaseCommand):
"""Run the test suites for this project."""
description = 'run the test suite'
description = "run the test suite"
user_options = [
('all', 'a', 'test against all supported versions of Python'),
('coverage', 'c', 'measure test coverage')
("all", "a", "test against all supported versions of Python"),
("coverage", "c", "measure test coverage"),
]
unit_test_cmd = ('pytest{quiet: -q}{verbose: -v}{dry_run: --setup-only}'
'{coverage: --cov-report= --cov=cli_helpers}')
test_all_cmd = 'tox{verbose: -v}{dry_run: --notest}'
coverage_cmd = 'coverage report'
unit_test_cmd = (
"pytest{quiet: -q}{verbose: -v}{dry_run: --setup-only}"
"{coverage: --cov-report= --cov=cli_helpers}"
)
test_all_cmd = "tox{verbose: -v}{dry_run: --notest}"
coverage_cmd = "coverage report"
def initialize_options(self):
"""Set the default options."""
@@ -101,7 +105,7 @@ class test(BaseCommand):
cmd = self.apply_options(self.test_all_cmd)
self.call_and_exit(cmd)
else:
cmds = (self.apply_options(self.unit_test_cmd, ('coverage', )), )
cmds = (self.apply_options(self.unit_test_cmd, ("coverage",)),)
if self.coverage:
cmds += (self.apply_options(self.coverage_cmd),)
self.call_in_sequence(cmds)
@@ -110,11 +114,11 @@ class test(BaseCommand):
class docs(BaseCommand):
"""Use Sphinx Makefile to generate documentation."""
description = 'generate the Sphinx HTML documentation'
description = "generate the Sphinx HTML documentation"
clean_docs_cmd = 'make -C docs clean'
html_docs_cmd = 'make -C docs html'
view_docs_cmd = 'open docs/build/html/index.html'
clean_docs_cmd = "make -C docs clean"
html_docs_cmd = "make -C docs html"
view_docs_cmd = "open docs/build/html/index.html"
def run(self):
"""Generate and view the documentation."""

View file

@@ -28,7 +28,7 @@ class _TempDirectory(object):
name = None
_closed = False
def __init__(self, suffix="", prefix='tmp', dir=None):
def __init__(self, suffix="", prefix="tmp", dir=None):
self.name = _tempfile.mkdtemp(suffix, prefix, dir)
def __repr__(self):
@@ -42,13 +42,14 @@ class _TempDirectory(object):
try:
_shutil.rmtree(self.name)
except (TypeError, AttributeError) as ex:
if "None" not in '%s' % (ex,):
if "None" not in "%s" % (ex,):
raise
self._rmtree(self.name)
self._closed = True
if _warn and _warnings.warn:
_warnings.warn("Implicitly cleaning up {!r}".format(self),
ResourceWarning)
_warnings.warn(
"Implicitly cleaning up {!r}".format(self), ResourceWarning
)
def __exit__(self, exc, value, tb):
self.cleanup()
@@ -57,8 +58,15 @@ class _TempDirectory(object):
# Issue a ResourceWarning if implicit cleanup needed
self.cleanup(_warn=True)
def _rmtree(self, path, _OSError=OSError, _sep=_os.path.sep,
_listdir=_os.listdir, _remove=_os.remove, _rmdir=_os.rmdir):
def _rmtree(
self,
path,
_OSError=OSError,
_sep=_os.path.sep,
_listdir=_os.listdir,
_remove=_os.remove,
_rmdir=_os.rmdir,
):
# Essentially a stripped down version of shutil.rmtree. We can't
# use globals because they may be None'ed out at shutdown.
if not isinstance(path, str):

View file

@@ -13,6 +13,6 @@ test_boolean_default = True
test_string_file = '~/myfile'
test_option = 'foobar'
test_option = 'foobar'
[section2]

View file

@@ -15,6 +15,6 @@ test_boolean = boolean()
test_string_file = string(default='~/myfile')
test_option = option('foo', 'bar', 'foobar', default='foobar')
test_option = option('foo', 'bar', 'foobar', 'foobar✔', default='foobar')
[section2]

View file

@@ -13,6 +13,6 @@ test_boolean_default True
test_string_file = '~/myfile'
test_option = 'foobar'
test_option = 'foobar'
[section2]

View file

@@ -15,6 +15,6 @@ test_boolean = bool(default=False)
test_string_file = string(default='~/myfile')
test_option = option('foo', 'bar', 'foobar', default='foobar')
test_option = option('foo', 'bar', 'foobar', 'foobar✔', default='foobar')
[section2]

View file

@@ -12,37 +12,44 @@ from cli_helpers.tabular_output import delimited_output_adapter
def test_csv_wrapper():
"""Test the delimited output adapter."""
# Test comma-delimited output.
data = [['abc', '1'], ['d', '456']]
headers = ['letters', 'number']
output = delimited_output_adapter.adapter(iter(data), headers, dialect='unix')
assert "\n".join(output) == dedent('''\
data = [["abc", "1"], ["d", "456"]]
headers = ["letters", "number"]
output = delimited_output_adapter.adapter(iter(data), headers, dialect="unix")
assert "\n".join(output) == dedent(
'''\
"letters","number"\n\
"abc","1"\n\
"d","456"''')
"d","456"'''
)
# Test tab-delimited output.
data = [['abc', '1'], ['d', '456']]
headers = ['letters', 'number']
data = [["abc", "1"], ["d", "456"]]
headers = ["letters", "number"]
output = delimited_output_adapter.adapter(
iter(data), headers, table_format='csv-tab', dialect='unix')
assert "\n".join(output) == dedent('''\
iter(data), headers, table_format="csv-tab", dialect="unix"
)
assert "\n".join(output) == dedent(
'''\
"letters"\t"number"\n\
"abc"\t"1"\n\
"d"\t"456"''')
"d"\t"456"'''
)
with pytest.raises(ValueError):
output = delimited_output_adapter.adapter(
iter(data), headers, table_format='foobar')
iter(data), headers, table_format="foobar"
)
list(output)
def test_unicode_with_csv():
"""Test that the csv wrapper can handle non-ascii characters."""
data = [['观音', '1'], ['Ποσειδῶν', '456']]
headers = ['letters', 'number']
data = [["观音", "1"], ["Ποσειδῶν", "456"]]
headers = ["letters", "number"]
output = delimited_output_adapter.adapter(data, headers)
assert "\n".join(output) == dedent('''\
assert "\n".join(output) == dedent(
"""\
letters,number\n\
观音,1\n\
Ποσειδῶν,456''')
Ποσειδῶν,456"""
)

View file

@@ -14,14 +14,15 @@ from cli_helpers.utils import strip_ansi
def test_tabular_output_formatter():
"""Test the TabularOutputFormatter class."""
headers = ['text', 'numeric']
headers = ["text", "numeric"]
data = [
["abc", Decimal(1)],
["defg", Decimal("11.1")],
["hi", Decimal("1.1")],
["Pablo\rß\n", 0],
]
expected = dedent("""\
expected = dedent(
"""\
+------------+---------+
| text | numeric |
+------------+---------+
@@ -33,66 +34,99 @@ def test_tabular_output_formatter():
)
print(expected)
print("\n".join(TabularOutputFormatter().format_output(
iter(data), headers, format_name='ascii')))
assert expected == "\n".join(TabularOutputFormatter().format_output(
iter(data), headers, format_name='ascii'))
print(
"\n".join(
TabularOutputFormatter().format_output(
iter(data), headers, format_name="ascii"
)
)
)
assert expected == "\n".join(
TabularOutputFormatter().format_output(iter(data), headers, format_name="ascii")
)
def test_tabular_format_output_wrapper():
"""Test the format_output wrapper."""
data = [['1', None], ['2', 'Sam'],
['3', 'Joe']]
headers = ['id', 'name']
expected = dedent('''\
data = [["1", None], ["2", "Sam"], ["3", "Joe"]]
headers = ["id", "name"]
expected = dedent(
"""\
+----+------+
| id | name |
+----+------+
| 1 | N/A |
| 2 | Sam |
| 3 | Joe |
+----+------+''')
+----+------+"""
)
assert expected == "\n".join(format_output(iter(data), headers, format_name='ascii',
missing_value='N/A'))
assert expected == "\n".join(
format_output(iter(data), headers, format_name="ascii", missing_value="N/A")
)
def test_additional_preprocessors():
"""Test that additional preprocessors are run."""
def hello_world(data, headers, **_):
def hello_world_data(data):
for row in data:
for i, value in enumerate(row):
if value == 'hello':
if value == "hello":
row[i] = "{}, world".format(value)
yield row
return hello_world_data(data), headers
data = [['foo', None], ['hello!', 'hello']]
headers = 'ab'
data = [["foo", None], ["hello!", "hello"]]
headers = "ab"
expected = dedent('''\
expected = dedent(
"""\
+--------+--------------+
| a | b |
+--------+--------------+
| foo | hello |
| hello! | hello, world |
+--------+--------------+''')
+--------+--------------+"""
)
assert expected == "\n".join(TabularOutputFormatter().format_output(
iter(data), headers, format_name='ascii', preprocessors=(hello_world,),
missing_value='hello'))
assert expected == "\n".join(
TabularOutputFormatter().format_output(
iter(data),
headers,
format_name="ascii",
preprocessors=(hello_world,),
missing_value="hello",
)
)
def test_format_name_attribute():
"""Test that the format_name attribute can be set and retrieved."""
formatter = TabularOutputFormatter(format_name='plain')
assert formatter.format_name == 'plain'
formatter.format_name = 'simple'
assert formatter.format_name == 'simple'
formatter = TabularOutputFormatter(format_name="plain")
assert formatter.format_name == "plain"
formatter.format_name = "simple"
assert formatter.format_name == "simple"
with pytest.raises(ValueError):
formatter.format_name = 'foobar'
formatter.format_name = "foobar"
def test_headless_tabulate_format():
"""Test that a headless formatter doesn't display headers"""
formatter = TabularOutputFormatter(format_name="minimal")
headers = ["text", "numeric"]
data = [["a"], ["b"], ["c"]]
expected = "a\nb\nc"
assert expected == "\n".join(
TabularOutputFormatter().format_output(
iter(data),
headers,
format_name="minimal",
)
)
def test_unsupported_format():
@@ -100,23 +134,27 @@ def test_unsupported_format():
formatter = TabularOutputFormatter()
with pytest.raises(ValueError):
formatter.format_name = 'foobar'
formatter.format_name = "foobar"
with pytest.raises(ValueError):
formatter.format_output((), (), format_name='foobar')
formatter.format_output((), (), format_name="foobar")
def test_tabulate_ansi_escape_in_default_value():
"""Test that ANSI escape codes work with tabulate."""
data = [['1', None], ['2', 'Sam'],
['3', 'Joe']]
headers = ['id', 'name']
data = [["1", None], ["2", "Sam"], ["3", "Joe"]]
headers = ["id", "name"]
styled = format_output(iter(data), headers, format_name='psql',
missing_value='\x1b[38;5;10mNULL\x1b[39m')
unstyled = format_output(iter(data), headers, format_name='psql',
missing_value='NULL')
styled = format_output(
iter(data),
headers,
format_name="psql",
missing_value="\x1b[38;5;10mNULL\x1b[39m",
)
unstyled = format_output(
iter(data), headers, format_name="psql", missing_value="NULL"
)
stripped_styled = [strip_ansi(s) for s in styled]
@@ -127,8 +165,14 @@ def test_get_type():
"""Test that _get_type returns the expected type."""
formatter = TabularOutputFormatter()
tests = ((1, int), (2.0, float), (b'binary', binary_type),
('text', text_type), (None, type(None)), ((), text_type))
tests = (
(1, int),
(2.0, float),
(b"binary", binary_type),
("text", text_type),
(None, type(None)),
((), text_type),
)
for value, data_type in tests:
assert data_type is formatter._get_type(value)
@@ -138,36 +182,45 @@ def test_provide_column_types():
"""Test that provided column types are passed to preprocessors."""
expected_column_types = (bool, float)
data = ((1, 1.0), (0, 2))
headers = ('a', 'b')
headers = ("a", "b")
def preprocessor(data, headers, column_types=(), **_):
assert expected_column_types == column_types
return data, headers
format_output(data, headers, 'csv',
format_output(
data,
headers,
"csv",
column_types=expected_column_types,
preprocessors=(preprocessor,))
preprocessors=(preprocessor,),
)
def test_enforce_iterable():
"""Test that all output formatters accept iterable"""
formatter = TabularOutputFormatter()
loremipsum = 'lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiusmod'.split(' ')
loremipsum = (
"lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiusmod".split(
" "
)
)
for format_name in formatter.supported_formats:
formatter.format_name = format_name
try:
formatted = next(formatter.format_output(
zip(loremipsum), ['lorem']))
formatted = next(formatter.format_output(zip(loremipsum), ["lorem"]))
except TypeError:
assert False, "{0} doesn't return iterable".format(format_name)
def test_all_text_type():
"""Test the TabularOutputFormatter class."""
data = [[1, u"", None, Decimal(2)]]
headers = ['col1', 'col2', 'col3', 'col4']
data = [[1, "", None, Decimal(2)]]
headers = ["col1", "col2", "col3", "col4"]
output_formatter = TabularOutputFormatter()
for format_name in output_formatter.supported_formats:
for row in output_formatter.format_output(iter(data), headers, format_name=format_name):
for row in output_formatter.format_output(
iter(data), headers, format_name=format_name
):
assert isinstance(row, text_type), "not unicode for {}".format(format_name)

View file

@@ -8,8 +8,15 @@ import pytest
from cli_helpers.compat import HAS_PYGMENTS
from cli_helpers.tabular_output.preprocessors import (
align_decimals, bytes_to_string, convert_to_string, quote_whitespaces,
override_missing_value, override_tab_value, style_output, format_numbers)
align_decimals,
bytes_to_string,
convert_to_string,
quote_whitespaces,
override_missing_value,
override_tab_value,
style_output,
format_numbers,
)
if HAS_PYGMENTS:
from pygments.style import Style
@@ -22,9 +29,9 @@ import types
def test_convert_to_string():
"""Test the convert_to_string() function."""
data = [[1, 'John'], [2, 'Jill']]
headers = [0, 'name']
expected = ([['1', 'John'], ['2', 'Jill']], ['0', 'name'])
data = [[1, "John"], [2, "Jill"]]
headers = [0, "name"]
expected = ([["1", "John"], ["2", "Jill"]], ["0", "name"])
results = convert_to_string(data, headers)
assert expected == (list(results[0]), results[1])
@@ -32,42 +39,41 @@ def test_convert_to_string():
def test_override_missing_values():
"""Test the override_missing_values() function."""
data = [[1, None], [2, 'Jill']]
headers = [0, 'name']
expected = ([[1, '<EMPTY>'], [2, 'Jill']], [0, 'name'])
results = override_missing_value(data, headers, missing_value='<EMPTY>')
data = [[1, None], [2, "Jill"]]
headers = [0, "name"]
expected = ([[1, "<EMPTY>"], [2, "Jill"]], [0, "name"])
results = override_missing_value(data, headers, missing_value="<EMPTY>")
assert expected == (list(results[0]), results[1])
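The test above exercises `override_missing_value`'s unstyled path, which simply swaps `None` cells for the placeholder; a hedged sketch of that path (the real preprocessor also supports Pygments styling, omitted here):

```python
def override_missing_value(data, headers, missing_value="", **_):
    # Replace None cells with the placeholder; headers pass through.
    def transform(rows):
        for row in rows:
            yield [missing_value if v is None else v for v in row]
    return transform(data), headers
```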
@pytest.mark.skipif(not HAS_PYGMENTS, reason='requires the Pygments library')
@pytest.mark.skipif(not HAS_PYGMENTS, reason="requires the Pygments library")
def test_override_missing_value_with_style():
"""Test that *override_missing_value()* styles output."""
class NullStyle(Style):
styles = {
Token.Output.Null: '#0f0'
}
styles = {Token.Output.Null: "#0f0"}
headers = ['h1', 'h2']
data = [[None, '2'], ['abc', None]]
headers = ["h1", "h2"]
data = [[None, "2"], ["abc", None]]
expected_headers = ['h1', 'h2']
expected_headers = ["h1", "h2"]
expected_data = [
['\x1b[38;5;10m<null>\x1b[39m', '2'],
['abc', '\x1b[38;5;10m<null>\x1b[39m']
["\x1b[38;5;10m<null>\x1b[39m", "2"],
["abc", "\x1b[38;5;10m<null>\x1b[39m"],
]
results = override_missing_value(data, headers,
style=NullStyle, missing_value="<null>")
results = override_missing_value(
data, headers, style=NullStyle, missing_value="<null>"
)
assert (expected_data, expected_headers) == (list(results[0]), results[1])
def test_override_tab_value():
"""Test the override_tab_value() function."""
data = [[1, '\tJohn'], [2, 'Jill']]
headers = ['id', 'name']
expected = ([[1, ' John'], [2, 'Jill']], ['id', 'name'])
data = [[1, "\tJohn"], [2, "Jill"]]
headers = ["id", "name"]
expected = ([[1, " John"], [2, "Jill"]], ["id", "name"])
results = override_tab_value(data, headers)
assert expected == (list(results[0]), results[1])
@@ -75,9 +81,9 @@ def test_override_tab_value():
def test_bytes_to_string():
"""Test the bytes_to_string() function."""
data = [[1, 'John'], [2, b'Jill']]
headers = [0, 'name']
expected = ([[1, 'John'], [2, 'Jill']], [0, 'name'])
data = [[1, "John"], [2, b"Jill"]]
headers = [0, "name"]
expected = ([[1, "John"], [2, "Jill"]], [0, "name"])
results = bytes_to_string(data, headers)
assert expected == (list(results[0]), results[1])
@@ -85,11 +91,10 @@ def test_bytes_to_string():
def test_align_decimals():
"""Test the align_decimals() function."""
data = [[Decimal('200'), Decimal('1')], [
Decimal('1.00002'), Decimal('1.0')]]
headers = ['num1', 'num2']
data = [[Decimal("200"), Decimal("1")], [Decimal("1.00002"), Decimal("1.0")]]
headers = ["num1", "num2"]
column_types = (float, float)
expected = ([['200', '1'], [' 1.00002', '1.0']], ['num1', 'num2'])
expected = ([["200", "1"], [" 1.00002", "1.0"]], ["num1", "num2"])
results = align_decimals(data, headers, column_types=column_types)
assert expected == (list(results[0]), results[1])
@@ -98,9 +103,9 @@ def test_align_decimals():
def test_align_decimals_empty_result():
"""Test align_decimals() with no results."""
data = []
headers = ['num1', 'num2']
headers = ["num1", "num2"]
column_types = ()
expected = ([], ['num1', 'num2'])
expected = ([], ["num1", "num2"])
results = align_decimals(data, headers, column_types=column_types)
assert expected == (list(results[0]), results[1])
@@ -108,10 +113,10 @@ def test_align_decimals_empty_result():
def test_align_decimals_non_decimals():
"""Test align_decimals() with non-decimals."""
data = [[Decimal('200.000'), Decimal('1.000')], [None, None]]
headers = ['num1', 'num2']
data = [[Decimal("200.000"), Decimal("1.000")], [None, None]]
headers = ["num1", "num2"]
column_types = (float, float)
expected = ([['200.000', '1.000'], [None, None]], ['num1', 'num2'])
expected = ([["200.000", "1.000"], [None, None]], ["num1", "num2"])
results = align_decimals(data, headers, column_types=column_types)
assert expected == (list(results[0]), results[1])
@@ -120,9 +125,8 @@ def test_align_decimals_non_decimals():
def test_quote_whitespaces():
"""Test the quote_whitespaces() function."""
data = [[" before", "after "], [" both ", "none"]]
headers = ['h1', 'h2']
expected = ([["' before'", "'after '"], ["' both '", "'none'"]],
['h1', 'h2'])
headers = ["h1", "h2"]
expected = ([["' before'", "'after '"], ["' both '", "'none'"]], ["h1", "h2"])
results = quote_whitespaces(data, headers)
assert expected == (list(results[0]), results[1])
@@ -131,8 +135,8 @@ def test_quote_whitespaces():
def test_quote_whitespaces_empty_result():
"""Test the quote_whitespaces() function with no results."""
data = []
headers = ['h1', 'h2']
expected = ([], ['h1', 'h2'])
headers = ["h1", "h2"]
expected = ([], ["h1", "h2"])
results = quote_whitespaces(data, headers)
assert expected == (list(results[0]), results[1])
@@ -141,106 +145,115 @@ def test_quote_whitespaces_empty_result():
def test_quote_whitespaces_non_spaces():
"""Test the quote_whitespaces() function with non-spaces."""
data = [["\tbefore", "after \r"], ["\n both ", "none"]]
headers = ['h1', 'h2']
expected = ([["'\tbefore'", "'after \r'"], ["'\n both '", "'none'"]],
['h1', 'h2'])
headers = ["h1", "h2"]
expected = ([["'\tbefore'", "'after \r'"], ["'\n both '", "'none'"]], ["h1", "h2"])
results = quote_whitespaces(data, headers)
assert expected == (list(results[0]), results[1])
@pytest.mark.skipif(not HAS_PYGMENTS, reason='requires the Pygments library')
@pytest.mark.skipif(not HAS_PYGMENTS, reason="requires the Pygments library")
def test_style_output_no_styles():
"""Test that *style_output()* does not style without styles."""
headers = ['h1', 'h2']
data = [['1', '2'], ['a', 'b']]
headers = ["h1", "h2"]
data = [["1", "2"], ["a", "b"]]
results = style_output(data, headers)
assert (data, headers) == (list(results[0]), results[1])
@pytest.mark.skipif(HAS_PYGMENTS,
reason='requires the Pygments library be missing')
@pytest.mark.skipif(HAS_PYGMENTS, reason="requires the Pygments library be missing")
def test_style_output_no_pygments():
"""Test that *style_output()* does not try to style without Pygments."""
headers = ['h1', 'h2']
data = [['1', '2'], ['a', 'b']]
headers = ["h1", "h2"]
data = [["1", "2"], ["a", "b"]]
results = style_output(data, headers)
assert (data, headers) == (list(results[0]), results[1])
@pytest.mark.skipif(not HAS_PYGMENTS, reason='requires the Pygments library')
@pytest.mark.skipif(not HAS_PYGMENTS, reason="requires the Pygments library")
def test_style_output():
"""Test that *style_output()* styles output."""
class CliStyle(Style):
default_style = ""
styles = {
Token.Output.Header: 'bold ansibrightred',
Token.Output.OddRow: 'bg:#eee #111',
Token.Output.EvenRow: '#0f0'
Token.Output.Header: "bold ansibrightred",
Token.Output.OddRow: "bg:#eee #111",
Token.Output.EvenRow: "#0f0",
}
headers = ['h1', 'h2']
data = [['观音', '2'], ['Ποσειδῶν', 'b']]
expected_headers = ['\x1b[91;01mh1\x1b[39;00m', '\x1b[91;01mh2\x1b[39;00m']
expected_data = [['\x1b[38;5;233;48;5;7m观音\x1b[39;49m',
'\x1b[38;5;233;48;5;7m2\x1b[39;49m'],
['\x1b[38;5;10mΠοσειδῶν\x1b[39m', '\x1b[38;5;10mb\x1b[39m']]
headers = ["h1", "h2"]
data = [["观音", "2"], ["Ποσειδῶν", "b"]]
expected_headers = ["\x1b[91;01mh1\x1b[39;00m", "\x1b[91;01mh2\x1b[39;00m"]
expected_data = [
["\x1b[38;5;233;48;5;7m观音\x1b[39;49m", "\x1b[38;5;233;48;5;7m2\x1b[39;49m"],
["\x1b[38;5;10mΠοσειδῶν\x1b[39m", "\x1b[38;5;10mb\x1b[39m"],
]
results = style_output(data, headers, style=CliStyle)
assert (expected_data, expected_headers) == (list(results[0]), results[1])
@pytest.mark.skipif(not HAS_PYGMENTS, reason='requires the Pygments library')
@pytest.mark.skipif(not HAS_PYGMENTS, reason="requires the Pygments library")
def test_style_output_with_newlines():
"""Test that *style_output()* styles output with newlines in it."""
class CliStyle(Style):
default_style = ""
styles = {
Token.Output.Header: 'bold ansibrightred',
Token.Output.OddRow: 'bg:#eee #111',
Token.Output.EvenRow: '#0f0'
Token.Output.Header: "bold ansibrightred",
Token.Output.OddRow: "bg:#eee #111",
Token.Output.EvenRow: "#0f0",
}
headers = ['h1', 'h2']
data = [['观音\nLine2', 'Ποσειδῶν']]
expected_headers = ['\x1b[91;01mh1\x1b[39;00m', '\x1b[91;01mh2\x1b[39;00m']
headers = ["h1", "h2"]
data = [["观音\nLine2", "Ποσειδῶν"]]
expected_headers = ["\x1b[91;01mh1\x1b[39;00m", "\x1b[91;01mh2\x1b[39;00m"]
expected_data = [
['\x1b[38;5;233;48;5;7m观音\x1b[39;49m\n\x1b[38;5;233;48;5;7m'
'Line2\x1b[39;49m',
'\x1b[38;5;233;48;5;7mΠοσειδῶν\x1b[39;49m']]
[
"\x1b[38;5;233;48;5;7m观音\x1b[39;49m\n\x1b[38;5;233;48;5;7m"
"Line2\x1b[39;49m",
"\x1b[38;5;233;48;5;7mΠοσειδῶν\x1b[39;49m",
]
]
results = style_output(data, headers, style=CliStyle)
assert (expected_data, expected_headers) == (list(results[0]), results[1])
@pytest.mark.skipif(not HAS_PYGMENTS, reason='requires the Pygments library')
@pytest.mark.skipif(not HAS_PYGMENTS, reason="requires the Pygments library")
def test_style_output_custom_tokens():
"""Test that *style_output()* styles output with custom token names."""
class CliStyle(Style):
default_style = ""
styles = {
Token.Results.Headers: 'bold ansibrightred',
Token.Results.OddRows: 'bg:#eee #111',
Token.Results.EvenRows: '#0f0'
Token.Results.Headers: "bold ansibrightred",
Token.Results.OddRows: "bg:#eee #111",
Token.Results.EvenRows: "#0f0",
}
headers = ['h1', 'h2']
data = [['1', '2'], ['a', 'b']]
expected_headers = ['\x1b[91;01mh1\x1b[39;00m', '\x1b[91;01mh2\x1b[39;00m']
expected_data = [['\x1b[38;5;233;48;5;7m1\x1b[39;49m',
'\x1b[38;5;233;48;5;7m2\x1b[39;49m'],
['\x1b[38;5;10ma\x1b[39m', '\x1b[38;5;10mb\x1b[39m']]
headers = ["h1", "h2"]
data = [["1", "2"], ["a", "b"]]
expected_headers = ["\x1b[91;01mh1\x1b[39;00m", "\x1b[91;01mh2\x1b[39;00m"]
expected_data = [
["\x1b[38;5;233;48;5;7m1\x1b[39;49m", "\x1b[38;5;233;48;5;7m2\x1b[39;49m"],
["\x1b[38;5;10ma\x1b[39m", "\x1b[38;5;10mb\x1b[39m"],
]
output = style_output(
data, headers, style=CliStyle,
header_token='Token.Results.Headers',
odd_row_token='Token.Results.OddRows',
even_row_token='Token.Results.EvenRows')
data,
headers,
style=CliStyle,
header_token="Token.Results.Headers",
odd_row_token="Token.Results.OddRows",
even_row_token="Token.Results.EvenRows",
)
assert (expected_data, expected_headers) == (list(output[0]), output[1])
@@ -248,29 +261,25 @@ def test_style_output_custom_tokens():
def test_format_integer():
"""Test formatting for an INTEGER datatype."""
data = [[1], [1000], [1000000]]
headers = ['h1']
result_data, result_headers = format_numbers(data,
headers,
column_types=(int,),
integer_format=',',
float_format=',')
headers = ["h1"]
result_data, result_headers = format_numbers(
data, headers, column_types=(int,), integer_format=",", float_format=","
)
expected = [['1'], ['1,000'], ['1,000,000']]
expected = [["1"], ["1,000"], ["1,000,000"]]
assert expected == list(result_data)
assert headers == result_headers
def test_format_decimal():
"""Test formatting for a DECIMAL(12, 4) datatype."""
data = [[Decimal('1.0000')], [Decimal('1000.0000')], [Decimal('1000000.0000')]]
headers = ['h1']
result_data, result_headers = format_numbers(data,
headers,
column_types=(float,),
integer_format=',',
float_format=',')
data = [[Decimal("1.0000")], [Decimal("1000.0000")], [Decimal("1000000.0000")]]
headers = ["h1"]
result_data, result_headers = format_numbers(
data, headers, column_types=(float,), integer_format=",", float_format=","
)
expected = [['1.0000'], ['1,000.0000'], ['1,000,000.0000']]
expected = [["1.0000"], ["1,000.0000"], ["1,000,000.0000"]]
assert expected == list(result_data)
assert headers == result_headers
@@ -278,13 +287,11 @@ def test_format_decimal():
def test_format_float():
"""Test formatting for a REAL datatype."""
data = [[1.0], [1000.0], [1000000.0]]
headers = ['h1']
result_data, result_headers = format_numbers(data,
headers,
column_types=(float,),
integer_format=',',
float_format=',')
expected = [['1.0'], ['1,000.0'], ['1,000,000.0']]
headers = ["h1"]
result_data, result_headers = format_numbers(
data, headers, column_types=(float,), integer_format=",", float_format=","
)
expected = [["1.0"], ["1,000.0"], ["1,000,000.0"]]
assert expected == list(result_data)
assert headers == result_headers
@@ -292,11 +299,12 @@ def test_format_float():
def test_format_integer_only():
"""Test that providing one format string works."""
data = [[1, 1.0], [1000, 1000.0], [1000000, 1000000.0]]
headers = ['h1', 'h2']
result_data, result_headers = format_numbers(data, headers, column_types=(int, float),
integer_format=',')
headers = ["h1", "h2"]
result_data, result_headers = format_numbers(
data, headers, column_types=(int, float), integer_format=","
)
expected = [['1', 1.0], ['1,000', 1000.0], ['1,000,000', 1000000.0]]
expected = [["1", 1.0], ["1,000", 1000.0], ["1,000,000", 1000000.0]]
assert expected == list(result_data)
assert headers == result_headers
@@ -304,7 +312,7 @@ def test_format_integer_only():
def test_format_numbers_no_format_strings():
"""Test that numbers aren't formatted without format strings."""
data = ((1), (1000), (1000000))
headers = ('h1',)
headers = ("h1",)
result_data, result_headers = format_numbers(data, headers, column_types=(int,))
assert list(data) == list(result_data)
assert headers == result_headers
@@ -313,17 +321,25 @@ def test_format_numbers_no_format_strings():
def test_format_numbers_no_column_types():
"""Test that numbers aren't formatted without column types."""
data = ((1), (1000), (1000000))
headers = ('h1',)
result_data, result_headers = format_numbers(data, headers, integer_format=',',
float_format=',')
headers = ("h1",)
result_data, result_headers = format_numbers(
data, headers, integer_format=",", float_format=","
)
assert list(data) == list(result_data)
assert headers == result_headers
def test_enforce_iterable():
preprocessors = inspect.getmembers(cli_helpers.tabular_output.preprocessors, inspect.isfunction)
loremipsum = 'lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiusmod'.split(' ')
preprocessors = inspect.getmembers(
cli_helpers.tabular_output.preprocessors, inspect.isfunction
)
loremipsum = (
"lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiusmod".split(
" "
)
)
for name, preprocessor in preprocessors:
preprocessed = preprocessor(zip(loremipsum), ['lorem'], column_types=(str,))
preprocessed = preprocessor(zip(loremipsum), ["lorem"], column_types=(str,))
try:
first = next(preprocessed[0])
except StopIteration:
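The preprocessor tests above establish the `override_missing_value()` contract: each `None` cell becomes the given placeholder, rows are produced lazily, and headers pass through unchanged. A self-contained plain-Python stand-in (an illustrative sketch, not the cli_helpers implementation):

```python
def override_missing_value(data, headers, missing_value=""):
    """Replace None cells with *missing_value*, yielding rows lazily."""
    def rows():
        for row in data:
            yield [missing_value if cell is None else cell for cell in row]
    return rows(), headers


rows, hdrs = override_missing_value(
    [[1, None], [2, "Jill"]], [0, "name"], missing_value="<EMPTY>"
)
print(list(rows))  # [[1, '<EMPTY>'], [2, 'Jill']]
print(hdrs)        # [0, 'name']
```

Returning a `(rows, headers)` pair is what lets these preprocessors chain and stay iterable, which is exactly what `test_enforce_iterable()` verifies.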

View file

@@ -16,35 +16,53 @@ if HAS_PYGMENTS:
def test_tabulate_wrapper():
"""Test the *output_formatter.tabulate_wrapper()* function."""
data = [['abc', 1], ['d', 456]]
headers = ['letters', 'number']
output = tabulate_adapter.adapter(iter(data), headers, table_format='psql')
assert "\n".join(output) == dedent('''\
+-----------+----------+
data = [["abc", 1], ["d", 456]]
headers = ["letters", "number"]
output = tabulate_adapter.adapter(iter(data), headers, table_format="psql")
assert "\n".join(output) == dedent(
"""\
+---------+--------+
| letters | number |
|-----------+----------|
|---------+--------|
| abc | 1 |
| d | 456 |
+-----------+----------+''')
+---------+--------+"""
)
data = [['{1,2,3}', '{{1,2},{3,4}}', '{å,魚,текст}'], ['{}', '<null>', '{<null>}']]
headers = ['bigint_array', 'nested_numeric_array', '配列']
output = tabulate_adapter.adapter(iter(data), headers, table_format='psql')
assert "\n".join(output) == dedent('''\
+----------------+------------------------+--------------+
data = [["abc", 1], ["d", 456]]
headers = ["letters", "number"]
output = tabulate_adapter.adapter(iter(data), headers, table_format="psql_unicode")
assert "\n".join(output) == dedent(
"""\
┌─────────┬────────┐
│ letters │ number │
├─────────┼────────┤
│ abc     │ 1      │
│ d       │ 456    │
└─────────┴────────┘"""
)
data = [["{1,2,3}", "{{1,2},{3,4}}", "{å,魚,текст}"], ["{}", "<null>", "{<null>}"]]
headers = ["bigint_array", "nested_numeric_array", "配列"]
output = tabulate_adapter.adapter(iter(data), headers, table_format="psql")
assert "\n".join(output) == dedent(
"""\
+--------------+----------------------+--------------+
| bigint_array | nested_numeric_array | 配列 |
|----------------+------------------------+--------------|
|--------------+----------------------+--------------|
| {1,2,3} | {{1,2},{3,4}} | {å,魚,текст} |
| {} | <null> | {<null>} |
+----------------+------------------------+--------------+''')
+--------------+----------------------+--------------+"""
)
def test_markup_format():
"""Test that markup formats do not have number align or string align."""
data = [['abc', 1], ['d', 456]]
headers = ['letters', 'number']
output = tabulate_adapter.adapter(iter(data), headers, table_format='mediawiki')
assert "\n".join(output) == dedent('''\
data = [["abc", 1], ["d", 456]]
headers = ["letters", "number"]
output = tabulate_adapter.adapter(iter(data), headers, table_format="mediawiki")
assert "\n".join(output) == dedent(
"""\
{| class="wikitable" style="text-align: left;"
|+ <!-- caption -->
|-
@@ -53,44 +71,43 @@ def test_markup_format():
| abc || 1
|-
| d || 456
|}''')
|}"""
)
@pytest.mark.skipif(not HAS_PYGMENTS, reason='requires the Pygments library')
@pytest.mark.skipif(not HAS_PYGMENTS, reason="requires the Pygments library")
def test_style_output_table():
"""Test that *style_output_table()* styles the output table."""
class CliStyle(Style):
default_style = ""
styles = {
Token.Output.TableSeparator: 'ansibrightred',
Token.Output.TableSeparator: "ansibrightred",
}
headers = ['h1', 'h2']
data = [['观音', '2'], ['Ποσειδῶν', 'b']]
style_output_table = tabulate_adapter.style_output_table('psql')
headers = ["h1", "h2"]
data = [["观音", "2"], ["Ποσειδῶν", "b"]]
style_output_table = tabulate_adapter.style_output_table("psql")
style_output_table(data, headers, style=CliStyle)
output = tabulate_adapter.adapter(iter(data), headers, table_format='psql')
output = tabulate_adapter.adapter(iter(data), headers, table_format="psql")
PLUS = "\x1b[91m+\x1b[39m"
MINUS = "\x1b[91m-\x1b[39m"
PIPE = "\x1b[91m|\x1b[39m"
assert "\n".join(output) == dedent('''\
\x1b[91m+\x1b[39m''' + (
('\x1b[91m-\x1b[39m' * 10) +
'\x1b[91m+\x1b[39m' +
('\x1b[91m-\x1b[39m' * 6)) +
'''\x1b[91m+\x1b[39m
\x1b[91m|\x1b[39m h1 \x1b[91m|\x1b[39m''' +
''' h2 \x1b[91m|\x1b[39m
''' + '\x1b[91m|\x1b[39m' + (
('\x1b[91m-\x1b[39m' * 10) +
'\x1b[91m+\x1b[39m' +
('\x1b[91m-\x1b[39m' * 6)) +
'''\x1b[91m|\x1b[39m
\x1b[91m|\x1b[39m 观音 \x1b[91m|\x1b[39m''' +
''' 2 \x1b[91m|\x1b[39m
\x1b[91m|\x1b[39m Ποσειδῶν \x1b[91m|\x1b[39m''' +
''' b \x1b[91m|\x1b[39m
''' + '\x1b[91m+\x1b[39m' + (
('\x1b[91m-\x1b[39m' * 10) +
'\x1b[91m+\x1b[39m' +
('\x1b[91m-\x1b[39m' * 6)) +
'\x1b[91m+\x1b[39m')
expected = (
dedent(
"""\
+----------+----+
| h1 | h2 |
|----------+----|
| 观音 | 2 |
| Ποσειδῶν | b |
+----------+----+"""
)
.replace("+", PLUS)
.replace("-", MINUS)
.replace("|", PIPE)
)
assert "\n".join(output) == expected
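The expected strings in the tabulate-adapter tests above all follow the same "psql" layout: a dashed border, a padded header row, a rule, then the data rows. A minimal ASCII-only renderer reproducing that layout (a hypothetical sketch for illustration; the real adapter delegates to the tabulate library and also handles wide CJK characters, which this does not):

```python
def psql_table(rows, headers):
    """Render a minimal psql-style ASCII table (left-aligned, ASCII widths only)."""
    cells = [[str(cell) for cell in row] for row in rows]
    # Column width = widest of header and all cells in that column.
    widths = [max([len(h)] + [len(r[i]) for r in cells]) for i, h in enumerate(headers)]
    edge = "+" + "+".join("-" * (w + 2) for w in widths) + "+"
    rule = "|" + "+".join("-" * (w + 2) for w in widths) + "|"

    def line(row):
        return "|" + "|".join(" {0:<{1}} ".format(c, w) for c, w in zip(row, widths)) + "|"

    return "\n".join([edge, line(headers), rule] + [line(r) for r in cells] + [edge])


print(psql_table([["abc", 1], ["d", 456]], ["letters", "number"]))
```

Running it reproduces the table asserted in `test_tabulate_wrapper()` above.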

View file

@@ -1,69 +0,0 @@
# -*- coding: utf-8 -*-
"""Test the terminaltables output adapter."""
from __future__ import unicode_literals
from textwrap import dedent
import pytest
from cli_helpers.compat import HAS_PYGMENTS
from cli_helpers.tabular_output import terminaltables_adapter
if HAS_PYGMENTS:
from pygments.style import Style
from pygments.token import Token
def test_terminal_tables_adapter():
"""Test the terminaltables output adapter."""
data = [['abc', 1], ['d', 456]]
headers = ['letters', 'number']
output = terminaltables_adapter.adapter(
iter(data), headers, table_format='ascii')
assert "\n".join(output) == dedent('''\
+---------+--------+
| letters | number |
+---------+--------+
| abc | 1 |
| d | 456 |
+---------+--------+''')
@pytest.mark.skipif(not HAS_PYGMENTS, reason='requires the Pygments library')
def test_style_output_table():
"""Test that *style_output_table()* styles the output table."""
class CliStyle(Style):
default_style = ""
styles = {
Token.Output.TableSeparator: 'ansibrightred',
}
headers = ['h1', 'h2']
data = [['观音', '2'], ['Ποσειδῶν', 'b']]
style_output_table = terminaltables_adapter.style_output_table('ascii')
style_output_table(data, headers, style=CliStyle)
output = terminaltables_adapter.adapter(iter(data), headers, table_format='ascii')
assert "\n".join(output) == dedent('''\
\x1b[91m+\x1b[39m''' + (
('\x1b[91m-\x1b[39m' * 10) +
'\x1b[91m+\x1b[39m' +
('\x1b[91m-\x1b[39m' * 4)) +
'''\x1b[91m+\x1b[39m
\x1b[91m|\x1b[39m h1 \x1b[91m|\x1b[39m''' +
''' h2 \x1b[91m|\x1b[39m
''' + '\x1b[91m+\x1b[39m' + (
('\x1b[91m-\x1b[39m' * 10) +
'\x1b[91m+\x1b[39m' +
('\x1b[91m-\x1b[39m' * 4)) +
'''\x1b[91m+\x1b[39m
\x1b[91m|\x1b[39m 观音 \x1b[91m|\x1b[39m''' +
''' 2 \x1b[91m|\x1b[39m
\x1b[91m|\x1b[39m Ποσειδῶν \x1b[91m|\x1b[39m''' +
''' b \x1b[91m|\x1b[39m
''' + '\x1b[91m+\x1b[39m' + (
('\x1b[91m-\x1b[39m' * 10) +
'\x1b[91m+\x1b[39m' +
('\x1b[91m-\x1b[39m' * 4)) +
'\x1b[91m+\x1b[39m')

View file

@@ -12,22 +12,25 @@ from cli_helpers.tabular_output import tsv_output_adapter
def test_tsv_wrapper():
"""Test the tsv output adapter."""
# Test tab-delimited output.
data = [['ab\r\nc', '1'], ['d', '456']]
headers = ['letters', 'number']
output = tsv_output_adapter.adapter(
iter(data), headers, table_format='tsv')
assert "\n".join(output) == dedent('''\
data = [["ab\r\nc", "1"], ["d", "456"]]
headers = ["letters", "number"]
output = tsv_output_adapter.adapter(iter(data), headers, table_format="tsv")
assert "\n".join(output) == dedent(
"""\
letters\tnumber\n\
ab\r\\nc\t1\n\
d\t456''')
d\t456"""
)
def test_unicode_with_tsv():
"""Test that the tsv wrapper can handle non-ascii characters."""
data = [['观音', '1'], ['Ποσειδῶν', '456']]
headers = ['letters', 'number']
data = [["观音", "1"], ["Ποσειδῶν", "456"]]
headers = ["letters", "number"]
output = tsv_output_adapter.adapter(data, headers)
assert "\n".join(output) == dedent('''\
assert "\n".join(output) == dedent(
"""\
letters\tnumber\n\
观音\t1\n\
Ποσειδῶν\t456''')
Ποσειδῶν\t456"""
)
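The tsv tests above show the core of the adapter: a tab-joined header line followed by one tab-joined line per row. A plain-Python stand-in for that core (an illustrative sketch; the real adapter also escapes embedded newlines, as the `ab\r\nc` case above shows, which this sketch omits):

```python
def tsv_adapter(data, headers):
    """Yield a header line, then one tab-delimited line per row."""
    yield "\t".join(str(h) for h in headers)
    for row in data:
        yield "\t".join(str(cell) for cell in row)


output = tsv_adapter([["观音", "1"], ["Ποσειδῶν", "456"]], ["letters", "number"])
print("\n".join(output))
```

Because values are joined as-is, non-ASCII text passes through unchanged, which is what `test_unicode_with_tsv()` checks.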

View file

@@ -9,30 +9,41 @@ from cli_helpers.tabular_output import vertical_table_adapter
def test_vertical_table():
"""Test the default settings for vertical_table()."""
results = [('hello', text_type(123)), ('world', text_type(456))]
results = [("hello", text_type(123)), ("world", text_type(456))]
expected = dedent("""\
expected = dedent(
"""\
***************************[ 1. row ]***************************
name | hello
age | 123
***************************[ 2. row ]***************************
name | world
age | 456""")
age | 456"""
)
assert expected == "\n".join(
vertical_table_adapter.adapter(results, ('name', 'age')))
vertical_table_adapter.adapter(results, ("name", "age"))
)
def test_vertical_table_customized():
"""Test customized settings for vertical_table()."""
results = [('john', text_type(47)), ('jill', text_type(50))]
results = [("john", text_type(47)), ("jill", text_type(50))]
expected = dedent("""\
expected = dedent(
"""\
-[ PERSON 1 ]-----
name | john
age | 47
-[ PERSON 2 ]-----
name | jill
age | 50""")
assert expected == "\n".join(vertical_table_adapter.adapter(
results, ('name', 'age'), sep_title='PERSON {n}',
sep_character='-', sep_length=(1, 5)))
age | 50"""
)
assert expected == "\n".join(
vertical_table_adapter.adapter(
results,
("name", "age"),
sep_title="PERSON {n}",
sep_character="-",
sep_length=(1, 5),
)
)
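The two vertical-table tests above exercise the same rendering rule: each row becomes a numbered separator line followed by `field | value` lines, with field names padded to the widest header and the separator's title, character, and flank lengths all customizable. A plain-Python stand-in (an illustrative sketch, not the cli_helpers implementation):

```python
def vertical_table(rows, headers, sep_title="{n}. row", sep_character="*", sep_length=(27, 27)):
    """Render rows as 'field | value' blocks under numbered separators."""
    width = max(len(h) for h in headers)
    left, right = sep_length
    lines = []
    for n, row in enumerate(rows, 1):
        # e.g. "-[ PERSON 1 ]-----" with sep_character="-" and sep_length=(1, 5)
        lines.append(sep_character * left + "[ " + sep_title.format(n=n) + " ]" + sep_character * right)
        for header, value in zip(headers, row):
            lines.append("{0:<{1}} | {2}".format(header, width, value))
    return "\n".join(lines)


print(vertical_table([("john", "47")], ("name", "age"),
                     sep_title="PERSON {n}", sep_character="-", sep_length=(1, 5)))
# -[ PERSON 1 ]-----
# name | john
# age  | 47
```

The defaults reproduce the `***...[ 1. row ]***...` separators asserted in `test_vertical_table()`.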

View file

@@ -8,56 +8,61 @@ from unittest.mock import MagicMock
import pytest
from cli_helpers.compat import MAC, text_type, WIN
from cli_helpers.config import (Config, DefaultConfigValidationError,
get_system_config_dirs, get_user_config_dir,
_pathify)
from cli_helpers.config import (
Config,
DefaultConfigValidationError,
get_system_config_dirs,
get_user_config_dir,
_pathify,
)
from .utils import with_temp_dir
APP_NAME, APP_AUTHOR = 'Test', 'Acme'
TEST_DATA_DIR = os.path.join(os.path.dirname(__file__), 'config_data')
APP_NAME, APP_AUTHOR = "Test", "Acme"
TEST_DATA_DIR = os.path.join(os.path.dirname(__file__), "config_data")
DEFAULT_CONFIG = {
'section': {
'test_boolean_default': 'True',
'test_string_file': '~/myfile',
'test_option': 'foobar'
"section": {
"test_boolean_default": "True",
"test_string_file": "~/myfile",
"test_option": "foobar✔",
},
'section2': {}
"section2": {},
}
DEFAULT_VALID_CONFIG = {
'section': {
'test_boolean_default': True,
'test_string_file': '~/myfile',
'test_option': 'foobar'
"section": {
"test_boolean_default": True,
"test_string_file": "~/myfile",
"test_option": "foobar✔",
},
'section2': {}
"section2": {},
}
def _mocked_user_config(temp_dir, *args, **kwargs):
config = Config(*args, **kwargs)
config.user_config_file = MagicMock(return_value=os.path.join(
temp_dir, config.filename))
config.user_config_file = MagicMock(
return_value=os.path.join(temp_dir, config.filename)
)
return config
def test_user_config_dir():
"""Test that the config directory is a string with the app name in it."""
if 'XDG_CONFIG_HOME' in os.environ:
del os.environ['XDG_CONFIG_HOME']
if "XDG_CONFIG_HOME" in os.environ:
del os.environ["XDG_CONFIG_HOME"]
config_dir = get_user_config_dir(APP_NAME, APP_AUTHOR)
assert isinstance(config_dir, text_type)
assert (config_dir.endswith(APP_NAME) or
config_dir.endswith(_pathify(APP_NAME)))
assert config_dir.endswith(APP_NAME) or config_dir.endswith(_pathify(APP_NAME))
def test_sys_config_dirs():
"""Test that the sys config directories are returned correctly."""
if 'XDG_CONFIG_DIRS' in os.environ:
del os.environ['XDG_CONFIG_DIRS']
if "XDG_CONFIG_DIRS" in os.environ:
del os.environ["XDG_CONFIG_DIRS"]
config_dirs = get_system_config_dirs(APP_NAME, APP_AUTHOR)
assert isinstance(config_dirs, list)
assert (config_dirs[0].endswith(APP_NAME) or
config_dirs[0].endswith(_pathify(APP_NAME)))
assert config_dirs[0].endswith(APP_NAME) or config_dirs[0].endswith(
_pathify(APP_NAME)
)
@pytest.mark.skipif(not WIN, reason="requires Windows")
@@ -66,7 +71,7 @@ def test_windows_user_config_dir_no_roaming():
config_dir = get_user_config_dir(APP_NAME, APP_AUTHOR, roaming=False)
assert isinstance(config_dir, text_type)
assert config_dir.endswith(APP_NAME)
assert 'Local' in config_dir
assert "Local" in config_dir
@pytest.mark.skipif(not MAC, reason="requires macOS")
@@ -75,7 +80,7 @@ def test_mac_user_config_dir_no_xdg():
config_dir = get_user_config_dir(APP_NAME, APP_AUTHOR, force_xdg=False)
assert isinstance(config_dir, text_type)
assert config_dir.endswith(APP_NAME)
assert 'Library' in config_dir
assert "Library" in config_dir
@pytest.mark.skipif(not MAC, reason="requires macOS")
@@ -84,53 +89,61 @@ def test_mac_system_config_dirs_no_xdg():
config_dirs = get_system_config_dirs(APP_NAME, APP_AUTHOR, force_xdg=False)
assert isinstance(config_dirs, list)
assert config_dirs[0].endswith(APP_NAME)
assert 'Library' in config_dirs[0]
assert "Library" in config_dirs[0]
def test_config_reading_raise_errors():
"""Test that instantiating Config will raise errors when appropriate."""
with pytest.raises(ValueError):
Config(APP_NAME, APP_AUTHOR, 'test_config', write_default=True)
Config(APP_NAME, APP_AUTHOR, "test_config", write_default=True)
with pytest.raises(ValueError):
Config(APP_NAME, APP_AUTHOR, 'test_config', validate=True)
Config(APP_NAME, APP_AUTHOR, "test_config", validate=True)
with pytest.raises(TypeError):
Config(APP_NAME, APP_AUTHOR, 'test_config', default=b'test')
Config(APP_NAME, APP_AUTHOR, "test_config", default=b"test")
def test_config_user_file():
"""Test that the Config user_config_file is appropriate."""
config = Config(APP_NAME, APP_AUTHOR, 'test_config')
assert (get_user_config_dir(APP_NAME, APP_AUTHOR) in
config.user_config_file())
config = Config(APP_NAME, APP_AUTHOR, "test_config")
assert get_user_config_dir(APP_NAME, APP_AUTHOR) in config.user_config_file()
def test_config_reading_default_dict():
"""Test that the Config constructor will read in defaults from a dict."""
default = {'main': {'foo': 'bar'}}
config = Config(APP_NAME, APP_AUTHOR, 'test_config', default=default)
default = {"main": {"foo": "bar"}}
config = Config(APP_NAME, APP_AUTHOR, "test_config", default=default)
assert config.data == default
def test_config_reading_no_default():
"""Test that the Config constructor will work without any defaults."""
config = Config(APP_NAME, APP_AUTHOR, 'test_config')
config = Config(APP_NAME, APP_AUTHOR, "test_config")
assert config.data == {}
def test_config_reading_default_file():
"""Test that the Config will work with a default file."""
config = Config(APP_NAME, APP_AUTHOR, 'test_config',
default=os.path.join(TEST_DATA_DIR, 'configrc'))
config = Config(
APP_NAME,
APP_AUTHOR,
"test_config",
default=os.path.join(TEST_DATA_DIR, "configrc"),
)
config.read_default_config()
assert config.data == DEFAULT_CONFIG
def test_config_reading_configspec():
"""Test that the Config default file will work with a configspec."""
config = Config(APP_NAME, APP_AUTHOR, 'test_config', validate=True,
default=os.path.join(TEST_DATA_DIR, 'configspecrc'))
config = Config(
APP_NAME,
APP_AUTHOR,
"test_config",
validate=True,
default=os.path.join(TEST_DATA_DIR, "configspecrc"),
)
config.read_default_config()
assert config.data == DEFAULT_VALID_CONFIG
@@ -138,134 +151,143 @@ def test_config_reading_configspec():
def test_config_reading_configspec_with_error():
"""Test that reading an invalid configspec raises and exception."""
with pytest.raises(DefaultConfigValidationError):
config = Config(APP_NAME, APP_AUTHOR, 'test_config', validate=True,
default=os.path.join(TEST_DATA_DIR,
'invalid_configspecrc'))
config = Config(
APP_NAME,
APP_AUTHOR,
"test_config",
validate=True,
default=os.path.join(TEST_DATA_DIR, "invalid_configspecrc"),
)
config.read_default_config()
@with_temp_dir
def test_write_and_read_default_config(temp_dir=None):
config_file = 'test_config'
default_file = os.path.join(TEST_DATA_DIR, 'configrc')
config_file = "test_config"
default_file = os.path.join(TEST_DATA_DIR, "configrc")
temp_config_file = os.path.join(temp_dir, config_file)
config = _mocked_user_config(temp_dir, APP_NAME, APP_AUTHOR, config_file,
default=default_file)
config = _mocked_user_config(
temp_dir, APP_NAME, APP_AUTHOR, config_file, default=default_file
)
config.read_default_config()
config.write_default_config()
user_config = _mocked_user_config(temp_dir, APP_NAME, APP_AUTHOR,
config_file, default=default_file)
user_config = _mocked_user_config(
temp_dir, APP_NAME, APP_AUTHOR, config_file, default=default_file
)
user_config.read()
assert temp_config_file in user_config.config_filenames
assert user_config == config
with open(temp_config_file) as f:
contents = f.read()
assert '# Test file comment' in contents
assert '# Test section comment' in contents
assert '# Test field comment' in contents
assert '# Test field commented out' in contents
assert "# Test file comment" in contents
assert "# Test section comment" in contents
assert "# Test field comment" in contents
assert "# Test field commented out" in contents
@with_temp_dir
def test_write_and_read_default_config_from_configspec(temp_dir=None):
config_file = 'test_config'
default_file = os.path.join(TEST_DATA_DIR, 'configspecrc')
config_file = "test_config"
default_file = os.path.join(TEST_DATA_DIR, "configspecrc")
temp_config_file = os.path.join(temp_dir, config_file)
config = _mocked_user_config(temp_dir, APP_NAME, APP_AUTHOR, config_file,
default=default_file, validate=True)
config = _mocked_user_config(
temp_dir, APP_NAME, APP_AUTHOR, config_file, default=default_file, validate=True
)
config.read_default_config()
config.write_default_config()
user_config = _mocked_user_config(temp_dir, APP_NAME, APP_AUTHOR,
config_file, default=default_file,
validate=True)
user_config = _mocked_user_config(
temp_dir, APP_NAME, APP_AUTHOR, config_file, default=default_file, validate=True
)
user_config.read()
assert temp_config_file in user_config.config_filenames
assert user_config == config
with open(temp_config_file) as f:
contents = f.read()
assert '# Test file comment' in contents
assert '# Test section comment' in contents
assert '# Test field comment' in contents
assert '# Test field commented out' in contents
assert "# Test file comment" in contents
assert "# Test section comment" in contents
assert "# Test field comment" in contents
assert "# Test field commented out" in contents
@with_temp_dir
def test_overwrite_default_config_from_configspec(temp_dir=None):
config_file = 'test_config'
default_file = os.path.join(TEST_DATA_DIR, 'configspecrc')
config_file = "test_config"
     default_file = os.path.join(TEST_DATA_DIR, "configspecrc")
     temp_config_file = os.path.join(temp_dir, config_file)
-    config = _mocked_user_config(temp_dir, APP_NAME, APP_AUTHOR, config_file,
-                                 default=default_file, validate=True)
+    config = _mocked_user_config(
+        temp_dir, APP_NAME, APP_AUTHOR, config_file, default=default_file, validate=True
+    )
     config.read_default_config()
     config.write_default_config()
-    with open(temp_config_file, 'a') as f:
-        f.write('--APPEND--')
+    with open(temp_config_file, "a") as f:
+        f.write("--APPEND--")
     config.write_default_config()
     with open(temp_config_file) as f:
-        assert '--APPEND--' in f.read()
+        assert "--APPEND--" in f.read()
     config.write_default_config(overwrite=True)
     with open(temp_config_file) as f:
-        assert '--APPEND--' not in f.read()
+        assert "--APPEND--" not in f.read()


 def test_read_invalid_config_file():
-    config_file = 'invalid_configrc'
-    config = _mocked_user_config(TEST_DATA_DIR, APP_NAME, APP_AUTHOR,
-                                 config_file)
+    config_file = "invalid_configrc"
+    config = _mocked_user_config(TEST_DATA_DIR, APP_NAME, APP_AUTHOR, config_file)
     config.read()
-    assert 'section' in config
-    assert 'test_string_file' in config['section']
-    assert 'test_boolean_default' not in config['section']
-    assert 'section2' in config
+    assert "section" in config
+    assert "test_string_file" in config["section"]
+    assert "test_boolean_default" not in config["section"]
+    assert "section2" in config


 @with_temp_dir
 def test_write_to_user_config(temp_dir=None):
-    config_file = 'test_config'
-    default_file = os.path.join(TEST_DATA_DIR, 'configrc')
+    config_file = "test_config"
+    default_file = os.path.join(TEST_DATA_DIR, "configrc")
     temp_config_file = os.path.join(temp_dir, config_file)
-    config = _mocked_user_config(temp_dir, APP_NAME, APP_AUTHOR, config_file,
-                                 default=default_file)
+    config = _mocked_user_config(
+        temp_dir, APP_NAME, APP_AUTHOR, config_file, default=default_file
+    )
     config.read_default_config()
     config.write_default_config()
     with open(temp_config_file) as f:
-        assert 'test_boolean_default = True' in f.read()
+        assert "test_boolean_default = True" in f.read()
-    config['section']['test_boolean_default'] = False
+    config["section"]["test_boolean_default"] = False
     config.write()
     with open(temp_config_file) as f:
-        assert 'test_boolean_default = False' in f.read()
+        assert "test_boolean_default = False" in f.read()


 @with_temp_dir
 def test_write_to_outfile(temp_dir=None):
-    config_file = 'test_config'
-    outfile = os.path.join(temp_dir, 'foo')
-    default_file = os.path.join(TEST_DATA_DIR, 'configrc')
+    config_file = "test_config"
+    outfile = os.path.join(temp_dir, "foo")
+    default_file = os.path.join(TEST_DATA_DIR, "configrc")
-    config = _mocked_user_config(temp_dir, APP_NAME, APP_AUTHOR, config_file,
-                                 default=default_file)
+    config = _mocked_user_config(
+        temp_dir, APP_NAME, APP_AUTHOR, config_file, default=default_file
+    )
     config.read_default_config()
     config.write_default_config()
-    config['section']['test_boolean_default'] = False
+    config["section"]["test_boolean_default"] = False
     config.write(outfile=outfile)
     with open(outfile) as f:
-        assert 'test_boolean_default = False' in f.read()
+        assert "test_boolean_default = False" in f.read()
View file
@@ -8,63 +8,70 @@ from cli_helpers import utils


 def test_bytes_to_string_hexlify():
     """Test that bytes_to_string() hexlifies binary data."""
-    assert utils.bytes_to_string(b'\xff') == '0xff'
+    assert utils.bytes_to_string(b"\xff") == "0xff"


 def test_bytes_to_string_decode_bytes():
     """Test that bytes_to_string() decodes bytes."""
-    assert utils.bytes_to_string(b'foobar') == 'foobar'
+    assert utils.bytes_to_string(b"foobar") == "foobar"


+def test_bytes_to_string_unprintable():
+    """Test that bytes_to_string() hexlifies data that is valid unicode, but unprintable."""
+    assert utils.bytes_to_string(b"\0") == "0x00"
+    assert utils.bytes_to_string(b"\1") == "0x01"
+    assert utils.bytes_to_string(b"a\0") == "0x6100"
+
+
 def test_bytes_to_string_non_bytes():
     """Test that bytes_to_string() returns non-bytes untouched."""
-    assert utils.bytes_to_string('abc') == 'abc'
+    assert utils.bytes_to_string("abc") == "abc"
     assert utils.bytes_to_string(1) == 1


 def test_to_string_bytes():
     """Test that to_string() converts bytes to a string."""
-    assert utils.to_string(b"foo") == 'foo'
+    assert utils.to_string(b"foo") == "foo"


 def test_to_string_non_bytes():
     """Test that to_string() converts non-bytes to a string."""
-    assert utils.to_string(1) == '1'
-    assert utils.to_string(2.29) == '2.29'
+    assert utils.to_string(1) == "1"
+    assert utils.to_string(2.29) == "2.29"


 def test_truncate_string():
     """Test string truncate preprocessor."""
-    val = 'x' * 100
-    assert utils.truncate_string(val, 10) == 'xxxxxxx...'
+    val = "x" * 100
+    assert utils.truncate_string(val, 10) == "xxxxxxx..."

-    val = 'x ' * 100
-    assert utils.truncate_string(val, 10) == 'x x x x...'
+    val = "x " * 100
+    assert utils.truncate_string(val, 10) == "x x x x..."

-    val = 'x' * 100
-    assert utils.truncate_string(val) == 'x' * 100
+    val = "x" * 100
+    assert utils.truncate_string(val) == "x" * 100

-    val = ['x'] * 100
-    val[20] = '\n'
-    str_val = ''.join(val)
+    val = ["x"] * 100
+    val[20] = "\n"
+    str_val = "".join(val)
     assert utils.truncate_string(str_val, 10, skip_multiline_string=True) == str_val


 def test_intlen_with_decimal():
     """Test that intlen() counts correctly with a decimal place."""
-    assert utils.intlen('11.1') == 2
-    assert utils.intlen('1.1') == 1
+    assert utils.intlen("11.1") == 2
+    assert utils.intlen("1.1") == 1


 def test_intlen_without_decimal():
     """Test that intlen() counts correctly without a decimal place."""
-    assert utils.intlen('11') == 2
+    assert utils.intlen("11") == 2


 def test_filter_dict_by_key():
     """Test that filter_dict_by_key() filter unwanted items."""
-    keys = ('foo', 'bar')
-    d = {'foo': 1, 'foobar': 2}
+    keys = ("foo", "bar")
+    d = {"foo": 1, "foobar": 2}
     fd = utils.filter_dict_by_key(d, keys)
     assert len(fd) == 1
     assert all([k in keys for k in fd])
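The assertions above pin down the behavior of several small helpers in `cli_helpers.utils`. A minimal sketch consistent with these tests (not the library's actual implementation) might look like:

```python
import binascii


def bytes_to_string(value):
    """Decode bytes to str; hexlify anything binary or unprintable."""
    if isinstance(value, bytes):
        try:
            decoded = value.decode("utf-8")
            if decoded.isprintable():
                return decoded
        except UnicodeDecodeError:
            pass
        # Binary or unprintable data becomes a hex literal like "0xff".
        return "0x" + binascii.hexlify(value).decode("ascii")
    return value  # non-bytes pass through untouched


def intlen(value):
    """Count the digits before the decimal point in a numeric string."""
    pos = value.find(".")
    return len(value) if pos < 0 else pos


def filter_dict_by_key(d, keys):
    """Keep only the items whose key appears in *keys*."""
    return {k: v for k, v in d.items() if k in keys}
```

Each function is small enough that the tests in the hunk fully specify it; the real module may differ in details.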
View file
@@ -9,8 +9,10 @@ from .compat import TemporaryDirectory


 def with_temp_dir(f):
     """A wrapper that creates and deletes a temporary directory."""

     @wraps(f)
     def wrapped(*args, **kwargs):
         with TemporaryDirectory() as temp_dir:
             return f(*args, temp_dir=temp_dir, **kwargs)

     return wrapped
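The decorator above injects a fresh directory into the wrapped test as the `temp_dir` keyword and deletes it when the test returns. A self-contained sketch of the same pattern, using the stdlib `tempfile.TemporaryDirectory` in place of the project's compat shim:

```python
import os
from functools import wraps
from tempfile import TemporaryDirectory


def with_temp_dir(f):
    """A wrapper that creates and deletes a temporary directory."""

    @wraps(f)
    def wrapped(*args, **kwargs):
        with TemporaryDirectory() as temp_dir:
            return f(*args, temp_dir=temp_dir, **kwargs)

    return wrapped


@with_temp_dir
def test_creates_file(temp_dir=None):
    # The decorator supplies temp_dir; the test just uses it.
    path = os.path.join(temp_dir, "example.txt")
    with open(path, "w") as f:
        f.write("hello")
    assert os.path.exists(path)
    return temp_dir
```

Because the `with` block exits after the wrapped function returns, the directory (and everything written into it) is gone by the time the caller sees the result.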
View file
@@ -12,7 +12,6 @@ setenv =
 commands =
     pytest --cov-report= --cov=cli_helpers
     coverage report
-    pep8radius master
    bash -c 'if [ -n "$CODECOV" ]; then {envbindir}/coverage xml && {envbindir}/codecov; fi'
 deps = -r{toxinidir}/requirements-dev.txt
 usedevelop = True