
Merging upstream version 2.2.0.

Signed-off-by: Daniel Baumann <daniel@debian.org>
Daniel Baumann 2025-02-07 00:48:48 +01:00
parent ab1302c465
commit 95bca6b33d
Signed by: daniel
GPG key ID: FBB4F0E80A80222F
42 changed files with 1085 additions and 840 deletions

.git-blame-ignore-revs (new file)

@@ -0,0 +1,2 @@
# Black all the code.
33e8b461b6ddb717859dde664b71209ce69c119a


@@ -7,3 +7,5 @@
 <!--- We appreciate your help and want to give you credit. Please take a moment to put an `x` in the boxes below as you complete them. -->
 - [ ] I've added this contribution to the `CHANGELOG`.
 - [ ] I've added my name to the `AUTHORS` file (or it's already there).
+- [ ] I installed pre-commit hooks (`pip install pre-commit && pre-commit install`), and ran `black` on my code.
+- [x] Please squash merge this pull request (uncheck if you'd like us to merge as multiple commits)

.pre-commit-config.yaml (new file)

@@ -0,0 +1,6 @@
repos:
  - repo: https://github.com/psf/black
    rev: stable
    hooks:
      - id: black
        language_version: python3.7


@@ -5,7 +5,7 @@ install: ./.travis/install.sh
 script:
   - source ~/.venv/bin/activate
   - tox
+  - if [[ "$TOXENV" == "py37" ]]; then black --check cli_helpers tests ; else echo "Skipping black for $TOXENV"; fi
 matrix:
   include:
     - os: linux


@@ -8,7 +8,7 @@ if [[ "$(uname -s)" == 'Darwin' ]]; then
     git clone --depth 1 https://github.com/pyenv/pyenv ~/.pyenv
     export PYENV_ROOT="$HOME/.pyenv"
     export PATH="$PYENV_ROOT/bin:$PATH"
-    eval "$(pyenv init -)"
+    eval "$(pyenv init --path)"
 
     case "${TOXENV}" in
         py36)
@@ -22,4 +22,4 @@ fi
 pip install virtualenv
 python -m virtualenv ~/.venv
 source ~/.venv/bin/activate
-pip install tox
+pip install -r requirements-dev.txt -U --upgrade-strategy only-if-needed


@@ -21,6 +21,8 @@ This project receives help from these awesome contributors:
 - laixintao
 - Georgy Frolov
 - Michał Górny
+- Waldir Pimenta
+- Mel Dafert
 
 Thanks
 ------


@@ -1,13 +1,23 @@
 Changelog
 =========
 
+Version 2.2.0
+-------------
+
+(released on 2021-08-27)
+
+* Remove dependency on terminaltables
+* Add psql_unicode table format
+* Add minimal table format
+* Fix pip2 installing py3-only versions
+* Format unprintable bytes (eg 0x00, 0x01) as hex
+
 Version 2.1.0
 -------------
 
 (released on 2020-07-29)
 
-* Speed up ouput styling of tables.
+* Speed up output styling of tables.
 
 Version 2.0.1
 -------------
@@ -40,6 +50,7 @@ Version 1.2.0
 
 (released on 2019-04-05)
 
+* Fix issue with writing non-ASCII characters to config files.
 * Run tests on Python 3.7.
 * Use twine check during packaging tests.
 * Rename old tsv format to csv-tab (because it add quotes), introduce new tsv output adapter.


@@ -24,9 +24,9 @@ Ready to contribute? Here's how to set up CLI Helpers for local development.
     $ pip install virtualenv
     $ virtualenv cli_helpers_dev
 
-   We've just created a virtual environment that we'll use to install all the dependencies
-   and tools we need to work on CLI Helpers. Whenever you want to work on CLI Helpers, you
-   need to activate the virtual environment::
+   We've just created a virtual environment called ``cli_helpers_dev``
+   that we'll use to install all the dependencies and tools we need to work on CLI Helpers.
+   Whenever you want to work on CLI Helpers, you need to activate the virtual environment::
 
     $ source cli_helpers_dev/bin/activate
 
@@ -34,7 +34,7 @@ Ready to contribute? Here's how to set up CLI Helpers for local development.
     $ deactivate
 
-5. Install the dependencies and development tools::
+5. From within the virtual environment, install the dependencies and development tools::
 
     $ pip install -r requirements-dev.txt
     $ pip install --editable .
 
@@ -43,11 +43,14 @@ Ready to contribute? Here's how to set up CLI Helpers for local development.
     $ git checkout -b <name-of-bugfix-or-feature> master
 
-7. While you work on your bugfix or feature, be sure to pull the latest changes from ``upstream``. This ensures that your local codebase is up-to-date::
+7. While you work on your bugfix or feature, be sure to pull the latest changes from ``upstream``.
+   This ensures that your local codebase is up-to-date::
 
     $ git pull upstream master
 
-8. When your work is ready for the CLI Helpers team to review it, push your branch to your fork::
+8. When your work is ready for the CLI Helpers team to review it,
+   make sure to add an entry to CHANGELOG file, and add your name to the AUTHORS file.
+   Then, push your branch to your fork::
 
     $ git push origin <name-of-bugfix-or-feature>
@@ -77,18 +80,31 @@ You can also measure CLI Helper's test coverage by running::
 
 Coding Style
 ------------
 
-CLI Helpers requires code submissions to adhere to
-`PEP 8 <https://www.python.org/dev/peps/pep-0008/>`_.
-It's easy to check the style of your code, just run::
-
-    $ pep8radius master
-
-If you see any PEP 8 style issues, you can automatically fix them by running::
-
-    $ pep8radius master --in-place
-
-Be sure to commit and push any PEP 8 fixes.
+When you submit a PR, the changeset is checked for pep8 compliance using
+`black <https://github.com/psf/black>`_. If you see a build failing because
+of these checks, install ``black`` and apply style fixes:
+
+::
+
+    $ pip install black
+    $ black .
+
+Then commit and push the fixes.
+
+To enforce ``black`` applied on every commit, we also suggest installing ``pre-commit`` and
+using the ``pre-commit`` hooks available in this repo:
+
+::
+
+    $ pip install pre-commit
+    $ pre-commit install
+
+Git blame
+---------
+
+Use ``git blame my_file.py --ignore-revs-file .git-blame-ignore-revs`` to exclude irrelevant commits
+(specifically Black) from ``git blame``. For more information,
+see `here <https://github.com/psf/black#migrating-your-code-style-without-ruining-git-blame>`_.
 
 Documentation
 -------------


@@ -6,3 +6,4 @@ recursive-include docs *.rst
 recursive-include docs Makefile
 recursive-include tests *.py
 include tests/config_data/*
+exclude .pre-commit-config.yaml .git-blame-ignore-revs


@@ -1 +1 @@
-__version__ = '2.1.0'
+__version__ = "2.2.0"


@@ -5,8 +5,8 @@ from decimal import Decimal
 import sys
 
 PY2 = sys.version_info[0] == 2
-WIN = sys.platform.startswith('win')
-MAC = sys.platform == 'darwin'
+WIN = sys.platform.startswith("win")
+MAC = sys.platform == "darwin"
 
 if PY2:


@ -16,11 +16,13 @@ logger = logging.getLogger(__name__)
class ConfigError(Exception): class ConfigError(Exception):
"""Base class for exceptions in this module.""" """Base class for exceptions in this module."""
pass pass
class DefaultConfigValidationError(ConfigError): class DefaultConfigValidationError(ConfigError):
"""Indicates the default config file did not validate correctly.""" """Indicates the default config file did not validate correctly."""
pass pass
@ -40,11 +42,19 @@ class Config(UserDict, object):
file. file.
""" """
def __init__(self, app_name, app_author, filename, default=None, def __init__(
validate=False, write_default=False, additional_dirs=()): self,
app_name,
app_author,
filename,
default=None,
validate=False,
write_default=False,
additional_dirs=(),
):
super(Config, self).__init__() super(Config, self).__init__()
#: The :class:`ConfigObj` instance. #: The :class:`ConfigObj` instance.
self.data = ConfigObj() self.data = ConfigObj(encoding="utf8")
self.default = {} self.default = {}
self.default_file = self.default_config = None self.default_file = self.default_config = None
@ -64,15 +74,19 @@ class Config(UserDict, object):
elif default is not None: elif default is not None:
raise TypeError( raise TypeError(
'"default" must be a dict or {}, not {}'.format( '"default" must be a dict or {}, not {}'.format(
text_type.__name__, type(default))) text_type.__name__, type(default)
)
)
if self.write_default and not self.default_file: if self.write_default and not self.default_file:
raise ValueError('Cannot use "write_default" without specifying ' raise ValueError(
'a default file.') 'Cannot use "write_default" without specifying ' "a default file."
)
if self.validate and not self.default_file: if self.validate and not self.default_file:
raise ValueError('Cannot use "validate" without specifying a ' raise ValueError(
'default file.') 'Cannot use "validate" without specifying a ' "default file."
)
def read_default_config(self): def read_default_config(self):
"""Read the default config file. """Read the default config file.
@ -81,11 +95,18 @@ class Config(UserDict, object):
the *default* file. the *default* file.
""" """
if self.validate: if self.validate:
self.default_config = ConfigObj(configspec=self.default_file, self.default_config = ConfigObj(
list_values=False, _inspec=True, configspec=self.default_file,
encoding='utf8') list_values=False,
valid = self.default_config.validate(Validator(), copy=True, _inspec=True,
preserve_errors=True) encoding="utf8",
)
# ConfigObj does not set the encoding on the configspec.
self.default_config.configspec.encoding = "utf8"
valid = self.default_config.validate(
Validator(), copy=True, preserve_errors=True
)
if valid is not True: if valid is not True:
for name, section in valid.items(): for name, section in valid.items():
if section is True: if section is True:
@ -93,8 +114,8 @@ class Config(UserDict, object):
for key, value in section.items(): for key, value in section.items():
if isinstance(value, ValidateError): if isinstance(value, ValidateError):
raise DefaultConfigValidationError( raise DefaultConfigValidationError(
'section [{}], key "{}": {}'.format( 'section [{}], key "{}": {}'.format(name, key, value)
name, key, value)) )
elif self.default_file: elif self.default_file:
self.default_config, _ = self.read_config_file(self.default_file) self.default_config, _ = self.read_config_file(self.default_file)
@ -113,13 +134,15 @@ class Config(UserDict, object):
def user_config_file(self): def user_config_file(self):
"""Get the absolute path to the user config file.""" """Get the absolute path to the user config file."""
return os.path.join( return os.path.join(
get_user_config_dir(self.app_name, self.app_author), get_user_config_dir(self.app_name, self.app_author), self.filename
self.filename) )
def system_config_files(self): def system_config_files(self):
"""Get a list of absolute paths to the system config files.""" """Get a list of absolute paths to the system config files."""
return [os.path.join(f, self.filename) for f in get_system_config_dirs( return [
self.app_name, self.app_author)] os.path.join(f, self.filename)
for f in get_system_config_dirs(self.app_name, self.app_author)
]
def additional_files(self): def additional_files(self):
"""Get a list of absolute paths to the additional config files.""" """Get a list of absolute paths to the additional config files."""
@ -127,8 +150,11 @@ class Config(UserDict, object):
def all_config_files(self): def all_config_files(self):
"""Get a list of absolute paths to all the config files.""" """Get a list of absolute paths to all the config files."""
return (self.additional_files() + self.system_config_files() + return (
[self.user_config_file()]) self.additional_files()
+ self.system_config_files()
+ [self.user_config_file()]
)
def write_default_config(self, overwrite=False): def write_default_config(self, overwrite=False):
"""Write the default config to the user's config file. """Write the default config to the user's config file.
@ -139,7 +165,7 @@ class Config(UserDict, object):
if not overwrite and os.path.exists(destination): if not overwrite and os.path.exists(destination):
return return
with io.open(destination, mode='wb') as f: with io.open(destination, mode="wb") as f:
self.default_config.write(f) self.default_config.write(f)
def write(self, outfile=None, section=None): def write(self, outfile=None, section=None):
@ -149,7 +175,7 @@ class Config(UserDict, object):
:param None/str section: The config section to write, or :data:`None` :param None/str section: The config section to write, or :data:`None`
to write the entire config. to write the entire config.
""" """
with io.open(outfile or self.user_config_file(), 'wb') as f: with io.open(outfile or self.user_config_file(), "wb") as f:
self.data.write(outfile=f, section=section) self.data.write(outfile=f, section=section)
def read_config_file(self, f): def read_config_file(self, f):
@ -159,18 +185,21 @@ class Config(UserDict, object):
""" """
configspec = self.default_file if self.validate else None configspec = self.default_file if self.validate else None
try: try:
config = ConfigObj(infile=f, configspec=configspec, config = ConfigObj(
interpolation=False, encoding='utf8') infile=f, configspec=configspec, interpolation=False, encoding="utf8"
)
# ConfigObj does not set the encoding on the configspec.
if config.configspec is not None:
config.configspec.encoding = "utf8"
except ConfigObjError as e: except ConfigObjError as e:
logger.warning( logger.warning(
'Unable to parse line {} of config file {}'.format( "Unable to parse line {} of config file {}".format(e.line_number, f)
e.line_number, f)) )
config = e.config config = e.config
valid = True valid = True
if self.validate: if self.validate:
valid = config.validate(Validator(), preserve_errors=True, valid = config.validate(Validator(), preserve_errors=True, copy=True)
copy=True)
if bool(config): if bool(config):
self.config_filenames.append(config.filename) self.config_filenames.append(config.filename)
@ -220,15 +249,17 @@ def get_user_config_dir(app_name, app_author, roaming=True, force_xdg=True):
""" """
if WIN: if WIN:
key = 'APPDATA' if roaming else 'LOCALAPPDATA' key = "APPDATA" if roaming else "LOCALAPPDATA"
folder = os.path.expanduser(os.environ.get(key, '~')) folder = os.path.expanduser(os.environ.get(key, "~"))
return os.path.join(folder, app_author, app_name) return os.path.join(folder, app_author, app_name)
if MAC and not force_xdg: if MAC and not force_xdg:
return os.path.join(os.path.expanduser(
'~/Library/Application Support'), app_name)
return os.path.join( return os.path.join(
os.path.expanduser(os.environ.get('XDG_CONFIG_HOME', '~/.config')), os.path.expanduser("~/Library/Application Support"), app_name
_pathify(app_name)) )
return os.path.join(
os.path.expanduser(os.environ.get("XDG_CONFIG_HOME", "~/.config")),
_pathify(app_name),
)
def get_system_config_dirs(app_name, app_author, force_xdg=True): def get_system_config_dirs(app_name, app_author, force_xdg=True):
@ -256,15 +287,15 @@ def get_system_config_dirs(app_name, app_author, force_xdg=True):
""" """
if WIN: if WIN:
folder = os.environ.get('PROGRAMDATA') folder = os.environ.get("PROGRAMDATA")
return [os.path.join(folder, app_author, app_name)] return [os.path.join(folder, app_author, app_name)]
if MAC and not force_xdg: if MAC and not force_xdg:
return [os.path.join('/Library/Application Support', app_name)] return [os.path.join("/Library/Application Support", app_name)]
dirs = os.environ.get('XDG_CONFIG_DIRS', '/etc/xdg') dirs = os.environ.get("XDG_CONFIG_DIRS", "/etc/xdg")
paths = [os.path.expanduser(x) for x in dirs.split(os.pathsep)] paths = [os.path.expanduser(x) for x in dirs.split(os.pathsep)]
return [os.path.join(d, _pathify(app_name)) for d in paths] return [os.path.join(d, _pathify(app_name)) for d in paths]
def _pathify(s): def _pathify(s):
"""Convert spaces to hyphens and lowercase a string.""" """Convert spaces to hyphens and lowercase a string."""
return '-'.join(s.split()).lower() return "-".join(s.split()).lower()
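To make the reworked ``Config`` constructor and the new UTF-8 handling above concrete, here is a minimal usage sketch. It is not part of the diff; the application name, author, and default-file path are invented, and only the keyword names and methods come from the code shown::

    from cli_helpers.config import Config

    # Hypothetical values; only the parameter names are taken from the class above.
    cfg = Config(
        "myapp",                    # app_name
        "mycompany",                # app_author
        "myapprc",                  # filename of the user config file
        default="default_myapprc",  # path to a default/spec file (invented)
        validate=True,              # validate the user config against the default
        write_default=True,         # allow copying the default to the user location
    )

    cfg.read_default_config()      # the default file is now parsed as UTF-8
    cfg.write_default_config()     # copies it to cfg.user_config_file() if absent
    print(cfg.user_config_file())  # e.g. ~/.config/myapp/myapprc under XDG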


@@ -10,4 +10,4 @@ When formatting data, you'll primarily use the
 from .output_formatter import format_output, TabularOutputFormatter
 
-__all__ = ['format_output', 'TabularOutputFormatter']
+__all__ = ["format_output", "TabularOutputFormatter"]


@ -8,7 +8,7 @@ from cli_helpers.compat import csv, StringIO
from cli_helpers.utils import filter_dict_by_key from cli_helpers.utils import filter_dict_by_key
from .preprocessors import bytes_to_string, override_missing_value from .preprocessors import bytes_to_string, override_missing_value
supported_formats = ('csv', 'csv-tab') supported_formats = ("csv", "csv-tab")
preprocessors = (override_missing_value, bytes_to_string) preprocessors = (override_missing_value, bytes_to_string)
@ -23,18 +23,26 @@ class linewriter(object):
self.line = d self.line = d
def adapter(data, headers, table_format='csv', **kwargs): def adapter(data, headers, table_format="csv", **kwargs):
"""Wrap the formatting inside a function for TabularOutputFormatter.""" """Wrap the formatting inside a function for TabularOutputFormatter."""
keys = ('dialect', 'delimiter', 'doublequote', 'escapechar', keys = (
'quotechar', 'quoting', 'skipinitialspace', 'strict') "dialect",
if table_format == 'csv': "delimiter",
delimiter = ',' "doublequote",
elif table_format == 'csv-tab': "escapechar",
delimiter = '\t' "quotechar",
"quoting",
"skipinitialspace",
"strict",
)
if table_format == "csv":
delimiter = ","
elif table_format == "csv-tab":
delimiter = "\t"
else: else:
raise ValueError('Invalid table_format specified.') raise ValueError("Invalid table_format specified.")
ckwargs = {'delimiter': delimiter, 'lineterminator': ''} ckwargs = {"delimiter": delimiter, "lineterminator": ""}
ckwargs.update(filter_dict_by_key(kwargs, keys)) ckwargs.update(filter_dict_by_key(kwargs, keys))
l = linewriter() l = linewriter()


@ -4,16 +4,25 @@
from __future__ import unicode_literals from __future__ import unicode_literals
from collections import namedtuple from collections import namedtuple
from cli_helpers.compat import (text_type, binary_type, int_types, float_types, from cli_helpers.compat import (
zip_longest) text_type,
binary_type,
int_types,
float_types,
zip_longest,
)
from cli_helpers.utils import unique_items from cli_helpers.utils import unique_items
from . import (delimited_output_adapter, vertical_table_adapter, from . import (
tabulate_adapter, terminaltables_adapter, tsv_output_adapter) delimited_output_adapter,
vertical_table_adapter,
tabulate_adapter,
tsv_output_adapter,
)
from decimal import Decimal from decimal import Decimal
import itertools import itertools
MISSING_VALUE = '<null>' MISSING_VALUE = "<null>"
MAX_FIELD_WIDTH = 500 MAX_FIELD_WIDTH = 500
TYPES = { TYPES = {
@ -23,12 +32,12 @@ TYPES = {
float: 3, float: 3,
Decimal: 3, Decimal: 3,
binary_type: 4, binary_type: 4,
text_type: 5 text_type: 5,
} }
OutputFormatHandler = namedtuple( OutputFormatHandler = namedtuple(
'OutputFormatHandler', "OutputFormatHandler", "format_name preprocessors formatter formatter_args"
'format_name preprocessors formatter formatter_args') )
class TabularOutputFormatter(object): class TabularOutputFormatter(object):
@ -96,8 +105,7 @@ class TabularOutputFormatter(object):
if format_name in self.supported_formats: if format_name in self.supported_formats:
self._format_name = format_name self._format_name = format_name
else: else:
raise ValueError('unrecognized format_name "{}"'.format( raise ValueError('unrecognized format_name "{}"'.format(format_name))
format_name))
@property @property
def supported_formats(self): def supported_formats(self):
@ -105,8 +113,9 @@ class TabularOutputFormatter(object):
return tuple(self._output_formats.keys()) return tuple(self._output_formats.keys())
@classmethod @classmethod
def register_new_formatter(cls, format_name, handler, preprocessors=(), def register_new_formatter(
kwargs=None): cls, format_name, handler, preprocessors=(), kwargs=None
):
"""Register a new output formatter. """Register a new output formatter.
:param str format_name: The name of the format. :param str format_name: The name of the format.
@ -117,10 +126,18 @@ class TabularOutputFormatter(object):
""" """
cls._output_formats[format_name] = OutputFormatHandler( cls._output_formats[format_name] = OutputFormatHandler(
format_name, preprocessors, handler, kwargs or {}) format_name, preprocessors, handler, kwargs or {}
)
def format_output(self, data, headers, format_name=None, def format_output(
preprocessors=(), column_types=None, **kwargs): self,
data,
headers,
format_name=None,
preprocessors=(),
column_types=None,
**kwargs
):
r"""Format the headers and data using a specific formatter. r"""Format the headers and data using a specific formatter.
*format_name* must be a supported formatter (see *format_name* must be a supported formatter (see
@ -142,15 +159,13 @@ class TabularOutputFormatter(object):
if format_name not in self.supported_formats: if format_name not in self.supported_formats:
raise ValueError('unrecognized format "{}"'.format(format_name)) raise ValueError('unrecognized format "{}"'.format(format_name))
(_, _preprocessors, formatter, (_, _preprocessors, formatter, fkwargs) = self._output_formats[format_name]
fkwargs) = self._output_formats[format_name]
fkwargs.update(kwargs) fkwargs.update(kwargs)
if column_types is None: if column_types is None:
data = list(data) data = list(data)
column_types = self._get_column_types(data) column_types = self._get_column_types(data)
for f in unique_items(preprocessors + _preprocessors): for f in unique_items(preprocessors + _preprocessors):
data, headers = f(data, headers, column_types=column_types, data, headers = f(data, headers, column_types=column_types, **fkwargs)
**fkwargs)
return formatter(list(data), headers, column_types=column_types, **fkwargs) return formatter(list(data), headers, column_types=column_types, **fkwargs)
def _get_column_types(self, data): def _get_column_types(self, data):
@ -197,32 +212,44 @@ def format_output(data, headers, format_name, **kwargs):
for vertical_format in vertical_table_adapter.supported_formats: for vertical_format in vertical_table_adapter.supported_formats:
TabularOutputFormatter.register_new_formatter( TabularOutputFormatter.register_new_formatter(
vertical_format, vertical_table_adapter.adapter, vertical_format,
vertical_table_adapter.adapter,
vertical_table_adapter.preprocessors, vertical_table_adapter.preprocessors,
{'table_format': vertical_format, 'missing_value': MISSING_VALUE, 'max_field_width': None}) {
"table_format": vertical_format,
"missing_value": MISSING_VALUE,
"max_field_width": None,
},
)
for delimited_format in delimited_output_adapter.supported_formats: for delimited_format in delimited_output_adapter.supported_formats:
TabularOutputFormatter.register_new_formatter( TabularOutputFormatter.register_new_formatter(
delimited_format, delimited_output_adapter.adapter, delimited_format,
delimited_output_adapter.adapter,
delimited_output_adapter.preprocessors, delimited_output_adapter.preprocessors,
{'table_format': delimited_format, 'missing_value': '', 'max_field_width': None}) {
"table_format": delimited_format,
"missing_value": "",
"max_field_width": None,
},
)
for tabulate_format in tabulate_adapter.supported_formats: for tabulate_format in tabulate_adapter.supported_formats:
TabularOutputFormatter.register_new_formatter( TabularOutputFormatter.register_new_formatter(
tabulate_format, tabulate_adapter.adapter, tabulate_format,
tabulate_adapter.preprocessors + tabulate_adapter.adapter,
(tabulate_adapter.style_output_table(tabulate_format),), tabulate_adapter.get_preprocessors(tabulate_format),
{'table_format': tabulate_format, 'missing_value': MISSING_VALUE, 'max_field_width': MAX_FIELD_WIDTH}) {
"table_format": tabulate_format,
for terminaltables_format in terminaltables_adapter.supported_formats: "missing_value": MISSING_VALUE,
TabularOutputFormatter.register_new_formatter( "max_field_width": MAX_FIELD_WIDTH,
terminaltables_format, terminaltables_adapter.adapter, },
terminaltables_adapter.preprocessors + ),
(terminaltables_adapter.style_output_table(terminaltables_format),),
{'table_format': terminaltables_format, 'missing_value': MISSING_VALUE, 'max_field_width': MAX_FIELD_WIDTH})
for tsv_format in tsv_output_adapter.supported_formats: for tsv_format in tsv_output_adapter.supported_formats:
TabularOutputFormatter.register_new_formatter( TabularOutputFormatter.register_new_formatter(
tsv_format, tsv_output_adapter.adapter, tsv_format,
tsv_output_adapter.adapter,
tsv_output_adapter.preprocessors, tsv_output_adapter.preprocessors,
{'table_format': tsv_format, 'missing_value': '', 'max_field_width': None}) {"table_format": tsv_format, "missing_value": "", "max_field_width": None},
)
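The registration loops above all go through the same classmethod. As a sketch of that API (the format name and handler below are invented, not part of CLI Helpers)::

    from cli_helpers.tabular_output import TabularOutputFormatter

    def my_adapter(data, headers, **kwargs):
        """Invented handler: emit headers, then one comma-joined line per row."""
        yield ", ".join(headers)
        for row in data:
            yield ", ".join(str(v) for v in row)

    TabularOutputFormatter.register_new_formatter("my-format", my_adapter)

    formatter = TabularOutputFormatter()
    lines = formatter.format_output([[1, "a"]], ["n", "s"], format_name="my-format")
    print("\n".join(lines))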


@ -7,7 +7,9 @@ from cli_helpers import utils
from cli_helpers.compat import text_type, int_types, float_types, HAS_PYGMENTS from cli_helpers.compat import text_type, int_types, float_types, HAS_PYGMENTS
def truncate_string(data, headers, max_field_width=None, skip_multiline_string=True, **_): def truncate_string(
data, headers, max_field_width=None, skip_multiline_string=True, **_
):
"""Truncate very long strings. Only needed for tabular """Truncate very long strings. Only needed for tabular
representation, because trying to tabulate very long data representation, because trying to tabulate very long data
is problematic in terms of performance, and does not make any is problematic in terms of performance, and does not make any
@ -19,8 +21,19 @@ def truncate_string(data, headers, max_field_width=None, skip_multiline_string=T
:return: The processed data and headers. :return: The processed data and headers.
:rtype: tuple :rtype: tuple
""" """
return (([utils.truncate_string(v, max_field_width, skip_multiline_string) for v in row] for row in data), return (
[utils.truncate_string(h, max_field_width, skip_multiline_string) for h in headers]) (
[
utils.truncate_string(v, max_field_width, skip_multiline_string)
for v in row
]
for row in data
),
[
utils.truncate_string(h, max_field_width, skip_multiline_string)
for h in headers
],
)
def convert_to_string(data, headers, **_): def convert_to_string(data, headers, **_):
@ -35,13 +48,20 @@ def convert_to_string(data, headers, **_):
:rtype: tuple :rtype: tuple
""" """
return (([utils.to_string(v) for v in row] for row in data), return (
[utils.to_string(h) for h in headers]) ([utils.to_string(v) for v in row] for row in data),
[utils.to_string(h) for h in headers],
)
def override_missing_value(data, headers, style=None, def override_missing_value(
data,
headers,
style=None,
missing_value_token="Token.Output.Null", missing_value_token="Token.Output.Null",
missing_value='', **_): missing_value="",
**_
):
"""Override missing values in the *data* with *missing_value*. """Override missing values in the *data* with *missing_value*.
A missing value is any value that is :data:`None`. A missing value is any value that is :data:`None`.
@ -55,12 +75,15 @@ def override_missing_value(data, headers, style=None,
:rtype: tuple :rtype: tuple
""" """
def fields(): def fields():
for row in data: for row in data:
processed = [] processed = []
for field in row: for field in row:
if field is None and style and HAS_PYGMENTS: if field is None and style and HAS_PYGMENTS:
styled = utils.style_field(missing_value_token, missing_value, style) styled = utils.style_field(
missing_value_token, missing_value, style
)
processed.append(styled) processed.append(styled)
elif field is None: elif field is None:
processed.append(missing_value) processed.append(missing_value)
@ -71,7 +94,7 @@ def override_missing_value(data, headers, style=None,
return (fields(), headers) return (fields(), headers)
def override_tab_value(data, headers, new_value=' ', **_): def override_tab_value(data, headers, new_value=" ", **_):
"""Override tab values in the *data* with *new_value*. """Override tab values in the *data* with *new_value*.
:param iterable data: An :term:`iterable` (e.g. list) of rows. :param iterable data: An :term:`iterable` (e.g. list) of rows.
@ -81,9 +104,13 @@ def override_tab_value(data, headers, new_value=' ', **_):
:rtype: tuple :rtype: tuple
""" """
return (([v.replace('\t', new_value) if isinstance(v, text_type) else v return (
for v in row] for row in data), (
headers) [v.replace("\t", new_value) if isinstance(v, text_type) else v for v in row]
for row in data
),
headers,
)
def escape_newlines(data, headers, **_): def escape_newlines(data, headers, **_):
@ -121,8 +148,10 @@ def bytes_to_string(data, headers, **_):
:rtype: tuple :rtype: tuple
""" """
return (([utils.bytes_to_string(v) for v in row] for row in data), return (
[utils.bytes_to_string(h) for h in headers]) ([utils.bytes_to_string(v) for v in row] for row in data),
[utils.bytes_to_string(h) for h in headers],
)
def align_decimals(data, headers, column_types=(), **_): def align_decimals(data, headers, column_types=(), **_):
@ -204,17 +233,26 @@ def quote_whitespaces(data, headers, quotestyle="'", **_):
for row in data: for row in data:
result = [] result = []
for i, v in enumerate(row): for i, v in enumerate(row):
quotation = quotestyle if quote[i] else '' quotation = quotestyle if quote[i] else ""
result.append('{quotestyle}{value}{quotestyle}'.format( result.append(
quotestyle=quotation, value=v)) "{quotestyle}{value}{quotestyle}".format(
quotestyle=quotation, value=v
)
)
yield result yield result
return results(data), headers return results(data), headers
def style_output(data, headers, style=None, def style_output(
header_token='Token.Output.Header', data,
odd_row_token='Token.Output.OddRow', headers,
even_row_token='Token.Output.EvenRow', **_): style=None,
header_token="Token.Output.Header",
odd_row_token="Token.Output.OddRow",
even_row_token="Token.Output.EvenRow",
**_
):
"""Style the *data* and *headers* (e.g. bold, italic, and colors) """Style the *data* and *headers* (e.g. bold, italic, and colors)
.. NOTE:: .. NOTE::
@ -253,19 +291,32 @@ def style_output(data, headers, style=None,
""" """
from cli_helpers.utils import filter_style_table from cli_helpers.utils import filter_style_table
relevant_styles = filter_style_table(style, header_token, odd_row_token, even_row_token)
relevant_styles = filter_style_table(
style, header_token, odd_row_token, even_row_token
)
if style and HAS_PYGMENTS: if style and HAS_PYGMENTS:
if relevant_styles.get(header_token): if relevant_styles.get(header_token):
headers = [utils.style_field(header_token, header, style) for header in headers] headers = [
utils.style_field(header_token, header, style) for header in headers
]
if relevant_styles.get(odd_row_token) or relevant_styles.get(even_row_token): if relevant_styles.get(odd_row_token) or relevant_styles.get(even_row_token):
data = ([utils.style_field(odd_row_token if i % 2 else even_row_token, f, style) data = (
for f in r] for i, r in enumerate(data, 1)) [
utils.style_field(
odd_row_token if i % 2 else even_row_token, f, style
)
for f in r
]
for i, r in enumerate(data, 1)
)
return iter(data), headers return iter(data), headers
def format_numbers(data, headers, column_types=(), integer_format=None, def format_numbers(
float_format=None, **_): data, headers, column_types=(), integer_format=None, float_format=None, **_
):
"""Format numbers according to a format specification. """Format numbers according to a format specification.
This uses Python's format specification to format numbers of the following This uses Python's format specification to format numbers of the following
@ -296,5 +347,7 @@ def format_numbers(data, headers, column_types=(), integer_format=None,
return format(field, float_format) return format(field, float_format)
return field return field
data = ([_format_number(v, column_types[i]) for i, v in enumerate(row)] for row in data) data = (
[_format_number(v, column_types[i]) for i, v in enumerate(row)] for row in data
)
return data, headers return data, headers
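To make the reflowed ``truncate_string`` preprocessor above concrete, a small sketch (the sample values are invented): every cell and header is passed through ``utils.truncate_string``, and multiline strings are left alone::

    from cli_helpers.tabular_output.preprocessors import truncate_string

    data = [["x" * 600, "keep\nme"]]   # one overly long cell, one multiline cell
    headers = ["long", "multiline"]

    new_data, new_headers = truncate_string(data, headers, max_field_width=500)
    rows = list(new_data)
    print(len(rows[0][0]))  # 500: truncated to max_field_width, ending in "..."
    print(rows[0][1])       # unchanged: multiline values are skipped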


@ -4,24 +4,110 @@
from __future__ import unicode_literals from __future__ import unicode_literals
from cli_helpers.utils import filter_dict_by_key from cli_helpers.utils import filter_dict_by_key
from cli_helpers.compat import (Terminal256Formatter, StringIO) from cli_helpers.compat import Terminal256Formatter, StringIO
from .preprocessors import (convert_to_string, truncate_string, override_missing_value, from .preprocessors import (
style_output, HAS_PYGMENTS) convert_to_string,
truncate_string,
override_missing_value,
style_output,
HAS_PYGMENTS,
escape_newlines,
)
import tabulate import tabulate
supported_markup_formats = ('mediawiki', 'html', 'latex', 'latex_booktabs',
'textile', 'moinmoin', 'jira') tabulate.MIN_PADDING = 0
supported_table_formats = ('plain', 'simple', 'grid', 'fancy_grid', 'pipe',
'orgtbl', 'psql', 'rst') tabulate._table_formats["psql_unicode"] = tabulate.TableFormat(
lineabove=tabulate.Line("┌", "─", "┬", "┐"),
linebelowheader=tabulate.Line("├", "─", "┼", "┤"),
linebetweenrows=None,
linebelow=tabulate.Line("└", "─", "┴", "┘"),
headerrow=tabulate.DataRow("│", "│", "│"),
datarow=tabulate.DataRow("│", "│", "│"),
padding=1,
with_header_hide=None,
)
tabulate._table_formats["double"] = tabulate.TableFormat(
lineabove=tabulate.Line("╔", "═", "╦", "╗"),
linebelowheader=tabulate.Line("╠", "═", "╬", "╣"),
linebetweenrows=None,
linebelow=tabulate.Line("╚", "═", "╩", "╝"),
headerrow=tabulate.DataRow("║", "║", "║"),
datarow=tabulate.DataRow("║", "║", "║"),
padding=1,
with_header_hide=None,
)
tabulate._table_formats["ascii"] = tabulate.TableFormat(
lineabove=tabulate.Line("+", "-", "+", "+"),
linebelowheader=tabulate.Line("+", "-", "+", "+"),
linebetweenrows=None,
linebelow=tabulate.Line("+", "-", "+", "+"),
headerrow=tabulate.DataRow("|", "|", "|"),
datarow=tabulate.DataRow("|", "|", "|"),
padding=1,
with_header_hide=None,
)
# "minimal" is the same as "plain", but without headers
tabulate._table_formats["minimal"] = tabulate._table_formats["plain"]
supported_markup_formats = (
"mediawiki",
"html",
"latex",
"latex_booktabs",
"textile",
"moinmoin",
"jira",
)
supported_table_formats = (
"ascii",
"plain",
"simple",
"minimal",
"grid",
"fancy_grid",
"pipe",
"orgtbl",
"psql",
"psql_unicode",
"rst",
"github",
"double",
)
supported_formats = supported_markup_formats + supported_table_formats supported_formats = supported_markup_formats + supported_table_formats
preprocessors = (override_missing_value, convert_to_string, truncate_string, style_output) default_kwargs = {"ascii": {"numalign": "left"}}
headless_formats = ("minimal",)
def get_preprocessors(format_name):
common_formatters = (
override_missing_value,
convert_to_string,
truncate_string,
style_output,
)
if tabulate.multiline_formats.get(format_name):
return common_formatters + (style_output_table(format_name),)
else:
return common_formatters + (escape_newlines, style_output_table(format_name))
def style_output_table(format_name=""): def style_output_table(format_name=""):
def style_output(data, headers, style=None, def style_output(
table_separator_token='Token.Output.TableSeparator', **_): data,
headers,
style=None,
table_separator_token="Token.Output.TableSeparator",
**_
):
"""Style the *table* a(e.g. bold, italic, and colors) """Style the *table* a(e.g. bold, italic, and colors)
.. NOTE:: .. NOTE::
@ -71,24 +157,28 @@ def style_output_table(format_name=""):
if not elt: if not elt:
return elt return elt
if elt.__class__ == tabulate.Line: if elt.__class__ == tabulate.Line:
return tabulate.Line(*(style_field(table_separator_token, val) for val in elt)) return tabulate.Line(
*(style_field(table_separator_token, val) for val in elt)
)
if elt.__class__ == tabulate.DataRow: if elt.__class__ == tabulate.DataRow:
return tabulate.DataRow(*(style_field(table_separator_token, val) for val in elt)) return tabulate.DataRow(
*(style_field(table_separator_token, val) for val in elt)
)
return elt return elt
srcfmt = tabulate._table_formats[format_name] srcfmt = tabulate._table_formats[format_name]
newfmt = tabulate.TableFormat( newfmt = tabulate.TableFormat(*(addColorInElt(val) for val in srcfmt))
*(addColorInElt(val) for val in srcfmt))
tabulate._table_formats[format_name] = newfmt tabulate._table_formats[format_name] = newfmt
return iter(data), headers return iter(data), headers
return style_output return style_output
def adapter(data, headers, table_format=None, preserve_whitespace=False,
**kwargs): def adapter(data, headers, table_format=None, preserve_whitespace=False, **kwargs):
"""Wrap tabulate inside a function for TabularOutputFormatter.""" """Wrap tabulate inside a function for TabularOutputFormatter."""
keys = ('floatfmt', 'numalign', 'stralign', 'showindex', 'disable_numparse') keys = ("floatfmt", "numalign", "stralign", "showindex", "disable_numparse")
tkwargs = {'tablefmt': table_format} tkwargs = {"tablefmt": table_format}
tkwargs.update(filter_dict_by_key(kwargs, keys)) tkwargs.update(filter_dict_by_key(kwargs, keys))
if table_format in supported_markup_formats: if table_format in supported_markup_formats:
@ -96,4 +186,7 @@ def adapter(data, headers, table_format=None, preserve_whitespace=False,
tabulate.PRESERVE_WHITESPACE = preserve_whitespace tabulate.PRESERVE_WHITESPACE = preserve_whitespace
return iter(tabulate.tabulate(data, headers, **tkwargs).split('\n')) tkwargs.update(default_kwargs.get(table_format, {}))
if table_format in headless_formats:
headers = []
return iter(tabulate.tabulate(data, headers, **tkwargs).split("\n"))
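As a rough usage sketch (not part of the diff; the sample rows are invented), the formats registered above are reached through ``TabularOutputFormatter`` like any other format name: ``minimal`` renders without a header row, and ``ascii`` uses the ``+---+`` borders with the left ``numalign`` default from ``default_kwargs``::

    from cli_helpers.tabular_output import TabularOutputFormatter

    formatter = TabularOutputFormatter()
    rows = [["abc", 1], ["defg", 23]]   # invented sample data
    headers = ["text", "n"]

    # ASCII box-drawing table.
    print("\n".join(formatter.format_output(rows, headers, format_name="ascii")))

    # "minimal" is tabulate's "plain" layout with the header row suppressed.
    print("\n".join(formatter.format_output(rows, headers, format_name="minimal")))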


@ -1,97 +0,0 @@
# -*- coding: utf-8 -*-
"""Format adapter for the terminaltables module."""
from __future__ import unicode_literals
import terminaltables
import itertools
from cli_helpers.utils import filter_dict_by_key
from cli_helpers.compat import (Terminal256Formatter, StringIO)
from .preprocessors import (convert_to_string, truncate_string, override_missing_value,
style_output, HAS_PYGMENTS,
override_tab_value, escape_newlines)
supported_formats = ('ascii', 'double', 'github')
preprocessors = (
override_missing_value, convert_to_string, override_tab_value,
truncate_string, style_output, escape_newlines
)
table_format_handler = {
'ascii': terminaltables.AsciiTable,
'double': terminaltables.DoubleTable,
'github': terminaltables.GithubFlavoredMarkdownTable,
}
def style_output_table(format_name=""):
def style_output(data, headers, style=None,
table_separator_token='Token.Output.TableSeparator', **_):
"""Style the *table* (e.g. bold, italic, and colors)
.. NOTE::
This requires the `Pygments <http://pygments.org/>`_ library to
be installed. You can install it with CLI Helpers as an extra::
$ pip install cli_helpers[styles]
Example usage::
from cli_helpers.tabular_output import terminaltables_adapter
from pygments.style import Style
from pygments.token import Token
class YourStyle(Style):
default_style = ""
styles = {
Token.Output.TableSeparator: '#ansigray'
}
headers = ('First Name', 'Last Name')
data = [['Fred', 'Roberts'], ['George', 'Smith']]
style_output_table = terminaltables_adapter.style_output_table('psql')
style_output_table(data, headers, style=CliStyle)
output = terminaltables_adapter.adapter(data, headers, style=YourStyle)
:param iterable data: An :term:`iterable` (e.g. list) of rows.
:param iterable headers: The column headers.
:param str/pygments.style.Style style: A Pygments style. You can `create
your own styles <https://pygments.org/docs/styles#creating-own-styles>`_.
:param str table_separator_token: The token type to be used for the table separator.
:return: data and headers.
:rtype: tuple
"""
if style and HAS_PYGMENTS and format_name in supported_formats:
formatter = Terminal256Formatter(style=style)
def style_field(token, field):
"""Get the styled text for a *field* using *token* type."""
s = StringIO()
formatter.format(((token, field),), s)
return s.getvalue()
clss = table_format_handler[format_name]
for char in [char for char in terminaltables.base_table.BaseTable.__dict__ if char.startswith("CHAR_")]:
setattr(clss, char, style_field(
table_separator_token, getattr(clss, char)))
return iter(data), headers
return style_output
def adapter(data, headers, table_format=None, **kwargs):
"""Wrap terminaltables inside a function for TabularOutputFormatter."""
keys = ('title', )
table = table_format_handler[table_format]
t = table([headers] + list(data), **filter_dict_by_key(kwargs, keys))
dimensions = terminaltables.width_and_alignment.max_dimensions(
t.table_data,
t.padding_left,
t.padding_right)[:3]
for r in t.gen_table(*dimensions):
yield u''.join(r)


@@ -7,10 +7,11 @@ from .preprocessors import bytes_to_string, override_missing_value, convert_to_s
 from itertools import chain
 from cli_helpers.utils import replace
 
-supported_formats = ('tsv',)
+supported_formats = ("tsv",)
 preprocessors = (override_missing_value, bytes_to_string, convert_to_string)
 
 
 def adapter(data, headers, **kwargs):
     """Wrap the formatting inside a function for TabularOutputFormatter."""
     for row in chain((headers,), data):
-        yield "\t".join((replace(r, (('\n', r'\n'), ('\t', r'\t'))) for r in row))
+        yield "\t".join((replace(r, (("\n", r"\n"), ("\t", r"\t"))) for r in row))


@ -4,10 +4,9 @@
from __future__ import unicode_literals from __future__ import unicode_literals
from cli_helpers.utils import filter_dict_by_key from cli_helpers.utils import filter_dict_by_key
from .preprocessors import (convert_to_string, override_missing_value, from .preprocessors import convert_to_string, override_missing_value, style_output
style_output)
supported_formats = ('vertical', ) supported_formats = ("vertical",)
preprocessors = (override_missing_value, convert_to_string, style_output) preprocessors = (override_missing_value, convert_to_string, style_output)
@ -21,17 +20,19 @@ def _get_separator(num, sep_title, sep_character, sep_length):
title = sep_title.format(n=num + 1) title = sep_title.format(n=num + 1)
return "{left_divider}[ {title} ]{right_divider}\n".format( return "{left_divider}[ {title} ]{right_divider}\n".format(
left_divider=left_divider, right_divider=right_divider, title=title) left_divider=left_divider, right_divider=right_divider, title=title
)
def _format_row(headers, row): def _format_row(headers, row):
"""Format a row.""" """Format a row."""
formatted_row = [' | '.join(field) for field in zip(headers, row)] formatted_row = [" | ".join(field) for field in zip(headers, row)]
return '\n'.join(formatted_row) return "\n".join(formatted_row)
def vertical_table(data, headers, sep_title='{n}. row', sep_character='*', def vertical_table(
sep_length=27): data, headers, sep_title="{n}. row", sep_character="*", sep_length=27
):
"""Format *data* and *headers* as an vertical table. """Format *data* and *headers* as an vertical table.
The values in *data* and *headers* must be strings. The values in *data* and *headers* must be strings.
@ -62,5 +63,5 @@ def vertical_table(data, headers, sep_title='{n}. row', sep_character='*',
def adapter(data, headers, **kwargs): def adapter(data, headers, **kwargs):
"""Wrap vertical table in a function for TabularOutputFormatter.""" """Wrap vertical table in a function for TabularOutputFormatter."""
keys = ('sep_title', 'sep_character', 'sep_length') keys = ("sep_title", "sep_character", "sep_length")
return vertical_table(data, headers, **filter_dict_by_key(kwargs, keys)) return vertical_table(data, headers, **filter_dict_by_key(kwargs, keys))


@ -7,6 +7,7 @@ from functools import lru_cache
from typing import Dict from typing import Dict
from typing import TYPE_CHECKING from typing import TYPE_CHECKING
if TYPE_CHECKING: if TYPE_CHECKING:
from pygments.style import StyleMeta from pygments.style import StyleMeta
@ -20,10 +21,16 @@ def bytes_to_string(b):
""" """
if isinstance(b, binary_type): if isinstance(b, binary_type):
needs_hex = False
try: try:
return b.decode('utf8') result = b.decode("utf8")
needs_hex = not result.isprintable()
except UnicodeDecodeError: except UnicodeDecodeError:
return '0x' + binascii.hexlify(b).decode('ascii') needs_hex = True
if needs_hex:
return "0x" + binascii.hexlify(b).decode("ascii")
else:
return result
return b return b
@ -37,16 +44,20 @@ def to_string(value):
def truncate_string(value, max_width=None, skip_multiline_string=True): def truncate_string(value, max_width=None, skip_multiline_string=True):
"""Truncate string values.""" """Truncate string values."""
if skip_multiline_string and isinstance(value, text_type) and '\n' in value: if skip_multiline_string and isinstance(value, text_type) and "\n" in value:
return value return value
elif isinstance(value, text_type) and max_width is not None and len(value) > max_width: elif (
isinstance(value, text_type)
and max_width is not None
and len(value) > max_width
):
return value[: max_width - 3] + "..." return value[: max_width - 3] + "..."
return value return value
def intlen(n): def intlen(n):
"""Find the length of the integer part of a number *n*.""" """Find the length of the integer part of a number *n*."""
pos = n.find('.') pos = n.find(".")
return len(n) if pos < 0 else pos return len(n) if pos < 0 else pos
@ -61,12 +72,12 @@ def unique_items(seq):
return [x for x in seq if not (x in seen or seen.add(x))] return [x for x in seq if not (x in seen or seen.add(x))]
_ansi_re = re.compile('\033\\[((?:\\d|;)*)([a-zA-Z])') _ansi_re = re.compile("\033\\[((?:\\d|;)*)([a-zA-Z])")
def strip_ansi(value): def strip_ansi(value):
"""Strip the ANSI escape sequences from a string.""" """Strip the ANSI escape sequences from a string."""
return _ansi_re.sub('', value) return _ansi_re.sub("", value)
def replace(s, replace): def replace(s, replace):
@ -98,9 +109,8 @@ def filter_style_table(style: "StyleMeta", *relevant_styles: str) -> Dict:
'Token.Output.OddRow': "", 'Token.Output.OddRow': "",
} }
""" """
_styles_iter = ((str(key), val) for key, val in getattr(style, 'styles', {}).items()) _styles_iter = (
_relevant_styles_iter = filter( (str(key), val) for key, val in getattr(style, "styles", {}).items()
lambda tpl: tpl[0] in relevant_styles,
_styles_iter
) )
_relevant_styles_iter = filter(lambda tpl: tpl[0] in relevant_styles, _styles_iter)
return {key: val for key, val in _relevant_styles_iter} return {key: val for key, val in _relevant_styles_iter}
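A small sketch of what the ``bytes_to_string`` change earlier in this file does (values chosen for illustration): decodable, printable bytes still come back as text, while undecodable or unprintable bytes now fall back to the hex representation::

    from cli_helpers.utils import bytes_to_string

    bytes_to_string(b"hello")     # -> "hello"   (valid UTF-8 and printable)
    bytes_to_string(b"\x00\x01")  # -> "0x0001"  (decodes, but isprintable() is False)
    bytes_to_string(b"\xff\xfe")  # -> "0xfffe"  (UnicodeDecodeError, hex fallback)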


@ -19,8 +19,10 @@
# #
import ast import ast
from collections import OrderedDict from collections import OrderedDict
# import os # import os
import re import re
# import sys # import sys
# sys.path.insert(0, os.path.abspath('.')) # sys.path.insert(0, os.path.abspath('.'))
@ -34,22 +36,18 @@ import re
# Add any Sphinx extension module names here, as strings. They can be # Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones. # ones.
extensions = [ extensions = ["sphinx.ext.autodoc", "sphinx.ext.intersphinx", "sphinx.ext.viewcode"]
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.viewcode'
]
# Add any paths that contain templates here, relative to this directory. # Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates'] templates_path = ["_templates"]
html_sidebars = { html_sidebars = {
'**': [ "**": [
'about.html', "about.html",
'navigation.html', "navigation.html",
'relations.html', "relations.html",
'searchbox.html', "searchbox.html",
'donate.html', "donate.html",
] ]
} }
@ -57,25 +55,26 @@ html_sidebars = {
# You can specify multiple suffix as a list of string: # You can specify multiple suffix as a list of string:
# #
# source_suffix = ['.rst', '.md'] # source_suffix = ['.rst', '.md']
source_suffix = '.rst' source_suffix = ".rst"
# The master toctree document. # The master toctree document.
master_doc = 'index' master_doc = "index"
# General information about the project. # General information about the project.
project = 'CLI Helpers' project = "CLI Helpers"
author = 'dbcli' author = "dbcli"
description = 'Python helpers for common CLI tasks' description = "Python helpers for common CLI tasks"
copyright = '2017, dbcli' copyright = "2017, dbcli"
# The version info for the project you're documenting, acts as replacement for # The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the # |version| and |release|, also used in various other places throughout the
# built documents. # built documents.
# #
_version_re = re.compile(r'__version__\s+=\s+(.*)') _version_re = re.compile(r"__version__\s+=\s+(.*)")
with open('../../cli_helpers/__init__.py', 'rb') as f: with open("../../cli_helpers/__init__.py", "rb") as f:
version = str(ast.literal_eval(_version_re.search( version = str(
f.read().decode('utf-8')).group(1))) ast.literal_eval(_version_re.search(f.read().decode("utf-8")).group(1))
)
# The full version, including alpha/beta/rc tags. # The full version, including alpha/beta/rc tags.
release = version release = version
@ -93,7 +92,7 @@ language = None
exclude_patterns = [] exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use. # The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx' pygments_style = "sphinx"
# If true, `todo` and `todoList` produce output, else they produce nothing. # If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False todo_include_todos = False
@ -104,40 +103,42 @@ todo_include_todos = False
# The theme to use for HTML and HTML Help pages. See the documentation for # The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes. # a list of builtin themes.
# #
html_theme = 'alabaster' html_theme = "alabaster"
# Theme options are theme-specific and customize the look and feel of a theme # Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the # further. For a list of options available for each theme, see the
# documentation. # documentation.
nav_links = OrderedDict(( nav_links = OrderedDict(
('CLI Helpers at GitHub', 'https://github.com/dbcli/cli_helpers'), (
('CLI Helpers at PyPI', 'https://pypi.org/project/cli_helpers'), ("CLI Helpers at GitHub", "https://github.com/dbcli/cli_helpers"),
('Issue Tracker', 'https://github.com/dbcli/cli_helpers/issues') ("CLI Helpers at PyPI", "https://pypi.org/project/cli_helpers"),
)) ("Issue Tracker", "https://github.com/dbcli/cli_helpers/issues"),
)
)
html_theme_options = { html_theme_options = {
'description': description, "description": description,
'github_user': 'dbcli', "github_user": "dbcli",
'github_repo': 'cli_helpers', "github_repo": "cli_helpers",
'github_banner': False, "github_banner": False,
'github_button': False, "github_button": False,
'github_type': 'watch', "github_type": "watch",
'github_count': False, "github_count": False,
'extra_nav_links': nav_links "extra_nav_links": nav_links,
} }
# Add any paths that contain custom static files (such as style sheets) here, # Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files, # relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css". # so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static'] html_static_path = ["_static"]
# -- Options for HTMLHelp output ------------------------------------------ # -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder. # Output file base name for HTML help builder.
htmlhelp_basename = 'CLIHelpersdoc' htmlhelp_basename = "CLIHelpersdoc"
# -- Options for LaTeX output --------------------------------------------- # -- Options for LaTeX output ---------------------------------------------
@ -146,15 +147,12 @@ latex_elements = {
# The paper size ('letterpaper' or 'a4paper'). # The paper size ('letterpaper' or 'a4paper').
# #
# 'papersize': 'letterpaper', # 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt'). # The font size ('10pt', '11pt' or '12pt').
# #
# 'pointsize': '10pt', # 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble. # Additional stuff for the LaTeX preamble.
# #
# 'preamble': '', # 'preamble': '',
# Latex figure (float) alignment # Latex figure (float) alignment
# #
# 'figure_align': 'htbp', # 'figure_align': 'htbp',
@ -164,8 +162,7 @@ latex_elements = {
# (source start file, target name, title, # (source start file, target name, title,
# author, documentclass [howto, manual, or own class]). # author, documentclass [howto, manual, or own class]).
latex_documents = [ latex_documents = [
(master_doc, 'CLIHelpers.tex', 'CLI Helpers Documentation', (master_doc, "CLIHelpers.tex", "CLI Helpers Documentation", "dbcli", "manual"),
'dbcli', 'manual'),
] ]
@ -173,10 +170,7 @@ latex_documents = [
# One entry per manual page. List of tuples # One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section). # (source start file, name, description, authors, manual section).
man_pages = [ man_pages = [(master_doc, "clihelpers", "CLI Helpers Documentation", [author], 1)]
(master_doc, 'clihelpers', 'CLI Helpers Documentation',
[author], 1)
]
# -- Options for Texinfo output ------------------------------------------- # -- Options for Texinfo output -------------------------------------------
@ -185,16 +179,24 @@ man_pages = [
# (source start file, target name, title, author, # (source start file, target name, title, author,
# dir menu entry, description, category) # dir menu entry, description, category)
texinfo_documents = [ texinfo_documents = [
(master_doc, 'CLIHelpers', 'CLI Helpers Documentation', (
author, 'CLIHelpers', description, master_doc,
'Miscellaneous'), "CLIHelpers",
"CLI Helpers Documentation",
author,
"CLIHelpers",
description,
"Miscellaneous",
),
] ]
intersphinx_mapping = { intersphinx_mapping = {
'python': ('https://docs.python.org/3', None), "python": ("https://docs.python.org/3", None),
'py2': ('https://docs.python.org/2', None), "py2": ("https://docs.python.org/2", None),
'pymysql': ('https://pymysql.readthedocs.io/en/latest/', None), "pymysql": ("https://pymysql.readthedocs.io/en/latest/", None),
'numpy': ('https://docs.scipy.org/doc/numpy', None), "numpy": ("https://docs.scipy.org/doc/numpy", None),
'configobj': ('https://configobj.readthedocs.io/en/latest', None) "configobj": ("https://configobj.readthedocs.io/en/latest", None),
} }
linkcheck_ignore = ["https://github.com/psf/black.*"]


@@ -50,7 +50,7 @@ Let's get a list of all the supported format names::
     >>> from cli_helpers.tabular_output import TabularOutputFormatter
     >>> formatter = TabularOutputFormatter()
     >>> formatter.supported_formats
-    ('vertical', 'csv', 'tsv', 'mediawiki', 'html', 'latex', 'latex_booktabs', 'textile', 'moinmoin', 'jira', 'plain', 'simple', 'grid', 'fancy_grid', 'pipe', 'orgtbl', 'psql', 'rst', 'ascii', 'double', 'github')
+    ('vertical', 'csv', 'tsv', 'mediawiki', 'html', 'latex', 'latex_booktabs', 'textile', 'moinmoin', 'jira', 'plain', 'minimal', 'simple', 'grid', 'fancy_grid', 'pipe', 'orgtbl', 'psql', 'psql_unicode', 'rst', 'ascii', 'double', 'github')
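For example (a sketch, not taken from the upstream docs; the rows are invented), the new ``psql_unicode`` name is requested like any other format::

    from cli_helpers.tabular_output import format_output

    data = [[1, "Asgard"], [2, "Camelot"]]   # invented sample rows
    headers = ["id", "place"]

    # Prints a psql-style table drawn with Unicode box characters.
    print("\n".join(format_output(data, headers, format_name="psql_unicode")))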
 You can format your data in any of those supported formats. Let's take the
 same data from our first example and put it in the ``fancy_grid`` format::


@@ -1,7 +1,7 @@
 autopep8==1.3.3
 codecov==2.0.9
 coverage==4.3.4
-pep8radius
+black>=20.8b1
 Pygments>=2.4.0
 pytest==3.0.7
 pytest-cov==2.4.0


@ -8,11 +8,12 @@ import sys
from setuptools import find_packages, setup from setuptools import find_packages, setup
_version_re = re.compile(r'__version__\s+=\s+(.*)') _version_re = re.compile(r"__version__\s+=\s+(.*)")
with open('cli_helpers/__init__.py', 'rb') as f: with open("cli_helpers/__init__.py", "rb") as f:
version = str(ast.literal_eval(_version_re.search( version = str(
f.read().decode('utf-8')).group(1))) ast.literal_eval(_version_re.search(f.read().decode("utf-8")).group(1))
)
def open_file(filename): def open_file(filename):
@ -21,42 +22,37 @@ def open_file(filename):
return f.read() return f.read()
readme = open_file('README.rst') readme = open_file("README.rst")
if sys.version_info[0] == 2:
py2_reqs = ['backports.csv >= 1.0.0']
else:
py2_reqs = []
setup( setup(
name='cli_helpers', name="cli_helpers",
author='dbcli', author="dbcli",
author_email='thomas@roten.us', author_email="thomas@roten.us",
version=version, version=version,
url='https://github.com/dbcli/cli_helpers', url="https://github.com/dbcli/cli_helpers",
packages=find_packages(exclude=['docs', 'tests', 'tests.tabular_output']), packages=find_packages(exclude=["docs", "tests", "tests.tabular_output"]),
include_package_data=True, include_package_data=True,
description='Helpers for building command-line apps', description="Helpers for building command-line apps",
long_description=readme, long_description=readme,
long_description_content_type='text/x-rst', long_description_content_type="text/x-rst",
install_requires=[ install_requires=[
'configobj >= 5.0.5', "configobj >= 5.0.5",
'tabulate[widechars] >= 0.8.2', "tabulate[widechars] >= 0.8.2",
'terminaltables >= 3.0.0', ],
] + py2_reqs,
extras_require={ extras_require={
'styles': ['Pygments >= 1.6'], "styles": ["Pygments >= 1.6"],
}, },
python_requires=">=3.6",
classifiers=[ classifiers=[
'Intended Audience :: Developers', "Intended Audience :: Developers",
'License :: OSI Approved :: BSD License', "License :: OSI Approved :: BSD License",
'Operating System :: Unix', "Operating System :: Unix",
'Programming Language :: Python', "Programming Language :: Python",
'Programming Language :: Python :: 3', "Programming Language :: Python :: 3",
'Programming Language :: Python :: 3.6', "Programming Language :: Python :: 3.6",
'Programming Language :: Python :: 3.7', "Programming Language :: Python :: 3.7",
'Topic :: Software Development', "Topic :: Software Development",
'Topic :: Software Development :: Libraries :: Python Modules', "Topic :: Software Development :: Libraries :: Python Modules",
'Topic :: Terminals :: Terminal Emulators/X Terminals', "Topic :: Terminals :: Terminal Emulators/X Terminals",
] ],
) )
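The regex-plus-ast.literal_eval pattern at the top of this file reads __version__ without importing the package at build time. A self-contained sketch of the same idea; the read_version helper name is an assumption, not part of the source:

    import ast
    import re

    _version_re = re.compile(r"__version__\s+=\s+(.*)")

    def read_version(path="cli_helpers/__init__.py"):
        """Return the version string without importing the package."""
        with open(path, "rb") as f:
            match = _version_re.search(f.read().decode("utf-8"))
        # literal_eval safely strips the quotes around the assigned string.
        return str(ast.literal_eval(match.group(1)))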


@ -13,7 +13,7 @@ class BaseCommand(Command, object):
user_options = [] user_options = []
default_cmd_options = ('verbose', 'quiet', 'dry_run') default_cmd_options = ("verbose", "quiet", "dry_run")
def __init__(self, *args, **kwargs): def __init__(self, *args, **kwargs):
super(BaseCommand, self).__init__(*args, **kwargs) super(BaseCommand, self).__init__(*args, **kwargs)
@ -40,54 +40,58 @@ class BaseCommand(Command, object):
def apply_options(self, cmd, options=()): def apply_options(self, cmd, options=()):
"""Apply command-line options.""" """Apply command-line options."""
for option in (self.default_cmd_options + options): for option in self.default_cmd_options + options:
cmd = self.apply_option(cmd, option, cmd = self.apply_option(cmd, option, active=getattr(self, option, False))
active=getattr(self, option, False))
return cmd return cmd
def apply_option(self, cmd, option, active=True): def apply_option(self, cmd, option, active=True):
"""Apply a command-line option.""" """Apply a command-line option."""
return re.sub(r'{{{}\:(?P<option>[^}}]*)}}'.format(option), return re.sub(
r'\g<option>' if active else '', cmd) r"{{{}\:(?P<option>[^}}]*)}}".format(option),
r"\g<option>" if active else "",
cmd,
)
class lint(BaseCommand): class lint(BaseCommand):
"""A PEP 8 lint command that optionally fixes violations.""" """A PEP 8 lint command that optionally fixes violations."""
description = 'check code against PEP 8 (and fix violations)' description = "check code against PEP 8 (and fix violations)"
user_options = [ user_options = [
('branch=', 'b', 'branch or revision to compare against (e.g. master)'), ("branch=", "b", "branch or revision to compare against (e.g. master)"),
('fix', 'f', 'fix the violations in place') ("fix", "f", "fix the violations in place"),
] ]
def initialize_options(self): def initialize_options(self):
"""Set the default options.""" """Set the default options."""
self.branch = 'master' self.branch = "master"
self.fix = False self.fix = False
super(lint, self).initialize_options() super(lint, self).initialize_options()
def run(self): def run(self):
"""Run the linter.""" """Run the linter."""
cmd = 'pep8radius {branch} {{fix: --in-place}}{{verbose: -vv}}' cmd = "black ."
cmd = cmd.format(branch=self.branch) cmd = cmd.format(branch=self.branch)
self.call_and_exit(self.apply_options(cmd, ('fix', ))) self.call_and_exit(self.apply_options(cmd, ("fix",)))
class test(BaseCommand): class test(BaseCommand):
"""Run the test suites for this project.""" """Run the test suites for this project."""
description = 'run the test suite' description = "run the test suite"
user_options = [ user_options = [
('all', 'a', 'test against all supported versions of Python'), ("all", "a", "test against all supported versions of Python"),
('coverage', 'c', 'measure test coverage') ("coverage", "c", "measure test coverage"),
] ]
unit_test_cmd = ('pytest{quiet: -q}{verbose: -v}{dry_run: --setup-only}' unit_test_cmd = (
'{coverage: --cov-report= --cov=cli_helpers}') "pytest{quiet: -q}{verbose: -v}{dry_run: --setup-only}"
test_all_cmd = 'tox{verbose: -v}{dry_run: --notest}' "{coverage: --cov-report= --cov=cli_helpers}"
coverage_cmd = 'coverage report' )
test_all_cmd = "tox{verbose: -v}{dry_run: --notest}"
coverage_cmd = "coverage report"
def initialize_options(self): def initialize_options(self):
"""Set the default options.""" """Set the default options."""
@ -101,7 +105,7 @@ class test(BaseCommand):
cmd = self.apply_options(self.test_all_cmd) cmd = self.apply_options(self.test_all_cmd)
self.call_and_exit(cmd) self.call_and_exit(cmd)
else: else:
cmds = (self.apply_options(self.unit_test_cmd, ('coverage', )), ) cmds = (self.apply_options(self.unit_test_cmd, ("coverage",)),)
if self.coverage: if self.coverage:
cmds += (self.apply_options(self.coverage_cmd),) cmds += (self.apply_options(self.coverage_cmd),)
self.call_in_sequence(cmds) self.call_in_sequence(cmds)
@ -110,11 +114,11 @@ class test(BaseCommand):
class docs(BaseCommand): class docs(BaseCommand):
"""Use Sphinx Makefile to generate documentation.""" """Use Sphinx Makefile to generate documentation."""
description = 'generate the Sphinx HTML documentation' description = "generate the Sphinx HTML documentation"
clean_docs_cmd = 'make -C docs clean' clean_docs_cmd = "make -C docs clean"
html_docs_cmd = 'make -C docs html' html_docs_cmd = "make -C docs html"
view_docs_cmd = 'open docs/build/html/index.html' view_docs_cmd = "open docs/build/html/index.html"
def run(self): def run(self):
"""Generate and view the documentation.""" """Generate and view the documentation."""


@ -28,7 +28,7 @@ class _TempDirectory(object):
name = None name = None
_closed = False _closed = False
def __init__(self, suffix="", prefix='tmp', dir=None): def __init__(self, suffix="", prefix="tmp", dir=None):
self.name = _tempfile.mkdtemp(suffix, prefix, dir) self.name = _tempfile.mkdtemp(suffix, prefix, dir)
def __repr__(self): def __repr__(self):
@ -42,13 +42,14 @@ class _TempDirectory(object):
try: try:
_shutil.rmtree(self.name) _shutil.rmtree(self.name)
except (TypeError, AttributeError) as ex: except (TypeError, AttributeError) as ex:
if "None" not in '%s' % (ex,): if "None" not in "%s" % (ex,):
raise raise
self._rmtree(self.name) self._rmtree(self.name)
self._closed = True self._closed = True
if _warn and _warnings.warn: if _warn and _warnings.warn:
_warnings.warn("Implicitly cleaning up {!r}".format(self), _warnings.warn(
ResourceWarning) "Implicitly cleaning up {!r}".format(self), ResourceWarning
)
def __exit__(self, exc, value, tb): def __exit__(self, exc, value, tb):
self.cleanup() self.cleanup()
@ -57,8 +58,15 @@ class _TempDirectory(object):
# Issue a ResourceWarning if implicit cleanup needed # Issue a ResourceWarning if implicit cleanup needed
self.cleanup(_warn=True) self.cleanup(_warn=True)
def _rmtree(self, path, _OSError=OSError, _sep=_os.path.sep, def _rmtree(
_listdir=_os.listdir, _remove=_os.remove, _rmdir=_os.rmdir): self,
path,
_OSError=OSError,
_sep=_os.path.sep,
_listdir=_os.listdir,
_remove=_os.remove,
_rmdir=_os.rmdir,
):
# Essentially a stripped down version of shutil.rmtree. We can't # Essentially a stripped down version of shutil.rmtree. We can't
# use globals because they may be None'ed out at shutdown. # use globals because they may be None'ed out at shutdown.
if not isinstance(path, str): if not isinstance(path, str):


@ -13,6 +13,6 @@ test_boolean_default = True
test_string_file = '~/myfile' test_string_file = '~/myfile'
test_option = 'foobar' test_option = 'foobar'
[section2] [section2]


@ -15,6 +15,6 @@ test_boolean = boolean()
test_string_file = string(default='~/myfile') test_string_file = string(default='~/myfile')
test_option = option('foo', 'bar', 'foobar', default='foobar') test_option = option('foo', 'bar', 'foobar', 'foobar✔', default='foobar')
[section2] [section2]


@ -13,6 +13,6 @@ test_boolean_default True
test_string_file = '~/myfile' test_string_file = '~/myfile'
test_option = 'foobar' test_option = 'foobar'
[section2] [section2]


@ -15,6 +15,6 @@ test_boolean = bool(default=False)
test_string_file = string(default='~/myfile') test_string_file = string(default='~/myfile')
test_option = option('foo', 'bar', 'foobar', default='foobar') test_option = option('foo', 'bar', 'foobar', 'foobar✔', default='foobar')
[section2] [section2]


@ -12,37 +12,44 @@ from cli_helpers.tabular_output import delimited_output_adapter
def test_csv_wrapper(): def test_csv_wrapper():
"""Test the delimited output adapter.""" """Test the delimited output adapter."""
# Test comma-delimited output. # Test comma-delimited output.
data = [['abc', '1'], ['d', '456']] data = [["abc", "1"], ["d", "456"]]
headers = ['letters', 'number'] headers = ["letters", "number"]
output = delimited_output_adapter.adapter(iter(data), headers, dialect='unix') output = delimited_output_adapter.adapter(iter(data), headers, dialect="unix")
assert "\n".join(output) == dedent('''\ assert "\n".join(output) == dedent(
'''\
"letters","number"\n\ "letters","number"\n\
"abc","1"\n\ "abc","1"\n\
"d","456"''') "d","456"'''
)
# Test tab-delimited output. # Test tab-delimited output.
data = [['abc', '1'], ['d', '456']] data = [["abc", "1"], ["d", "456"]]
headers = ['letters', 'number'] headers = ["letters", "number"]
output = delimited_output_adapter.adapter( output = delimited_output_adapter.adapter(
iter(data), headers, table_format='csv-tab', dialect='unix') iter(data), headers, table_format="csv-tab", dialect="unix"
assert "\n".join(output) == dedent('''\ )
assert "\n".join(output) == dedent(
'''\
"letters"\t"number"\n\ "letters"\t"number"\n\
"abc"\t"1"\n\ "abc"\t"1"\n\
"d"\t"456"''') "d"\t"456"'''
)
with pytest.raises(ValueError): with pytest.raises(ValueError):
output = delimited_output_adapter.adapter( output = delimited_output_adapter.adapter(
iter(data), headers, table_format='foobar') iter(data), headers, table_format="foobar"
)
list(output) list(output)
def test_unicode_with_csv(): def test_unicode_with_csv():
"""Test that the csv wrapper can handle non-ascii characters.""" """Test that the csv wrapper can handle non-ascii characters."""
data = [['观音', '1'], ['Ποσειδῶν', '456']] data = [["观音", "1"], ["Ποσειδῶν", "456"]]
headers = ['letters', 'number'] headers = ["letters", "number"]
output = delimited_output_adapter.adapter(data, headers) output = delimited_output_adapter.adapter(data, headers)
assert "\n".join(output) == dedent('''\ assert "\n".join(output) == dedent(
"""\
letters,number\n\ letters,number\n\
观音,1\n\ 观音,1\n\
Ποσειδῶν,456''') Ποσειδῶν,456"""
)


@ -14,14 +14,15 @@ from cli_helpers.utils import strip_ansi
def test_tabular_output_formatter(): def test_tabular_output_formatter():
"""Test the TabularOutputFormatter class.""" """Test the TabularOutputFormatter class."""
headers = ['text', 'numeric'] headers = ["text", "numeric"]
data = [ data = [
["abc", Decimal(1)], ["abc", Decimal(1)],
["defg", Decimal("11.1")], ["defg", Decimal("11.1")],
["hi", Decimal("1.1")], ["hi", Decimal("1.1")],
["Pablo\rß\n", 0], ["Pablo\rß\n", 0],
] ]
expected = dedent("""\ expected = dedent(
"""\
+------------+---------+ +------------+---------+
| text | numeric | | text | numeric |
+------------+---------+ +------------+---------+
@ -33,66 +34,99 @@ def test_tabular_output_formatter():
) )
print(expected) print(expected)
print("\n".join(TabularOutputFormatter().format_output( print(
iter(data), headers, format_name='ascii'))) "\n".join(
assert expected == "\n".join(TabularOutputFormatter().format_output( TabularOutputFormatter().format_output(
iter(data), headers, format_name='ascii')) iter(data), headers, format_name="ascii"
)
)
)
assert expected == "\n".join(
TabularOutputFormatter().format_output(iter(data), headers, format_name="ascii")
)
def test_tabular_format_output_wrapper(): def test_tabular_format_output_wrapper():
"""Test the format_output wrapper.""" """Test the format_output wrapper."""
data = [['1', None], ['2', 'Sam'], data = [["1", None], ["2", "Sam"], ["3", "Joe"]]
['3', 'Joe']] headers = ["id", "name"]
headers = ['id', 'name'] expected = dedent(
expected = dedent('''\ """\
+----+------+ +----+------+
| id | name | | id | name |
+----+------+ +----+------+
| 1 | N/A | | 1 | N/A |
| 2 | Sam | | 2 | Sam |
| 3 | Joe | | 3 | Joe |
+----+------+''') +----+------+"""
)
assert expected == "\n".join(format_output(iter(data), headers, format_name='ascii', assert expected == "\n".join(
missing_value='N/A')) format_output(iter(data), headers, format_name="ascii", missing_value="N/A")
)
def test_additional_preprocessors(): def test_additional_preprocessors():
"""Test that additional preprocessors are run.""" """Test that additional preprocessors are run."""
def hello_world(data, headers, **_): def hello_world(data, headers, **_):
def hello_world_data(data): def hello_world_data(data):
for row in data: for row in data:
for i, value in enumerate(row): for i, value in enumerate(row):
if value == 'hello': if value == "hello":
row[i] = "{}, world".format(value) row[i] = "{}, world".format(value)
yield row yield row
return hello_world_data(data), headers return hello_world_data(data), headers
data = [['foo', None], ['hello!', 'hello']] data = [["foo", None], ["hello!", "hello"]]
headers = 'ab' headers = "ab"
expected = dedent('''\ expected = dedent(
"""\
+--------+--------------+ +--------+--------------+
| a | b | | a | b |
+--------+--------------+ +--------+--------------+
| foo | hello | | foo | hello |
| hello! | hello, world | | hello! | hello, world |
+--------+--------------+''') +--------+--------------+"""
)
assert expected == "\n".join(TabularOutputFormatter().format_output( assert expected == "\n".join(
iter(data), headers, format_name='ascii', preprocessors=(hello_world,), TabularOutputFormatter().format_output(
missing_value='hello')) iter(data),
headers,
format_name="ascii",
preprocessors=(hello_world,),
missing_value="hello",
)
)
def test_format_name_attribute(): def test_format_name_attribute():
"""Test the the format_name attribute be set and retrieved.""" """Test the the format_name attribute be set and retrieved."""
formatter = TabularOutputFormatter(format_name='plain') formatter = TabularOutputFormatter(format_name="plain")
assert formatter.format_name == 'plain' assert formatter.format_name == "plain"
formatter.format_name = 'simple' formatter.format_name = "simple"
assert formatter.format_name == 'simple' assert formatter.format_name == "simple"
with pytest.raises(ValueError): with pytest.raises(ValueError):
formatter.format_name = 'foobar' formatter.format_name = "foobar"
def test_headless_tabulate_format():
"""Test that a headless formatter doesn't display headers"""
formatter = TabularOutputFormatter(format_name="minimal")
headers = ["text", "numeric"]
data = [["a"], ["b"], ["c"]]
expected = "a\nb\nc"
assert expected == "\n".join(
TabularOutputFormatter().format_output(
iter(data),
headers,
format_name="minimal",
)
)
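A hedged usage sketch of the minimal format exercised above; the two-column rows are illustrative, and only the absence of a header line is implied by the test:

    from cli_helpers.tabular_output import format_output

    # "minimal" is a headless format: rows are emitted without a header line.
    rows = [["a", 1], ["b", 2]]
    print("\n".join(format_output(iter(rows), ["col", "n"], format_name="minimal")))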
def test_unsupported_format(): def test_unsupported_format():
@ -100,23 +134,27 @@ def test_unsupported_format():
formatter = TabularOutputFormatter() formatter = TabularOutputFormatter()
with pytest.raises(ValueError): with pytest.raises(ValueError):
formatter.format_name = 'foobar' formatter.format_name = "foobar"
with pytest.raises(ValueError): with pytest.raises(ValueError):
formatter.format_output((), (), format_name='foobar') formatter.format_output((), (), format_name="foobar")
def test_tabulate_ansi_escape_in_default_value(): def test_tabulate_ansi_escape_in_default_value():
"""Test that ANSI escape codes work with tabulate.""" """Test that ANSI escape codes work with tabulate."""
data = [['1', None], ['2', 'Sam'], data = [["1", None], ["2", "Sam"], ["3", "Joe"]]
['3', 'Joe']] headers = ["id", "name"]
headers = ['id', 'name']
styled = format_output(iter(data), headers, format_name='psql', styled = format_output(
missing_value='\x1b[38;5;10mNULL\x1b[39m') iter(data),
unstyled = format_output(iter(data), headers, format_name='psql', headers,
missing_value='NULL') format_name="psql",
missing_value="\x1b[38;5;10mNULL\x1b[39m",
)
unstyled = format_output(
iter(data), headers, format_name="psql", missing_value="NULL"
)
stripped_styled = [strip_ansi(s) for s in styled] stripped_styled = [strip_ansi(s) for s in styled]
@ -127,8 +165,14 @@ def test_get_type():
"""Test that _get_type returns the expected type.""" """Test that _get_type returns the expected type."""
formatter = TabularOutputFormatter() formatter = TabularOutputFormatter()
tests = ((1, int), (2.0, float), (b'binary', binary_type), tests = (
('text', text_type), (None, type(None)), ((), text_type)) (1, int),
(2.0, float),
(b"binary", binary_type),
("text", text_type),
(None, type(None)),
((), text_type),
)
for value, data_type in tests: for value, data_type in tests:
assert data_type is formatter._get_type(value) assert data_type is formatter._get_type(value)
@ -138,36 +182,45 @@ def test_provide_column_types():
"""Test that provided column types are passed to preprocessors.""" """Test that provided column types are passed to preprocessors."""
expected_column_types = (bool, float) expected_column_types = (bool, float)
data = ((1, 1.0), (0, 2)) data = ((1, 1.0), (0, 2))
headers = ('a', 'b') headers = ("a", "b")
def preprocessor(data, headers, column_types=(), **_): def preprocessor(data, headers, column_types=(), **_):
assert expected_column_types == column_types assert expected_column_types == column_types
return data, headers return data, headers
format_output(data, headers, 'csv', format_output(
data,
headers,
"csv",
column_types=expected_column_types, column_types=expected_column_types,
preprocessors=(preprocessor,)) preprocessors=(preprocessor,),
)
def test_enforce_iterable(): def test_enforce_iterable():
"""Test that all output formatters accept iterable""" """Test that all output formatters accept iterable"""
formatter = TabularOutputFormatter() formatter = TabularOutputFormatter()
loremipsum = 'lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiusmod'.split(' ') loremipsum = (
"lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiusmod".split(
" "
)
)
for format_name in formatter.supported_formats: for format_name in formatter.supported_formats:
formatter.format_name = format_name formatter.format_name = format_name
try: try:
formatted = next(formatter.format_output( formatted = next(formatter.format_output(zip(loremipsum), ["lorem"]))
zip(loremipsum), ['lorem']))
except TypeError: except TypeError:
assert False, "{0} doesn't return iterable".format(format_name) assert False, "{0} doesn't return iterable".format(format_name)
def test_all_text_type(): def test_all_text_type():
"""Test the TabularOutputFormatter class.""" """Test the TabularOutputFormatter class."""
data = [[1, u"", None, Decimal(2)]] data = [[1, "", None, Decimal(2)]]
headers = ['col1', 'col2', 'col3', 'col4'] headers = ["col1", "col2", "col3", "col4"]
output_formatter = TabularOutputFormatter() output_formatter = TabularOutputFormatter()
for format_name in output_formatter.supported_formats: for format_name in output_formatter.supported_formats:
for row in output_formatter.format_output(iter(data), headers, format_name=format_name): for row in output_formatter.format_output(
iter(data), headers, format_name=format_name
):
assert isinstance(row, text_type), "not unicode for {}".format(format_name) assert isinstance(row, text_type), "not unicode for {}".format(format_name)


@ -8,8 +8,15 @@ import pytest
from cli_helpers.compat import HAS_PYGMENTS from cli_helpers.compat import HAS_PYGMENTS
from cli_helpers.tabular_output.preprocessors import ( from cli_helpers.tabular_output.preprocessors import (
align_decimals, bytes_to_string, convert_to_string, quote_whitespaces, align_decimals,
override_missing_value, override_tab_value, style_output, format_numbers) bytes_to_string,
convert_to_string,
quote_whitespaces,
override_missing_value,
override_tab_value,
style_output,
format_numbers,
)
if HAS_PYGMENTS: if HAS_PYGMENTS:
from pygments.style import Style from pygments.style import Style
@ -22,9 +29,9 @@ import types
def test_convert_to_string(): def test_convert_to_string():
"""Test the convert_to_string() function.""" """Test the convert_to_string() function."""
data = [[1, 'John'], [2, 'Jill']] data = [[1, "John"], [2, "Jill"]]
headers = [0, 'name'] headers = [0, "name"]
expected = ([['1', 'John'], ['2', 'Jill']], ['0', 'name']) expected = ([["1", "John"], ["2", "Jill"]], ["0", "name"])
results = convert_to_string(data, headers) results = convert_to_string(data, headers)
assert expected == (list(results[0]), results[1]) assert expected == (list(results[0]), results[1])
@ -32,42 +39,41 @@ def test_convert_to_string():
def test_override_missing_values(): def test_override_missing_values():
"""Test the override_missing_values() function.""" """Test the override_missing_values() function."""
data = [[1, None], [2, 'Jill']] data = [[1, None], [2, "Jill"]]
headers = [0, 'name'] headers = [0, "name"]
expected = ([[1, '<EMPTY>'], [2, 'Jill']], [0, 'name']) expected = ([[1, "<EMPTY>"], [2, "Jill"]], [0, "name"])
results = override_missing_value(data, headers, missing_value='<EMPTY>') results = override_missing_value(data, headers, missing_value="<EMPTY>")
assert expected == (list(results[0]), results[1]) assert expected == (list(results[0]), results[1])
@pytest.mark.skipif(not HAS_PYGMENTS, reason='requires the Pygments library') @pytest.mark.skipif(not HAS_PYGMENTS, reason="requires the Pygments library")
def test_override_missing_value_with_style(): def test_override_missing_value_with_style():
"""Test that *override_missing_value()* styles output.""" """Test that *override_missing_value()* styles output."""
class NullStyle(Style): class NullStyle(Style):
styles = { styles = {Token.Output.Null: "#0f0"}
Token.Output.Null: '#0f0'
}
headers = ['h1', 'h2'] headers = ["h1", "h2"]
data = [[None, '2'], ['abc', None]] data = [[None, "2"], ["abc", None]]
expected_headers = ['h1', 'h2'] expected_headers = ["h1", "h2"]
expected_data = [ expected_data = [
['\x1b[38;5;10m<null>\x1b[39m', '2'], ["\x1b[38;5;10m<null>\x1b[39m", "2"],
['abc', '\x1b[38;5;10m<null>\x1b[39m'] ["abc", "\x1b[38;5;10m<null>\x1b[39m"],
] ]
results = override_missing_value(data, headers, results = override_missing_value(
style=NullStyle, missing_value="<null>") data, headers, style=NullStyle, missing_value="<null>"
)
assert (expected_data, expected_headers) == (list(results[0]), results[1]) assert (expected_data, expected_headers) == (list(results[0]), results[1])
def test_override_tab_value(): def test_override_tab_value():
"""Test the override_tab_value() function.""" """Test the override_tab_value() function."""
data = [[1, '\tJohn'], [2, 'Jill']] data = [[1, "\tJohn"], [2, "Jill"]]
headers = ['id', 'name'] headers = ["id", "name"]
expected = ([[1, ' John'], [2, 'Jill']], ['id', 'name']) expected = ([[1, " John"], [2, "Jill"]], ["id", "name"])
results = override_tab_value(data, headers) results = override_tab_value(data, headers)
assert expected == (list(results[0]), results[1]) assert expected == (list(results[0]), results[1])
@ -75,9 +81,9 @@ def test_override_tab_value():
def test_bytes_to_string(): def test_bytes_to_string():
"""Test the bytes_to_string() function.""" """Test the bytes_to_string() function."""
data = [[1, 'John'], [2, b'Jill']] data = [[1, "John"], [2, b"Jill"]]
headers = [0, 'name'] headers = [0, "name"]
expected = ([[1, 'John'], [2, 'Jill']], [0, 'name']) expected = ([[1, "John"], [2, "Jill"]], [0, "name"])
results = bytes_to_string(data, headers) results = bytes_to_string(data, headers)
assert expected == (list(results[0]), results[1]) assert expected == (list(results[0]), results[1])
@ -85,11 +91,10 @@ def test_bytes_to_string():
def test_align_decimals(): def test_align_decimals():
"""Test the align_decimals() function.""" """Test the align_decimals() function."""
data = [[Decimal('200'), Decimal('1')], [ data = [[Decimal("200"), Decimal("1")], [Decimal("1.00002"), Decimal("1.0")]]
Decimal('1.00002'), Decimal('1.0')]] headers = ["num1", "num2"]
headers = ['num1', 'num2']
column_types = (float, float) column_types = (float, float)
expected = ([['200', '1'], [' 1.00002', '1.0']], ['num1', 'num2']) expected = ([["200", "1"], [" 1.00002", "1.0"]], ["num1", "num2"])
results = align_decimals(data, headers, column_types=column_types) results = align_decimals(data, headers, column_types=column_types)
assert expected == (list(results[0]), results[1]) assert expected == (list(results[0]), results[1])
@ -98,9 +103,9 @@ def test_align_decimals():
def test_align_decimals_empty_result(): def test_align_decimals_empty_result():
"""Test align_decimals() with no results.""" """Test align_decimals() with no results."""
data = [] data = []
headers = ['num1', 'num2'] headers = ["num1", "num2"]
column_types = () column_types = ()
expected = ([], ['num1', 'num2']) expected = ([], ["num1", "num2"])
results = align_decimals(data, headers, column_types=column_types) results = align_decimals(data, headers, column_types=column_types)
assert expected == (list(results[0]), results[1]) assert expected == (list(results[0]), results[1])
@ -108,10 +113,10 @@ def test_align_decimals_empty_result():
def test_align_decimals_non_decimals(): def test_align_decimals_non_decimals():
"""Test align_decimals() with non-decimals.""" """Test align_decimals() with non-decimals."""
data = [[Decimal('200.000'), Decimal('1.000')], [None, None]] data = [[Decimal("200.000"), Decimal("1.000")], [None, None]]
headers = ['num1', 'num2'] headers = ["num1", "num2"]
column_types = (float, float) column_types = (float, float)
expected = ([['200.000', '1.000'], [None, None]], ['num1', 'num2']) expected = ([["200.000", "1.000"], [None, None]], ["num1", "num2"])
results = align_decimals(data, headers, column_types=column_types) results = align_decimals(data, headers, column_types=column_types)
assert expected == (list(results[0]), results[1]) assert expected == (list(results[0]), results[1])
@ -120,9 +125,8 @@ def test_align_decimals_non_decimals():
def test_quote_whitespaces(): def test_quote_whitespaces():
"""Test the quote_whitespaces() function.""" """Test the quote_whitespaces() function."""
data = [[" before", "after "], [" both ", "none"]] data = [[" before", "after "], [" both ", "none"]]
headers = ['h1', 'h2'] headers = ["h1", "h2"]
expected = ([["' before'", "'after '"], ["' both '", "'none'"]], expected = ([["' before'", "'after '"], ["' both '", "'none'"]], ["h1", "h2"])
['h1', 'h2'])
results = quote_whitespaces(data, headers) results = quote_whitespaces(data, headers)
assert expected == (list(results[0]), results[1]) assert expected == (list(results[0]), results[1])
@ -131,8 +135,8 @@ def test_quote_whitespaces():
def test_quote_whitespaces_empty_result(): def test_quote_whitespaces_empty_result():
"""Test the quote_whitespaces() function with no results.""" """Test the quote_whitespaces() function with no results."""
data = [] data = []
headers = ['h1', 'h2'] headers = ["h1", "h2"]
expected = ([], ['h1', 'h2']) expected = ([], ["h1", "h2"])
results = quote_whitespaces(data, headers) results = quote_whitespaces(data, headers)
assert expected == (list(results[0]), results[1]) assert expected == (list(results[0]), results[1])
@ -141,106 +145,115 @@ def test_quote_whitespaces_empty_result():
def test_quote_whitespaces_non_spaces(): def test_quote_whitespaces_non_spaces():
"""Test the quote_whitespaces() function with non-spaces.""" """Test the quote_whitespaces() function with non-spaces."""
data = [["\tbefore", "after \r"], ["\n both ", "none"]] data = [["\tbefore", "after \r"], ["\n both ", "none"]]
headers = ['h1', 'h2'] headers = ["h1", "h2"]
expected = ([["'\tbefore'", "'after \r'"], ["'\n both '", "'none'"]], expected = ([["'\tbefore'", "'after \r'"], ["'\n both '", "'none'"]], ["h1", "h2"])
['h1', 'h2'])
results = quote_whitespaces(data, headers) results = quote_whitespaces(data, headers)
assert expected == (list(results[0]), results[1]) assert expected == (list(results[0]), results[1])
@pytest.mark.skipif(not HAS_PYGMENTS, reason='requires the Pygments library') @pytest.mark.skipif(not HAS_PYGMENTS, reason="requires the Pygments library")
def test_style_output_no_styles(): def test_style_output_no_styles():
"""Test that *style_output()* does not style without styles.""" """Test that *style_output()* does not style without styles."""
headers = ['h1', 'h2'] headers = ["h1", "h2"]
data = [['1', '2'], ['a', 'b']] data = [["1", "2"], ["a", "b"]]
results = style_output(data, headers) results = style_output(data, headers)
assert (data, headers) == (list(results[0]), results[1]) assert (data, headers) == (list(results[0]), results[1])
@pytest.mark.skipif(HAS_PYGMENTS, @pytest.mark.skipif(HAS_PYGMENTS, reason="requires the Pygments library be missing")
reason='requires the Pygments library be missing')
def test_style_output_no_pygments(): def test_style_output_no_pygments():
"""Test that *style_output()* does not try to style without Pygments.""" """Test that *style_output()* does not try to style without Pygments."""
headers = ['h1', 'h2'] headers = ["h1", "h2"]
data = [['1', '2'], ['a', 'b']] data = [["1", "2"], ["a", "b"]]
results = style_output(data, headers) results = style_output(data, headers)
assert (data, headers) == (list(results[0]), results[1]) assert (data, headers) == (list(results[0]), results[1])
@pytest.mark.skipif(not HAS_PYGMENTS, reason='requires the Pygments library') @pytest.mark.skipif(not HAS_PYGMENTS, reason="requires the Pygments library")
def test_style_output(): def test_style_output():
"""Test that *style_output()* styles output.""" """Test that *style_output()* styles output."""
class CliStyle(Style): class CliStyle(Style):
default_style = "" default_style = ""
styles = { styles = {
Token.Output.Header: 'bold ansibrightred', Token.Output.Header: "bold ansibrightred",
Token.Output.OddRow: 'bg:#eee #111', Token.Output.OddRow: "bg:#eee #111",
Token.Output.EvenRow: '#0f0' Token.Output.EvenRow: "#0f0",
} }
headers = ['h1', 'h2']
data = [['观音', '2'], ['Ποσειδῶν', 'b']]
expected_headers = ['\x1b[91;01mh1\x1b[39;00m', '\x1b[91;01mh2\x1b[39;00m'] headers = ["h1", "h2"]
expected_data = [['\x1b[38;5;233;48;5;7m观音\x1b[39;49m', data = [["观音", "2"], ["Ποσειδῶν", "b"]]
'\x1b[38;5;233;48;5;7m2\x1b[39;49m'],
['\x1b[38;5;10mΠοσειδῶν\x1b[39m', '\x1b[38;5;10mb\x1b[39m']] expected_headers = ["\x1b[91;01mh1\x1b[39;00m", "\x1b[91;01mh2\x1b[39;00m"]
expected_data = [
["\x1b[38;5;233;48;5;7m观音\x1b[39;49m", "\x1b[38;5;233;48;5;7m2\x1b[39;49m"],
["\x1b[38;5;10mΠοσειδῶν\x1b[39m", "\x1b[38;5;10mb\x1b[39m"],
]
results = style_output(data, headers, style=CliStyle) results = style_output(data, headers, style=CliStyle)
assert (expected_data, expected_headers) == (list(results[0]), results[1]) assert (expected_data, expected_headers) == (list(results[0]), results[1])
@pytest.mark.skipif(not HAS_PYGMENTS, reason='requires the Pygments library') @pytest.mark.skipif(not HAS_PYGMENTS, reason="requires the Pygments library")
def test_style_output_with_newlines(): def test_style_output_with_newlines():
"""Test that *style_output()* styles output with newlines in it.""" """Test that *style_output()* styles output with newlines in it."""
class CliStyle(Style): class CliStyle(Style):
default_style = "" default_style = ""
styles = { styles = {
Token.Output.Header: 'bold ansibrightred', Token.Output.Header: "bold ansibrightred",
Token.Output.OddRow: 'bg:#eee #111', Token.Output.OddRow: "bg:#eee #111",
Token.Output.EvenRow: '#0f0' Token.Output.EvenRow: "#0f0",
} }
headers = ['h1', 'h2']
data = [['观音\nLine2', 'Ποσειδῶν']]
expected_headers = ['\x1b[91;01mh1\x1b[39;00m', '\x1b[91;01mh2\x1b[39;00m'] headers = ["h1", "h2"]
data = [["观音\nLine2", "Ποσειδῶν"]]
expected_headers = ["\x1b[91;01mh1\x1b[39;00m", "\x1b[91;01mh2\x1b[39;00m"]
expected_data = [ expected_data = [
['\x1b[38;5;233;48;5;7m观音\x1b[39;49m\n\x1b[38;5;233;48;5;7m' [
'Line2\x1b[39;49m', "\x1b[38;5;233;48;5;7m观音\x1b[39;49m\n\x1b[38;5;233;48;5;7m"
'\x1b[38;5;233;48;5;7mΠοσειδῶν\x1b[39;49m']] "Line2\x1b[39;49m",
"\x1b[38;5;233;48;5;7mΠοσειδῶν\x1b[39;49m",
]
]
results = style_output(data, headers, style=CliStyle) results = style_output(data, headers, style=CliStyle)
assert (expected_data, expected_headers) == (list(results[0]), results[1]) assert (expected_data, expected_headers) == (list(results[0]), results[1])
@pytest.mark.skipif(not HAS_PYGMENTS, reason='requires the Pygments library')
@pytest.mark.skipif(not HAS_PYGMENTS, reason="requires the Pygments library")
def test_style_output_custom_tokens(): def test_style_output_custom_tokens():
"""Test that *style_output()* styles output with custom token names.""" """Test that *style_output()* styles output with custom token names."""
class CliStyle(Style): class CliStyle(Style):
default_style = "" default_style = ""
styles = { styles = {
Token.Results.Headers: 'bold ansibrightred', Token.Results.Headers: "bold ansibrightred",
Token.Results.OddRows: 'bg:#eee #111', Token.Results.OddRows: "bg:#eee #111",
Token.Results.EvenRows: '#0f0' Token.Results.EvenRows: "#0f0",
} }
headers = ['h1', 'h2']
data = [['1', '2'], ['a', 'b']]
expected_headers = ['\x1b[91;01mh1\x1b[39;00m', '\x1b[91;01mh2\x1b[39;00m'] headers = ["h1", "h2"]
expected_data = [['\x1b[38;5;233;48;5;7m1\x1b[39;49m', data = [["1", "2"], ["a", "b"]]
'\x1b[38;5;233;48;5;7m2\x1b[39;49m'],
['\x1b[38;5;10ma\x1b[39m', '\x1b[38;5;10mb\x1b[39m']] expected_headers = ["\x1b[91;01mh1\x1b[39;00m", "\x1b[91;01mh2\x1b[39;00m"]
expected_data = [
["\x1b[38;5;233;48;5;7m1\x1b[39;49m", "\x1b[38;5;233;48;5;7m2\x1b[39;49m"],
["\x1b[38;5;10ma\x1b[39m", "\x1b[38;5;10mb\x1b[39m"],
]
output = style_output( output = style_output(
data, headers, style=CliStyle, data,
header_token='Token.Results.Headers', headers,
odd_row_token='Token.Results.OddRows', style=CliStyle,
even_row_token='Token.Results.EvenRows') header_token="Token.Results.Headers",
odd_row_token="Token.Results.OddRows",
even_row_token="Token.Results.EvenRows",
)
assert (expected_data, expected_headers) == (list(output[0]), output[1]) assert (expected_data, expected_headers) == (list(output[0]), output[1])
@ -248,29 +261,25 @@ def test_style_output_custom_tokens():
def test_format_integer(): def test_format_integer():
"""Test formatting for an INTEGER datatype.""" """Test formatting for an INTEGER datatype."""
data = [[1], [1000], [1000000]] data = [[1], [1000], [1000000]]
headers = ['h1'] headers = ["h1"]
result_data, result_headers = format_numbers(data, result_data, result_headers = format_numbers(
headers, data, headers, column_types=(int,), integer_format=",", float_format=","
column_types=(int,), )
integer_format=',',
float_format=',')
expected = [['1'], ['1,000'], ['1,000,000']] expected = [["1"], ["1,000"], ["1,000,000"]]
assert expected == list(result_data) assert expected == list(result_data)
assert headers == result_headers assert headers == result_headers
def test_format_decimal(): def test_format_decimal():
"""Test formatting for a DECIMAL(12, 4) datatype.""" """Test formatting for a DECIMAL(12, 4) datatype."""
data = [[Decimal('1.0000')], [Decimal('1000.0000')], [Decimal('1000000.0000')]] data = [[Decimal("1.0000")], [Decimal("1000.0000")], [Decimal("1000000.0000")]]
headers = ['h1'] headers = ["h1"]
result_data, result_headers = format_numbers(data, result_data, result_headers = format_numbers(
headers, data, headers, column_types=(float,), integer_format=",", float_format=","
column_types=(float,), )
integer_format=',',
float_format=',')
expected = [['1.0000'], ['1,000.0000'], ['1,000,000.0000']] expected = [["1.0000"], ["1,000.0000"], ["1,000,000.0000"]]
assert expected == list(result_data) assert expected == list(result_data)
assert headers == result_headers assert headers == result_headers
@ -278,13 +287,11 @@ def test_format_decimal():
def test_format_float(): def test_format_float():
"""Test formatting for a REAL datatype.""" """Test formatting for a REAL datatype."""
data = [[1.0], [1000.0], [1000000.0]] data = [[1.0], [1000.0], [1000000.0]]
headers = ['h1'] headers = ["h1"]
result_data, result_headers = format_numbers(data, result_data, result_headers = format_numbers(
headers, data, headers, column_types=(float,), integer_format=",", float_format=","
column_types=(float,), )
integer_format=',', expected = [["1.0"], ["1,000.0"], ["1,000,000.0"]]
float_format=',')
expected = [['1.0'], ['1,000.0'], ['1,000,000.0']]
assert expected == list(result_data) assert expected == list(result_data)
assert headers == result_headers assert headers == result_headers
@ -292,11 +299,12 @@ def test_format_float():
def test_format_integer_only(): def test_format_integer_only():
"""Test that providing one format string works.""" """Test that providing one format string works."""
data = [[1, 1.0], [1000, 1000.0], [1000000, 1000000.0]] data = [[1, 1.0], [1000, 1000.0], [1000000, 1000000.0]]
headers = ['h1', 'h2'] headers = ["h1", "h2"]
result_data, result_headers = format_numbers(data, headers, column_types=(int, float), result_data, result_headers = format_numbers(
integer_format=',') data, headers, column_types=(int, float), integer_format=","
)
expected = [['1', 1.0], ['1,000', 1000.0], ['1,000,000', 1000000.0]] expected = [["1", 1.0], ["1,000", 1000.0], ["1,000,000", 1000000.0]]
assert expected == list(result_data) assert expected == list(result_data)
assert headers == result_headers assert headers == result_headers
@ -304,7 +312,7 @@ def test_format_integer_only():
def test_format_numbers_no_format_strings(): def test_format_numbers_no_format_strings():
"""Test that numbers aren't formatted without format strings.""" """Test that numbers aren't formatted without format strings."""
data = ((1), (1000), (1000000)) data = ((1), (1000), (1000000))
headers = ('h1',) headers = ("h1",)
result_data, result_headers = format_numbers(data, headers, column_types=(int,)) result_data, result_headers = format_numbers(data, headers, column_types=(int,))
assert list(data) == list(result_data) assert list(data) == list(result_data)
assert headers == result_headers assert headers == result_headers
@ -313,17 +321,25 @@ def test_format_numbers_no_format_strings():
def test_format_numbers_no_column_types(): def test_format_numbers_no_column_types():
"""Test that numbers aren't formatted without column types.""" """Test that numbers aren't formatted without column types."""
data = ((1), (1000), (1000000)) data = ((1), (1000), (1000000))
headers = ('h1',) headers = ("h1",)
result_data, result_headers = format_numbers(data, headers, integer_format=',', result_data, result_headers = format_numbers(
float_format=',') data, headers, integer_format=",", float_format=","
)
assert list(data) == list(result_data) assert list(data) == list(result_data)
assert headers == result_headers assert headers == result_headers
def test_enforce_iterable(): def test_enforce_iterable():
preprocessors = inspect.getmembers(cli_helpers.tabular_output.preprocessors, inspect.isfunction) preprocessors = inspect.getmembers(
loremipsum = 'lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiusmod'.split(' ') cli_helpers.tabular_output.preprocessors, inspect.isfunction
)
loremipsum = (
"lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiusmod".split(
" "
)
)
for name, preprocessor in preprocessors: for name, preprocessor in preprocessors:
preprocessed = preprocessor(zip(loremipsum), ['lorem'], column_types=(str,)) preprocessed = preprocessor(zip(loremipsum), ["lorem"], column_types=(str,))
try: try:
first = next(preprocessed[0]) first = next(preprocessed[0])
except StopIteration: except StopIteration:


@ -16,35 +16,53 @@ if HAS_PYGMENTS:
def test_tabulate_wrapper(): def test_tabulate_wrapper():
"""Test the *output_formatter.tabulate_wrapper()* function.""" """Test the *output_formatter.tabulate_wrapper()* function."""
data = [['abc', 1], ['d', 456]] data = [["abc", 1], ["d", 456]]
headers = ['letters', 'number'] headers = ["letters", "number"]
output = tabulate_adapter.adapter(iter(data), headers, table_format='psql') output = tabulate_adapter.adapter(iter(data), headers, table_format="psql")
assert "\n".join(output) == dedent('''\ assert "\n".join(output) == dedent(
+-----------+----------+ """\
+---------+--------+
| letters | number | | letters | number |
|-----------+----------| |---------+--------|
| abc | 1 | | abc | 1 |
| d | 456 | | d | 456 |
+-----------+----------+''') +---------+--------+"""
)
data = [['{1,2,3}', '{{1,2},{3,4}}', '{å,魚,текст}'], ['{}', '<null>', '{<null>}']] data = [["abc", 1], ["d", 456]]
headers = ['bigint_array', 'nested_numeric_array', '配列'] headers = ["letters", "number"]
output = tabulate_adapter.adapter(iter(data), headers, table_format='psql') output = tabulate_adapter.adapter(iter(data), headers, table_format="psql_unicode")
assert "\n".join(output) == dedent('''\ assert "\n".join(output) == dedent(
+----------------+------------------------+--------------+ """\
letters number
abc 1
d 456
"""
)
data = [["{1,2,3}", "{{1,2},{3,4}}", "{å,魚,текст}"], ["{}", "<null>", "{<null>}"]]
headers = ["bigint_array", "nested_numeric_array", "配列"]
output = tabulate_adapter.adapter(iter(data), headers, table_format="psql")
assert "\n".join(output) == dedent(
"""\
+--------------+----------------------+--------------+
| bigint_array | nested_numeric_array | 配列 | | bigint_array | nested_numeric_array | 配列 |
|----------------+------------------------+--------------| |--------------+----------------------+--------------|
| {1,2,3} | {{1,2},{3,4}} | {å,,текст} | | {1,2,3} | {{1,2},{3,4}} | {å,,текст} |
| {} | <null> | {<null>} | | {} | <null> | {<null>} |
+----------------+------------------------+--------------+''') +--------------+----------------------+--------------+"""
)
def test_markup_format(): def test_markup_format():
"""Test that markup formats do not have number align or string align.""" """Test that markup formats do not have number align or string align."""
data = [['abc', 1], ['d', 456]] data = [["abc", 1], ["d", 456]]
headers = ['letters', 'number'] headers = ["letters", "number"]
output = tabulate_adapter.adapter(iter(data), headers, table_format='mediawiki') output = tabulate_adapter.adapter(iter(data), headers, table_format="mediawiki")
assert "\n".join(output) == dedent('''\ assert "\n".join(output) == dedent(
"""\
{| class="wikitable" style="text-align: left;" {| class="wikitable" style="text-align: left;"
|+ <!-- caption --> |+ <!-- caption -->
|- |-
@ -53,44 +71,43 @@ def test_markup_format():
| abc || 1 | abc || 1
|- |-
| d || 456 | d || 456
|}''') |}"""
)
@pytest.mark.skipif(not HAS_PYGMENTS, reason='requires the Pygments library') @pytest.mark.skipif(not HAS_PYGMENTS, reason="requires the Pygments library")
def test_style_output_table(): def test_style_output_table():
"""Test that *style_output_table()* styles the output table.""" """Test that *style_output_table()* styles the output table."""
class CliStyle(Style): class CliStyle(Style):
default_style = "" default_style = ""
styles = { styles = {
Token.Output.TableSeparator: 'ansibrightred', Token.Output.TableSeparator: "ansibrightred",
} }
headers = ['h1', 'h2']
data = [['观音', '2'], ['Ποσειδῶν', 'b']] headers = ["h1", "h2"]
style_output_table = tabulate_adapter.style_output_table('psql') data = [["观音", "2"], ["Ποσειδῶν", "b"]]
style_output_table = tabulate_adapter.style_output_table("psql")
style_output_table(data, headers, style=CliStyle) style_output_table(data, headers, style=CliStyle)
output = tabulate_adapter.adapter(iter(data), headers, table_format='psql') output = tabulate_adapter.adapter(iter(data), headers, table_format="psql")
PLUS = "\x1b[91m+\x1b[39m"
MINUS = "\x1b[91m-\x1b[39m"
PIPE = "\x1b[91m|\x1b[39m"
assert "\n".join(output) == dedent('''\ expected = (
\x1b[91m+\x1b[39m''' + ( dedent(
('\x1b[91m-\x1b[39m' * 10) + """\
'\x1b[91m+\x1b[39m' + +----------+----+
('\x1b[91m-\x1b[39m' * 6)) + | h1 | h2 |
'''\x1b[91m+\x1b[39m |----------+----|
\x1b[91m|\x1b[39m h1 \x1b[91m|\x1b[39m''' + | 观音 | 2 |
''' h2 \x1b[91m|\x1b[39m | Ποσειδῶν | b |
''' + '\x1b[91m|\x1b[39m' + ( +----------+----+"""
('\x1b[91m-\x1b[39m' * 10) + )
'\x1b[91m+\x1b[39m' + .replace("+", PLUS)
('\x1b[91m-\x1b[39m' * 6)) + .replace("-", MINUS)
'''\x1b[91m|\x1b[39m .replace("|", PIPE)
\x1b[91m|\x1b[39m 观音 \x1b[91m|\x1b[39m''' + )
''' 2 \x1b[91m|\x1b[39m
\x1b[91m|\x1b[39m Ποσειδῶν \x1b[91m|\x1b[39m''' + assert "\n".join(output) == expected
''' b \x1b[91m|\x1b[39m
''' + '\x1b[91m+\x1b[39m' + (
('\x1b[91m-\x1b[39m' * 10) +
'\x1b[91m+\x1b[39m' +
('\x1b[91m-\x1b[39m' * 6)) +
'\x1b[91m+\x1b[39m')


@ -1,69 +0,0 @@
# -*- coding: utf-8 -*-
"""Test the terminaltables output adapter."""
from __future__ import unicode_literals
from textwrap import dedent
import pytest
from cli_helpers.compat import HAS_PYGMENTS
from cli_helpers.tabular_output import terminaltables_adapter
if HAS_PYGMENTS:
from pygments.style import Style
from pygments.token import Token
def test_terminal_tables_adapter():
"""Test the terminaltables output adapter."""
data = [['abc', 1], ['d', 456]]
headers = ['letters', 'number']
output = terminaltables_adapter.adapter(
iter(data), headers, table_format='ascii')
assert "\n".join(output) == dedent('''\
+---------+--------+
| letters | number |
+---------+--------+
| abc | 1 |
| d | 456 |
+---------+--------+''')
@pytest.mark.skipif(not HAS_PYGMENTS, reason='requires the Pygments library')
def test_style_output_table():
"""Test that *style_output_table()* styles the output table."""
class CliStyle(Style):
default_style = ""
styles = {
Token.Output.TableSeparator: 'ansibrightred',
}
headers = ['h1', 'h2']
data = [['观音', '2'], ['Ποσειδῶν', 'b']]
style_output_table = terminaltables_adapter.style_output_table('ascii')
style_output_table(data, headers, style=CliStyle)
output = terminaltables_adapter.adapter(iter(data), headers, table_format='ascii')
assert "\n".join(output) == dedent('''\
\x1b[91m+\x1b[39m''' + (
('\x1b[91m-\x1b[39m' * 10) +
'\x1b[91m+\x1b[39m' +
('\x1b[91m-\x1b[39m' * 4)) +
'''\x1b[91m+\x1b[39m
\x1b[91m|\x1b[39m h1 \x1b[91m|\x1b[39m''' +
''' h2 \x1b[91m|\x1b[39m
''' + '\x1b[91m+\x1b[39m' + (
('\x1b[91m-\x1b[39m' * 10) +
'\x1b[91m+\x1b[39m' +
('\x1b[91m-\x1b[39m' * 4)) +
'''\x1b[91m+\x1b[39m
\x1b[91m|\x1b[39m 观音 \x1b[91m|\x1b[39m''' +
''' 2 \x1b[91m|\x1b[39m
\x1b[91m|\x1b[39m Ποσειδῶν \x1b[91m|\x1b[39m''' +
''' b \x1b[91m|\x1b[39m
''' + '\x1b[91m+\x1b[39m' + (
('\x1b[91m-\x1b[39m' * 10) +
'\x1b[91m+\x1b[39m' +
('\x1b[91m-\x1b[39m' * 4)) +
'\x1b[91m+\x1b[39m')


@ -12,22 +12,25 @@ from cli_helpers.tabular_output import tsv_output_adapter
def test_tsv_wrapper(): def test_tsv_wrapper():
"""Test the tsv output adapter.""" """Test the tsv output adapter."""
# Test tab-delimited output. # Test tab-delimited output.
data = [['ab\r\nc', '1'], ['d', '456']] data = [["ab\r\nc", "1"], ["d", "456"]]
headers = ['letters', 'number'] headers = ["letters", "number"]
output = tsv_output_adapter.adapter( output = tsv_output_adapter.adapter(iter(data), headers, table_format="tsv")
iter(data), headers, table_format='tsv') assert "\n".join(output) == dedent(
assert "\n".join(output) == dedent('''\ """\
letters\tnumber\n\ letters\tnumber\n\
ab\r\\nc\t1\n\ ab\r\\nc\t1\n\
d\t456''') d\t456"""
)
def test_unicode_with_tsv(): def test_unicode_with_tsv():
"""Test that the tsv wrapper can handle non-ascii characters.""" """Test that the tsv wrapper can handle non-ascii characters."""
data = [['观音', '1'], ['Ποσειδῶν', '456']] data = [["观音", "1"], ["Ποσειδῶν", "456"]]
headers = ['letters', 'number'] headers = ["letters", "number"]
output = tsv_output_adapter.adapter(data, headers) output = tsv_output_adapter.adapter(data, headers)
assert "\n".join(output) == dedent('''\ assert "\n".join(output) == dedent(
"""\
letters\tnumber\n\ letters\tnumber\n\
观音\t1\n\ 观音\t1\n\
Ποσειδῶν\t456''') Ποσειδῶν\t456"""
)


@ -9,30 +9,41 @@ from cli_helpers.tabular_output import vertical_table_adapter
def test_vertical_table(): def test_vertical_table():
"""Test the default settings for vertical_table().""" """Test the default settings for vertical_table()."""
results = [('hello', text_type(123)), ('world', text_type(456))] results = [("hello", text_type(123)), ("world", text_type(456))]
expected = dedent("""\ expected = dedent(
"""\
***************************[ 1. row ]*************************** ***************************[ 1. row ]***************************
name | hello name | hello
age | 123 age | 123
***************************[ 2. row ]*************************** ***************************[ 2. row ]***************************
name | world name | world
age | 456""") age | 456"""
)
assert expected == "\n".join( assert expected == "\n".join(
vertical_table_adapter.adapter(results, ('name', 'age'))) vertical_table_adapter.adapter(results, ("name", "age"))
)
def test_vertical_table_customized(): def test_vertical_table_customized():
"""Test customized settings for vertical_table().""" """Test customized settings for vertical_table()."""
results = [('john', text_type(47)), ('jill', text_type(50))] results = [("john", text_type(47)), ("jill", text_type(50))]
expected = dedent("""\ expected = dedent(
"""\
-[ PERSON 1 ]----- -[ PERSON 1 ]-----
name | john name | john
age | 47 age | 47
-[ PERSON 2 ]----- -[ PERSON 2 ]-----
name | jill name | jill
age | 50""") age | 50"""
assert expected == "\n".join(vertical_table_adapter.adapter( )
results, ('name', 'age'), sep_title='PERSON {n}', assert expected == "\n".join(
sep_character='-', sep_length=(1, 5))) vertical_table_adapter.adapter(
results,
("name", "age"),
sep_title="PERSON {n}",
sep_character="-",
sep_length=(1, 5),
)
)


@ -8,56 +8,61 @@ from unittest.mock import MagicMock
import pytest import pytest
from cli_helpers.compat import MAC, text_type, WIN from cli_helpers.compat import MAC, text_type, WIN
from cli_helpers.config import (Config, DefaultConfigValidationError, from cli_helpers.config import (
get_system_config_dirs, get_user_config_dir, Config,
_pathify) DefaultConfigValidationError,
get_system_config_dirs,
get_user_config_dir,
_pathify,
)
from .utils import with_temp_dir from .utils import with_temp_dir
APP_NAME, APP_AUTHOR = 'Test', 'Acme' APP_NAME, APP_AUTHOR = "Test", "Acme"
TEST_DATA_DIR = os.path.join(os.path.dirname(__file__), 'config_data') TEST_DATA_DIR = os.path.join(os.path.dirname(__file__), "config_data")
DEFAULT_CONFIG = { DEFAULT_CONFIG = {
'section': { "section": {
'test_boolean_default': 'True', "test_boolean_default": "True",
'test_string_file': '~/myfile', "test_string_file": "~/myfile",
'test_option': 'foobar' "test_option": "foobar✔",
}, },
'section2': {} "section2": {},
} }
DEFAULT_VALID_CONFIG = { DEFAULT_VALID_CONFIG = {
'section': { "section": {
'test_boolean_default': True, "test_boolean_default": True,
'test_string_file': '~/myfile', "test_string_file": "~/myfile",
'test_option': 'foobar' "test_option": "foobar✔",
}, },
'section2': {} "section2": {},
} }
def _mocked_user_config(temp_dir, *args, **kwargs): def _mocked_user_config(temp_dir, *args, **kwargs):
config = Config(*args, **kwargs) config = Config(*args, **kwargs)
config.user_config_file = MagicMock(return_value=os.path.join( config.user_config_file = MagicMock(
temp_dir, config.filename)) return_value=os.path.join(temp_dir, config.filename)
)
return config return config
def test_user_config_dir(): def test_user_config_dir():
"""Test that the config directory is a string with the app name in it.""" """Test that the config directory is a string with the app name in it."""
if 'XDG_CONFIG_HOME' in os.environ: if "XDG_CONFIG_HOME" in os.environ:
del os.environ['XDG_CONFIG_HOME'] del os.environ["XDG_CONFIG_HOME"]
config_dir = get_user_config_dir(APP_NAME, APP_AUTHOR) config_dir = get_user_config_dir(APP_NAME, APP_AUTHOR)
assert isinstance(config_dir, text_type) assert isinstance(config_dir, text_type)
assert (config_dir.endswith(APP_NAME) or assert config_dir.endswith(APP_NAME) or config_dir.endswith(_pathify(APP_NAME))
config_dir.endswith(_pathify(APP_NAME)))
def test_sys_config_dirs(): def test_sys_config_dirs():
"""Test that the sys config directories are returned correctly.""" """Test that the sys config directories are returned correctly."""
if 'XDG_CONFIG_DIRS' in os.environ: if "XDG_CONFIG_DIRS" in os.environ:
del os.environ['XDG_CONFIG_DIRS'] del os.environ["XDG_CONFIG_DIRS"]
config_dirs = get_system_config_dirs(APP_NAME, APP_AUTHOR) config_dirs = get_system_config_dirs(APP_NAME, APP_AUTHOR)
assert isinstance(config_dirs, list) assert isinstance(config_dirs, list)
assert (config_dirs[0].endswith(APP_NAME) or assert config_dirs[0].endswith(APP_NAME) or config_dirs[0].endswith(
config_dirs[0].endswith(_pathify(APP_NAME))) _pathify(APP_NAME)
)
@pytest.mark.skipif(not WIN, reason="requires Windows") @pytest.mark.skipif(not WIN, reason="requires Windows")
@ -66,7 +71,7 @@ def test_windows_user_config_dir_no_roaming():
config_dir = get_user_config_dir(APP_NAME, APP_AUTHOR, roaming=False) config_dir = get_user_config_dir(APP_NAME, APP_AUTHOR, roaming=False)
assert isinstance(config_dir, text_type) assert isinstance(config_dir, text_type)
assert config_dir.endswith(APP_NAME) assert config_dir.endswith(APP_NAME)
assert 'Local' in config_dir assert "Local" in config_dir
@pytest.mark.skipif(not MAC, reason="requires macOS") @pytest.mark.skipif(not MAC, reason="requires macOS")
@ -75,7 +80,7 @@ def test_mac_user_config_dir_no_xdg():
config_dir = get_user_config_dir(APP_NAME, APP_AUTHOR, force_xdg=False) config_dir = get_user_config_dir(APP_NAME, APP_AUTHOR, force_xdg=False)
assert isinstance(config_dir, text_type) assert isinstance(config_dir, text_type)
assert config_dir.endswith(APP_NAME) assert config_dir.endswith(APP_NAME)
assert 'Library' in config_dir assert "Library" in config_dir
@pytest.mark.skipif(not MAC, reason="requires macOS") @pytest.mark.skipif(not MAC, reason="requires macOS")
@@ -84,53 +89,61 @@ def test_mac_system_config_dirs_no_xdg():
    config_dirs = get_system_config_dirs(APP_NAME, APP_AUTHOR, force_xdg=False)
    assert isinstance(config_dirs, list)
    assert config_dirs[0].endswith(APP_NAME)
    assert "Library" in config_dirs[0]


def test_config_reading_raise_errors():
    """Test that instantiating Config will raise errors when appropriate."""
    with pytest.raises(ValueError):
        Config(APP_NAME, APP_AUTHOR, "test_config", write_default=True)
    with pytest.raises(ValueError):
        Config(APP_NAME, APP_AUTHOR, "test_config", validate=True)
    with pytest.raises(TypeError):
        Config(APP_NAME, APP_AUTHOR, "test_config", default=b"test")


def test_config_user_file():
    """Test that the Config user_config_file is appropriate."""
    config = Config(APP_NAME, APP_AUTHOR, "test_config")
    assert get_user_config_dir(APP_NAME, APP_AUTHOR) in config.user_config_file()


def test_config_reading_default_dict():
    """Test that the Config constructor will read in defaults from a dict."""
    default = {"main": {"foo": "bar"}}
    config = Config(APP_NAME, APP_AUTHOR, "test_config", default=default)
    assert config.data == default


def test_config_reading_no_default():
    """Test that the Config constructor will work without any defaults."""
    config = Config(APP_NAME, APP_AUTHOR, "test_config")
    assert config.data == {}
def test_config_reading_default_file():
    """Test that the Config will work with a default file."""
    config = Config(
        APP_NAME,
        APP_AUTHOR,
        "test_config",
        default=os.path.join(TEST_DATA_DIR, "configrc"),
    )
    config.read_default_config()
    assert config.data == DEFAULT_CONFIG


def test_config_reading_configspec():
    """Test that the Config default file will work with a configspec."""
    config = Config(
        APP_NAME,
        APP_AUTHOR,
        "test_config",
        validate=True,
        default=os.path.join(TEST_DATA_DIR, "configspecrc"),
    )
    config.read_default_config()
    assert config.data == DEFAULT_VALID_CONFIG
@@ -138,134 +151,143 @@ def test_config_reading_configspec():
def test_config_reading_configspec_with_error():
    """Test that reading an invalid configspec raises an exception."""
    with pytest.raises(DefaultConfigValidationError):
        config = Config(
            APP_NAME,
            APP_AUTHOR,
            "test_config",
            validate=True,
            default=os.path.join(TEST_DATA_DIR, "invalid_configspecrc"),
        )
        config.read_default_config()
@with_temp_dir
def test_write_and_read_default_config(temp_dir=None):
    config_file = "test_config"
    default_file = os.path.join(TEST_DATA_DIR, "configrc")
    temp_config_file = os.path.join(temp_dir, config_file)
    config = _mocked_user_config(
        temp_dir, APP_NAME, APP_AUTHOR, config_file, default=default_file
    )
    config.read_default_config()
    config.write_default_config()
    user_config = _mocked_user_config(
        temp_dir, APP_NAME, APP_AUTHOR, config_file, default=default_file
    )
    user_config.read()
    assert temp_config_file in user_config.config_filenames
    assert user_config == config
    with open(temp_config_file) as f:
        contents = f.read()
    assert "# Test file comment" in contents
    assert "# Test section comment" in contents
    assert "# Test field comment" in contents
    assert "# Test field commented out" in contents
@with_temp_dir
def test_write_and_read_default_config_from_configspec(temp_dir=None):
    config_file = "test_config"
    default_file = os.path.join(TEST_DATA_DIR, "configspecrc")
    temp_config_file = os.path.join(temp_dir, config_file)
    config = _mocked_user_config(
        temp_dir, APP_NAME, APP_AUTHOR, config_file, default=default_file, validate=True
    )
    config.read_default_config()
    config.write_default_config()
    user_config = _mocked_user_config(
        temp_dir, APP_NAME, APP_AUTHOR, config_file, default=default_file, validate=True
    )
    user_config.read()
    assert temp_config_file in user_config.config_filenames
    assert user_config == config
    with open(temp_config_file) as f:
        contents = f.read()
    assert "# Test file comment" in contents
    assert "# Test section comment" in contents
    assert "# Test field comment" in contents
    assert "# Test field commented out" in contents
@with_temp_dir
def test_overwrite_default_config_from_configspec(temp_dir=None):
    config_file = "test_config"
    default_file = os.path.join(TEST_DATA_DIR, "configspecrc")
    temp_config_file = os.path.join(temp_dir, config_file)
    config = _mocked_user_config(
        temp_dir, APP_NAME, APP_AUTHOR, config_file, default=default_file, validate=True
    )
    config.read_default_config()
    config.write_default_config()
    with open(temp_config_file, "a") as f:
        f.write("--APPEND--")
    config.write_default_config()
    with open(temp_config_file) as f:
        assert "--APPEND--" in f.read()
    config.write_default_config(overwrite=True)
    with open(temp_config_file) as f:
        assert "--APPEND--" not in f.read()
def test_read_invalid_config_file():
    config_file = "invalid_configrc"
    config = _mocked_user_config(TEST_DATA_DIR, APP_NAME, APP_AUTHOR, config_file)
    config.read()
    assert "section" in config
    assert "test_string_file" in config["section"]
    assert "test_boolean_default" not in config["section"]
    assert "section2" in config
@with_temp_dir
def test_write_to_user_config(temp_dir=None):
    config_file = "test_config"
    default_file = os.path.join(TEST_DATA_DIR, "configrc")
    temp_config_file = os.path.join(temp_dir, config_file)
    config = _mocked_user_config(
        temp_dir, APP_NAME, APP_AUTHOR, config_file, default=default_file
    )
    config.read_default_config()
    config.write_default_config()
    with open(temp_config_file) as f:
        assert "test_boolean_default = True" in f.read()
    config["section"]["test_boolean_default"] = False
    config.write()
    with open(temp_config_file) as f:
        assert "test_boolean_default = False" in f.read()
@with_temp_dir
def test_write_to_outfile(temp_dir=None):
    config_file = "test_config"
    outfile = os.path.join(temp_dir, "foo")
    default_file = os.path.join(TEST_DATA_DIR, "configrc")
    config = _mocked_user_config(
        temp_dir, APP_NAME, APP_AUTHOR, config_file, default=default_file
    )
    config.read_default_config()
    config.write_default_config()
    config["section"]["test_boolean_default"] = False
    config.write(outfile=outfile)
    with open(outfile) as f:
        assert "test_boolean_default = False" in f.read()
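
Taken together, these tests exercise the typical Config lifecycle: load defaults, write a starting user config, then read and update it. The lines below are only a rough usage sketch of that flow; the app, author, and file names are made up, and details such as merge order and directory creation are glossed over.

from cli_helpers.config import Config

# "myapp", "myauthor", and "myapprc" are illustrative names, not real ones.
config = Config("myapp", "myauthor", "myapprc", default={"main": {"color": "auto"}})
config.read()                      # layer any system/user config files on top of the defaults
config["main"]["color"] = "never"  # dict-style access, as exercised in the tests above
config.write()                     # persist the change to the user-level config file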

View file

@@ -8,63 +8,70 @@ from cli_helpers import utils
def test_bytes_to_string_hexlify():
    """Test that bytes_to_string() hexlifies binary data."""
    assert utils.bytes_to_string(b"\xff") == "0xff"


def test_bytes_to_string_decode_bytes():
    """Test that bytes_to_string() decodes bytes."""
    assert utils.bytes_to_string(b"foobar") == "foobar"


def test_bytes_to_string_unprintable():
    """Test that bytes_to_string() hexlifies data that is valid unicode, but unprintable."""
    assert utils.bytes_to_string(b"\0") == "0x00"
    assert utils.bytes_to_string(b"\1") == "0x01"
    assert utils.bytes_to_string(b"a\0") == "0x6100"
def test_bytes_to_string_non_bytes():
    """Test that bytes_to_string() returns non-bytes untouched."""
    assert utils.bytes_to_string("abc") == "abc"
    assert utils.bytes_to_string(1) == 1


def test_to_string_bytes():
    """Test that to_string() converts bytes to a string."""
    assert utils.to_string(b"foo") == "foo"


def test_to_string_non_bytes():
    """Test that to_string() converts non-bytes to a string."""
    assert utils.to_string(1) == "1"
    assert utils.to_string(2.29) == "2.29"


def test_truncate_string():
    """Test string truncate preprocessor."""
    val = "x" * 100
    assert utils.truncate_string(val, 10) == "xxxxxxx..."
    val = "x " * 100
    assert utils.truncate_string(val, 10) == "x x x x..."
    val = "x" * 100
    assert utils.truncate_string(val) == "x" * 100
    val = ["x"] * 100
    val[20] = "\n"
    str_val = "".join(val)
    assert utils.truncate_string(str_val, 10, skip_multiline_string=True) == str_val
def test_intlen_with_decimal():
    """Test that intlen() counts correctly with a decimal place."""
    assert utils.intlen("11.1") == 2
    assert utils.intlen("1.1") == 1


def test_intlen_without_decimal():
    """Test that intlen() counts correctly without a decimal place."""
    assert utils.intlen("11") == 2


def test_filter_dict_by_key():
    """Test that filter_dict_by_key() filters unwanted items."""
    keys = ("foo", "bar")
    d = {"foo": 1, "foobar": 2}
    fd = utils.filter_dict_by_key(d, keys)
    assert len(fd) == 1
    assert all([k in keys for k in fd])
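
These helpers back the library's output preprocessing: intlen() measures the integer part of a number (which is what lets decimal points be lined up), truncate_string() caps long cell values, and filter_dict_by_key() keeps only wanted keys. A doctest-style sketch, with values mirroring the assertions above:

>>> from cli_helpers import utils
>>> utils.intlen("11.1")
2
>>> utils.truncate_string("x" * 100, 10)
'xxxxxxx...'
>>> utils.filter_dict_by_key({"foo": 1, "foobar": 2}, ("foo", "bar"))
{'foo': 1}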

View file

@@ -9,8 +9,10 @@ from .compat import TemporaryDirectory
def with_temp_dir(f):
    """A wrapper that creates and deletes a temporary directory."""

    @wraps(f)
    def wrapped(*args, **kwargs):
        with TemporaryDirectory() as temp_dir:
            return f(*args, temp_dir=temp_dir, **kwargs)

    return wrapped
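
The decorator hands each wrapped test a fresh temporary directory through the temp_dir keyword argument and removes it once the test returns, which is exactly how the config tests above consume it. For example (illustrative test name):

@with_temp_dir
def test_example(temp_dir=None):
    # temp_dir points at a directory that exists only for the duration of this call
    assert os.path.isdir(temp_dir)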

View file

@@ -12,7 +12,6 @@ setenv =
 commands =
     pytest --cov-report= --cov=cli_helpers
     coverage report
-    pep8radius master
    bash -c 'if [ -n "$CODECOV" ]; then {envbindir}/coverage xml && {envbindir}/codecov; fi'
 deps = -r{toxinidir}/requirements-dev.txt
 usedevelop = True