Adding upstream version 1.8.0.
Signed-off-by: Daniel Baumann <daniel@debian.org>
parent c48d95b7fa
commit e40b3259c1
2403 changed files with 153656 additions and 0 deletions
42 .gitattributes vendored Normal file
@@ -0,0 +1,42 @@
# .gitattributes for pysilfont

# Based on https://github.com/alexkaratarakis/gitattributes/blob/master/Python.gitattributes
# plus additions for pysilfont

# Designed to tell git how to best deal with certain file formats
# The goal for eols is to have LF in the repo and platform-specific
# endings, when appropriate, in the working copy on all platforms.
# To tweak to your particular needs,
# see https://git-scm.com/book/en/Customizing-Git-Git-Attributes

# The following causes git to auto detect text files which will have eol
# conversion applied according to core.autocrlf and core.eol
* text=auto

# Text files
# ============
*.md text
*.fea* text
*.pxd text
*.py text
*.py3 text
*.pyw text
*.pyx text
*.sh text
*.txt text

# Binary files
# ============
*.db binary
*.p binary
*.pkl binary
*.pyc binary
*.pyd binary
*.pyo binary
*.png binary

# Note: .db, .p, and .pkl files are associated
# with the python modules ``pickle``, ``dbm.*``,
# ``shelve``, ``marshal``, ``anydbm``, & ``bsddb``
# (among others).
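The attribute lines above follow a simple `pattern attribute...` format: a path glob followed by whitespace-separated attributes such as `text`, `binary`, or `text=auto`. As a rough illustration of that format (a hypothetical helper, not part of pysilfont or git), a minimal Python sketch that parses such lines:

```python
def parse_gitattributes(text):
    """Parse .gitattributes-style lines into (pattern, [attributes]) pairs.

    Comments (#) and blank lines are skipped. This is a simplified
    illustration; real git attribute parsing has more rules (quoting,
    macro attributes, unset/unspecified states, precedence, etc.).
    """
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # First field is the path pattern; the rest are attributes.
        pattern, *attrs = line.split()
        entries.append((pattern, attrs))
    return entries

sample = """
# comments are ignored
* text=auto
*.py text
*.png binary
"""
print(parse_gitattributes(sample))
# → [('*', ['text=auto']), ('*.py', ['text']), ('*.png', ['binary'])]
```

For authoritative per-file resolution (which also honours precedence between multiple `.gitattributes` files), `git check-attr` is the tool to use rather than parsing by hand.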
24 .gitignore vendored Normal file
@@ -0,0 +1,24 @@
*.pyc
*.log
*~
*.sw?

installed-files.txt
pysilfont.cfg

/build
/dist
/local
/dev
/venv

/src/silfont.egg-info
/src/silfont/__pycache__
/.pytest_cache
/tests/localufos.csv

.idea

.DS_Store
220 CHANGELOG.md Normal file
@@ -0,0 +1,220 @@
# Changelog

## [1.8.0] - 2023-11-22 Updated packaging

Updated the packaging to follow PEP621 guidelines.

Also:

- Added `do forlet` to fea extensions
- Updates to macOS preflight support

## [1.7.0] - 2023-09-27 Maintenance Release - general updates

General updates over the last year.

### Added

| Command | Description |
| ------- | ----------- |
| [psfcheckproject](docs/scripts.md#psfcheckproject) | Check UFOs in designspace files have consistent glyph inventory & unicode values (1.6.1.dev2) |
| [update-preflight-libs-pyenv](preflight/update-preflight-libs-pyenv) | Preflight/preglyphs libs update shell script for macOS users (1.6.1.dev6) |
| [psfpreflightversion](docs/scripts.md#psfpreflightversion) | Script to check version of modules but only for preflight (1.6.1.dev11) |

### Changes

- check&fix, used by most UFO commands, no longer warns if styleMapFamilyName or styleMapStyleName are missing in fontinfo (1.6.1.dev1)
- Low-level bug fix to ufo.py found when running some temporary code; not previously triggered by live code (1.6.1.dev1)
- Glyphs roundtrip now preserves the openTypeHeadFlags key (1.6.1.dev2)
- Bug fix to psfmakefea for cases where a glyph has an advance height but no advance width (1.6.1.dev3)
- Update to core.py to avoid a race condition when creating the logs folder (1.6.1.dev3)
- psfglyphs2ufo now removes any advance heights in glyphs to counteract glyphsLib changes (1.6.1.dev3)
- psfsyncmasters now sets openTypeOS2WeightClass to be in the CSS coordinate space, not the design coordinate space (1.6.1.dev5)
- Various updates to gfr.py to support the Find a Font service (1.6.1.dev6)
- psfsyncmasters now syncs public.skipExportGlyphs (1.6.1.dev7)
- Added -d to psfaddanchors (1.6.1.dev7)
- Global adjustments to use https: instead of http: (1.6.1.dev7)
- Corrected ufo.py so plists still have http: in the DOCTYPE (1.6.1.dev8)
- psfsyncmasters - removed checks relating to styleMapFamilyName and styleMapStyleName; --complex now does nothing (1.6.1.dev9)
- psfrunfbchecks - general updates to reflect new Font Bakery checks (1.6.1.dev9)
- psfrunfbchecks + fbtests modules - updates to reflect structure changes in Font Bakery (1.6.1.dev10)
- psfsubset - added filtering (1.6.1.dev11)
- psfufo2ttf - fix crash in cases where both `public` and `org.sil` keys for Variation Sequence data are present (1.6.1.dev11)
- psfbuildcomp - updated to use g_blue,g_purple as the default colours for -c (1.6.1.dev11)
- Fixed bug in setuptestdata.py used by pytest (1.6.1.dev11)
- Bug fix to check&fix where updates that empty an array failed (1.6.1.dev11)
- update-preflight-libs-pyenv - adjusted dependencies, added a conditional so the modules installation report calls the script only for the desired modules, made output terser, added stricter pyenv checking, dropped filename suffix (1.6.1.dev11)

### Removed

None

## [1.6.0] - 2022-07-25 Maintenance Release - general updates

General updates over the last two years, adding new scripts and updating existing ones in response to new needs or to adjust for changes to third-party software.

### Added

| Command | Description |
| ------- | ----------- |
| [psfcheckclassorders](docs/scripts.md#psfcheckclassorders) | Verify classes defined in xml have correct ordering where needed |
| [psfcheckftml](docs/scripts.md#psfcheckftml) | Check ftml files for structural integrity |
| [psfcheckglyphinventory](docs/scripts.md#psfcheckglyphinventory) | Warn for differences in glyph inventory and encoding between UFO and input file (e.g., glyph_data.csv) |
| [psfcheckinterpolatable](docs/scripts.md#psfcheckinterpolatable) | Check UFOs in a designspace file are compatible with interpolation |
| [psffixfontlab](docs/scripts.md#psffixfontlab) | Make changes needed to a UFO following processing by FontLab |
| [psfsetdummydsig](docs/scripts.md#psfsetdummydsig) | Add a dummy DSIG table into a TTF font |
| [psfsetglyphdata](docs/scripts.md#psfsetglyphdata) | Update and/or sort glyph_data.csv based on input file(s) |
| [psfshownames](docs/scripts.md#psfshownames) | Display name fields and other bits for linking fonts into families |
| [psfwoffit](docs/scripts.md#psfwoffit) | Convert between ttf, woff, and woff2 |

### Changed

Multiple changes!

### Removed

None

## [1.5.0] - 2020-05-20 - Maintenance Release; Python 2 support removed

Added support for Font Bakery to make it simple for projects to run a standard set of checks designed to fit in with [Font Development Best Practices](https://silnrsi.github.io/FDBP/en-US/index.html).

Improvements to feax support.

Many other updates.

### Added

| Command | Description |
| ------- | ----------- |
| [psfftml2TThtml](docs/scripts.md#psfftml2TThtml) | Convert FTML document to html and fonts for testing TypeTuner |
| [psfmakescaledshifted](docs/scripts.md#psfmakescaledshifted) | Creates scaled and shifted versions of glyphs |
| [psfrunfbchecks](docs/scripts.md#psfrunfbchecks) | Run Font Bakery checks using a standard profile with option to specify an alternative profile |
| [psfsetdummydsig](docs/scripts.md#psfsetdummydsig) | Put a dummy DSIG table into a TTF font |

### Changed

Multiple minor changes and bug fixes

### Removed

None

## [1.4.2] - 2019-07-30 - Maintenance release

Updated the execute() framework used by scripts to add support for opening fonts with fontParts and remove support for opening fonts with FontForge.

Updates to normalization and check&fix to work better with FontForge-based workflows.

Improvements to command-line help to display info on params and default values.

Improvements to log file creation, including logs, by default, going to a logs sub-directory.

Some changes are detailed below, but check commit logs for full details.

### Added

| Command | Description |
| ------- | ----------- |
| [psfbuildcompgc](docs/scripts.md#psfbuildcompgc) | Add composite glyphs to UFO using glyphConstruction based on a CD file |
| [psfdeflang](docs/scripts.md#psfdeflang) | Changes default language behaviour in a font |
| [psfdupglyphs](docs/scripts.md#psfdupglyphs) | Duplicates glyphs in a UFO based on a csv definition |
| [psfexportmarkcolors](docs/scripts.md#psfexportmarkcolors) | Export csv of mark colors |
| [psffixffglifs](docs/scripts.md#psffixffglifs) | Make changes needed to a UFO following processing by FontForge |
| [psfgetglyphnames](docs/scripts.md#psfgetglyphnames) | Create a file of glyphs to import from a list of characters to import |
| [psfmakedeprecated](docs/scripts.md#psfmakedeprecated) | Creates deprecated versions of glyphs |
| [psfsetmarkcolors](docs/scripts.md#psfsetmarkcolors) | Set mark colors based on csv file |
| [psftuneraliases](docs/scripts.md#psftuneraliases) | Merge alias information into TypeTuner feature xml file |

### Changed

Multiple minor changes and bug fixes

### Removed

The following scripts moved from installed scripts to examples:

- ffchangeglyphnames
- ffcopyglyphs
- ffremovealloverlaps

## [1.4.1] - 2019-03-04 - Maintenance release

Nearly all scripts should work under Python 2 & 3.

**Future work will be tested just with Python 3** but most may still work with Python 2.

Some changes are detailed below, but check commit logs for full details.

### Added

psfversion - Report version info for pysilfont, python and various dependencies

psfufo2glyphs - Creates a .glyphs file from UFOs using glyphsLib

### Changed

psfremovegliflibkeys now has -b option to remove keys beginning with specified string

psfglyphs2ufo updated to match new psfufo2glyphs. Now has -r to restore specific keys

Many changes to .fea support

The pytest-based test setup has been expanded and refined

### Removed

Some scripts moved from installed scripts to examples

## [1.4.0] - 2018-10-03 - Python 2+3 support

### Added

### Changed

Libraries and most installed scripts updated to work with Python 2 and Python 3

All scripts should work as before under Python 2, but a few scripts need further work to run under Python 3:
- All ff* scripts
- psfaddanchors
- psfcsv2comp
- psfexpandstroke
- psfsubset

The following scripts have not been fully tested with the new libraries:
- psfchangegdlnames
- psfcompdef2xml
- psfcopymeta
- psfexportpsnames
- psfftml2odt
- psfremovegliflibkeys
- psfsetversion
- psfsyncmeta
- psftoneletters
- psfxml2compdef

### Removed

## [1.3.1] - 2018-09-27 - Stable release prior to Python 2+3 merge

### Added
- psfcopyglyphs
- psfcreateinstances
- psfcsv2comp
- psfmakefea
- psfremovegliflibkeys
- psfsetkeys
- psfsubset

Regression testing framework

### Changed

(Changes not documented here)

### Removed

## [1.3.0] - 2018-04-25 - First versioned release
20 LICENSE Normal file
@@ -0,0 +1,20 @@
Copyright (c) 2014-2023, SIL International (https://www.sil.org)
Released under the MIT license cited below:

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
111 README.md Normal file
@@ -0,0 +1,111 @@
# Pysilfont - a collection of utilities for font development

Pysilfont is a collection of tools to support font development, with an emphasis on UFO-based workflows. With some limitations, all UFO scripts in Pysilfont should work with UFO2 or UFO3 source files - and can convert from one format to the other.

In addition, all scripts will output UFOs in a normalized form, designed to work with source control systems.

Please read the main [documentation](docs/docs.md) in the docs folder for more details. Within there is a list of [scripts](docs/scripts.md).

## Installation

Pysilfont requires Python 3.6+ and pip3. Some scripts also need other libraries.

### Updating your python toolchain to be current
```
sudo apt install python3-pip python3-setuptools
python3 -m pip install --upgrade pip setuptools wheel build
```

### Simple install
To just install the main scripts (only in the user's folder, not system-wide) without cloning the GitHub repository run:
```
python3 -m pip install git+https://github.com/silnrsi/pysilfont
```

This will allow you to run the scripts listed in [scripts.md](docs/scripts.md), but won’t give access to the example scripts or give you the code locally to look at.

### Full install

First clone this repository or download the files from [https://github.com/silnrsi/pysilfont](https://github.com/silnrsi/pysilfont). To better isolate changes from your system Python we will use a virtual environment.
Then navigate to the newly created pysilfont directory and run:
```
sudo apt install python3-pip python3-venv python3-wheel python3-setuptools
```

Then create a virtual environment:
```
python3 -m venv venv
```
Activate the virtual environment (you have to do this every time you want to use the pysilfont tools again):
```
source venv/bin/activate
```

Then update the toolchain and install:
```
python3 -m pip install --upgrade pip setuptools wheel build
python3 -m pip install .
```

You can deactivate your virtual environment (venv) by typing:
```
deactivate
```
or by closing that Terminal.

Alternatively, to install in editable mode:
```
python3 -m pip install -e .
```

By default, the dependencies pulled in are released versions.

For developers:

Install from git main/master to track the freshest versions of the dependencies:
```
python3 -m pip install --upgrade -e .[git]
```

To have more than one project in editable mode you should install each one separately and only install pysilfont at the last step, for example:
```
python3 -m pip install -e git+https://github.com/fonttools/fontbakery.git@main#egg=fontbakery
python3 -m pip install -e git+https://github.com/fonttools/fonttools@main#egg=fonttools
python3 -m pip install -e git+https://github.com/silnrsi/pysilfont.git@master#egg=silfont
```

### Uninstalling pysilfont

pip3 can be used to uninstall silfont:

locally for your user:
```
pip3 uninstall silfont
```

or to remove a venv (virtual environment):
```
deactivate  # if you are inside the venv
rm -rf venv
```

or if you have it installed system-wide (only recommended inside a separate VM or container):
```
sudo pip3 uninstall silfont
```

If you need palaso, you will need to install it separately.
Follow the instructions on https://github.com/silnrsi/palaso-python

If you need fontbakery, you will need to install it separately.
Follow the instructions on https://font-bakery.readthedocs.io

## Contributing to the project

Pysilfont is developed and maintained by SIL International’s [Writing Systems Technology team](https://software.sil.org/wstech/), though contributions from anyone are welcome. Pysilfont is copyright (c) 2014-2023 [SIL International](https://www.sil.org) and licensed under the [MIT license](https://en.wikipedia.org/wiki/MIT_License). The project is hosted at [https://github.com/silnrsi/pysilfont](https://github.com/silnrsi/pysilfont).
19 actionsosx/README.txt Normal file
@@ -0,0 +1,19 @@
This folder contains actions for use in Mac OS X based on tools in pysilfont.

UFO NORMALIZE

This action takes a .ufo (Unified Font Object) and normalizes the file to standardize the formatting. Some of the changes include:
- standard indenting in the xml files
- sorting plists alphabetically
- uniform handling of capitals & underscores in glif filenames

To install the UFO Normalize action:

- install the pysilfont package using the steps in INSTALL.txt (important!)
- double-click on UFO Normalize.workflow

To use the UFO Normalize action:

- right-click on a UFO file, and choose Services > UFO Normalize

The action first makes a copy of the UFO in a backups subfolder.
27 actionsosx/UFO Normalize.workflow/Contents/Info.plist Normal file
@@ -0,0 +1,27 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "https://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>NSServices</key>
	<array>
		<dict>
			<key>NSMenuItem</key>
			<dict>
				<key>default</key>
				<string>UFO Normalize</string>
			</dict>
			<key>NSMessage</key>
			<string>runWorkflowAsService</string>
			<key>NSRequiredContext</key>
			<dict>
				<key>NSApplicationIdentifier</key>
				<string>com.apple.finder</string>
			</dict>
			<key>NSSendFileTypes</key>
			<array>
				<string>public.item</string>
			</array>
		</dict>
	</array>
</dict>
</plist>
Binary file not shown (image, 34 KiB).
204 actionsosx/UFO Normalize.workflow/Contents/document.wflow Normal file
@@ -0,0 +1,204 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "https://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>AMApplicationBuild</key>
	<string>428</string>
	<key>AMApplicationVersion</key>
	<string>2.7</string>
	<key>AMDocumentVersion</key>
	<string>2</string>
	<key>actions</key>
	<array>
		<dict>
			<key>action</key>
			<dict>
				<key>AMAccepts</key>
				<dict>
					<key>Container</key>
					<string>List</string>
					<key>Optional</key>
					<true/>
					<key>Types</key>
					<array>
						<string>com.apple.cocoa.string</string>
					</array>
				</dict>
				<key>AMActionVersion</key>
				<string>2.0.3</string>
				<key>AMApplication</key>
				<array>
					<string>Automator</string>
				</array>
				<key>AMParameterProperties</key>
				<dict>
					<key>COMMAND_STRING</key>
					<dict/>
					<key>CheckedForUserDefaultShell</key>
					<dict/>
					<key>inputMethod</key>
					<dict/>
					<key>shell</key>
					<dict/>
					<key>source</key>
					<dict/>
				</dict>
				<key>AMProvides</key>
				<dict>
					<key>Container</key>
					<string>List</string>
					<key>Types</key>
					<array>
						<string>com.apple.cocoa.string</string>
					</array>
				</dict>
				<key>ActionBundlePath</key>
				<string>/System/Library/Automator/Run Shell Script.action</string>
				<key>ActionName</key>
				<string>Run Shell Script</string>
				<key>ActionParameters</key>
				<dict>
					<key>COMMAND_STRING</key>
					<string>for f in "$@"
do
/usr/local/bin/psfnormalize "$f"
done</string>
					<key>CheckedForUserDefaultShell</key>
					<true/>
					<key>inputMethod</key>
					<integer>1</integer>
					<key>shell</key>
					<string>/bin/bash</string>
					<key>source</key>
					<string></string>
				</dict>
				<key>BundleIdentifier</key>
				<string>com.apple.RunShellScript</string>
				<key>CFBundleVersion</key>
				<string>2.0.3</string>
				<key>CanShowSelectedItemsWhenRun</key>
				<false/>
				<key>CanShowWhenRun</key>
				<true/>
				<key>Category</key>
				<array>
					<string>AMCategoryUtilities</string>
				</array>
				<key>Class Name</key>
				<string>RunShellScriptAction</string>
				<key>InputUUID</key>
				<string>A5DDB5A1-5587-4252-BDF2-2088FB0C18DA</string>
				<key>Keywords</key>
				<array>
					<string>Shell</string>
					<string>Script</string>
					<string>Command</string>
					<string>Run</string>
					<string>Unix</string>
				</array>
				<key>OutputUUID</key>
				<string>C1A7AC3D-90D6-472B-8DBA-C3F2CD74F083</string>
				<key>UUID</key>
				<string>C4284E07-22D9-4635-9306-D48F8EA7946F</string>
				<key>UnlocalizedApplications</key>
				<array>
					<string>Automator</string>
				</array>
				<key>arguments</key>
				<dict>
					<key>0</key>
					<dict>
						<key>default value</key>
						<integer>0</integer>
						<key>name</key>
						<string>inputMethod</string>
						<key>required</key>
						<string>0</string>
						<key>type</key>
						<string>0</string>
						<key>uuid</key>
						<string>0</string>
					</dict>
					<key>1</key>
					<dict>
						<key>default value</key>
						<string></string>
						<key>name</key>
						<string>source</string>
						<key>required</key>
						<string>0</string>
						<key>type</key>
						<string>0</string>
						<key>uuid</key>
						<string>1</string>
					</dict>
					<key>2</key>
					<dict>
						<key>default value</key>
						<false/>
						<key>name</key>
						<string>CheckedForUserDefaultShell</string>
						<key>required</key>
						<string>0</string>
						<key>type</key>
						<string>0</string>
						<key>uuid</key>
						<string>2</string>
					</dict>
					<key>3</key>
					<dict>
						<key>default value</key>
						<string></string>
						<key>name</key>
						<string>COMMAND_STRING</string>
						<key>required</key>
						<string>0</string>
						<key>type</key>
						<string>0</string>
						<key>uuid</key>
						<string>3</string>
					</dict>
					<key>4</key>
					<dict>
						<key>default value</key>
						<string>/bin/sh</string>
						<key>name</key>
						<string>shell</string>
						<key>required</key>
						<string>0</string>
						<key>type</key>
						<string>0</string>
						<key>uuid</key>
						<string>4</string>
					</dict>
				</dict>
				<key>isViewVisible</key>
				<true/>
				<key>location</key>
				<string>309.000000:253.000000</string>
				<key>nibPath</key>
				<string>/System/Library/Automator/Run Shell Script.action/Contents/Resources/English.lproj/main.nib</string>
			</dict>
			<key>isViewVisible</key>
			<true/>
		</dict>
	</array>
	<key>connectors</key>
	<dict/>
	<key>workflowMetaData</key>
	<dict>
		<key>serviceApplicationBundleID</key>
		<string>com.apple.finder</string>
		<key>serviceApplicationPath</key>
		<string>/System/Library/CoreServices/Finder.app</string>
		<key>serviceInputTypeIdentifier</key>
		<string>com.apple.Automator.fileSystemObject</string>
		<key>serviceOutputTypeIdentifier</key>
		<string>com.apple.Automator.nothing</string>
		<key>serviceProcessesInput</key>
		<integer>0</integer>
		<key>workflowTypeIdentifier</key>
		<string>com.apple.Automator.servicesMenu</string>
	</dict>
</dict>
</plist>
106 docs/composite.md Normal file
@@ -0,0 +1,106 @@
# Defining composite glyphs

A composite glyph is one that is defined in terms of one or more other glyphs.
The composite definition syntax described in this document is a subset of the [GlyphConstruction](https://github.com/typemytype/GlyphConstruction) syntax used by Robofont, but with extensions for additional functionality.
Composites defined in this syntax can be applied to a UFO using the [psfbuildcomp](scripts.md#psfbuildcomp) tool.

# Overview

Each composite definition is on a single line and has the format:
```
<result> = <one or more glyphs> <parameters> # comment
```
where
- `<result>` is the name of the composite glyph being constructed
- `<one or more glyphs>` represents one or more glyphs used in the construction of the composite glyph, with optional glyph-level parameters described below
- `<parameters>` represents adjustments made to the `<result>` glyph, using the following optional parameters:
  - at most one of the two following options:
    - `^x,y` (where `x` is the amount added to the left margin and `y` is the amount added to the right margin)
    - `^a` (where `a` is the advance width of the resulting glyph)
  - `|usv` where `usv` is the 4-, 5- or 6-digit hex Unicode scalar value assigned to the resulting glyph
  - `!colordef` (currently ignored by SIL tools)
  - `[key1=value1;key2=value2;...]` to add one or more `key=value` pairs (representing SIL-specific properties documented below) to the resulting glyph
- `# comment` is an optional comment (everything from the `#` to the end of the line is ignored)

In addition, a line that begins with `#` is considered a comment and is ignored (as are blank lines).

If `[key=value]` properties for the resulting glyph are specified but no `|usv` is specified, then a `|` must be included before the `[`.
This ensures that the properties are applied to the resulting composite glyph and not to the last glyph in the composite specification.
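The single-line grammar above can be illustrated with a rough Python sketch (a hypothetical helper, not part of pysilfont, and deliberately much simpler than the real parser) that strips the trailing comment and splits a definition into its result name and the rest:

```python
def split_composite_line(line):
    """Split a composite definition line into (result, definition, comment).

    Returns None for blank lines and whole-line comments. Simplified:
    it assumes '#' only introduces a comment and does not handle the
    glyph-level parameters (@AP, ^, |usv, [key=value]) individually.
    """
    stripped = line.strip()
    if not stripped or stripped.startswith("#"):
        return None  # blank line or whole-line comment
    # Everything after '#' is a comment; everything before '=' is the result.
    definition, _, comment = stripped.partition("#")
    result, _, rest = definition.partition("=")
    return result.strip(), rest.strip(), comment.strip()

print(split_composite_line("Aacute = A + acute@U | 00C1  # A with acute"))
# → ('Aacute', 'A + acute@U | 00C1', 'A with acute')
```

A full implementation would go on to tokenise the definition part into component glyphs, attachment points, and parameters as described above.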
# Examples

In the following examples,
- `glyph` represents the resulting glyph being constructed
- `base`, `base1`, `base2` represent base glyphs being used in the construction
- `diac`, `diac1`, `diac2` represent diacritic glyphs being used in the construction
- `AP` represents an attachment point (also known as an anchor)

## glyph = base
```
Minus = Hyphen
```
This defines one glyph (`Minus`) in terms of another (`Hyphen`), without having to duplicate the contours used to create the shape.

## glyph = base1 & base2
```
ffi = f & f & i
```
This construct causes a glyph to be composed by aligning the origin of each successive base with the origin+advancewidth of the previous base. Unless overridden by the `^` parameter, the left sidebearing of the composite is that of the first base, the right sidebearing is that of the last, and the advance width of the composite is the sum of the advance widths of all the component base glyphs. [Unsure how this changes for right-to-left scripts]

## glyph = base + diac@AP
```
Aacute = A + acute@U
```
The resulting composite has the APs of the base(s), minus any APs used to attach the diacritics, plus the APs of the diacritics (adjusted for any displacement, as would be the case for stacked diacritics). In this example, glyph `acute` attaches to glyph `A` at AP `U` on `A` (by default using the `_U` AP on `acute`). The `U` AP from `A` is removed (as is the `_U` AP on the `acute`) and the `U` AP from `acute` is added.

Unless overridden by the `^` parameter, the advance width of the resulting composite is that of the base.

## glyph = base + diac1@AP + diac2@APonpreviousdiac
```
Ocircumflexacute = O + circumflex@U + acute@U
```
The acute is positioned according to the `U` AP on the immediately preceding glyph (`circumflex`), not the `U` AP on the base (`O`).

## glyph = base + diac@anyglyph:anyAP

The syntax allows you to express diacritic positioning using any arbitrary AP on any arbitrary glyph in the font, for example:
```
barredOacute = barredO + acute@O:U # not supported
```
Current SIL tools, however, only support an `anyglyph` that appears earlier in the composite definition, so the above example is **not** supported.

This syntax, however, makes it possible to override the default behavior of attaching to the immediately preceding glyph, so the following is supported (since the `@O:L` refers to the glyph `O` which appears earlier in the definition):
```
Ocircumflexdotaccent = O + circumflex@U + dotaccent@O:L
```
The `@O:L` causes the `dotaccent` diacritic to attach to the base glyph `O` (rather than the immediately preceding `circumflex` glyph) using the `L` AP on the glyph `O` and the `_L` AP on the glyph `dotaccent`.

## glyph = base + diac@AP | usv
```
Aacute = A + acute@U | 00C1
```
USV is always given as a four- to six-digit hexadecimal number with no leading "U+" or "0x".

## glyph = base + diac@AP ^ leftmarginadd,rightmarginadd
```
itilde = i + tilde@U ^ 50,50
```
This adds the values (in design units) to the left and right sidebearings. Note that these values could be negative.

# SIL Extensions

SIL extensions are all expressed as property lists (`key=value`) separated by semicolons and enclosed in square brackets: `[key1=value1;key2=value2]`.

- Properties that apply to a glyph being used in the construction of the composite appear after that glyph.
- Properties that apply to the resulting composite glyph appear after `|` (either that of the `| usv` or a single `|` if no `| usv` is present).
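For example, a hypothetical definition combining the `with` and `shift` glyph properties (both documented below) within a single bracket might look like this:

```
Aacute = A + acute@Ucap[with=_U;shift=30,0] | 00C1
```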
## glyph = base + diac@AP[with=AP]

```
Aacute = A + acute@Ucap[with=_U]
```

The `with` property can be used to override the default AP/\_AP pairing convention. Here the `_U` attachment point on the `acute` glyph is paired with the `Ucap` attachment point on the glyph `A`.
## glyph = base + diac@AP[shift=x,y]

```
Aacute = A + acute@U[shift=100,100]
```

By applying the `shift` property to the `acute` glyph, the position of the diacritic relative to the base glyph `A` is changed.
204
docs/docs.md
Normal file
@ -0,0 +1,204 @@
# Pysilfont - utilities for font development

Pysilfont is a collection of tools to support font development, with an emphasis on [UFO](#ufo-support-in-pysilfont)-based workflows.

In addition to the UFO utilities, there is also support for testing using [FTML](#font-test-markup-language) and [Composite Definitions](#composite-definitions).

Some scripts are written specifically to fit in with the approaches recommended in [Font Development Best Practices](https://silnrsi.github.io/FDBP/en-US/index.html).

# Documentation

Documentation is held in the following documents:

- docs.md: This document - the main document for users
- [scripts.md](scripts.md): User documentation for all command-line tools and other scripts
- [technical.md](technical.md): Technical details for those wanting to write scripts or carry out other development tasks
- Other sub-documents, with links from the above

Installation instructions are in [README.md](../README.md).

# Scripts and commands

Many Pysilfont scripts are installed to be used as command-line tools, and these are all listed, with usage instructions, in [scripts.md](scripts.md). This also has details of some other example python scripts.

All scripts work using a standard framework designed to give users a consistent interface across scripts, and common features of these scripts are described in the following sections, so the **documentation below** needs to be read in conjunction with that in [scripts.md](scripts.md).
## Standard command line options

Nearly all scripts support these:

- `-h, --help`
  - Basic usage help for the command
- `-h d`
  - Display -h info with added info about default values
- `-h p`
  - Display information about parameters (see -p, --params below)
- `-q, --quiet`
  - Quiet mode - only display severe errors. See reporting below
- `-l LOG, --log LOG`
  - Log file name (if not using the default name). By default logs will go in a logs subdirectory. If just a directory path is given, the log will go in there using the default name.
- `-p PARAMS, --params PARAMS`
  - Other parameters - see below

The individual script documentation in [scripts.md](scripts.md) should indicate if any of these don't apply to a particular script.

(There is also a hidden option, --nq, which overrides -q for use with automated systems like [smith](https://github.com/silnrsi/smith) that run scripts with -q by default.)
# Parameters

There are many parameters that can be set to change the behaviour of scripts, either on the command line (using -p) or via a config file.

To set a parameter on the command line, use `-p <param name>=<param value>`, eg

```
psfnormalize font.ufo -p scrlevel=w
```

-p can be used multiple times on a single command.
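The repeated `-p name=value` pattern can be sketched generically with Python's argparse (a minimal illustration of the idea, not Pysilfont's actual implementation):

```python
import argparse

def parse_params(argv):
    """Parse repeated -p name=value options into a dict (illustrative only)."""
    parser = argparse.ArgumentParser()
    parser.add_argument("font")
    parser.add_argument("-p", "--params", action="append", default=[],
                        metavar="NAME=VALUE")
    args = parser.parse_args(argv)
    params = {}
    for item in args.params:
        name, _, value = item.partition("=")
        params[name] = value
    return args.font, params

font, params = parse_params(["font.ufo", "-p", "scrlevel=w", "-p", "loglevel=i"])
# params == {"scrlevel": "w", "loglevel": "i"}
```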
Commonly used command-line parameters include:

- scrlevel, loglevel
  - Set the screen/logfile reporting level from increasingly verbose options:
    - E - Errors
    - P - Progress (default for scrlevel)
    - W - Warnings (default for loglevel)
    - I - Information
    - V - Verbose
- checkfix (UFOs only)
  - Validity tests when opening UFOs. Choice of None, Check or Fix, with default Check
  - See the description of check & fix under [normalization](#normalization)

For a full list of parameters and how to set them via a config file (or in a UFO font) see [parameters.md](parameters.md).
## Default values

Most scripts have defaults for file names and other arguments - except for the main file the script is running against.

### Font/file name defaults

Once the initial input file (eg the input font) has been given, most other font and file names will have defaults based on it.

This applies to other input font names, output font names, input file names and output file names, and is done to minimise retyping repeated information like the path the files reside in. For example, simply using:

```
psfsetpsnames path/font.ufo
```

will:

- open (and update) `path/font.ufo`
- backup the font to `path/backups/font.ufo.nnn~`
- read its input from `path/font_psnames.csv`
- write its log to `path/logs/font_psnames.log`

If only part of a file name is supplied, other parts will default. So if only "test" is supplied for the output font name, the font would be output to `path/test.ufo`.

If a full file name is supplied, but no path, the current working directory will be used, so if "test.ufo" is supplied it won't have `path/` added.
### Other defaults

Other parameters will just have standard default values.

### Displaying defaults for a command

Use `-h d` to see what the defaults are for a given command. For example,

```
psfsetpsnames -h d
```

will output its help text with the following appended:

```
Defaults for parameters/options

Font/file names
  -i  _PSnames.csv
  -l  _PSnames.log
```

If the default value starts with "\_" (as with \_PSnames.csv above) then the input file name will be prepended to the default value; otherwise just the default value will be used.
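That prepending rule can be sketched as follows (an illustration of the documented rule, with a hypothetical helper name; not Pysilfont's actual code):

```python
from pathlib import Path

def resolve_default(input_font, default):
    """Apply a default file name relative to the input font (illustrative only).

    If the default starts with "_", the input file's base name is prepended;
    otherwise the default is used as-is. Either way the result lands in the
    input font's directory.
    """
    font = Path(input_font)
    if default.startswith("_"):
        name = font.stem + default        # eg font + _PSnames.csv
    else:
        name = default
    return str(font.parent / name)

print(resolve_default("path/font.ufo", "_PSnames.csv"))  # path/font_PSnames.csv (POSIX)
```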
## Reporting

Most scripts support standardised reporting (logging), both to screen and to a log file, with different levels of reporting available. Levels are set via the loglevel and scrlevel parameters, which can be set to one of:

- E - Errors
- P - Progress - reports basic progress messages and all errors
- W - Warning - as P, but with warning messages as well
- I - Info - as W, but with information messages as well
- V - Verbose - even more messages!

For most scripts these default to W for loglevel and P for scrlevel, and can be set using -p (eg to set screen reporting to verbose use -p scrlevel=v).

-q, --quiet sets quiet mode, where all normal screen messages are suppressed. However, if there are any errors during the script execution, a single message is output on completion listing the counts for errors and warnings.
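The E/P/W/I/V hierarchy maps naturally onto Python's standard logging machinery. A rough sketch of the idea (illustrative only, not Pysilfont's logger; the PROGRESS level value is an assumption chosen so that a threshold of P shows progress and errors but filters warnings):

```python
import logging

# Hypothetical mapping from level letters to stdlib logging thresholds.
PROGRESS = 35                     # between WARNING (30) and ERROR (40)
logging.addLevelName(PROGRESS, "PROGRESS")
LEVELS = {"E": logging.ERROR, "P": PROGRESS, "W": logging.WARNING,
          "I": logging.INFO, "V": logging.DEBUG}

def make_logger(scrlevel="P", loglevel="W", logfile="script.log"):
    """Build a logger whose screen and file handlers filter independently."""
    logger = logging.getLogger("demo")
    logger.setLevel(logging.DEBUG)        # let the handlers do the filtering
    screen = logging.StreamHandler()
    screen.setLevel(LEVELS[scrlevel])
    logger.addHandler(screen)
    logfh = logging.FileHandler(logfile)
    logfh.setLevel(LEVELS[loglevel])
    logger.addHandler(logfh)
    return logger
```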
## Backups for fonts

If the output font name is the same as the input font name (which is the default behaviour for most scripts), then a backup is made of the original font prior to updating it.

By default, the last 5 backups are kept in a sub-directory called "backups". These defaults can be changed using the following parameters:

- `backup` - if set to 0, no backups are done
- `backupdir` - alternative directory for backups
- `backupkeep` - number of backups to keep
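A generic sketch of that keep-the-last-N rotation (an illustration of the documented behaviour only; Pysilfont's actual backup naming and pruning logic may differ):

```python
import shutil
from pathlib import Path

def backup_font(font_path, backupdir="backups", backupkeep=5):
    """Copy font_path to backupdir/<name>.<n>~, keeping the newest backupkeep copies."""
    font = Path(font_path)
    bdir = font.parent / backupdir
    bdir.mkdir(exist_ok=True)

    def num(p):                          # "font.ufo.3~" -> 3
        return int(p.name[:-1].rsplit(".", 1)[1])

    existing = sorted(bdir.glob(font.name + ".*~"), key=num)
    n = num(existing[-1]) + 1 if existing else 1
    dest = bdir / "{}.{}~".format(font.name, n)
    copy = shutil.copytree if font.is_dir() else shutil.copy2   # a UFO is a folder
    copy(font, dest)
    for old in (existing + [dest])[:-backupkeep]:               # prune the oldest
        shutil.rmtree(old) if old.is_dir() else old.unlink()
    return dest
```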
# UFO support in Pysilfont

With some limitations, all UFO scripts in Pysilfont should work with UFO 2 or UFO 3 source files - and can convert from one format to the other.

In addition, most scripts will output in a normalized form, designed to work with source control systems. Most aspects of the normalization can be set by parameters, so projects are not forced to use Pysilfont's default normalization.

The simplest script is psfnormalize, which will normalize a UFO (and optionally convert between UFO 2 and UFO 3 if -v is used to specify the alternative version).

Note that other scripts also normalize, so psfnormalize is usually only needed after fonts have been processed by external font tools.

## Normalization

By default scripts normalize the UFOs and also run various check & fix tests to ensure the validity of the UFO metadata.

Default normalization behaviours include:

- XML formatting
  - Use 2 spaces as indents
  - Don't indent the ``<dict>`` for plists
  - Sort all ``<dict>``s in ascending key order
  - Where values can be "integer or float", store integer values as ``<integer>``
  - Limit ``<real>`` decimal precision to 6
  - For attributes identified as numeric, limit decimal precision to 6
- glif file names - use the UFO 3 suggested algorithm, even for UFO 2 fonts
- Order glif elements and attributes in the order they are described in the UFO spec

Most of the above can be overridden by [parameters](#parameters).
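As a flavour of what key-order normalization means, here is a minimal sketch that sorts the key/value pairs of a plist `<dict>` using Python's standard library (illustrative only; Pysilfont has its own XML writer with more controls):

```python
import xml.etree.ElementTree as ET

def sort_plist_dict(dict_el):
    """Reorder the <key>/<value> pairs of a plist <dict> into ascending key order."""
    children = list(dict_el)
    pairs = [(children[i], children[i + 1]) for i in range(0, len(children), 2)]
    pairs.sort(key=lambda kv: kv[0].text)
    for child in children:
        dict_el.remove(child)
    for key_el, val_el in pairs:
        dict_el.append(key_el)
        dict_el.append(val_el)

plist = ET.fromstring(
    "<plist><dict>"
    "<key>unitsPerEm</key><integer>2048</integer>"
    "<key>ascender</key><integer>1600</integer>"
    "</dict></plist>")
sort_plist_dict(plist.find("dict"))
print([k.text for k in plist.find("dict").iter("key")])  # ['ascender', 'unitsPerEm']
```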
The check & fix tests are based on [Font Development Best Practices](https://silnrsi.github.io/FDBP/en-US/index.html) and include:

- fontinfo.plist
  - Required fields
  - Fields to be deleted
  - Fields to be constructed from other fields
  - Specific recommended values for some fields
- lib.plist
  - Required fields
  - Recommended values
  - Fields that should not be present

The check & fix behaviour can be controlled by [parameters](#parameters) - currently just the checkfix parameter, which defaults to 'check' (just report what is wrong) but can be set to 'fix' to fix what it can, or 'none' for no checking.
## Known limitations

The following are known limitations that will be addressed in the future:

- UFO 3 specific folders (data and images) are preserved, even if present in a UFO 2 font.
- Converting from UFO 3 to UFO 2 only handles data that has a place in UFO 2, but this does include converting UFO 3 anchors to the standard way of handling them in UFO 2.
- If a project uses non-standard files within the UFO folder, they are deleted.
# Font Test Markup Language

Font Test Markup Language (FTML) is a file format for specifying the content and structure of font test data. It is designed to support complex test data, such as strings with specific language tags or data that should be presented with certain font features activated. It also allows for indication of which portions of test data are in focus and which are only present to provide context.

FTML is described in the [FTML github project](https://github.com/silnrsi/ftml).

Pysilfont includes some python scripts for working with FTML, and a python library, [ftml.py](technical.md#ftmlpy), so that new scripts can be developed to read and write FTML files.

# Composite definitions

Pysilfont includes tools for automatically adding composite glyphs to fonts. The syntax used for composite definitions is a subset of that used by RoboFont, plus some extensions - see [Composite Tools](https://silnrsi.github.io/FDBP/en-US/Composite_Tools.html) in the Font Development Best Practices documentation for more details.

The current tools (psfbuildcomp, psfcomp2xml and psfxml2comp) are documented in [scripts.md](scripts.md).

The tools are based on a python module, [comp.py](technical.md#comppy).

# Contributing to the project

Pysilfont is developed and maintained by SIL International's [Writing Systems Technology team](https://software.sil.org/wstech/), though contributions from anyone are welcome. Pysilfont is copyright (c) 2014-2017 [SIL International](https://www.sil.org) and licensed under the [MIT license](https://en.wikipedia.org/wiki/MIT_License). The project is hosted at [https://github.com/silnrsi/pysilfont](https://github.com/silnrsi/pysilfont).
236
docs/examples.md
Normal file
@ -0,0 +1,236 @@
# Pysilfont example scripts

In addition to the main pysilfont [scripts](scripts.md), there are many further scripts under pysilfont/examples and its sub-directories.

They are not maintained in the same way as the main scripts, and come in many categories, including:

- Scripts under development
- Examples of how to do things
- Deprecated scripts
- Left-overs from previous development plans!

Note - all FontForge-based scripts need updating, since FontForge (as "FF") is no longer a supported tool for execute().

Some are documented below.
## Table of scripts

| Command | Status | Description |
| ------- | ------ | ----------- |
| [accesslibplist.py](#accesslibplist) | ? | Demo script for accessing fields in lib.plist |
| [chaindemo.py](#chaindemo) | ? | Demo of how to chain calls to multiple scripts together |
| [ffchangeglyphnames](#ffchangeglyphnames) | ? | Update glyph names in a ttf font based on csv file |
| [ffcopyglyphs](#ffcopyglyphs) | ? | Copy glyphs from one font to another, without using ffbuilder |
| [ffremovealloverlaps](#ffremovealloverlaps) | ? | Remove overlap on all glyphs in a ttf font |
| [FFmapGdlNames.py](#ffmapgdlnames) | ? | Write mapping of graphite names to new graphite names |
| [FFmapGdlNames2.py](#ffmapgdlnames2) | ? | Write mapping of graphite names to new graphite names |
| [FLWriteXml.py](#flwritexml) | ? | Outputs attachment point information and notes as XML file for TTFBuilder |
| [FTaddEmptyOT.py](#ftaddemptyot) | ? | Add empty OpenType tables to ttf font |
| [FTMLnorm.py](#ftmlnorm) | ? | Normalize an FTML file |
| [psfaddGlyphDemo.py](#psfaddglyphdemo) | ? | Demo script to add a glyph to a UFO font |
| [psfexpandstroke.py](#psfexpandstroke) | ? | Expands an unclosed UFO stroke font into monoline forms with a fixed width |
| [psfexportnamesunicodesfp.py](#psfexportnamesunicodesfp) | ? | Outputs an unsorted csv file containing the names of all the glyphs in the default layer |
| [psfgenftml.py](#psfgenftml) | ? | Generate ftml tests from glyph_data.csv and UFO |
| [psftoneletters.py](#psftoneletters) | ? | Creates Latin script tone letters (pitch contours) |
| [xmlDemo.py](#xmldemo) | ? | Demo script for use of ETWriter |

---
#### accesslibplist

Usage: **`python accesslibplist.py ...`**

_([Standard options](docs.md#standard-command-line-options) may also apply)_

Demo script for accessing fields in lib.plist

---
#### chaindemo

Usage: **`python chaindemo.py ...`**

_([Standard options](docs.md#standard-command-line-options) also apply)_

Demo of how to chain calls to multiple scripts together. Running

`python chaindemo.py infont outfont --featfile feat.csv --uidsfile uids.csv`

will run execute() against psfnormalize, psfsetassocfeat and psfsetassocuids, passing the font, parameters and logger objects from one call to the next. So:

- the font is only opened once and written once
- there is a single log file produced

---
#### ffchangeglyphnames

Usage: **`ffchangeglyphnames [-i INPUT] [--reverse] ifont [ofont]`**

_([Standard options](docs.md#standard-command-line-options) also apply)_

Update the glyph names in a ttf font based on a csv file.

Example usage:

```
ffchangeglyphnames -i glyphmap.csv font.ttf
```

will update the glyph names in the font based on the mapping file glyphmap.csv.

If \-\-reverse is used, it changes names in reverse.

---
#### ffcopyglyphs

Usage: **`ffcopyglyphs -i INPUT [-r RANGE] [--rangefile RANGEFILE] [-n NAME] [--namefile NAMEFILE] [-a] [-f] [-s SCALE] ifont [ofont]`**

_([Standard options](docs.md#standard-command-line-options) also apply)_

_This section is Work In Progress!_

optional arguments:

```
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        Font to get glyphs from
  -r RANGE, --range RANGE
                        StartUnicode..EndUnicode no spaces, e.g. 20..7E
  --rangefile RANGEFILE
                        File with USVs e.g. 20 or a range e.g. 20..7E or both
  -n NAME, --name NAME  Include glyph named name
  --namefile NAMEFILE   File with glyph names
  -a, --anchors         Copy across anchor points
  -f, --force           Overwrite existing glyphs in the font
  -s SCALE, --scale SCALE
                        Scale glyphs by this factor
```

---
#### ffremovealloverlaps

Usage: **`ffremovealloverlaps ifont [ofont]`**

_([Standard options](docs.md#standard-command-line-options) also apply)_

Remove overlap on all glyphs in a ttf font

---
#### FFmapGdlNames

Usage: **`python FFmapGdlNames.py ...`**

_([Standard options](docs.md#standard-command-line-options) may also apply)_

Write mapping of graphite names to new graphite names based on:

- two ttf files
- the gdl files produced by makeGdl run against those fonts (these could be from different versions of makeGdl)
- a csv mapping glyph names used in the original ttf to those in the new font

---
#### FFmapGdlNames2

Usage: **`python FFmapGdlNames2.py ...`**

_([Standard options](docs.md#standard-command-line-options) may also apply)_

Write mapping of graphite names to new graphite names based on:

- an original ttf font
- the gdl file produced by makeGdl when the original font was produced
- a csv mapping glyph names used in the original ttf to those in the new font
- pysilfont's gdl library - so it assumes pysilfont's makeGdl will be used with the new font

---
#### FLWriteXml

Usage: **`python FLWriteXml.py ...`**

_([Standard options](docs.md#standard-command-line-options) may also apply)_

Outputs attachment point information and notes as an XML file for TTFBuilder

---
#### FTaddEmptyOT

Usage: **`python FTaddEmptyOT.py ...`**

_([Standard options](docs.md#standard-command-line-options) may also apply)_

Add empty OpenType tables to a ttf font

---
#### FTMLnorm

Usage: **`python FTMLnorm.py ...`**

_([Standard options](docs.md#standard-command-line-options) may also apply)_

Normalize an FTML file

---
#### psfaddGlyphDemo

Usage: **`python psfaddGlyphDemo.py ...`**

_([Standard options](docs.md#standard-command-line-options) may also apply)_

Demo script to add a glyph to a UFO font

---
#### psfexpandstroke

Usage: **`psfexpandstroke infont outfont expansion`**

_([Standard options](docs.md#standard-command-line-options) also apply)_

Expands the outlines (typically unclosed) in a UFO stroke font into monoline forms with a fixed width.

The following example expands the strokes in a UFO font `SevdaStrokeMaster-Regular.ufo` by 13 units on both sides, giving them a total width of 26 units, and writes the result to `Sevda-Regular.ufo`:

```
psfexpandstroke SevdaStrokeMaster-Regular.ufo Sevda-Regular.ufo 13
```

Note that this only expands the outlines - it does not remove any resulting overlap.

---
#### psfexportnamesunicodesfp

Usage: **`python psfexportnamesunicodesfp.py ...`**

_([Standard options](docs.md#standard-command-line-options) may also apply)_

Outputs an unsorted csv file containing the names of all the glyphs in the default layer and their primary unicode values.

Format: name,usv

---
#### psfgenftml

Usage: **`python psfgenftml.py ...`**

_([Standard options](docs.md#standard-command-line-options) may also apply)_

Generate ftml tests from glyph_data.csv and UFO

---
#### psftoneletters

Usage: **`psftoneletters infont outfont`**

_([Standard options](docs.md#standard-command-line-options) also apply)_

This uses the parameters from the UFO lib.plist org.sil.lcg.toneLetters key to create Latin script tone letters (pitch contours).

Example usage:

```
psftoneletters Andika-Regular.ufo Andika-Regular.ufo
```

---
#### xmlDemo

Usage: **`python xmlDemo.py ...`**

_([Standard options](docs.md#standard-command-line-options) may also apply)_

Demo script for use of ETWriter
150
docs/fea2_proposal.md
Normal file
@ -0,0 +1,150 @@
# Proposed Extensions to FEA

This document describes a macro extension to FEA that will enable it to grow and support more powerful OpenType descriptions. The proposal is presented as various syntax extensions to the core FEA syntax.

## Functions

Currently FEA makes no use of parentheses. This may be a conscious decision to reserve these for later use. Such parentheses lend themselves perfectly to the addition of macro functions to the FEA syntax:
```
function     = funcname '(' (parameter (',' parameter)*)? ')'

funcname     = /[A-Za-z_.][A-Za-z_0-9.]*/
parameter    = glyph | glyphlist | classref | value_record | function
             | ('"' string '"') | ("{" tokens "}")
tokens       = noncurlytoken* | ("{" tokens "}")
glyphlist    = '[' glyph* ']'
classref     = '@' classname
value_record = number | '<' chars '>'
```
A function call consists of a function name, a parenthesised parameter list (which may be empty), and an optional following token list enclosed in braces. The token list is just that, an unparsed sequence of lexical tokens. The result of the function is also an unparsed sequence of lexical tokens that are then parsed and processed as if the function were replaced by a textual representation of the tokens.
The parameters are parsed, so for example a classref would expand to its resulting list of glyphs. Likewise a function call result would be parsed to its single semantic item; it is not parsed as a token list. A value_record is the widest interpretation of a value record, including an anchor. Basically it is either a number or anything between < and >.

A function statement is the use of a function result as a statement in the FEA syntax.

The FEA syntax defines nothing more than that functions exist and how they may be referenced. It is up to a particular FEA processor to supply the functions and to execute them to resolve them to a token list. It is also up to the particular FEA processor to report an error or otherwise handle an unknown function reference. As such this is similar to other programming languages, where the language itself says nothing about what functions exist or what they do. That is for libraries.

There is one exception. The `include` statement in the core FEA syntax follows the same syntax, apart from the missing quotation marks around the filename. As such, `include` is not available for use as a function name.
### Sample Implementation

In this section we give a sample implementation based on the FEA library in fonttools.

Functions are kept in module-style namespaces, much like a simplified python module system. A function name then typically consists of a `modulename.funcname`. The top level module is reserved for the fea processor itself. The following functions are defined in the top level module (i.e. with no modulename):
#### load

The `load` function takes a path to a file containing python definitions. Whether this python code is preprocessed for security purposes or not is an open question. It also takes a modulename as its second parameter.

```
load("path/to/pythonfile.py", "mymodule")
```

The function returns an empty token string but has the effect of loading all the functions defined in the python file as those functions prefixed by the modulename, as described above.
#### set

This sets a variable to a token list. Variables are described in a later syntax extension. The first parameter is the name of a variable. The token list is then used for the variable expansion.

```
set("distance") { 30 };
```

Other non-top-level modules may be supplied with the core FEA processing module.
#### core.refilter

This function is passed a glyphlist (or a classref) and a regular expression. The result is a glyphlist consisting of all the glyphs whose name matches the regular expression. For example:

```
@csc = core.refilter("\.sc$", @allglyphs)
```
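The intended behaviour of `core.refilter` can be sketched in Python (an illustration of the semantics only, not part of any FEA processor):

```python
import re

def refilter(pattern, glyphs):
    """Return the glyphs whose names match the regular expression."""
    regex = re.compile(pattern)
    return [g for g in glyphs if regex.search(g)]

print(refilter(r"\.sc$", ["a", "a.sc", "b", "b.sc", "fred"]))  # ['a.sc', 'b.sc']
```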
#### core.pairup

This function is passed two classnames, a regular expression and a glyph list. The result is two class definitions for the two classnames. One class consists of all the glyphs which match the regular expression. The other class is a corresponding list of glyphs whose names are the matching glyph names with the matched text removed. If no such glyph exists in the font, then neither the name nor the glyph matching the regular expression is included. The resulting classes may therefore be used in a simple substitution. For example:

```
core.pairup("cnosc", "csc", "\.sc$", [a.sc b.sc fred.sc]);
lookup smallcap {
    sub @cnosc by @csc;
} smallcap;
```

Assuming `fred.sc` exists but `fred` does not, this is equivalent to:

```
@cnosc = [a b];
@csc = [a.sc b.sc];
lookup smallcap {
    sub @cnosc by @csc;
} smallcap;
```
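Likewise, the pairing rule of `core.pairup` can be sketched in Python (illustrative only; `font_glyphs` stands in for the set of glyphs actually present in the font):

```python
import re

def pairup(pattern, glyphs, font_glyphs):
    """Split glyphs matching `pattern` into aligned (base, variant) lists.

    A variant is kept only if its base name (the variant name with the
    matched text removed) exists in font_glyphs.
    """
    regex = re.compile(pattern)
    bases, variants = [], []
    for g in glyphs:
        if regex.search(g):
            base = regex.sub("", g)
            if base in font_glyphs:
                bases.append(base)
                variants.append(g)
    return bases, variants

print(pairup(r"\.sc$", ["a.sc", "b.sc", "fred.sc"],
             {"a", "b", "a.sc", "b.sc", "fred.sc"}))
# (['a', 'b'], ['a.sc', 'b.sc'])
```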
## Variables

A further extension to the FEA syntax is to add a simple variable expansion. A variable expands to a token list. Since variables may occur anywhere, they need a syntactic identifier. The proposed identifier is an initial `$`.

```
variable = '$' funcname
```

Variables are expanded at the point of expansion. Since expansion is recursive, the variable may contain a function call which expands when the variable expands.

There is no syntax for defining a variable. This is unnatural and may be revisited if a suitable syntax can be found. Definition is therefore a processor-specific activity.

It is undecided whether undefined variables expand to an empty token list or an error.
559
docs/feaextensions.md
Normal file
@ -0,0 +1,559 @@
# FEA Extensions Current

This document describes the functionality of `psfmakefea` and lists the extensions to fea that are currently supported.

<!-- TOC -->

- [Generated Classes](#generated-classes)
    - [Variant glyph classes](#variant-glyph-classes)
    - [Ligatures](#ligatures)
- [Statements](#statements)
    - [baseclass](#baseclass)
        - [Cursive Attachment](#cursive-attachment)
        - [Mark Attachment](#mark-attachment)
        - [Ligature Attachment](#ligature-attachment)
    - [ifinfo](#ifinfo)
    - [ifclass](#ifclass)
    - [do](#do)
        - [SubStatements](#substatements)
            - [for](#for)
            - [let](#let)
            - [forlet](#forlet)
            - [if](#if)
        - [Examples](#examples)
            - [Simple calculation](#simple-calculation)
            - [More complex calculation](#more-complex-calculation)
            - [Right Guard](#right-guard)
            - [Left Guard](#left-guard)
            - [Left Kern](#left-kern)
            - [Myanmar Great Ya](#myanmar-great-ya)
            - [Advance for Ldot on U](#advance-for-ldot-on-u)
    - [def](#def)
    - [python support](#python-support)
    - [kernpairs](#kernpairs)
- [Capabilities](#capabilities)
    - [Permit classes on both sides of GSUB type 2 (multiple) and type 4 (ligature) lookups](#permit-classes-on-both-sides-of-gsub-type-2-multiple-and-type-4-ligature-lookups)
        - [Processing](#processing)
        - [Example](#example)
    - [Support classes in alternate lookups](#support-classes-in-alternate-lookups)
- [groups.plist](#groupsplist)

<!-- /TOC -->
## Generated Classes

`psfmakefea` simplifies the hand creation of fea code by analysing the glyphs in the input font, particularly with regard to their names. Names are assumed to conform to the Adobe Glyph List conventions regarding `_` for ligatures and `.` for glyph variants.

### Variant glyph classes

If a font contains a glyph with a final variant (there may be more than one variant listed for a glyph, in sequence) and also a glyph without that final variant, then `psfmakefea` will create two classes based on the variant name: @c\__variant_ contains the glyph with the variant and @cno\__variant_ contains the glyph without the variant. The two lists are aligned such that a simple class-based replacement will change all the glyphs without the variant into ones with the variant.

For example, U+025B is an open e that occurs in some African languages. Consider a font that contains the glyphs `uni025B` and `uni025B.smcp`, a small caps version of the glyph. `psfmakefea` will create two classes:

```
@c_smcp = [uni025B.smcp];
@cno_smcp = [uni025B];
```
In addition, suppose this font contains two other glyphs: `uni025B.alt`, an alternative shape for `uni025B`, and `uni025B.alt.smcp`, the small caps version of the alternate. `psfmakefea` will then create the following classes:

```
@c_smcp = [uni025B.smcp uni025B.alt.smcp];
@cno_smcp = [uni025B uni025B.alt];
@c_alt = [uni025B.alt];
@cno_alt = [uni025B];
```

Notice that the classes with multiple glyphs, while keeping the alignment, do not guarantee any particular order of the glyphs within one of the classes - only that the other class will align its glyph order correctly. Notice also that `uni025B.alt.smcp` does not appear in the `@c_alt` class. This latter behaviour may change.
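The pairing behind these variant classes can be sketched generically in Python (an illustration of the naming convention only, not `psfmakefea`'s actual algorithm, and without the multiple-variants-in-sequence handling):

```python
def variant_classes(glyphs):
    """Pair each glyph ending in .variant with its variant-less counterpart."""
    glyphset = set(glyphs)
    classes = {}  # variant -> (glyphs_with_variant, glyphs_without_variant)
    for g in glyphs:
        if "." not in g:
            continue
        base, variant = g.rsplit(".", 1)
        if base in glyphset:
            withv, without = classes.setdefault(variant, ([], []))
            withv.append(g)
            without.append(base)
    return classes

classes = variant_classes(["uni025B", "uni025B.smcp", "uni025B.alt", "uni025B.alt.smcp"])
print(classes["smcp"])  # (['uni025B.smcp', 'uni025B.alt.smcp'], ['uni025B', 'uni025B.alt'])
```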
### Ligatures

Unless instructed otherwise on the command line via the `-L` or `--ligmode` option, `psfmakefea` does nothing special with ligatures and treats them simply as glyphs that may take variants. There are four ligature modes. The most commonly used is `-L last`, which says to create classes based on the last components in all ligatures. Thus suppose the font from the previous section also included `uni025B_acutecomb`, the corresponding small caps `uni025B_acutecomb.smcp`, and an `acutecomb` glyph. If the command line included `-L last`, the generated classes would be:

```
@c_smcp = [uni025B.smcp uni025B.alt.smcp uni025B_acutecomb.smcp];
@cno_smcp = [uni025B uni025B.alt uni025B_acutecomb];
@c_alt = [uni025B.alt];
@cno_alt = [uni025B];
@clig_acutecomb = [uni025B_acutecomb];
@cligno_acutecomb = [uni025B];
```
And if the command line option were `-L first`, the last two lines of the above code fragment would become:
|
||||
|
||||
```
|
||||
@clig_uni025B = [uni025B_acutecomb];
|
||||
@cligno_uni025B = [acutecomb];
|
||||
```
|
||||
|
||||
while the variant classes would remain the same.
|
||||
|
||||
There are two other ligature modes: `lastcomp` and `firstcomp`. These act like `last` and `first`, but in addition they handle final variants differently. Instead of treating final variants (those on the last ligature component) as applying to the whole ligature, they treat them as applying only to the last component. To demonstrate this we need to add the nonsensical `acutecomb.smcp`. With either `-L last` or `-L first` we get the same ligature classes as above (although we would add `acutecomb.smcp` to `@c_smcp` and `acutecomb` to `@cno_smcp`). With `-L firstcomp` we get:

```
@c_smcp = [uni025B.smcp uni025B.alt.smcp acutecomb.smcp];
@cno_smcp = [uni025B uni025B.alt acutecomb];
@c_alt = [uni025B.alt];
@cno_alt = [uni025B];
@clig_uni025B = [uni025B_acutecomb uni025B_acutecomb.smcp];
@cligno_uni025B = [acutecomb acutecomb.smcp];
```
Notice the removal of `uni025B_acutecomb.smcp` from `@c_smcp`: since `-L firstcomp` considers `uni025B_acutecomb.smcp` to be a ligature of `uni025B` and `acutecomb.smcp`, there is no overall ligature `uni025B_acutecomb` with a variant `.smcp` that would fit into `@c_smcp`. If we use `-L lastcomp` we change the last two classes to:

```
@clig_acutecomb = [uni025B_acutecomb];
@cligno_acutecomb = [uni025B];
@clig_acutecomb_smcp = [uni025B_acutecomb.smcp];
@cligno_acutecomb_smcp = [uni025B];
```

Any `.` in the variant is changed to `_` in the class name.
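The difference between the modes comes down to how a name like `uni025B_acutecomb.smcp` is parsed. A rough sketch in Python, with a hypothetical helper name (the real parsing in psfmakefea is more involved):

```python
def parse_lig(name, mode):
    """Return (components, variant) for a ligature glyph name under a ligmode.

    'last'/'first': a final variant is taken to apply to the whole ligature.
    'lastcomp'/'firstcomp': a final variant stays with its component.
    """
    if mode in ("last", "first"):
        stem, _, variant = name.partition(".")
        return stem.split("_"), variant
    if mode in ("lastcomp", "firstcomp"):
        return name.split("_"), ""
    raise ValueError("unknown ligmode: %s" % mode)

# Under -L last, the whole ligature uni025B_acutecomb has an smcp variant;
# under -L lastcomp, only the last component acutecomb.smcp carries it.
```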
In our example, if the author wanted to use `-L lastcomp` or `-L firstcomp`, they might find it more helpful to rename `uni025B_acutecomb.smcp` to `uni025B.smcp_acutecomb` and remove the nonsensical `acutecomb.smcp`. This would give, for `-L lastcomp`:

```
@c_smcp = [uni025B.smcp uni025B.alt.smcp];
@cno_smcp = [uni025B uni025B.alt];
@c_alt = [uni025B.alt];
@cno_alt = [uni025B];
@clig_acutecomb = [uni025B_acutecomb uni025B.smcp_acutecomb];
@cligno_acutecomb = [uni025B uni025B.smcp];
```

and for `-L firstcomp`, the last two classes become:

```
@clig_uni025B = [uni025B_acutecomb];
@cligno_uni025B = [acutecomb];
@clig_uni025B_smcp = [uni025B.smcp_acutecomb];
@cligno_uni025B_smcp = [acutecomb];
```
## Statements

### baseclass

A baseclass is the base equivalent of a markclass. It specifies the position of a particular class of anchor points on a base, be that a true base or a mark base. The syntax is the same as for a markclass, but it is used differently in a pos rule:

```
markClass [acute] <anchor 350 0> @TOP_MARKS;
baseClass [a] <anchor 500 500> @BASE_TOPS;
baseClass b <anchor 500 750> @BASE_TOPS;

feature test {
    pos base @BASE_TOPS mark @TOP_MARKS;
} test;
```

This is the functional equivalent of:

```
markClass [acute] <anchor 350 0> @TOP_MARKS;

feature test {
    pos base [a] <anchor 500 500> mark @TOP_MARKS;
    pos base b <anchor 500 750> mark @TOP_MARKS;
} test;
```
Bear in mind that both markClasses and baseClasses can also be used as normal glyph classes and, as such, share the same namespace.

The baseClass statement addresses a high priority need: it facilitates auto-generation of attachment point information without having to create what might be redundant lookups in the wrong order.

Given a set of base glyphs with attachment point A and marks with attachment point \_A, psfmakefea will generate the following:

- baseClass A - containing all bases with attachment point A
- markClass \_A - containing all marks with attachment point \_A
- baseClass A\_MarkBase - containing all marks with attachment point A
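Generating these statements from attachment point data is mechanical. A sketch of the idea (the function name and input structure are hypothetical; psfmakefea reads APs from the font itself):

```python
def ap_classes(glyph_aps):
    """glyph_aps: dict of glyph -> {AP name: (x, y)}.

    Mark anchor names start with "_"; base anchor names do not.
    """
    lines = []
    for glyph, aps in sorted(glyph_aps.items()):
        is_mark = any(name.startswith("_") for name in aps)
        for name, (x, y) in sorted(aps.items()):
            if name.startswith("_"):
                lines.append("markClass %s <anchor %d %d> @%s;" % (glyph, x, y, name))
            elif is_mark:
                # a base AP on a mark glyph: other marks can stack on this mark
                lines.append("baseClass %s <anchor %d %d> @%s_MarkBase;" % (glyph, x, y, name))
            else:
                lines.append("baseClass %s <anchor %d %d> @%s;" % (glyph, x, y, name))
    return lines

rules = ap_classes({"a": {"A": (500, 500)},
                    "acute": {"_A": (350, 0), "A": (350, 600)}})
```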
#### Cursive Attachment

Cursive attachment involves two base anchors, one for the entry and one for the exit. We can extend the use of baseClasses to support this, by passing two baseClasses to the pos cursive statement:

```
baseClass meem.medial <anchor 700 50> @ENTRIES;
baseClass meem.medial <anchor 0 10> @EXITS;

feature test {
    pos cursive @ENTRIES @EXITS;
} test;
```

Here we have two base classes for the two anchor points. The pos cursive processing code works out which glyphs are in both classes and which are in only one of them, and generates the necessary pos cursive statement for each glyph. That is, there will be statements for the union of the two classes, but with null anchors for glyphs that are only in one class (according to which baseClass they are in). This has the added advantage that any code generating baseClasses does not need to know whether a particular attachment point is being used in a cursive attachment. That is entirely up to the user of the baseClass.
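The union-with-null-anchors expansion can be sketched in Python (an illustration of the described behaviour, not the actual implementation):

```python
def expand_cursive(entries, exits):
    """entries/exits: dict of glyph -> (x, y) anchor. Returns pos cursive rules."""
    rules = []
    for glyph in sorted(set(entries) | set(exits)):
        entry = "<anchor %d %d>" % entries[glyph] if glyph in entries else "<anchor NULL>"
        exit_ = "<anchor %d %d>" % exits[glyph] if glyph in exits else "<anchor NULL>"
        rules.append("pos cursive %s %s %s;" % (glyph, entry, exit_))
    return rules

# meem.init has only an exit-side gap, meem.fina only an entry-side gap
rules = expand_cursive({"meem.medial": (700, 50), "meem.init": (700, 40)},
                       {"meem.medial": (0, 10), "meem.fina": (0, 20)})
```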
#### Mark Attachment

The current mark attachment syntax differs from base attachment in that the base mark has to be specified explicitly: we cannot currently use a markclass as the base mark in a mark attachment lookup. We can extend mark attachment in the same way as we extend base attachment, by allowing the mark base to be a markclass. Thus:

```
pos mark @MARK_BASE_CLASS mark @MARK_MARK_CLASS;
```

would expand out to a list of mark-to-mark attachment rules.
#### Ligature Attachment

Ligature attachment involves all the attachments to a ligature in a single rule. Given a list of possible ligature glyphs, the ligature positioning rule has been extended to allow the use of baseClasses instead of the base anchor on the ligature. For a simple example:

```
baseClass a <anchor 200 200> @TOP_1;
baseClass fi <anchor 200 0> @BOTTOM_1;
baseClass fi <anchor 400 0> @BOTTOM_2;
markClass acute <anchor 0 200> @TOP;
markClass circumflex <anchor 200 0> @BOTTOM;

pos ligature [a fi] @BOTTOM_1 mark @BOTTOM @TOP_1 mark @TOP
    ligComponent @BOTTOM_2 mark @BOTTOM;
```

becomes

```
pos ligature a <anchor 200 200> mark @TOP
    ligComponent <anchor NULL>;
pos ligature fi <anchor 200 0> mark @BOTTOM
    ligComponent <anchor 400 0> mark @BOTTOM;
```
### ifinfo

This statement initiates a block, either of statements or within another block. The block is only processed if the ifinfo condition is met. ifinfo takes two parameters. The first is the name of an entry in a fontinfo.plist. The second is a string containing a regular expression that is matched against the value of that entry in the fontinfo.plist. If there is a match, the condition is considered to be met.

```
ifinfo(familyName, "Doulos") {

# statements

}
```

Notice the lack of a `;` after the block close.

ifinfo acts as a kind of macro: the test is executed in the parser, rather than everything inside the block being collected and processed later as with, say, the `do` statement. If you want to do something more complex than a regular expression test, you may need to use a `do` statement and the `info()` function.
### ifclass

This statement initiates a block, either of statements or within another block. The block is only processed if the given @class is defined and contains at least one glyph.

```
ifclass(@oddities) {

# statements

}
```

Notice the lack of a `;` after the block close.
### do

The `do` statement is a means of setting variables and repeating statement groups with variable expansion. A `do` statement is followed by various substatements that are, in effect, nested statements. The basic structure of the `do` statement is:

`do` _substatement_ _substatement_ _..._ [ `{` _statements_ `}` ]

where _statements_ is a sequence of FEA statements. Within these statements, variables may be referenced by preceding them with a `$`. Anything, including statement words, can be the result of variable expansion. The only constraint is:

- The item expands to one or more complete tokens. It cannot be joined to something preceding or following it to create a single name or token.

In effect, a `{}` type block following a `for` or `let` substatement is the equivalent of inserting the substatement `if True;` before the block.
#### SubStatements

Each substatement is terminated by a `;`. The various substatements are:

##### for

The `for` substatement is structured as:

`for` _var_ `=` _glyphlist_ `;`

This creates a variable _var_ that will iterate over the _glyphlist_.

With the addition of `forlet` (see below), there is also `forgroup`, a synonym for the `for` substatement defined here.
##### let

The `let` substatement executes a short python expression (via `eval`), storing the result in the given variable or variable list. The structure of the substatement is:

`let` _var_ [`,` _var_]* `=` _expression_ `;`

There are various python functions that are specially supported, along with the builtins. These are:

| Function  | Parameters  | Description |
|-----------|-------------|-------------|
| ADVx      | _glyphname_ | Returns the advance width of the given glyph |
| allglyphs |             | Returns a list of all the glyph names in the font |
| APx       | _glyphname_, "_apname_" | Returns the x coordinate of the given attachment point on the given glyph |
| APy       | _glyphname_, "_apname_" | Returns the y coordinate of the given attachment point on the given glyph |
| feaclass  | _classname_ | Returns a list of the glyph names in a class as a python list |
| info      | _finfoelement_ | Looks up the entry in the fontinfo plist and returns its value |
| kerninfo  |             | Returns a list of tuples (left, right, kern_value) |
| opt       | _defined_   | Looks up a given -D/--define variable. Returns empty string if missing |
| MINx      | _glyphname_ | Returns the minimum x value of the bounding box of the glyph |
| MINy      | _glyphname_ | Returns the minimum y value of the bounding box of the glyph |
| MAXx      | _glyphname_ | Returns the maximum x value of the bounding box of the glyph |
| MAXy      | _glyphname_ | Returns the maximum y value of the bounding box of the glyph |

See the notes on python support in the `def` statement section below.
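A `let` evaluation can be pictured as an `eval` of the expression against an environment holding these helpers. A simplified sketch with made-up font metrics (the helper implementations and `do_let` name are illustrative, not the real code):

```python
# Toy font metrics standing in for a real font (assumed values).
ADVANCES = {"u16F61": 410, "ka": 950}
ANCHORS = {("ka", "H"): (880, 0)}

def ADVx(glyphname):
    return ADVANCES[glyphname]

def APx(glyphname, apname):
    return ANCHORS[(glyphname, apname)][0]

def do_let(variables, expression, env):
    """Evaluate `let var[, var...] = expression;` into the env dict."""
    result = eval(expression, {"ADVx": ADVx, "APx": APx}, dict(env))
    names = [v.strip() for v in variables.split(",")]
    if len(names) == 1:
        env[names[0]] = result
    else:
        env.update(zip(names, result))  # multiple assignment targets
    return env

env = do_let("a", '-int(ADVx("u16F61") / 2)', {})
# env["a"] is now -205
```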
##### forlet

The `for` substatement only allows iteration over a group of glyphs. There are situations in which someone would want to iterate over a true python expression, for example, over the return value of a function. The `forlet` substatement is structured identically to a `let` substatement, but instead of setting the variable once, the following substatements are executed once for each value of the expression, with the variable set to each in turn. For example:

```
def optlist(*alist) {
    if len(alist) > 0:
        for r in optlist(*alist[1:]):
            yield [alist[0]] + r
            yield r
    else:
        yield list(alist)
} optlist;

lookup example {
    do  forlet l = optlist("uni17CC", "@coeng_no_ro", "[uni17C9 uni17CA]", "@below_vowels", "@above_vowels");
        let s = " ".join(l);
    {
        sub uni17C1 @coeng_ro @base_cons $s uni17B8' lookup insert_dotted_circle;
        sub uni17C1 @base_cons $s uni17B8' lookup insert_dotted_circle;
    }
} example;
```
This example uses a `def` statement, as described below. It produces rules for each possible subsequence of the optlist parameters, where each element is treated as optional. It is a way of writing:

```
sub uni17C1 @base_cons uni17CC? @coeng_no_ro [uni17C9 uni17CA]? @below_vowels? @above_vowels? uni17B8' lookup insert_dotted_circle;
```
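The `optlist` generator above is plain Python wrapped in FEAX syntax. Extracted into pure Python (with the base case yielding a list so that list concatenation works), it behaves like this:

```python
def optlist(*alist):
    """Yield every subsequence of alist: each element is optional."""
    if len(alist) > 0:
        for r in optlist(*alist[1:]):
            yield [alist[0]] + r   # keep the first element
            yield r                # drop the first element
    else:
        yield list(alist)          # base case: one empty subsequence

subseqs = list(optlist("uni17CC", "@coeng_no_ro", "@below_vowels"))
# 3 optional elements -> 2**3 == 8 subsequences, from all three down to none
```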
The structure of a `forlet` substatement is:

`forlet` _var_ [`,` _var_]* `=` _expression_ `;`

A `forlet` substatement has the same access to functions that the `let` substatement has, including those listed above under `let`.
##### if

The `if` substatement consists of an expression and a block of statements. `if` substatements only make sense at the end of a sequence of substatements and are executed at the end of the `do` statement, in the order they occur but after all other `for` and `let` substatements. The expression is calculated and, if the result is True, the _statements_ are expanded using variable expansion.

`if` _expression_ `;` `{` _statements_ `}`

There can be multiple `if` substatements, each with their own block, in a `do` statement.
#### Examples

The `do` statement is best understood through some examples.

##### Simple calculation

This calculates a simple offset shift and creates a lookup to apply it:

```
do  let a = -int(ADVx("u16F61") / 2);
    {
        lookup left_shift_vowel {
            pos @_H <$a 0 0 0>;
        } left_shift_vowel;
    }
```

Notice the lack of iteration here.
##### More complex calculation

This calculates the guard spaces on either side of a base glyph in response to applied diacritics.

```
lookup advance_base {
    do  for g = @H;
        let a = APx(g, "H") - ADVx(g) + int(1.5 * ADVx("u16F61"));
        let b = int(1.5 * ADVx("u16F61")) - APx(g, "H");
        let c = a + b;
        {
            pos $g <$b 0 $c 0>;
        }
} advance_base;
```
##### Right Guard

It is often desirable to give a base character extra advance width to account for a diacritic hanging over the right hand side of the glyph. Calculating this can be very difficult by hand. This code achieves this:

```
do  for b = @bases;
    for d = @diacritics;
    let v = (ADVx(d) - APx(d, "_U")) - (ADVx(b) - APx(b, "U"));
    if v > 0; {
        pos $b' $v $d;
    }
```
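The value computed by the `let` line is just the amount by which the diacritic's right overhang exceeds the base's. In plain Python, with made-up metrics (the glyph names and numbers are illustrative):

```python
# Assumed metrics: advance widths and "U"/"_U" attachment point x coordinates.
ADV = {"base": 1000, "diac": 300}
AP = {("base", "U"): 900, ("diac", "_U"): 100}

def right_guard(base, diac):
    """Extra advance needed so the diacritic's overhang stays inside the base."""
    diac_overhang = ADV[diac] - AP[(diac, "_U")]   # 300 - 100 = 200
    base_overhang = ADV[base] - AP[(base, "U")]    # 1000 - 900 = 100
    return diac_overhang - base_overhang           # 100 units of guard space

v = right_guard("base", "diac")
# v == 100, so a rule like `pos base' 100 diac;` would be emitted
```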
##### Left Guard

A corresponding guarding of space for diacritics may be done on the left side of a glyph:

```
do  for b = @bases;
    for d = @diacritics;
    let v = APx(d, "_U") - APx(b, "U");
    if v > 0; {
        pos $b' <$v 0 $v 0> $d;
    }
```
##### Left Kern

Consider the case where someone has used an attachment point as a kerning point. In some contexts they want to adjust the advance of the left glyph based on the position of the attachment point in the right glyph:

```
do  for r = @rights;
    let v = APx(r, "K"); {
        pos @lefts' $v $r;
        pos @lefts' $v @diacritics $r;
    }
```
##### Myanmar Great Ya

One obscure situation is the Great Ya (U+103C) in the Myanmar script, which visually wraps around the following base glyph. The great ya is given a small advance, and the following consonant glyph is then positioned within it. The advance of this consonant needs to be enough to place the next character outside the great ya. So we create an A attachment point on the great ya to emulate this intended final advance. Note that there are many variants of the great ya glyph. Thus:

```
do  for y = @c103C_nar;
    for c = @cCons_nar;
    let v = APx(y, "A") - (ADVx(y) + ADVx(c));
    if v > 0; {
        pos $y' $v $c;
    }

do  for y = @c103C_wide;
    for c = @cCons_wide;
    let v = APx(y, "A") - (ADVx(y) + ADVx(c));
    if v > 0; {
        pos $y' $v $c;
    }
```
##### Advance for Ldot on U

This example mirrors the one used for the proposed [`setadvance`](feax_future.md#setadvance) statement. Here we want to add sufficient advance on the base to correspond to attaching a u vowel which in turn has a lower dot attached to it.

```
do  for b = @cBases;
    for u = @cLVowels;
    let v = APx(b, "L") - APx(u, "_L") + APx(u, "LD") - APx("ldot", "_LD") + ADVx("ldot") - ADVx(b);
    if v > 0; {
        pos $b' $v $u ldot;
    }
```
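The `let` expression chains two attachments: the vowel hangs off the base at L/\_L, and the dot hangs off the vowel at LD/\_LD. The needed extra advance is the dot's right edge minus the base's advance. With made-up coordinates (illustrative values only):

```python
# Assumed metrics for one base, one u vowel and the lower dot.
ADV = {"base": 1000, "ldot": 250}
AP = {("base", "L"): 950, ("uvowel", "_L"): 50,
      ("uvowel", "LD"): 400, ("ldot", "_LD"): 75}

def ldot_advance(base, u):
    """Extra advance so base + vowel + dot all fit before the next glyph."""
    vowel_shift = AP[(base, "L")] - AP[(u, "_L")]                   # vowel lands at 900
    dot_shift = vowel_shift + AP[(u, "LD")] - AP[("ldot", "_LD")]   # dot lands at 1225
    return dot_shift + ADV["ldot"] - ADV[base]                      # past the base's advance

v = ldot_advance("base", "uvowel")
# v == 475: emit `pos base' 475 uvowel ldot;`
```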
### def

The `def` statement allows for the creation of python functions for use in `let` substatements of the `do` statement. The syntax of the `def` statement is:

```
def <fn>(<param_list>) {
    ... python code ...
} <fn>;
```

The `fn` must conform to a FEA name (not starting with a digit, etc.) and is repeated at the end of the block to mark the end of the function. The parameter list is a standard python parameter list, and the python code is standard python code, indented as if under a python `def` statement.
#### python support

Here and in `let` and `forlet` substatements, the python that may be executed is limited. Only a subset of functions from builtins is supported, and the sequence `__` may not occur in any attribute. This is to stop people escaping the sandbox in which the python code is interpreted. The `math` and `re` modules are also available, along with the functions available to a `let` or `forlet` substatement. The full list of supported builtins is:

```
True, False, None, int, float, str, abs, bool, dict, enumerate, filter, hex, isinstance, len, list,
map, max, min, ord, range, set, sorted, sum, tuple, type, zip
```
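This style of sandboxing is commonly done by handing `eval` a replacement `__builtins__` dict and rejecting `__` in the source first. A minimal sketch of the idea (not psfmakefea's actual code, and not a complete security boundary):

```python
# A reduced builtins table, in the spirit of the list above.
SAFE_BUILTINS = {"int": int, "float": float, "str": str, "abs": abs,
                 "len": len, "min": min, "max": max, "sum": sum,
                 "sorted": sorted, "zip": zip, "True": True, "False": False}

def safe_eval(expression, env=None):
    """Evaluate an expression with restricted builtins; refuse dunder access."""
    if "__" in expression:
        raise ValueError("'__' not allowed in expressions")
    return eval(expression, {"__builtins__": SAFE_BUILTINS}, dict(env or {}))

result = safe_eval("min(a, 3) + 1", {"a": 10})
# result == 4; safe_eval("open('x')") would fail with NameError
```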
### kernpairs

The `kernpairs` statement expands all the kerning pairs in the font into `pos` statements. For example:

```
lookup kernpairs {
    lookupflag IgnoreMarks;
    kernpairs;
} kernpairs;
```

might produce:

```
lookup kernpairs {
    lookupflag IgnoreMarks;
    pos @MMK_L_afii57929 -164 @MMK_R_uniA4F8;
    pos @MMK_L_uniA4D1 -164 @MMK_R_uniA4F8;
    pos @MMK_L_uniA4D5 -164 @MMK_R_afii57929;
    pos @MMK_L_uniA4FA -148 @MMK_R_space;
} kernpairs;
```

Currently, kerning information is only available from .ufo files.
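The expansion is a direct mapping from (left, right, value) tuples, like those the `kerninfo` function returns, to `pos` rules. A sketch (illustrative only):

```python
def expand_kernpairs(kerninfo):
    """Turn (left, right, value) kerning tuples into FEA pos statements."""
    return ["pos %s %d %s;" % (left, value, right)
            for left, right, value in kerninfo]

rules = expand_kernpairs([("@MMK_L_uniA4D1", "@MMK_R_uniA4F8", -164),
                          ("@MMK_L_uniA4FA", "@MMK_R_space", -148)])
```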
## Capabilities

### Permit classes on both sides of GSUB type 2 (multiple) and type 4 (ligature) lookups

Adobe doesn't permit compact notation using groups in 1-to-many (decomposition) rules, e.g.:

```
sub @AlefPlusMark by absAlef @AlefMark ;
```

or many-to-1 (ligature) rules, e.g.:

```
sub @ShaddaKasraMarks absShadda by @ShaddaKasraLigatures ;
```

This is implemented in FEAX as follows.
#### Processing

Of the four simple (i.e., non-contextual) substitution lookups, Types 2 and 4 are the only ones using the 'by' keyword that have a *sequence* of glyphs or classes on one side of the rule. The other side will, necessarily, contain a single term -- which Adobe currently requires to be a glyph. For convenience of expression, we'll call the sides of the rule the *sequence side* and the *singleton side*.

* Non-contextual substitution
* Uses the 'by' keyword
* Singleton side references a glyph class.

Such rules are expanded by enumerating the singleton side class and the corresponding class(es) on the sequence side, and writing a set of Adobe-compliant rules to give the same result. It is an error if the singleton and corresponding classes do not have the same number of glyphs.
#### Example

Given:

```
@class1 = [ g1 g2 ] ;
@class2 = [ g1a g2a ] ;
```

then

```
sub @class1 gOther by @class2 ;
```

would be rewritten as:

```
sub g1 gOther by g1a ;
sub g2 gOther by g2a ;
```
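The expansion is a per-index zip of the classes, with a length check. Sketched in Python for the direction where the singleton is the replacement, as in the example above (hypothetical function; the real logic lives in psfmakefea's parser):

```python
def expand_gsub(seq_side, singleton_class, classes):
    """Expand `sub <sequence with classes> by @singleton;` into per-glyph rules.

    seq_side: list of tokens; tokens starting with '@' name classes.
    classes: dict of class name (without '@') -> list of glyph names.
    """
    target = classes[singleton_class.lstrip("@")]
    for t in seq_side:
        if t.startswith("@") and len(classes[t.lstrip("@")]) != len(target):
            raise ValueError("class %s does not match %s in length" % (t, singleton_class))
    rules = []
    for i, replacement in enumerate(target):
        # pick the i-th glyph from every class slot; pass plain glyphs through
        seq = [classes[t.lstrip("@")][i] if t.startswith("@") else t for t in seq_side]
        rules.append("sub %s by %s ;" % (" ".join(seq), replacement))
    return rules

rules = expand_gsub(["@class1", "gOther"], "@class2",
                    {"class1": ["g1", "g2"], "class2": ["g1a", "g2a"]})
```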
### Support classes in alternate lookups

The default behaviour in FEA is for `sub x from [x.a x.b];` to allow only a single glyph before the `from` keyword. But it is often useful to do things like `sub @a from [@a.lower @a.upper];`. FEAX supports this by treating the right hand side as a single list of glyphs and dividing it equally by the length of the class on the left. Thus, if `@a` has length 3, the first 3 glyphs in the right hand list go one each as the first alternate for each glyph in `@a`, the next 3 go as the second alternate, and so on until all are consumed. If any are left over, so that one glyph would end up with a different number of alternates from another, an error is given.
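The chunking can be sketched as follows (illustrative Python, not the actual implementation):

```python
def expand_alternates(lhs, rhs):
    """Divide rhs equally among lhs glyphs: chunk i holds everyone's i-th alternate."""
    if len(rhs) % len(lhs):
        raise ValueError("alternates do not divide evenly among %d glyphs" % len(lhs))
    n = len(lhs)
    # alternates for lhs[j] are rhs[j], rhs[j + n], rhs[j + 2n], ...
    return {g: rhs[j::n] for j, g in enumerate(lhs)}

alts = expand_alternates(["a", "b", "c"],
                         ["a.lc", "b.lc", "c.lc", "a.uc", "b.uc", "c.uc"])
# alts["a"] == ["a.lc", "a.uc"]
```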
### groups.plist

If a .ufo file contains a `groups.plist` file, the groups declared there are propagated straight through to the output file and can be referenced within a source file.

---

*(docs/feax_future.md)*
# FEA Extensions Future

## Introduction

This document is where people can dream of the extensions they would like to see added to FEA. Note that any extensions need to be convertible back to normal FEA, so they shouldn't do things that can't be expressed in FEA.

As things get implemented from here, they will be moved to feaextensions.md. There are no guarantees that what is in here will end up in psfmakefea.

The various features listed here are given priorities:

| Level | Priority |
|-------|----------|
| 1 | Intended to be implemented |
| 2 | Probably will be implemented but after priority 1 stuff |
| 3 | Almost certainly won't be implemented |

There are a number of possible things that can be added to FEA; the question is whether they are worth adding in terms of meeting actual need (remove from this list if added to the rest of the document):

* classsubtract() classand() functions
  * classand(x, y) = classsubtract(x, classsubtract(x, y))
* classbuild(class, "$.ext") builds one class out of another. What if something is missing? Or do we just build those classes on the fly from make_fea and glyph name parsing?
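The proposed `classand` identity is the standard set identity x & y = x - (x - y), which is easy to check with Python sets and is why `classand()` could be defined purely in terms of `classsubtract()`:

```python
# Intersection expressed with only subtraction:
# everything in x except the part of x that is not in y.
x = {"a", "b", "c"}
y = {"b", "c", "d"}
assert x - (x - y) == x & y == {"b", "c"}
```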
## Statements

Statements are used to make rules, lookups, etc.

### setadvance

Priority: 3 (since the do statement has a higher priority and covers this)

This function does the calculations necessary to adjust the advance of a glyph based on information about attachment points, etc. The result is a single shift on each of the glyphs in the class. The syntax is:

```
setadvance(@glyphs, APName [, attachedGlyph[, APName, attachedGlyph [...]]])
```

In effect there are two modes for this function. The first has only two parameters and shifts the advance from its default designed position to the x coordinate of the given attachment point. The second mode adds extra glyphs. The advance is moved to the advance of the attachedGlyph, assuming the base has the other glyphs chained attached at their given APs. An AP may be a number, in which case that is the x coordinate of the AP that will be used.

Typically there will be only one of these per lookup, unless the classes referenced are non-overlapping.

The statement only triggers if the resulting advance is greater than the current advance. Thus some glyphs may not have a statement created for them; i.e., all values in the lookup will be positive.
#### Examples

These examples also act as motivating use cases.

##### Nokyung

In Nokyung there is a need to kern characters that do not descend below the baseline closer to glyphs with a right underhang. This can be done through kerning pairs, or we could add an attachment point to the glyphs with the right underhang and contextually adjust their advances to that position. The approach of using an AP to do kerning is certainly quirky, and few designers would go that route. The contextual lookup would call a lookup that just does the single adjustment. Consider the AP to be called K (for kern). The fea might look like:

```
lookup overhangKernShift {
    setadvance(@overhangs, K);
} overhangKernShift;
```

This would expand, potentially, into:

```
lookup overhangKernShift {
    pos @overhangs <-80>;
} overhangKernShift;
```

Not much, but that is because in Nokyung the overhanging glyphs all have the same overhang. If they didn't, then the list could well expand with different values for each glyph in the overhangs class. In fact, a simple implementation would do such an expansion anyway, while a more sophisticated implementation would group the results into ad hoc glyph lists.
##### Myanmar

An example from Myanmar is where a diacritic is attached such that it overhangs the right hand side of the base glyph, and we want to extend the advance of the base glyph to encompass the diacritic. This is a primary motivating example for this statement. Such a lookup might read:

```
lookup advanceForLDotOnU {
    setadvance(@base, L, uvowel, LD, ldot);
} advanceForLDotOnU;
```

which transforms to:

```
lookup advanceForLDotOnU {
    pos ka <120>;
    pos kha <80>;
    # …
} advanceForLDotOnU;
```
##### Miao

Miao is slightly different in that the advance we want to use is a constant, partly because calculating it involves a sequence of 3 vowel widths and you end up with a very long list of possible values and lookups for each one:

```
lookup advancesShortShortShort {
    setadvance(@base, 1037);
} advancesShortShortShort;
```
#### Issues

* Do we want to use a syntax more akin to that used for composites, since that is, in effect, what we are describing: make the base have the advance of the composite?
* Do we want to change the output to reflect the sequence so that there can be more statements per lookup?
  * The problem is that then you may want to skip intervening non-contributing glyphs (like upper diacritics in the above examples), which you would do anyway from the contextual driving lookup, but wouldn't want to have to do in each situation here.
* It's a bit of a pain that in effect there is only one setadvance() per lookup. It would be nice to do more.
* Does this work (and have useful meaning) in RTL?
* Appears to leave the base glyph *position* unchanged. Is there a need to handle, for example in LTR scripts, LSB change for a base due to its diacritics? (Think i-tilde, etc.)
### move

Priority: 2

The move semantic results in a complex of lookups. See this [article](https://github.com/OpenType/opentype-layout/blob/master/docs/ligatures.md) on how to implement a move semantic successfully in OpenType. As such, a move semantic can only be expressed as a statement at the highest level, since it creates lookups. The move statement takes a number of parameters:

```
move lookup_basename, skipped, matched;
```

The *lookup_basename* is a name (unadorned string) prefix that is used in the naming of the lookups that the move statement creates. It also allows multiple move statements to share the same lookups where appropriate. Such lookups can be referenced by contextual chaining lookups. The lookups generated are:

| Lookup | Description |
| ---------------------------- | -------------------------------------------------- |
| lookup_basename_match | Contextual chaining lookup to drive the sublookups |
| lookup_basename_pres_matched | Converts skipped(1) to matched + skipped(1) |
| lookup_basename_pref_matched | Converts skipped(1) to matched + skipped(1) + matched |
| lookup_basename_back | Converts skipped(-1) + matched to skipped(-1) |

Multiple instances of a move statement that use the same *lookup_basename* will correctly merge the various rules in the lookups created, since often at least parts of the *skipped* or *matched* will be the same across different statements.

Since lookups may be added to, extra contextual rules can be added to the *lookup_basename*_match.

*skipped* contains a sequence of glyphs (of minimum length 1), where each glyph may be a class or whatever. The move statement considers both the first and last glyph of this sequence when it comes to the other lookups it creates. *skipped(1)* is the first glyph in the sequence and *skipped(-1)* is the last.

*matched* is a single glyph that is to be moved. Two lookups are needed for each matched glyph.

Notice that only *lookup_basename*_match should be added to a feature. The rest are sublookups and can be in any order. The *lookup_basename*_match lookup is created at the point of the first move statement that has a first parameter of *lookup_basename*.
#### Examples

While there are no known use cases for this in our fonts at the moment, this is an important statement in terms of showing how complex concepts of wider interest can be implemented as extensions to FEA.
##### Myanmar

Moving prevowels to the front of a syllable from their specified position in the sequence, in a DFLT processor, is one such use of a move semantic:

```
move(pv, @cons, my-e);
move(pv, @cons @medial, my-e);
move(pv, @cons @medial @medial, my-e);
move(pv, @cons @medial @medial @medial, my-e);
move(pv, @cons, my-shane);
move(pv, @cons @medial, my-shane);
```
This becomes:

```
lookup pv_pres_my-e {
    sub @cons by my-e @cons;
} pv_pres_my-e;

lookup pv_pref_my-e {
    sub @cons by my-e @cons my-e;
} pv_pref_my-e;

lookup pv_back {
    sub @cons my-e by @cons;
    sub @medial my-e by @medial;
    sub @cons my-shane by @cons;
    sub @medial my-shane by @medial;
} pv_back;

lookup pv_match {
    sub @cons' lookup pv_pres_my-e my-e' lookup pv_back;
    sub @cons' lookup pv_pref_my-e @medial my-e' lookup pv_back;
    sub @cons' lookup pv_pref_my-e @medial @medial my-e' lookup pv_back;
    sub @cons' lookup pv_pref_my-e @medial @medial @medial my-e' lookup pv_back;
    sub @cons' lookup pv_pres_my-shane my-shane' lookup pv_back;
    sub @cons' lookup pv_pref_my-shane @medial my-shane' lookup pv_back;
} pv_match;

lookup pv_pres_my-shane {
    sub @cons by my-shane @cons;
} pv_pres_my-shane;

lookup pv_pref_my-shane {
    sub @cons by my-shane @cons my-shane;
} pv_pref_my-shane;
```

##### Khmer Split Vowels

Khmer has a system of split vowels, of which we will consider a very few:

```
lookup presplit {
  sub km-oe by km-e km-ii;
  sub km-ya by km-e km-yy km-ya.sub;
  sub km-oo by km-e km-aa;
} presplit;

move(split, @cons, km-e);
move(split, @cons @medial, km-e);
```

## Functions

Functions may be used in the place of a glyph or glyph class and return a list of glyphs.

### index

Priority: 2

Used in rules where the expansion of a rule results in a particular glyph from a class being used. Where two classes need to be synchronised, this function specifies which rule element drives the choice of glyph from this class. This function is motivated by the Keyman language. The parameters of index() are:

```
index(slot_index, glyphclass)
```

*slot_index* considers the rule as two sequences of slots, each slot referring to one glyph or glyphclass. The first sequence is on the left hand side of the rule and the second on the right, with the index running sequentially from one sequence to the other. Thus if a rule has 2 slots on the left hand side and 3 on the right, a *slot_index* of 5 refers to the last glyph on the right hand side. *slot_index* values start from 1 for the first glyph on the left hand side.

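The slot numbering can be modelled with a small sketch (an illustration only, not part of the proposed FEA extension or any pysilfont API):

```python
def slot_glyph(lhs, rhs, slot_index):
    """Return the slot (glyph or class) for a 1-based slot_index that
    runs across the LHS slots first, then continues into the RHS slots."""
    slots = lhs + rhs
    if not 1 <= slot_index <= len(slots):
        raise ValueError("slot_index out of range")
    return slots[slot_index - 1]

# A rule with 2 slots on the left hand side and 3 on the right:
lhs = ["@class1", "@class2"]
rhs = ["glyphA", "glyphB", "@class4"]
# slot_index 5 is the last slot on the right hand side
assert slot_glyph(lhs, rhs, 5) == "@class4"
assert slot_glyph(lhs, rhs, 1) == "@class1"
```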
What makes an index() function difficult to implement is that it requires knowledge of its context in the statement it occurs in. This is tricky since it is a kind of layer violation. It doesn't matter how an index() type function is represented syntactically; the same problem applies.

### infont

Priority: 2

This function filters the glyph class that is passed to it, and returns only those glyphs, in glyphclass order, which are actually present in the font being compiled for. For example:

```
@cons = infont([ka kha gha nga]);
```

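The filtering semantics can be sketched in a few lines of Python (an illustration of the behaviour described above, not the actual implementation):

```python
def infont(glyphclass, font_glyphs):
    """Sketch of infont() semantics: keep only glyphs that exist in the
    font, preserving the order of the original glyph class."""
    present = set(font_glyphs)
    return [g for g in glyphclass if g in present]

# Only ka and nga exist in this hypothetical font:
assert infont(["ka", "kha", "gha", "nga"], {"ka", "nga", "ra"}) == ["ka", "nga"]
```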
## Capabilities

### Permit multiple classes on RHS of GSUB type 2 (multiple) and the LHS of type 4 (ligature) lookups

Priority: 2

#### Slot correspondence

In Type 2 (multiple) substitutions, the LHS will be the singleton case and the RHS will be the sequence. In normal use-cases exactly one slot in the RHS will be a class -- all the others will be glyphs -- in which case that class and the singleton side class correspond.

If more than one RHS slot is to contain a class, then the only logical meaning is that all such classes must also correspond to the singleton class in the LHS, and will be expanded (along with the singleton side class) in parallel. Thus all the classes must have the same number of elements.

In Type 4 (ligature) substitutions, the RHS will be the singleton class. In the case that the LHS (sequence side) of the rule has class references in more than one slot, we need to identify which slot corresponds to the singleton side class. Some alternatives:

* Pick the slot that, when the classes are flattened, has the same number of glyphs as the class on the singleton side. It is possible that there is more than one such slot, however.
* Add a notation to the rule. Graphite uses the $n modifier on the RHS to identify the corresponding slot (in the context), which we could adapt to FEA as:

```
sub @class1 @class2 @class3 by @class4$2 ;
```

Alternatively, since there can be only one such slot, we could use a simpler notation by putting something like the $ in the LHS:

```
sub @class1 @class2$ @class3 by @class4 ;
```

[This won't look right to GDL programmers, but does make sense for OT code]

* Extra syntactic elements at the lexical level are hard to introduce. Instead a function such as:

```
sub @class1 @class2 @class3 by index(2, @class4);
```

would give the necessary interpretation. See the discussion of the index() function for more details.

Note that the other classes in the LHS of ligature rules do not need further processing, since FEA allows such classes.

#### Nested classes

We will want to expand nested classes in a way (i.e., depth or breadth first) that is compatible with Adobe. **Concern:** Might this be different from Graphite? Is there any difference if one says always expand left to right? [a b [c [d e] f] g] flattens the same as [[[a b] c d] e f g] or whatever. The FontTools parser does not support nested glyph classes. To what extent are they required?

179
docs/parameters.md
Normal file
@@ -0,0 +1,179 @@
# Pysilfont parameters

In addition to normal command-line arguments (see [scripts.md](scripts.md) and [Standard Command-line Options](docs.md#standard-command-line-options)), Pysilfont supports many other parameters that can be changed either on the command line or by settings in a config file. For UFO fonts there is also an option to set parameters within the UFO.

See [List of Parameters](#list-of-parameters) for a full list, which includes the default values for each parameter.

# Setting parameters

Parameters can be set in multiple ways:
1. Default values are set by the core.py Pysilfont module - see [List of Parameters](#list-of-parameters)
1. Standard values for a project can be set in a pysilfont.cfg [config file](#config-file)
1. For UFO fonts, font-specific values can be set within the [lib.plist](#lib-plist) file
1. On the command line - see next section

Values set by later methods override those set by earlier methods.

(Scripts can also change some values, but they would normally be written to avoid overwriting command-line values)

## Command line

For script users, parameters can be set on the command line with -p, for example:
```
psfnormalize test.ufo -p scrlevel=V -p indentIncr="    "
```
would increase the screen reporting level to Verbose and change the xml indent from 2 spaces to 4 spaces.

If a parameter has multiple values, enter them separated by commas but no spaces, eg:

`-p glifElemOrder=unicode,advance,note,image,guideline,anchor,outline,lib`

## Config file
If pysilfont.cfg exists in the same directory as the first file specified on the command line (typically the font being processed) then parameters will be read from there.

The format is a [ConfigParser](https://docs.python.org/2/library/configparser.html) config file, which has a similar structure to a Windows .ini file.

Lines starting with # are ignored, as are any blank lines.

Example:
```
# Config file

[logging]
scrlevel: I

[outparams]
indentIncr: ' '
glifElemOrder: unicode,advance,note,image,guideline,anchor,outline,lib
```
The section headers are backups, logging, outparams and ufometadata.

In a font project with multiple UFO fonts in the same folder, all would use a single config file.

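As a rough sketch of how such a file is parsed (this uses Python's standard configparser directly, not pysilfont's actual loading code):

```python
import configparser

# Parse a config file with the same shape as the example above
cfg = configparser.ConfigParser()
cfg.read_string("""\
[logging]
scrlevel: I

[outparams]
glifElemOrder: unicode,advance,note,image,guideline,anchor,outline,lib
""")

assert cfg["logging"]["scrlevel"] == "I"
# Multi-valued parameters arrive as a single comma-separated string:
elems = cfg["outparams"]["glifElemOrder"].split(",")
assert elems[0] == "unicode" and elems[-1] == "lib"
```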
## lib plist

If, with a UFO font, org.sil.pysilfontparams exists in lib.plist, parameter values held in an array will be processed, eg
```
<key>org.sil.pysilfontparams</key>
<array>
  <indentIncr>\t</indentIncr>
  <glifElemOrder>lib,unicode,note,image,guideline,anchor,outline,advance</glifElemOrder>
</array>
```
Currently only font output parameters can be changed via lib.plist.

## List of parameters

| Parameter | Default | Description | Notes |
| -------- | -------- | --------------------------------------------- | ------------------------------------- |
| **Reporting** | | | To change within a script use <br>`logger.<parameter> = <value>` |
| scrlevel | P | Reporting level to screen. See [Reporting](docs.md#reporting) for more details | -q, --quiet option sets this to S |
| loglevel | W | Reporting level to log file | |
| **Backup** (font scripts only) | | | |
| backup | True | Backup font to subdirectory | If the original font is being updated, make a backup first |
| backupdir | backups | Sub-directory name for backups | |
| backupkeep | 5 | Number of backups to keep | |
| **Output** (UFO scripts only) | | | To change in a script use <br>`font.outparams[<parameter>] = <value>` |
| indentFirst | 2 spaces | Increment for first level in xml | |
| indentIncr | 2 spaces | Amount to increment xml indents | |
| indentML | False | Indent multi-line text items | (indenting really messes some things up!) |
| plistIndentFirst | Empty string | Different initial indent for plists | (dict is commonly not indented) |
| sortDicts | True | Sort all plist dicts | |
| precision | 6 | Decimal precision | |
| renameGlifs | True | Name glifs with standard algorithm | |
| UFOversion | (existing) | | Defaults to the version of the UFO when opened |
| format1Glifs | False | Force output of format 1 glifs | Includes UFO2-style anchors; for use with FontForge |
| floatAttribs | (list of attributes in the spec that hold numbers and are handled as float) | Used to know if precision needs setting | May need items adding for lib data |
| intAttribs | (list of attributes in the spec that hold numbers and are handled as integer) | | May need items adding for lib data |
| glifElemOrder | (list of elements in the order defined in spec) | Order for outputting elements in a glif | |
| attribOrders | (list of attribute orders defined in spec) | Order for outputting attributes in an element. One list per element type | When setting this, the parameter name is `attribOrders.<element type>`. Currently only used with attribOrders.glif |
| **ufometadata** (ufo scripts only) | | | |
| checkfix | check | Metadata check & fix action | If set to "fix", some values are updated (or deleted). Set to "none" for no metadata checking |
| More may be added... | | | |

## Within basic scripts

### Accessing values

If you need to access values of parameters, or to see what values have been set on the command line, you can look at:
- args.paramsobj.sets["main"]
  - This is a dictionary containing the values for **all** parameters listed above. Where they have been specified in a config file, or overwritten on the command line, those values will be used. Otherwise the default values listed above will be used
- args.params
  - This is a dictionary containing any parameters specified on the command line with -p.

Within a UFO Ufont object, use font.paramset, since this will include any updates resulting from parameter values set in lib.plist.

In addition to the parameters in the table above, two more read-only parameters can be accessed by scripts - "version" and "copyright" - which give the pysilfont library version and copyright info, based on values in core.py headers.

### Updating values

Currently only values under Output can be set via scripts, since Backup and Reporting parameters are processed by execute() prior to the script being called. For example:
```python
font.paramset["precision"] = 9
```
would set the precision parameter to 9.

Note that, whilst reporting _parameters_ can't be set in scripts, _reporting levels_ can be updated by setting values in the args.logger() object, eg `args.logger.scrlevel = "W"`.

# Technical

_Note the details below are probably not needed just for developing scripts..._

## Basics

The defaults for all parameters are set in core.py as part of the parameters() object. Those for **all** pysilfont library modules need to be defined in core.py so that execute() can process command-line arguments without needing information from other modules.

Parameters are passed to scripts via a parameters() object as args.paramsobj. This contains several parameter sets, with "main" being the standard one for scripts to use, since it contains the default parameters updated with those (if any) from the config file and then with any command-line values.

Parameters can be accessed from the parameter set by parameter name, eg paramsobj.sets["main"]["loglevel"].

Although parameters are split into classes (eg backup, logging), parameter names need to be unique across all groups to allow simple access by name.

If logging is set to I or V, changes to parameter values (eg config file values updating default values) are logged.

There should only ever be a single parameters() object used by a script.

## Paramobj

In addition to the param sets, the paramobj also contains:
- classes:
  - A dictionary keyed on class, returning a list of parameter names in that class
- paramclass:
  - A dictionary keyed on parameter name, returning the class of that parameter
- lcase:
  - A dictionary keyed on the lowercase version of a parameter name, returning the parameter name
- type:
  - A dictionary keyed on parameter name, returning the type of that parameter (eg str, boolean, list)
- listtype:
  - For list parameters, a dictionary keyed on parameter name, returning the type of the parameters in the list
- logger:
  - The logger object for the script

## Parameter sets

These serve two purposes:
1. To allow multiple sets of parameter values to be used - eg two different fonts might have different values in the lib.plist
1. To keep track of the original sets of parameters ("default", "config file" and "command line") if needed. See UFO-specific for an example of this need.

Additional sets can be added with addset(), and one set can be updated with values from another using updatewith(). For example, to create the "main" set, the following code is used:
```
params.addset("main", copyset = "default")     # Make a copy of the default set
params.sets["main"].updatewith("config file")  # Update with config file values
params.sets["main"].updatewith("command line") # Update with command-line values
```

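The layering behaviour of addset() and updatewith() can be modelled with plain dictionaries - a sketch of the semantics only, not the actual parameters() implementation:

```python
# Each named set maps parameter names to values; later layers win.
sets = {
    "default":      {"scrlevel": "P", "precision": 6},
    "config file":  {"scrlevel": "I"},
    "command line": {"precision": 9},
}

# addset("main", copyset="default") - start from a copy of the defaults
sets["main"] = dict(sets["default"])
# updatewith() - overlay config file values, then command-line values
sets["main"].update(sets["config file"])
sets["main"].update(sets["command line"])

assert sets["main"] == {"scrlevel": "I", "precision": 9}
```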
## UFO-specific

The parameter set relevant to a UFO font can be accessed by font.paramset, so font.paramset["loglevel"] would access the loglevel.

In ufo.py there is code to cope with two complications:
1. If a script is opening multiple fonts, they could have different lib.plist values, so font-specific parameter sets are needed
1. The parameters passed to ufo.py include the "main" set, which has already had command-line parameters applied. Any in lib.plist also need to be applied, but can't simply be applied to "main" since command-line parameters should take precedence over lib.plist ones

To ensure unique names, the parameter sets are created using the full path name of the UFO. Then font.paramset is set to point to this, so scripts do not need to know the underlying set name.

To apply the parameter set updates in the correct order, ufo.py does:

1. Create a new paramset from any lib parameters present
1. Update this with any command-line parameters
1. Create the paramset for the font by copying the "main" paramset
1. Update this with the lib paramset (which has already been updated with command-line values in step 2)

## Adding another parameter or class

If there was a need to add another parameter or class, all that should be needed is to add it to defparams in the \_\_init\_\_() of parameters() in core.py. Ensure the new parameter is case-insensitively unique.

If a class was Ufont-specific and needed to be supported within lib.plist, then ufo.py would also need updating to handle it similarly to how it now handles outparams and ufometadata.

1565
docs/scripts.md
Normal file
File diff suppressed because it is too large
Load diff
342
docs/technical.md
Normal file
@@ -0,0 +1,342 @@
# Pysilfont Technical Documentation
This section is for script writers and developers.

See [docs.md](docs.md) for the main Pysilfont user documentation.

# Writing scripts
The Pysilfont modules are designed so that all scripts operate using a standard framework based on the execute() function in core.py. The purpose of the framework is to:
- Simplify the writing of scripts, with much work (eg parameter parsing, opening fonts) being handled there rather than within the script.
- Provide a consistent user interface for all Pysilfont command-line scripts

The framework covers:
- Parsing arguments (parameters and options)
- Defaults for arguments
- Extended parameter support by command line or config file
- Producing help text
- Opening fonts and other files
- Outputting fonts (including normalization for UFO fonts)
- Initial error handling
- Reporting (logging) - both to screen and log file

## Basic use of the framework

The structure of a command-line script should be:
```
<header lines>
<general imports, if any>

from silfont.core import execute

argspec = [ <parameter/option definitions> ]

def doit(args):
    <main script code>
    return <output font, if any>

<other function definitions>

def cmd() : execute(Tool,doit, argspec)
if __name__ == "__main__": cmd()
```

The following sections work through this, using psfnormalize, which normalizes a UFO, with the option to convert between different UFO versions:
```
#!/usr/bin/env python3
'''Normalize a UFO and optionally convert between UFO2 and UFO3.
- If no options are chosen, the output font will simply be a normalized version of the font.'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'outfont'}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': '_conv.log'}),
    ('-v','--version',{'help': 'UFO version to convert to'},{})]

def doit(args) :

    if args.version is not None : args.ifont.outparams['UFOversion'] = args.version

    return args.ifont

def cmd() : execute("UFO",doit, argspec)
if __name__ == "__main__": cmd()
```

#### Header lines
Sample headers:
```
#!/usr/bin/env python3
'''Normalize a UFO and optionally convert between UFO2 and UFO3.
- If no options are chosen, the output font will simply be a normalized version of the font.'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'
```
As well as providing the information for someone looking at the source file, the description comment (second line, which can be multi-line) is used by the framework when constructing the help text.

#### Import statement(s)
```
from silfont.core import execute
```
is required. Other imports from pysilfont or other libraries should be added, if needed.

#### Argument specification
The argument specifications take the form of a list of tuples, with one tuple per argument, eg:
```
argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'outfont'}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': '_conv.log'}),
    ('-v','--version',{'help': 'UFO version to convert to'},{})]
```
Each argument has the format:
```
(argument name(s), argparse dict, framework dict)
```

argument name is either:
- name for positional parameters, eg *‘ifont’*
- *-n, --name* or *--name* for other arguments, eg *‘-v’, ‘--version’*

**argparse dict** follows standard [argparse usage for .add_argument()](https://docs.python.org/2/library/argparse.html#the-add-argument-method). Help should always be included.

**framework dict** has optional values for:
- ‘type’ - the type of parameter, eg ‘outfile’
- ‘def’ - default for file names. Only applies if ‘type’ is a font or file.
- ‘optlog’ - for logs only. Flag to indicate the log file is optional - default False

‘type’ can be one of:

| Value | Action |
|-------|-------------------------------------|
|infont|Open a font of that name and pass the font to the main function|
|outfont|If the main function returns a font, save it to the supplied name|
|infile|Open a file for read and pass the file handle to the main function|
|incsv|Open a [csv](#support-for-csv-files) file for input and pass an iterator to the main function|
|outfile|Open a file for writing and pass the file handle to the main function|
|filename|Filename to be passed as text|
|optiondict|Expects multiple values in the form name=val and passes a dictionary containing them|

If ‘def’ is supplied, the parameter value is passed through the [file name defaulting](#default-values-for-arguments) specified below. This applies to all the above types except for optiondict.

In addition to options supplied in argspec, the framework adds [standard options](docs.md#standard-command-line-options), ie:

- -h, --help
- -q, --quiet
- -p, --params
- -l, --log

so these do not need to be included in argspec.

With -l, --log, this is still usually set in argspec to create default log file names. Set optlog to True if you want the log file to be optional.

#### doit() function
The main code of the script is in the doit() function.

The name is just a convention - it needs to match what is passed to execute() at the end of the script. The execute() function passes an args object to doit() containing:
- An entry for each command-line argument as appropriate, based on the full name of the argument
  - eg with ``'-v','--version'``, args.version is set.
  - Values are set for every entry in argspec, plus params, quiet and log added by the framework
  - If no value is given on the command line and the argument has no default then None is used.
- logger for the loggerobj()
- clarguments for a list of what was actually specified on the command line
- For parameters:
  - params is a list of what parameters, if any, were specified on the command line
  - paramsobj is the parameters object containing all [parameter](parameters.md) details

#### The final lines

These should always be:
```
def cmd() : execute(Tool,doit, argspec)
if __name__ == "__main__": cmd()
```
The first line defines the function that actually calls execute() to do the work, where Tool is one of:
- “UFO” to open fonts with pysilfont’s ufo.py module, returning a Ufont object
- “FP” to open fonts with fontParts, returning a font object
- “FT” to open fonts with FontTools, returning a TTfont object
- None if no font is to be opened by execute()
- Other tools may be added in the future

The function must be called cmd(), since this is used by setup.py to install the commands.

The second line is the python way of saying: if you run this file as a script (rather than using it as a python module), execute the cmd() function.

Even if a script is initially only going to be run manually, include these lines so no modification is needed to make it installable at a later date.

# Further framework notes
## Default values for arguments
[Default values](docs.md#default-values) in docs.md describes how file name defaulting works from a user perspective.

To set default values, either use the ‘default’ keyword in the argparse dict (for standard defaults) or the ‘def’ keyword in the framework dict to use Pysilfont’s file-name defaulting mechanism. Only one of these should be used. ‘def’ can’t be used with the first positional parameter.

Note if you want a fixed file name, ie to bypass the file name defaulting mechanism, then use the argparse default keyword.

## Reporting
args.logger is a loggerobj(), used to report messages to screen and log file. If no log file is set, messages go just to screen.

Messages are sent using
```
logger.log(<message text>, <severity level>)
```
where severity level has a default value of W and can be set to one of:
- X Exception - For fatal programming errors
- S Severe - For fatal errors - eg input file missing
- E Errors - For serious errors that must be reported to screen
- P Progress - Progress messages
- W Warning - General warnings about anything not correct
- I Info - For more detailed reporting - eg the name of each glif file opened
- V Verbose - For even more messages!

Messages are reported to screen if the severity level is equal to or higher than logger.scrlevel (default P), and to log based on loglevel (default W). The defaults for these can be set via parameters or within a script, if needed.

With X and S, the script is terminated. S should be used for user problems (eg file does not exist, font is invalid) and X for programming issues (eg an invalid value has been set by code). Exception errors are mainly used by the libraries and force a stack trace.

With Ufont objects, font.logger also points to the logger, but this is used primarily within the libraries rather than in scripts.

There would normally only be a single logger object used by a script.

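The threshold behaviour can be sketched as follows (an illustration of the semantics, not the actual loggerobj() code):

```python
# Severity order, most severe first
LEVELS = "XSEPWIV"

def should_report(severity, threshold):
    """A message is reported when its severity is at least as severe
    as the threshold level."""
    return LEVELS.index(severity) <= LEVELS.index(threshold)

# With the default scrlevel of P, errors and progress reach the screen...
assert should_report("E", "P") is True
assert should_report("P", "P") is True
# ...but warnings and info do not
assert should_report("W", "P") is False
```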
### Changing reporting levels

loglevel and scrlevel *can* be set by scripts, but care should be taken not to override values set on the command line. To increase screen logging temporarily, use logger.raisescrlevel(<new level>) then set back to the previous value with logger.resetscrlevel(), eg

```
if not(args.quiet or "scrlevel" in params.sets["command line"]) :
    logger.raisescrlevel("W") # Raise level to W if not already W or higher

<code>

logger.resetscrlevel()
```

### Error and warning counts

These are kept in logger.errorcount and logger.warningcount.

For scripts using the execute() framework, these counts are reported as progress messages when the script completes.

## Support for csv files
csv file support has been added to core.py with a csvreader() object (using the python csv module). In addition to the basic handling that the csv module provides, the following are supported:
- csvreader.firstline returns the first line of the file, to analyse headers if needed. Iteration still starts with the first line.
- Specifying the number of values expected (with minfields, maxfields, numfields)
- Comments (lines starting with #) are ignored
- Blank lines are also ignored

The csvreader() object is an iterator which returns the next line in the file after validating it against the min, max and num settings, if any, so the script does not have to do such validation. For example:
```
incsv = csvreader(<filespec>)
incsv.minfields = 2
incsv.maxfields = 3
for line in incsv:
    <code>
```
will run `<code>` against each line in the file, skipping comments and blank lines. If any lines don’t have 2 or 3 fields, an error will be reported and the line skipped.

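The validation behaviour can be sketched with a small stand-in class (an illustration of the semantics described above, not the real csvreader()):

```python
import csv, io

class SimpleCsvReader:
    """Stand-in for csvreader(): skips comments and blank lines, and
    checks each row against minfields/maxfields."""
    def __init__(self, fileobj):
        self.fileobj = fileobj
        self.minfields = 0
        self.maxfields = 999
    def __iter__(self):
        for row in csv.reader(self.fileobj):
            if not row or row[0].startswith("#"):
                continue  # skip blank lines and comments
            if not self.minfields <= len(row) <= self.maxfields:
                continue  # real code would report an error here, then skip
            yield row

data = "# a comment\nka,U+1000\n\nkha,U+1001,x,too-many\n"
incsv = SimpleCsvReader(io.StringIO(data))
incsv.minfields = 2
incsv.maxfields = 3
# Only the valid 2-field row survives; the 4-field row is skipped
assert list(incsv) == [["ka", "U+1000"]]
```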
## Parameters
[Parameters.md](parameters.md) contains user, technical and developer’s notes on these.

## Chaining
With ufo.py scripts, core.py has a mechanism for chaining script function calls together to avoid writing a font to disk then reading it in again for the next call. In theory it could be used simply to call another script’s function from within a script.

This has not yet been used in practice, and will be documented (and perhaps debugged!) when there is a need, but there are example scripts to show how it was designed to work.

# pysilfont modules

These notes should be read in conjunction with looking at the comments in the code (and the code itself!).

## core.py

This is the main module that has the code to support:
- Reporting
- Logging
- The execute() function
- Chaining
- csvreader()

## etutil.py

Code to support xml handling, based on xml.etree cElementTree objects. It covers the following:
- ETWriter() - a general-purpose pretty-printer for outputting xml in a normalized form, including
  - Various controls on indenting
  - Inline elements
  - Sorting attributes based on a supplied order
  - Setting decimal precision for specific attributes
  - doctype, comments and commentsafter
- xmlitem() class
  - For reading and writing xml files
  - Keeps a record of the original and final xml strings, so it only needs to write to disk if changed
- ETelement() class
  - For handling an ElementTree element
  - For each tag in the element, ETelement[tag] returns a list of sub-elements with that tag
  - process_attributes() processes the attributes of the element based on a supplied spec
  - process_subelements() processes the subelements of the element based on a supplied spec

xmlitem() and ETelement() are mainly used as parent classes for other classes, eg in ufo.py.

The process functions validate the attributes/subelements against the spec. See code comments for details.

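As a rough illustration of what a normalizing pretty-printer does (generic ElementTree code, far simpler than ETWriter itself):

```python
import xml.etree.ElementTree as ET

def pretty(elem, indent="  ", level=0):
    """Recursively add newlines and indents so the serialized xml comes
    out in a normalized form - a minimal sketch of the idea only."""
    if len(elem):
        elem.text = "\n" + indent * (level + 1)
        for child in elem:
            pretty(child, indent, level + 1)
            child.tail = "\n" + indent * (level + 1)
        elem[-1].tail = "\n" + indent * level
    return elem

root = ET.fromstring("<glyph><advance width='600'/><outline/></glyph>")
lines = ET.tostring(pretty(root), encoding="unicode").splitlines()
assert lines[0] == "<glyph>" and lines[-1] == "</glyph>"
assert lines[1].startswith("  <advance")
```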
#### Immutable containers
|
||||
|
||||
Both xmlitem and ETelement objects are immutable containers, where
|
||||
- object[name] can be used to reference items
|
||||
- the object can be iterated over
|
||||
- object.keys() returns a list of keys in the object
|
||||
|
||||
however, values can't be set with `object[name] = ... `; rather values need to be set using methods within child objects. For example, with a Uglif object, you can refer to the Uadvance object with glif['advance'], but to add a Uadvance object you need to use glif.addObject().
|
||||
|
||||
This is done so that values can be easily referenced and iterated over, but values can only be changed if appropriate methods have been defined.
|
||||
|
||||
Other Pysilfont objects also use such immutable containers.
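The pattern can be sketched in plain Python (a hypothetical illustration of the behaviour described above, not the actual pysilfont classes):

```
# Hypothetical sketch of the read-only container pattern: items can be read,
# iterated over and listed, but direct assignment is deliberately disallowed.
class ImmutableContainer(object):
    def __init__(self):
        self._contents = {}

    def __getitem__(self, key):          # object[name] reads items
        return self._contents[key]

    def __iter__(self):                  # the object can be iterated over
        return iter(self._contents)

    def keys(self):                      # object.keys() lists the keys
        return list(self._contents.keys())

    def __setitem__(self, key, value):   # object[name] = ... is refused
        raise TypeError("set values via the appropriate add/set methods")

    def _additem(self, key, value):      # called by child-class methods, eg addObject()
        self._contents[key] = value

c = ImmutableContainer()
c._additem("advance", "a-Uadvance-object")
print(c["advance"])   # reading works
print(list(c))        # so does iteration; only c["x"] = ... raises
```

Child classes expose controlled mutators (like addObject() above) that call the private helper, so every change goes through a method that can keep the underlying xml consistent.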
## util.py

Module for general utilities. Currently just contains the dirtree code.

#### dirTree

A dirTree() object represents all the directories and files in a directory tree and keeps track of the status of the directories/files in various ways. It was designed for use with ufo.py so that, after changes to the ufo, only files that had been added or changed were written to disk, and files that were no longer part of the ufo were deleted. Could have other uses!

Each dirTreeItem() in the tree has details about the directory or file:

- type
  - "d" or "f" to indicate directory or file
- dirtree
  - For sub-directories, a dirtree() for the sub-directory
- read
  - Item has been read by the script
- added
  - Item has been added to dirtree, so does not exist on disk
- changed
  - Item has been changed, so may need updating on disk
- towrite
  - Item should be written out to disk
- written
  - Item has been written to disk
- fileObject
  - An object representing the file
- fileType
  - The type of the file object
- flags
  - Any other flags a script might need

## ufo.py

See [ufo.md](ufo.md) for details

## ftml.py

To be written

## comp.py

To be written

# Developer's notes

To cover items relevant to extending the library modules or adding new ones.

To be written
17
docs/tests.md
Normal file

@@ -0,0 +1,17 @@
# Test suite

pysilfont has a pytest-based test suite.

## Install the test framework:
```
python3 -m pip install pytest
```

## Set up the folders:
```
python3 tests/setuptestdata.py
```

## Run the test suite:
```
pytest
```
159
docs/ufo.md
Normal file

@@ -0,0 +1,159 @@
# Pysilfont - ufo support technical docs

# The Basics

UFO support is provided by the ufo.py library.

Most scripts work by reading a UFO into a Ufont object, making changes to it and writing it out again. The Ufont object contains many separate objects representing the UFO in a UFO 3 based hierarchy, even if the original UFO was format 2 - see [UFO 2 or UFO 3?](#ufo-2-or-ufo-3-) below.

Details of the [Ufont Object Model](#ufont-object-model) are given below, but in summary:

- There is an object for each file within the UFO (based on [xmlitem](technical.md#etutil.py))
- There is an object for each xml element within a parent object (based on [ETelement](technical.md#etutil.py))
- Data within objects can always(?) be accessed via generic methods based on the xml element tags
- For much data, there are object-specific methods to access data, which are easier to use than the generic methods

For example, a .glif file is read into a Uglif object which has:

- Methods for glyph-level data (eg name, format)
- Objects for all the sub-elements within a glyph (eg advance, outline)
  - Where an element type can only appear once in a glyph (eg advance), Uglif.*element-name* returns the relevant object
  - Where an element can occur multiple times (eg anchor), Uglif.*element-name* returns a list of objects
  - If a sub-element itself has sub-elements, there are usually sub-element objects for those following the same pattern, eg Uglif.outline has lists of Ucontour and Ucomponent objects

It is planned that more object-specific methods will be added as needs arise, so raise an issue if you see specific needs that are likely to be useful in multiple scripts.

### UFO 2 or UFO 3?

The Ufont() object model is UFO 3 based, so UFO 2 format fonts are converted to UFO 3 when read and then converted back to UFO 2 when written out to disk. Unless a script has code that is specific to a particular UFO format, scripts do not need to know the format of the font that was opened; they can just work in the UFO 3 format and leave the conversion to the library.

The main differences this makes in practice are:

- **Layers** The Ufont() always has layers. With UFO 2 fonts there will be only one, and it can be accessed via Ufont.deflayer
- **Anchors** If a UFO 2 font uses the accepted practice of anchors being single-point contours with a name and a type of "move", then:
  - On reading the font, they will be removed from the list of contours and added to the list of anchors
  - On writing the font, they will be added back into the list of contours
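The anchor conversion described above can be sketched with the standard library's ElementTree (a simplified illustration only; pysilfont's actual conversion lives in Uglif and Ucontour):

```
import xml.etree.ElementTree as ET

# A minimal UFO 2 style glif: one single-point "move" contour acting as an
# anchor, plus one real contour.
GLIF = """<glyph name="a" format="1">
  <outline>
    <contour>
      <point x="100" y="200" type="move" name="top"/>
    </contour>
    <contour>
      <point x="0" y="0" type="line"/>
      <point x="300" y="0" type="line"/>
      <point x="300" y="400" type="line"/>
    </contour>
  </outline>
</glyph>"""

def split_anchors(glif_xml):
    """Return (anchors, remaining contour count) for a UFO 2 glif."""
    root = ET.fromstring(glif_xml)
    outline = root.find("outline")
    anchors = []
    for contour in list(outline.findall("contour")):
        points = contour.findall("point")
        # UFO 2 anchor convention: a single named point of type "move"
        if len(points) == 1 and points[0].get("type") == "move" and points[0].get("name"):
            p = points[0]
            anchors.append({"name": p.get("name"), "x": p.get("x"), "y": p.get("y")})
            outline.remove(contour)
    return anchors, len(outline.findall("contour"))

anchors, ncontours = split_anchors(GLIF)
print(anchors)     # [{'name': 'top', 'x': '100', 'y': '200'}]
print(ncontours)   # 1
```

Writing back out is the reverse: each anchor is re-synthesized as a one-point "move" contour.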
Despite being based on UFO 3 (for future-proofing), nearly all use of Pysilfont's UFO scripts has been with UFO 2-based projects, so testing with UFO 3 has been minimal - and there are some [known limitations](docs.md#known-limitations).

# Ufont Object Model

A **Ufont** object represents the font using the following objects:

- font.**metainfo**: [Uplist](#uplist) object created from metainfo.plist
- font.**fontinfo**: [Uplist](#uplist) object created from fontinfo.plist, if present
- font.**groups**: [Uplist](#uplist) object created from groups.plist, if present
- font.**kerning**: [Uplist](#uplist) object created from kerning.plist, if present
- font.**lib**: [Uplist](#uplist) object created from lib.plist, if present
- font.**layercontents**: [Uplist](#uplist) object
  - created from layercontents.plist for UFO 3 fonts
  - synthesized based on the font's single layer for UFO 2 fonts
- font.**layers**: List of [Ulayer](#ulayer) objects, where each layer contains:
  - layer.**contents**: [Uplist](#uplist) object created from contents.plist
  - layer[**_glyph name_**]: A [Uglif](#uglif) object for each glyph in the font, which contains:
    - glif['**advance**']: Uadvance object
    - glif['**unicode**']: List of Uunicode objects
    - glif['**note**']: Unote object (UFO 3 only)
    - glif['**image**']: Uimage object (UFO 3 only)
    - glif['**guideline**']: List of Uguideline objects (UFO 3 only)
    - glif['**anchor**']: List of Uanchor objects
    - glif['**outline**']: Uoutline object, which contains:
      - outline.**contours**: List of Ucontour objects
      - outline.**components**: List of Ucomponent objects
    - glif['**lib**']: Ulib object
- font.**features**: UfeatureFile created from features.fea, if present

## General Notes

Except for UfeatureFile (and Ufont itself), all the above objects are set up as [immutable containers](technical.md#immutable-containers), though the contents, if any, depend on the particular object.

Objects usually have a link back to their parent object, eg glif.layer points to the Ulayer object containing that glif.

## Specific classes

**Note - the sections below don't list all the class details**, so also look at the code in ufo.py if you need something not listed - it might be there!

### Ufont

In addition to the objects listed above, a Ufont object contains:

- self.params: The [parameters](parameters.md) object for the font
- self.paramset: The parameter set within self.params specific to the font
- self.logger: The logger object for the script
- self.ufodir: Text string of the UFO location
- self.UFOversion: from formatVersion in metainfo.plist
- self.dtree: [dirTree](technical.md#dirtree) object representing all the files on disk and their status
- self.outparams: The output parameters for the font, initially set from self.paramset
- self.deflayer:
  - The single layer for UFO 2 fonts
  - The layer called public.default for UFO 3 fonts

When creating a new Ufont() object in a script, it is normal to pass args.paramsobj for params so that it has all the settings for parameters and logging.

self.write(outputdir) will write the UFO to disk. For basic scripts this will usually be done by the execute() function - see [writing scripts](technical.md#writing-scripts).

self.addfile(type) will add an empty entry for any of the optional plist files (fontinfo, groups, kerning or lib).

When writing to disk, the UFO is always normalized, and only changed files will actually be written to disk. The format for normalization, as well as the output UFO version, are controlled by values in self.outparams.

### Uplist

Used to represent any .plist file, as listed above.

For each key,value pair in the file, self[key] contains a list:

- self[key][0] is an elementtree element for the key
- self[key][1] is an elementtree element for the value

self.getval(key) will return:

- the value, if the value type is integer, real or string
- a list, if the value type is array
- a dict, if the value type is dict
- None, for other value types (which would only occur in lib.plist)
- It will throw an exception if the key does not exist
- for dict and array, it will recursively process dict and/or array subelements

Methods are available for adding, changing and deleting values - see class \_plist in ufo.py for details.

self.font points to the parent Ufont object
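A simplified standard-library sketch of the key/value pairing that Uplist manages (illustration only - the element pairing matches the description above, but the function names here are hypothetical):

```
import xml.etree.ElementTree as ET

PLIST = """<plist version="1.0">
  <dict>
    <key>familyName</key><string>Andika</string>
    <key>unitsPerEm</key><integer>2048</integer>
  </dict>
</plist>"""

def load_pairs(plist_xml):
    """Map each key to [key-element, value-element], as Uplist does."""
    top = ET.fromstring(plist_xml).find("dict")
    children = list(top)
    pairs = {}
    # plist dicts alternate <key> and value elements
    for keyelem, valelem in zip(children[0::2], children[1::2]):
        pairs[keyelem.text] = [keyelem, valelem]
    return pairs

def getval(pairs, key):
    """Convert the value element for integer, real and string types."""
    elem = pairs[key][1]   # raises KeyError if the key does not exist
    if elem.tag == "integer": return int(elem.text)
    if elem.tag == "real": return float(elem.text)
    if elem.tag == "string": return elem.text
    return None            # arrays and dicts are omitted from this sketch

pairs = load_pairs(PLIST)
print(getval(pairs, "familyName"))   # Andika
print(getval(pairs, "unitsPerEm"))   # 2048
```

The real getval() additionally recurses into array and dict values, returning Python lists and dicts.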
### Ulayer

Represents a layer in the font. With UFO 2 fonts, a single layer is synthesized from the glyphs folder.

For each glyph, layer[glyphname] returns a Uglif object for the glyph. It has addGlyph and delGlyph functions.

### Uglif

Represents a glyph within a layer. It has child objects, as listed below, and functions self.add and self.remove for adding and removing them. For UFO 2 fonts, any contours identified as anchors will have been removed from Uoutline and added as Uanchor objects.

#### glif child objects

There are 8 child objects for a glif:

| Name       | Notes                         | UFO 2 | Multi |
| ---------- | ----------------------------- | ----- | ----- |
| Uadvance   | Has width & height attributes | Y     |       |
| Uunicode   | Has hex attribute             | Y     | Y     |
| Uoutline   |                               | Y     |       |
| Ulib       |                               | Y     |       |
| Unote      |                               |       |       |
| Uimage     |                               |       |       |
| Uguideline |                               |       | Y     |
| Uanchor    |                               |       | Y     |

They all have separate object classes but currently (except for setting attributes) only Uoutline and Ulib have any extra code - though more will be added in the future.

(With **Uanchor**, the conversion between UFO 3 anchors and the UFO 2 way of handling anchors is handled by code in Uglif and Ucontour.)

**Ulib** shares a parent class (\_plist) with [Uplist](#uplist), so has the same functionality for managing key,value pairs.

#### Uoutline

This has Ucomponent and Ucontour child objects, with addobject, appendobject and insertobject methods for managing them.

With Ucontour, self['point'] returns a list of the point subelements within the contour, and points can be managed using the methods in ETelement. Other than that, changes need to be made by changing the elements using elementtree methods.
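For example, point elements can be inspected and changed with ordinary elementtree methods (a standalone sketch using the standard library, not the pysilfont API):

```
import xml.etree.ElementTree as ET

contour = ET.fromstring(
    '<contour>'
    '<point x="0" y="0" type="line"/>'
    '<point x="300" y="0" type="line"/>'
    '</contour>')

points = contour.findall("point")   # the list Ucontour exposes as self['point']
print(len(points))                  # 2

# Move every point up by 10 units using plain elementtree attribute access
for p in points:
    p.set("y", str(int(p.get("y")) + 10))

print([p.get("y") for p in points])   # ['10', '10']
```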
# Module Developer Notes

To be written
99
examples/FFmapGdlNames.py
Executable file

@@ -0,0 +1,99 @@
#!/usr/bin/env python3
'''Write mapping of graphite names to new graphite names based on:
- two ttf files
- the gdl files produced by makeGdl run against those fonts
  (this could be different versions of makeGdl)
- a csv mapping glyph names used in the original ttf to those in the new font'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2016 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute
import datetime

suffix = "_mapGDLnames2"
argspec = [
    ('ifont1',{'help': 'First ttf font file'}, {'type': 'infont'}),
    ('ifont2',{'help': 'Second ttf font file'}, {'type': 'infont'}),
    ('gdl1',{'help': 'Original make_gdl file'}, {'type': 'infile'}),
    ('gdl2',{'help': 'Updated make_gdl file'}, {'type': 'infile'}),
    ('-m','--mapping',{'help': 'Mapping csv file'}, {'type': 'incsv', 'def': '_map.csv'}),
    ('-o','--output',{'help': 'Output csv file'}, {'type': 'outfile', 'def': suffix+'.csv'}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': suffix+'.log'}),
    ('--nocomments',{'help': 'No comments in output files', 'action': 'store_true', 'default': False},{})]

def doit(args):
    logger = args.paramsobj.logger
    # Check input fonts are ttf
    fontfile1 = args.cmdlineargs[1]
    fontfile2 = args.cmdlineargs[2]

    if fontfile1[-3:] != "ttf" or fontfile2[-3:] != "ttf":
        logger.log("Input fonts need to be ttf files", "S")

    font1 = args.ifont1
    font2 = args.ifont2
    gdlfile1 = args.gdl1
    gdlfile2 = args.gdl2
    mapping = args.mapping
    outfile = args.output

    # Add initial comments to outfile
    if not args.nocomments:
        outfile.write("# " + datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S ") + args.cmdlineargs[0] + "\n")
        outfile.write("# " + " ".join(args.cmdlineargs[1:]) + "\n\n")

    # Process gdl files
    oldgrnames = {}
    for line in gdlfile1:
        # Look for lines of format <grname> = glyphid(nnn)...
        pos = line.find(" = glyphid(")
        if pos == -1: continue
        grname = line[0:pos]
        gid = line[pos+11:line.find(")")]
        oldgrnames[int(gid)] = grname

    newgrnames = {}
    for line in gdlfile2:
        # Look for lines of format <grname> = glyphid(nnn)...
        pos = line.find(" = glyphid(")
        if pos == -1: continue
        grname = line[0:pos]
        gid = line[pos+11:line.find(")")]
        newgrnames[int(gid)] = grname

    # Process mapping file
    SILnames = {}
    mapping.numfields = 2
    for line in mapping: SILnames[line[1]] = line[0]

    # Map SIL name to gids in font 2
    SILtogid2 = {}
    for glyph in font2.glyphs(): SILtogid2[glyph.glyphname] = glyph.originalgid

    # Combine all the mappings via ttf1!
    cnt1 = 0
    cnt2 = 0
    for glyph in font1.glyphs():
        gid1 = glyph.originalgid
        gname1 = glyph.glyphname
        gname2 = SILnames[gname1]
        gid2 = SILtogid2[gname2]
        oldgrname = oldgrnames[gid1] if gid1 in oldgrnames else None
        newgrname = newgrnames[gid2] if gid2 in newgrnames else None
        if oldgrname is None or newgrname is None:
            print(type(gid1), gname1, oldgrname)
            print(gid2, gname2, newgrname)
            cnt2 += 1
            if cnt2 > 10: break
        else:
            outfile.write(oldgrname + "," + newgrname + "\n")
            cnt1 += 1

    print(cnt1, cnt2)

    outfile.close()
    return

execute("FF", doit, argspec)
72
examples/FFmapGdlNames2.py
Executable file

@@ -0,0 +1,72 @@
#!/usr/bin/env python3
'''Write mapping of graphite names to new graphite names based on:
- an original ttf font
- the gdl file produced by makeGdl when the original font was produced
- a csv mapping glyph names used in the original ttf to those in the new font
- pysilfont's gdl library - so assumes pysilfont's makeGdl will be used with the new font'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2016 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute
import silfont.gdl.psnames as ps
import datetime

suffix = "_mapGDLnames"
argspec = [
    ('ifont',{'help': 'Input ttf font file'}, {'type': 'infont'}),
    ('-g','--gdl',{'help': 'Input gdl file'}, {'type': 'infile', 'def': '.gdl'}),
    ('-m','--mapping',{'help': 'Mapping csv file'}, {'type': 'incsv', 'def': '_map.csv'}),
    ('-o','--output',{'help': 'Output csv file'}, {'type': 'outfile', 'def': suffix+'.csv'}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': suffix+'.log'}),
    ('--nocomments',{'help': 'No comments in output files', 'action': 'store_true', 'default': False},{})]

def doit(args):
    logger = args.paramsobj.logger
    # Check input font is a ttf
    fontfile = args.cmdlineargs[1]
    if fontfile[-3:] != "ttf":
        logger.log("Input font needs to be a ttf file", "S")

    font = args.ifont
    gdlfile = args.gdl
    mapping = args.mapping
    outfile = args.output

    # Add initial comments to outfile
    if not args.nocomments:
        outfile.write("# " + datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S ") + args.cmdlineargs[0] + "\n")
        outfile.write("# " + " ".join(args.cmdlineargs[1:]) + "\n\n")

    # Process gdl file
    oldgrnames = {}
    for line in gdlfile:
        # Look for lines of format <grname> = glyphid(nnn)...
        pos = line.find(" = glyphid(")
        if pos == -1: continue
        grname = line[0:pos]
        gid = line[pos+11:line.find(")")]
        oldgrnames[int(gid)] = grname

    # Create map from AGL name to new graphite name
    newgrnames = {}
    mapping.numfields = 2
    for line in mapping:
        AGLname = line[1]
        SILname = line[0]
        grname = ps.Name(SILname).GDL()
        newgrnames[AGLname] = grname

    # Find glyph names in ttf font
    for glyph in font.glyphs():
        gid = glyph.originalgid
        gname = glyph.glyphname
        oldgrname = oldgrnames[gid] if gid in oldgrnames else None
        newgrname = newgrnames[gname] if gname in newgrnames else None
        if oldgrname is None or newgrname is None: continue  # skip glyphs with no mapping
        outfile.write(oldgrname + "," + newgrname + "\n")

    outfile.close()
    return

execute("FF", doit, argspec)
118
examples/FLWriteXml.py
Executable file

@@ -0,0 +1,118 @@
#!/usr/bin/env python3
'''Outputs attachment point information and notes as XML file for TTFBuilder'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'M Hosken'

# user controls

# output entries for all glyphs, even those with nothing interesting to say about them
all_glyphs = 1

# output the glyph id as part of the information
output_gid = 1

# output the glyph notes
output_note = 0

# output UID with "U+" prefix
output_uid_prefix = 0

# print progress indicator
print_progress = 0

# no user serviceable parts under here!
from xml.sax.saxutils import XMLGenerator
import os

def print_glyph(font, glyph, index):
    if print_progress and index % 100 == 0:
        print("%d: %s" % (index, glyph.name))

    if (not all_glyphs and len(glyph.anchors) == 0 and len(glyph.components) == 0 and
            not (glyph.note and output_note)):
        return
    attribs = {}
    if output_gid:
        attribs["GID"] = str(index)
    if glyph.unicode:
        if output_uid_prefix:
            attribs["UID"] = "U+%04X" % glyph.unicode
        else:
            attribs["UID"] = "%04X" % glyph.unicode
    if glyph.name:
        attribs["PSName"] = str(glyph.name)
    xg.startElement("glyph", attribs)

    for anchor in glyph.anchors:
        xg.startElement("point", {"type": str(anchor.name), "mark": str(anchor.mark)})
        xg.startElement("location", {"x": str(anchor.x), "y": str(anchor.y)})
        xg.endElement("location")
        xg.endElement("point")

    for comp in glyph.components:
        g = font.glyphs[comp.index]
        r = g.GetBoundingRect()
        x0 = 0.5 * (r.ll.x * (1 + comp.scale.x) + r.ur.x * (1 - comp.scale.x)) + comp.delta.x
        y0 = 0.5 * (r.ll.y * (1 + comp.scale.y) + r.ur.y * (1 - comp.scale.y)) + comp.delta.y
        x1 = 0.5 * (r.ll.x * (1 - comp.scale.x) + r.ur.x * (1 + comp.scale.x)) + comp.delta.x
        y1 = 0.5 * (r.ll.y * (1 - comp.scale.y) + r.ur.y * (1 + comp.scale.y)) + comp.delta.y

        attribs = {"bbox": "%d, %d, %d, %d" % (x0, y0, x1, y1)}
        attribs["GID"] = str(comp.index)
        if g.unicode:
            if output_uid_prefix:
                attribs["UID"] = "U+%04X" % g.unicode
            else:
                attribs["UID"] = "%04X" % g.unicode
        if g.name:
            attribs["PSName"] = str(g.name)
        xg.startElement("compound", attribs)
        xg.endElement("compound")

    if glyph.mark:
        xg.startElement("property", {"name": "mark", "value": str(glyph.mark)})
        xg.endElement("property")

    if glyph.customdata:
        xg.startElement("customdata", {})
        xg.characters(str(glyph.customdata.strip()))
        xg.endElement("customdata")

    if glyph.note and output_note:
        xg.startElement("note", {})
        xg.characters(glyph.note)
        xg.endElement("note")
    xg.endElement("glyph")

outname = fl.font.file_name.replace(".vfb", "_tmp.xml")
fh = open(outname, "w")
xg = XMLGenerator(fh, "utf-8")
xg.startDocument()

# fl.font.full_name is needed to get the name as it appears to Windows
# fl.font.font_name seems to be the PS name. This messes up GenTest.pl when it generates WPFeatures.wpx
xg.startElement("font", {"name": str(fl.font.full_name), "upem": str(fl.font.upm)})
for i in range(0, len(fl.font.glyphs)):
    print_glyph(fl.font, fl.font.glyphs[i], i)
xg.endElement("font")

xg.endDocument()
fh.close()

# somehow this enables UNC naming (\\Gutenberg vs i:) to work when Saxon is called with popen;
# without this, if outname is UNC-based, then drive letters and UNC volumes are invisible;
# if outname is drive-letter-based, then drive letters and UNC volumes are already visible
if outname[0:2] == r'\\':
    os.chdir("c:")
tidy = "tidy -i -xml -n -wrap 0 --char-encoding utf8 --indent-spaces 4 --quote-nbsp no --tab-size 4 -m %s"
saxon = "saxon %s %s" % ('"' + outname + '"', r'"C:\Roman Font\rfs_font\10 Misc Utils\glyph_norm.xsl"')  # handle spaces in file name
f = os.popen(saxon, "r")
g = open(outname.replace("_tmp.xml", ".xml"), "w")
output = f.read()
g.write(output)
f.close()
g.close()

print("Done")
24
examples/FTMLnorm.py
Normal file

@@ -0,0 +1,24 @@
#!/usr/bin/env python3
'Normalize an FTML file'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2016 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute
import silfont.ftml as ftml
from xml.etree import ElementTree as ET

argspec = [
    ('infile',{'help': 'Input ftml file'}, {'type': 'infile'}),
    ('outfile',{'help': 'Output ftml file', 'nargs': '?'}, {'type': 'outfile', 'def': '_new.xml'}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': '_ftmltest.log'})
    ]

def doit(args):
    f = ftml.Fxml(args.infile)
    f.save(args.outfile)

def cmd(): execute("", doit, argspec)
if __name__ == "__main__": cmd()
49
examples/FTaddEmptyOT.py
Normal file

@@ -0,0 +1,49 @@
#!/usr/bin/env python3
'Add empty OpenType tables to ttf font'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2014 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Martin Hosken'

from silfont.core import execute
from fontTools import ttLib
from fontTools.ttLib.tables import otTables

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'outfont'}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': '_conv.log'}),
    ('-s','--script',{'help': 'Script tag to generate [DFLT]', 'default': 'DFLT', }, {}),
    ('-t','--type',{'help': 'Table to create: gpos, gsub, [both]', 'default': 'both', }, {}) ]

def doit(args):
    font = args.ifont
    args.type = args.type.upper()

    for tag in ('GSUB', 'GPOS'):
        if tag == args.type or args.type == 'BOTH':
            table = ttLib.getTableClass(tag)()
            t = getattr(otTables, tag)()
            t.Version = 1.0
            t.ScriptList = otTables.ScriptList()
            t.ScriptList.ScriptRecord = []
            t.FeatureList = otTables.FeatureList()
            t.FeatureList.FeatureRecord = []
            t.LookupList = otTables.LookupList()
            t.LookupList.Lookup = []
            srec = otTables.ScriptRecord()
            srec.ScriptTag = args.script
            srec.Script = otTables.Script()
            srec.Script.DefaultLangSys = None
            srec.Script.LangSysRecord = []
            t.ScriptList.ScriptRecord.append(srec)
            t.ScriptList.ScriptCount = 1
            t.FeatureList.FeatureCount = 0
            t.LookupList.LookupCount = 0
            table.table = t
            font[tag] = table

    return font

def cmd(): execute("FT", doit, argspec)
if __name__ == "__main__": cmd()
34
examples/accesslibplist.py
Normal file

@@ -0,0 +1,34 @@
#!/usr/bin/env python3
'Demo script for accessing fields in lib.plist'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2017 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute

argspec = [
    ('ifont', {'help': 'Input font file'}, {'type': 'infont'}),
    ('field', {'help': 'Field to access'}, {})]

def doit(args):
    font = args.ifont
    field = args.field
    lib = font.lib

    if field in lib:
        val = lib.getval(field)
        print()
        print(val)
        print()
        print("Field " + field + " is type " + lib[field][1].tag + " in xml")
        print("The retrieved value is " + str(type(val)) + " in Python")
    else:
        print("Field not in lib.plist")

    return

def cmd(): execute("UFO", doit, argspec)
if __name__ == "__main__": cmd()
43
examples/chaindemo.py
Normal file

@@ -0,0 +1,43 @@
#!/usr/bin/env python3
'''Demo of how to chain calls to multiple scripts together.
Running
    python chaindemo.py infont outfont --featfile feat.csv --uidsfile uids.csv
will run execute() against psfnormalize, psfsetassocfeat and psfsetassocuids, passing the font, parameters
and logger objects from one call to the next. So:
- the font is only opened once and written once
- there is a single log file produced
'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2017 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute, chain
import silfont.scripts.psfnormalize as psfnormalize
import silfont.scripts.psfsetassocfeat as psfsetassocfeat
import silfont.scripts.psfsetassocuids as psfsetassocuids

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'outfont'}),
    ('--featfile',{'help': 'Associate features csv'}, {'type': 'filename'}),
    ('--uidsfile', {'help': 'Associate uids csv'}, {'type': 'filename'}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': '_chain.log'})]

def doit(args):

    argv = ['psfnormalize', 'dummy']  # 'dummy' replaces the input font, since the font object is passed instead. Other parameters could be added.
    font = chain(argv, psfnormalize.doit, psfnormalize.argspec, args.ifont, args.paramsobj, args.logger, args.quiet)

    argv = ['psfsetassocfeat', 'dummy', '-i', args.featfile]
    font = chain(argv, psfsetassocfeat.doit, psfsetassocfeat.argspec, font, args.paramsobj, args.logger, args.quiet)

    argv = ['psfsetassocuids', 'dummy', '-i', args.uidsfile]
    font = chain(argv, psfsetassocuids.doit, psfsetassocuids.argspec, font, args.paramsobj, args.logger, args.quiet)

    return font

def cmd(): execute("UFO", doit, argspec)

if __name__ == "__main__": cmd()
17
examples/fbonecheck.py
Normal file
17
examples/fbonecheck.py
Normal file
|
@ -0,0 +1,17 @@
|
|||
#!/usr/bin/env python3
'''Example profile for use with psfrunfbchecks that will just run one or more specified checks'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2022 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.fbtests.ttfchecks import psfcheck_list, make_profile, check, PASS, FAIL

# Exclude all checks bar those listed
for check in psfcheck_list:
    if check not in ["org.sil/check/whitespace_widths"]:
        psfcheck_list[check] = {'exclude': True}

# Create the fontbakery profile
profile = make_profile(psfcheck_list, variable_font = False)
65
examples/fbttfchecks.py
Normal file

@@ -0,0 +1,65 @@
#!/usr/bin/env python3
'''Example for making project-specific changes to the standard pysilfont set of Font Bakery ttf checks.
It will start with all the checks normally run by pysilfont's ttfchecks profile, then modify as described below'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2020 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.fbtests.ttfchecks import psfcheck_list, make_profile, check, PASS, FAIL

#
# General settings
#
psfvariable_font = False  # Set to True for variable fonts, so different checks will be run

#
# psfcheck_list is a dictionary of all standard Fontbakery checks, with a dictionary for each check indicating
# pysilfont's standard processing of that check
#
# Specifically:
# - If the dictionary has "exclude" set to True, that check will be excluded from the profile
# - If change_status is set, the status values reported by psfrunfbchecks will be changed based on its values
# - If a change in status is temporary - eg just until something is fixed - use temp_change_status instead
#
# Projects can edit this dictionary to change behaviour from Pysilfont defaults. See examples below

# To reinstate the copyright check (which is normally excluded):
psfcheck_list["com.google.fonts/check/metadata/copyright"]["exclude"] = False

# To prevent the hinting_impact check from running:
psfcheck_list["com.google.fonts/check/hinting_impact"]["exclude"] = True

# To change a FAIL status for com.google.fonts/check/whitespace_glyphnames to WARN:
psfcheck_list["com.google.fonts/check/whitespace_glyphnames"]["temp_change_status"] = {
    "FAIL": "WARN", "reason": "This font currently uses non-standard names"}

#
# Create the fontbakery profile
#
profile = make_profile(psfcheck_list, variable_font = psfvariable_font)
|
||||
|
||||
# Add any project-specific tests (This dummy test should normally be commented out!)
|
||||
|
||||
@profile.register_check
|
||||
@check(
|
||||
id = 'org.sil/dummy',
|
||||
rationale = """
|
||||
There is no reason for this test!
|
||||
"""
|
||||
)
|
||||
def org_sil_dummy():
|
||||
"""Dummy test that always fails"""
|
||||
if True: yield FAIL, "Oops!"
|
||||
|
||||
'''
|
||||
Run this using
|
||||
|
||||
$ psfrunfbchecks --profile fbttfchecks.py <ttf file(s) to check> ...
|
||||
|
||||
It can also be used with fontbakery directly if you want to use options that psfrunfbchecks does not support, however
|
||||
status changes will not be actioned.
|
||||
|
||||
$ fontbakery check-profile fbttfchecks.py <ttf file(s) to check> ...
|
||||
|
||||
'''
|
65
examples/ffchangeglyphnames.py
Executable file

@@ -0,0 +1,65 @@

#!/usr/bin/env python3
from __future__ import unicode_literals
'''Update glyph names in a font based on csv file
- Using FontForge rather than UFOlib so it can work with ttf (or sfd) files'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2016 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute

''' This will need updating, since FontForge is no longer supported as a tool by execute() So:
- ifont and ofont will need to be changed to have type 'filename'
- ifont will then need to be opened using fontforge.open
- The font will need to be saved with font.save
- execute will need to be called with the tool set to None instead of "FF"
'''

argspec = [
    ('ifont',{'help': 'Input ttf font file'}, {'type': 'infont'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'outfont'}),
    ('-i','--input',{'help': 'Mapping csv file'}, {'type': 'incsv', 'def': 'psnames.csv'}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': '_setPostNames.log'}),
    ('--reverse',{'help': 'Change names in reverse', 'action': 'store_true', 'default': False},{})]

def doit(args) :
    logger = args.paramsobj.logger

    font = args.ifont

    # Process csv
    csv = args.input
    csv.numfields = 2
    newnames = {}
    namescheck = []
    missingnames = False
    for line in csv :
        if args.reverse :
            newnames[line[1]] = line[0]
            namescheck.append(line[1])
        else :
            newnames[line[0]] = line[1]
            namescheck.append(line[0])

    for glyph in font.glyphs():
        gname = glyph.glyphname
        if gname in newnames :
            namescheck.remove(gname)
            glyph.glyphname = newnames[gname]
        else:
            missingnames = True
            logger.log(gname + " in font but not csv file","W")

    if missingnames : logger.log("Font glyph names missing from csv - see log for details","E")

    for name in namescheck : # Any names left in namescheck were in csv but not ttf
        logger.log(name + " in csv but not in font","W")

    if namescheck != [] : logger.log("csv file names missing from font - see log for details","E")

    return font

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
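The csv mapping logic above (forward or reverse mapping, plus tracking names present on only one side) can be sketched independently of FontForge. The helper names and sample data below are hypothetical, not part of pysilfont:

```python
# Pure-Python sketch of the rename bookkeeping above (hypothetical helpers,
# no FontForge needed): build the mapping, rename, and track names that
# appear on only one side.

def build_renames(rows, reverse=False):
    newnames = {}
    for old, new in rows:
        if reverse:
            newnames[new] = old
        else:
            newnames[old] = new
    return newnames

def apply_renames(fontnames, newnames):
    renamed = []
    infont_only = []             # in font but not in csv
    remaining = set(newnames)    # csv names not yet seen in the font
    for gname in fontnames:
        if gname in newnames:
            renamed.append(newnames[gname])
            remaining.discard(gname)
        else:
            infont_only.append(gname)
            renamed.append(gname)
    return renamed, infont_only, sorted(remaining)

rows = [("uni0041", "A"), ("uni0042", "B")]
renamed, infont_only, incsv_only = apply_renames(
    ["uni0041", "extra"], build_renames(rows))
```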
161
examples/ffcopyglyphs.py
Normal file

@@ -0,0 +1,161 @@

#!/usr/bin/env python3
'''FontForge: Copy glyphs from one font to another, without using ffbuilder'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015-2019 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Martin Hosken'

from silfont.core import execute
import psMat
import io

''' This will need updating, since FontForge is no longer supported as a tool by execute() So:
- ifont and ofont will need to be changed to have type 'filename'
- ifont will then need to be opened using fontforge.open
- The font will need to be saved with font.save
- execute will need to be called with the tool set to None instead of "FF"
'''

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'outfont', 'def': 'new'}),
    ('-i','--input',{'help': 'Font to get glyphs from', 'required' : True}, {'type': 'infont'}),
    ('-r','--range',{'help': 'StartUnicode..EndUnicode no spaces, e.g. 20..7E', 'action' : 'append'}, {}),
    ('--rangefile',{'help': 'File with USVs e.g. 20 or a range e.g. 20..7E or both', 'action' : 'append'}, {}),
    ('-n','--name',{'help': 'Include glyph named name', 'action' : 'append'}, {}),
    ('--namefile',{'help': 'File with glyph names', 'action' : 'append'}, {}),
    ('-a','--anchors',{'help' : 'Copy across anchor points', 'action' : 'store_true'}, {}),
    ('-f','--force',{'help' : 'Overwrite existing glyphs in the font', 'action' : 'store_true'}, {}),
    ('-s','--scale',{'type' : float, 'help' : 'Scale glyphs by this factor'}, {})
]

def copyglyph(font, infont, g, u, args) :
    extras = set()
    if args.scale is None :
        scale = psMat.identity()
    else :
        scale = psMat.scale(args.scale)
    o = font.findEncodingSlot(u)
    if o == -1 :
        glyph = font.createChar(u, g.glyphname)
    else :
        glyph = font[o]
    if len(g.references) == 0 :
        font.selection.select(glyph)
        pen = glyph.glyphPen()
        g.draw(pen)
        glyph.transform(scale)
    else :
        for r in g.references :
            t = psMat.compose(r[1], scale)
            newt = psMat.compose(psMat.identity(), psMat.translate(t[4], t[5]))
            glyph.addReference(r[0], newt)
            extras.add(r[0])
    glyph.width = g.width * scale[0]
    if args.anchors :
        for a in g.anchorPoints :
            try :
                l = font.getSubtableOfAnchor(a[1])
            except EnvironmentError :
                font.addAnchorClass("", a[0]*scale[0], a[1]*scale[3])
        glyph.anchorPoints = g.anchorPoints
    return list(extras)

def doit(args) :
    font = args.ifont
    infont = args.input
    font.encoding = "Original"
    infont.encoding = "Original" # compact the font so findEncodingSlot will work
    infont.layers["Fore"].is_quadratic = font.layers["Fore"].is_quadratic

    # list of glyphs to copy
    glist = list()

    # glyphs specified on the command line
    for n in args.name or [] :
        glist.append(n)

    # glyphs specified in a file
    for filename in args.namefile or [] :
        namefile = io.open(filename, 'r')
        for line in namefile :
            # ignore comments
            line = line.partition('#')[0]
            line = line.strip()

            # ignore blank lines
            if (line == ''):
                continue

            glist.append(line)

    # copy glyphs by name
    reportErrors = True
    while len(glist) :
        tglist = glist[:]
        glist = []
        for n in tglist:
            if n in font and not args.force :
                if reportErrors :
                    print("Glyph {} already present. Skipping".format(n))
                continue
            if n not in infont :
                print("Can't find glyph {}".format(n))
                continue
            g = infont[n]
            glist.extend(copyglyph(font, infont, g, -1, args))
        reportErrors = False

    # list of characters to copy
    ulist = list()

    # characters specified on the command line
    for r in args.range or [] :
        (rstart, rend) = [int(x, 16) for x in r.split('..')]
        for u in range(rstart, rend + 1) :
            ulist.append(u)

    # characters specified in a file
    for filename in args.rangefile or [] :
        rangefile = io.open(filename, 'r')
        for line in rangefile :
            # ignore comments
            line = line.partition('#')[0]
            line = line.strip()

            # ignore blank lines
            if (line == ''):
                continue

            # obtain USVs
            try:
                (rstart, rend) = line.split('..')
            except ValueError:
                rstart = line
                rend = line

            rstart = int(rstart, 16)
            rend = int(rend, 16)

            for u in range(rstart, rend + 1):
                ulist.append(u)

    # copy the characters from the generated list
    for u in ulist:
        o = font.findEncodingSlot(u)
        if o != -1 and not args.force :
            print("Glyph for {:x} already present. Skipping".format(u))
            continue
        e = infont.findEncodingSlot(u)
        if e == -1 :
            print("Can't find glyph for {:04x}".format(u))
            continue
        g = infont[e]
        copyglyph(font, infont, g, u, args)

    return font

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
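The rangefile parsing above accepts a bare hex USV (`20`) or a range (`20..7E`), skipping `#` comments and blank lines. That logic can be sketched as a standalone helper (the function name is hypothetical):

```python
# Standalone sketch of the USV range parsing above: each non-blank,
# non-comment line is either a single hex USV ("20") or a range ("20..7E").

def parse_usv_lines(lines):
    ulist = []
    for line in lines:
        line = line.partition('#')[0].strip()   # strip comments and whitespace
        if line == '':
            continue                            # ignore blank lines
        try:
            rstart, rend = line.split('..')
        except ValueError:                      # a single USV, not a range
            rstart = rend = line
        ulist.extend(range(int(rstart, 16), int(rend, 16) + 1))
    return ulist

usvs = parse_usv_lines(["20..22", "# comment", "", "7E"])
```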
31
examples/ffremovealloverlaps.py
Executable file

@@ -0,0 +1,31 @@

#!/usr/bin/env python3
from __future__ import unicode_literals
'FontForge: Remove overlap on all glyphs in font'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Victor Gaultney'

from silfont.core import execute

''' This will need updating, since FontForge is no longer supported as a tool by execute() So:
- ifont and ofont will need to be changed to have type 'filename'
- ifont will then need to be opened using fontforge.open
- The font will need to be saved with font.save
- execute will need to be called with the tool set to None instead of "FF"
'''

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'outfont', 'def': 'new'})]

def doit(args) :
    font = args.ifont
    for glyph in font:
        font[glyph].removeOverlap()
    return font

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
31
examples/fontforge-old/FFaddPUA.py
Executable file

@@ -0,0 +1,31 @@

#!/usr/bin/env python3
'''FontForge: Add cmap entries for all glyphs in the font'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2016 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Martin Hosken'

from silfont.core import execute

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'outfont', 'def': 'new'})
]

def nextpua(p) :
    if p == 0 : return 0xE000
    if p == 0xF8FF : return 0xF0000
    return p + 1

def doit(args) :
    p = nextpua(0)
    font = args.ifont
    for n in font :
        g = font[n]
        if g.unicode == -1 :
            g.unicode = p
            p = nextpua(p)
    return font

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
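The PUA allocation above starts at U+E000 and, once the BMP private use area ends at U+F8FF, jumps to the plane-15 PUA at U+F0000. The helper is pure and can be exercised on its own:

```python
# The allocator above, reproduced standalone: successive calls walk the BMP
# PUA (U+E000..U+F8FF) and then jump to the plane-15 PUA at U+F0000.

def nextpua(p):
    if p == 0: return 0xE000        # first allocation
    if p == 0xF8FF: return 0xF0000  # BMP PUA exhausted: jump to plane 15
    return p + 1

first = nextpua(0)
after_bmp = nextpua(0xF8FF)
```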
62
examples/fontforge-old/FFcheckDupUSV.py
Executable file

@@ -0,0 +1,62 @@

#!/usr/bin/env python3
'FontForge: Check for duplicate USVs in unicode or altuni fields'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('-o','--output',{'help': 'Output text file'}, {'type': 'outfile', 'def': 'DupUSV.txt'})]

def doit(args) :
    font = args.ifont
    outf = args.output

    # Process unicode and altuni for all glyphs
    usvs = {}
    for glyph in font:
        g = font[glyph]
        if g.unicode != -1:
            usv = UniStr(g.unicode)
            AddUSV(usvs, usv, glyph)
        # Check any alternate usvs
        altuni = g.altuni
        if altuni is not None:
            for au in altuni:
                usv = UniStr(au[0]) # (may need to check variant flag)
                AddUSV(usvs, usv, glyph + ' (alt)')

    items = [item for item in usvs.items() if len(item[1]) > 1]
    items.sort()

    for i in items:
        usv = i[0]
        print(usv + ' has duplicates')
        gl = i[1]
        glyphs = gl[0]
        for j in range(1, len(gl)):
            glyphs = glyphs + ', ' + gl[j]

        outf.write('%s: %s\n' % (usv, glyphs))

    outf.close()
    print("Done!")

def UniStr(u):
    if u:
        return "U+{0:04X}".format(u)
    else:
        return "No USV" # length same as above

def AddUSV(usvs, usv, glyph):
    if usv not in usvs:
        usvs[usv] = [glyph]
    else:
        usvs[usv].append(glyph)

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
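The duplicate detection above reduces to grouping glyph names by USV string and keeping the groups claimed more than once. A standalone sketch with hypothetical data:

```python
# Standalone sketch of the duplicate-USV grouping above: map each USV string
# to the glyphs that claim it, then keep only the USVs claimed more than once.

def find_duplicates(pairs):
    usvs = {}
    for usv, glyph in pairs:
        usvs.setdefault(usv, []).append(glyph)
    return sorted((u, g) for u, g in usvs.items() if len(g) > 1)

dups = find_duplicates([
    ("U+0041", "A"),
    ("U+0042", "B"),
    ("U+0041", "A.alt (alt)"),   # same USV claimed twice
])
```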
51
examples/fontforge-old/FFcolourGlyphs.py
Executable file

@@ -0,0 +1,51 @@

#!/usr/bin/env python3
'Set Glyph colours based on a csv file - format glyphname,colour'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'outfont', 'def': 'new'}),
    ('-i','--input',{'help': 'Input csv file'}, {'type': 'infile', 'def': 'colourGlyphs.csv'}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': 'colourGlyphs.log'})]

def doit(args) :
    font = args.ifont
    inpf = args.input
    logf = args.log
    # define colours
    colours = {
        'black'  :0x000000,
        'red'    :0xFF0000,
        'green'  :0x00FF00,
        'blue'   :0x0000FF,
        'cyan'   :0x00FFFF,
        'magenta':0xFF00FF,
        'yellow' :0xFFFF00,
        'white'  :0xFFFFFF }

    # Change colour of glyphs
    for line in inpf.readlines() :
        glyphn, colour = line.strip().split(",") # will raise an exception if not 2 elements
        colour = colour.lower()
        if glyphn[0] in '"\'' : glyphn = glyphn[1:-1] # slice off quote marks, if present
        if glyphn not in font:
            logf.write("Glyph %s not in font\n" % (glyphn))
            print("Glyph %s not in font" % (glyphn))
            continue
        g = font[glyphn]
        if colour in colours:
            g.color = colours[colour]
        else:
            logf.write("Glyph: %s - non-standard colour %s\n" % (glyphn,colour))
            print("Glyph: %s - non-standard colour %s" % (glyphn,colour))

    logf.close()
    return font

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
44
examples/fontforge-old/FFcompareFonts.py
Executable file

@@ -0,0 +1,44 @@

#!/usr/bin/env python3
'Compare two fonts based on specified criteria and report differences'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('ifont2',{'help': 'Input font file 2'}, {'type': 'infont', 'def': 'new'}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': 'compareFonts.log'}),
    ('-o','--options',{'help': 'Options', 'choices': ['c'], 'nargs': '*'}, {})
]

def doit(args) :
    font1 = args.ifont
    font2 = args.ifont2
    logf = args.log
    options = args.options
    logf.write("Comparing fonts: \n %s (%s)\n %s (%s)\n" % (font1.path,font1.fontname,font2.path,font2.fontname))
    if options != None : logf.write('with options: %s\n' % (options))
    logf.write("\n")
    compare(font1,font2,logf,options)
    compare(font2,font1,logf,None) # Compare again the other way around, just looking for missing glyphs
    logf.close()
    return

def compare(fonta,fontb,logf,options) :
    for glyph in fonta :
        if glyph in fontb :
            if options != None : # Do extra checks based on options supplied
                ga = fonta[glyph]
                gb = fontb[glyph]
                for opt in options :
                    if opt == "c" :
                        if len(ga.references) != len(gb.references) :
                            logf.write("Glyph %s: number of components is different - %s v %s\n" % (glyph,len(ga.references),len(gb.references)))
        else :
            logf.write("Glyph %s missing from %s\n" % (glyph,fontb.path))

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
48
examples/fontforge-old/FFdblEncode.py
Executable file

@@ -0,0 +1,48 @@

#!/usr/bin/env python3
'''FontForge: Double encode glyphs based on double encoding data in a file
Lines in file should look like: "LtnSmARetrHook",U+F236,U+1D8F'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'outfont', 'def': 'new'}),
    ('-i','--input',{'help': 'Input csv text file'}, {'type': 'infile', 'def': 'DblEnc.txt'}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': 'DblEnc.log'})]

def doit(args) :
    font = args.ifont
    inpf = args.input
    logf = args.log
    # Create dbl_encode list from the input file
    dbl_encode = {}
    for line in inpf.readlines() :
        glyphn, pua_usv_str, std_usv_str = line.strip().split(",") # will exception if not 3 elements
        if glyphn[0] in '"\'' : glyphn = glyphn[1:-1] # slice off quote marks, if present
        pua_usv, std_usv = int(pua_usv_str[2:], 16), int(std_usv_str[2:], 16)
        dbl_encode[glyphn] = [std_usv, pua_usv]
    inpf.close()

    for glyph in sorted(dbl_encode.keys()) :
        if glyph not in font:
            logf.write("Glyph %s not in font\n" % (glyph))
            continue
        g = font[glyph]
        ousvs = [g.unicode]
        oalt = g.altuni
        if oalt != None:
            for au in oalt:
                ousvs.append(au[0]) # (may need to check variant flag)
        dbl = dbl_encode[glyph]
        g.unicode = dbl[0]
        g.altuni = ((dbl[1],),)
        logf.write("encoding for %s changed: %s -> %s\n" % (glyph, ousvs, dbl))
    logf.close()
    return font

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
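The input lines above (`"LtnSmARetrHook",U+F236,U+1D8F`) are split into a glyph name plus two USVs; that parsing can be sketched and checked on its own (the function name is hypothetical):

```python
# Standalone sketch of the input parsing above: each line is
#   "GlyphName",U+PUAUSV,U+StdUSV
# and the result maps the glyph name to [std_usv, pua_usv].

def parse_dbl_enc(lines):
    dbl_encode = {}
    for line in lines:
        glyphn, pua_usv_str, std_usv_str = line.strip().split(",")
        if glyphn[0] in '"\'':
            glyphn = glyphn[1:-1]          # slice off quote marks, if present
        pua_usv = int(pua_usv_str[2:], 16) # drop the "U+" prefix
        std_usv = int(std_usv_str[2:], 16)
        dbl_encode[glyphn] = [std_usv, pua_usv]
    return dbl_encode

mapping = parse_dbl_enc(['"LtnSmARetrHook",U+F236,U+1D8F'])
```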
74
examples/fontforge-old/FFfromAP.py
Executable file

@@ -0,0 +1,74 @@

#!/usr/bin/env python3
'''Import Attachment Point database into a fontforge font'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Martin Hosken'

from silfont.core import execute

argspec = [
    ('ifont', {'help': 'Input font file'}, {'type': 'infont'}),
    ('ofont', {'help': 'Output font file'}, {'type': 'outfont'}),
    ('-a','--ap', {'nargs' : 1, 'help': 'Input AP database (required)'}, {})
]

def assign(varlist, expr) :
    """passes a variable to be assigned as a list and returns the value"""
    varlist[0] = expr
    return expr

def getuidenc(e, f) :
    if 'UID' in e.attrib :
        u = int(e.get('UID'), 16)
        return f.findEncodingSlot(u)
    else :
        return -1

def getgid(e, f) :
    if 'GID' in e.attrib :
        return int(e.get('GID'))
    else :
        return -1

def doit(args) :
    from xml.etree.ElementTree import parse

    f = args.ifont
    g = None
    etree = parse(args.ap)
    u = []
    for e in etree.getroot().iterfind("glyph") :
        name = e.get('PSName')
        if name in f :
            g = f[name]
        elif assign(u, getuidenc(e, f)) != -1 :
            g = f[u[0]]
        elif assign(u, getgid(e, f)) != -1 :
            g = f[u[0]]
        elif g is not None : # assume a rename so just take next glyph
            g = f[g.encoding + 1]
        else :
            g = f[0]
        g.name = name
        g.anchorPoints = ()
        for p in e.iterfind('point') :
            pname = p.get('type')
            l = p[0]
            x = int(l.get('x'))
            y = int(l.get('y'))
            if pname.startswith('_') :
                ptype = 'mark'
                pname = pname[1:]
            else :
                ptype = 'base'
            g.addAnchorPoint(pname, ptype, float(x), float(y))
        comment = []
        for p in e.iterfind('property') :
            comment.append("{}: {}".format(p.get('name'), p.get('value')))
        for p in e.iterfind('note') :
            comment.append(p.text.strip())
        g.comment = "\n".join(comment)
    return f

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
38
examples/fontforge-old/FFlistAPNum.py
Executable file

@@ -0,0 +1,38 @@

#!/usr/bin/env python3
'FontForge: Report Glyph name, number of anchors - sorted by number of anchors'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('-o','--output',{'help': 'Output text file'}, {'type': 'outfile', 'def': 'APnum.txt'})]

def doit(args) :
    font = args.ifont
    outf = args.output

    # Make a list of glyphs and number of anchor points
    AP_lst = []
    for glyph in font:
        AP_lst.append( [glyph, len(font[glyph].anchorPoints)] )
    # Sort first by number of attachment points, then by glyph name
    AP_lst.sort(key=lambda ap: (ap[1], ap[0]))
    for AP in AP_lst:
        outf.write("%s,%s\n" % (AP[0], AP[1]))

    outf.close()
    print("done")

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
22
examples/fontforge-old/FFlistGlyphNames.py
Executable file

@@ -0,0 +1,22 @@

#!/usr/bin/env python3
'FontForge: List all glyphs with encoding and name'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('-o','--output',{'help': 'Output text file'}, {'type': 'outfile', 'def': 'Gnames.txt'})]

def doit(args) :
    outf = args.output
    for glyph in args.ifont:
        g = args.ifont[glyph]
        outf.write('%s: %s, %s\n' % (glyph, g.encoding, g.glyphname))
    outf.close()

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
61
examples/fontforge-old/FFlistGlyphinfo.py
Executable file

@@ -0,0 +1,61 @@

#!/usr/bin/env python3
'FontForge: List all the data in a glyph object in key, value pairs'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

import fontforge, types, sys
from silfont.core import execute

argspec = [
    ('font',{'help': 'Input font file'}, {'type': 'infont'}),
    ('-o','--output',{'help': 'Output text file'}, {'type': 'outfile', 'def': 'glyphinfo.txt'})]

def doit(args) :
    font = args.font
    outf = args.output

    glyphn = input("Glyph name or number: ")

    while glyphn:

        isglyph = True
        if not(glyphn in font):
            try:
                glyphn = int(glyphn)
            except ValueError:
                isglyph = False
            else:
                if not(glyphn in font):
                    isglyph = False

        if isglyph:
            g = font[glyphn]
            outf.write("\n%s\n\n" % glyphn)
            # Write to file all normal key,value pairs - exclude __ and built-in functions
            for k in dir(g):
                if k[0:2] == "__": continue
                attrk = getattr(g,k)
                if attrk is None: continue
                tk = type(attrk)
                if tk == types.BuiltinFunctionType: continue
                if k == "ttinstrs": # ttinstr values are not printable characters
                    outf.write("%s,%s\n" % (k,"<has values>"))
                else:
                    outf.write("%s,%s\n" % (k,attrk))
            # Write out all normal keys where the value is None
            for k in dir(g):
                attrk = getattr(g,k)
                if attrk is None:
                    outf.write("%s,%s\n" % (k,attrk))
        else:
            print("Invalid glyph")

        glyphn = input("Glyph name or number: ")
    print("done")
    outf.close()

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
33
examples/fontforge-old/FFlistRefNum.py
Executable file

@@ -0,0 +1,33 @@

#!/usr/bin/env python3
'FontForge: Report Glyph name, Number of references (components)'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('-o','--output',{'help': 'Output text file'}, {'type': 'outfile', 'def': 'RefNum.txt'})]

def doit(args) :
    font = args.ifont
    outf = args.output

    outf.write("# glyphs with number of components\n\n")
    for glyph in font:
        gname = font[glyph].glyphname
        ref = font[glyph].references
        if ref is None:
            n = 0
        else:
            n = len(ref)
        outf.write("%s %i\n" % (gname,n))

    outf.close()

    print("Done!")

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
38
examples/fontforge-old/FFnameSearchNReplace.py
Executable file
38
examples/fontforge-old/FFnameSearchNReplace.py
Executable file
|
@ -0,0 +1,38 @@
#!/usr/bin/env python3
'Search and replace strings in Glyph names. Strings can be regular expressions'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute
import re

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'outfont', 'def': 'new'}),
    ('search',{'help': 'Expression to search for'}, {}),
    ('replace',{'help': 'Expression to replace with'}, {}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': 'searchNReplace.log'})]

def doit(args) :
    font=args.ifont
    search=args.search
    replace=args.replace
    logf = args.log

    changes=False
    for glyph in font :
        newname = re.sub(search, replace, glyph)
        if newname != glyph :
            font[glyph].glyphname=newname
            changes=True
            logf.write('Glyph %s renamed to %s\n' % (glyph,newname))
    logf.close()
    if changes :
        return font
    else :
        return

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
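The renaming loop above can be exercised outside FontForge; a minimal sketch of the same re.sub logic (the glyph names and the pattern here are made up for illustration):

```python
import re

# Hypothetical glyph names; FFnameSearchNReplace.py iterates them from the font.
names = ["LtnCapA", "LtnSmA", "GrkCapAlpha"]
search, replace = r"^Ltn", "Latin"

renames = {}
for name in names:
    newname = re.sub(search, replace, name)
    if newname != name:          # only record genuine renames
        renames[name] = newname

print(renames)  # {'LtnCapA': 'LatinCapA', 'LtnSmA': 'LatinSmA'}
```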
50
examples/fontforge-old/FFundblEncode.py
Executable file
|
@ -0,0 +1,50 @@
#!/usr/bin/env python3
'''FontForge: Re-encode double-encoded glyphs based on double encoding data in a file
Lines in file should look like: "LtnSmARetrHook",U+F236,U+1D8F'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute

argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'outfont', 'def': 'new'}),
    ('-i','--input',{'help': 'Input csv text file'}, {'type': 'infile', 'def': 'DblEnc.txt'}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': 'unDblEnc.log'})]

def doit(args) :
    font=args.ifont
    inpf = args.input
    logf = args.log
    # Create dbl_encode list from the input file
    dbl_encode = {}
    for line in inpf.readlines():
        glyphn, pua_usv_str, std_usv_str = line.strip().split(",") # will raise an exception if not 3 elements
        if glyphn[0] in '"\'' : glyphn = glyphn[1:-1] # slice off quote marks, if present
        pua_usv, std_usv = int(pua_usv_str[2:], 16), int(std_usv_str[2:], 16)
        dbl_encode[glyphn] = [std_usv, pua_usv]
    inpf.close()

    for glyph in sorted(dbl_encode.keys()):
        logf.write (reincode(font,glyph,dbl_encode[glyph][0]))
        logf.write (reincode(font,glyph+"Dep",dbl_encode[glyph][1]))
    logf.close()
    return font

def reincode(font,glyph,usv):
    if glyph not in font:
        return ("Glyph %s not in font\n" % (glyph))
    g = font[glyph]
    ousvs=[g.unicode]
    oalt=g.altuni
    if oalt is not None:
        for au in oalt:
            ousvs.append(au[0]) # (may need to check variant flag)
    g.unicode = usv
    g.altuni = None
    return ("encoding for %s changed: %s -> %s\n" % (glyph, ousvs, usv))

def cmd() : execute("FF",doit,argspec)
if __name__ == "__main__": cmd()
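The input-parsing step above can be tried standalone; a minimal sketch of the same line format, using the example line from the script's docstring:

```python
# Parse one line of the double-encoding data file read by FFundblEncode.py.
line = '"LtnSmARetrHook",U+F236,U+1D8F\n'

glyphn, pua_usv_str, std_usv_str = line.strip().split(",")
if glyphn[0] in '"\'':
    glyphn = glyphn[1:-1]  # slice off quote marks, if present

# Drop the "U+" prefix and parse the rest as hex.
pua_usv, std_usv = int(pua_usv_str[2:], 16), int(std_usv_str[2:], 16)

print(glyphn, hex(pua_usv), hex(std_usv))  # LtnSmARetrHook 0xf236 0x1d8f
```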
38
examples/fontforge-old/demoAddToMenu.py
Executable file
|
@ -0,0 +1,38 @@
#!/usr/bin/env python3
'FontForge: Demo script to add menu items to FF tools menu'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2014 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

import sys, os, fontforge
from importlib import reload # reload is no longer a builtin in Python 3
sys.path.append(os.path.join(os.environ['HOME'], 'src/pysilfont/scripts'))
import samples.demoFunctions
from samples.demoFunctions import functionList, callFunctions
#from samples.demoCallFunctions import callFunctions

def toolMenuFunction(functionGroup,font) :
    reload(samples.demoFunctions)
    callFunctions(functionGroup,font)

funcList=functionList()

for functionGroup in funcList :
    menuType = funcList[functionGroup][0]
    fontforge.registerMenuItem(toolMenuFunction,None,functionGroup,menuType,None,functionGroup)
    print(functionGroup, "registered")

''' This script needs to be called from one of the folders that FontForge looks in for scripts to
run when it is started. With current versions of FontForge, one is Home/.config/fontforge/python.
You may need to turn on showing hidden files (ctrl-H in Nautilus) before you can see the .config
folder. Within there create a one-line python script, say called sampledemo.py, containing a call
to this script, eg:

exec(open("/home/david/src/pysilfont/scripts/samples/demoAddToMenu.py").read())

Due to the reload(samples.demoFunctions) line above, changes to functions defined in demoFunctions.py
are dynamic, ie FontForge does not have to be restarted (as would be the case if the functions were
called directly from the tools menu). Functions can even be added dynamically to the function groups.

If new function groups are defined, FontForge does have to be restarted to add them to the tools menu.
'''
29
examples/fontforge-old/demoExecuteScript.py
Executable file
|
@ -0,0 +1,29 @@
#!/usr/bin/env python3
'FontForge: Demo code to paste into the "Execute Script" dialog'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2013 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

import sys, os, fontforge
from importlib import reload # reload is no longer a builtin in Python 3
sys.path.append(os.path.join(os.environ['HOME'], 'src/pysilfont/scripts'))
import samples.demoFunctions # Loads demoFunctions.py module from src/pysilfont/scripts/samples
reload(samples.demoFunctions) # Reload the demo module each time you execute the script to pick up any recent edits
samples.demoFunctions.callFunctions("Colour Glyphs",fontforge.activeFont())

'''Demo usage:
Open the "Execute Script" dialog (from the FontForge File menu, or press ctrl+.),
paste just the code section of this script (from "import..." to "samples...") in there, then
run it (Alt+o) and see how it pops up a dialogue with a choice of 3 functions to run.
Edit demoFunctions.py and alter one of the functions.
Execute the script again and see that the function's behaviour has changed.

Additional functions can be added to demoFunctions.py and, if also defined in functionList(),
become available immediately.

If you want to see the output from print statements, or use commands like input() (eg
for debugging purposes), then start FontForge from a terminal window rather than the
desktop launcher.

When starting from a terminal window, you can also specify the font to use,
eg $ fontforge /home/david/RFS/GenBasR.sfd'''
90
examples/fontforge-old/demoFunctions.py
Executable file
|
@ -0,0 +1,90 @@
#!/usr/bin/env python3
'FontForge: Sample functions to call from other demo scripts'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2014 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

import fontforge

def colLtnAGlyphs(font) :

    #print("Toggling colour of glyphs with LtnCapA in their name")
    for glyph in font:
        g = font[glyph]
        if glyph.find('LtnCapA') >= 0:
            if g.color != 0x00FF00:
                g.color = 0x00FF00 # Green
            else :
                g.color = 0xFFFFFF # White
    print("LtnCapA glyphs coloured")

def markOverlaps(font) :
    print("Toggling colour of glyphs where contours overlap")
    for glyph in font:
        g = font[glyph]
        if g.selfIntersects() :
            if g.color != 0xFF0000:
                g.color = 0xFF0000 # Red
            else :
                g.color = 0xFFFFFF # White
    print("Glyphs coloured")

def markScaled(font) :
    print("Toggling colour of glyphs with scaled components")
    for glyph in font:
        g = font[glyph]
        for ref in g.references:
            transform=ref[1]
            if transform[0] != 1.0 or transform[3] != 1.0 :
                if g.color != 0xFF0000:
                    g.color = 0xFF0000 # Red
                else :
                    g.color = 0xFFFFFF # White
    print("Glyphs coloured")

def clearColours(font) :
    for glyph in font :
        g = font[glyph]
        g.color = 0xFFFFFF

def functionList() :
    ''' Returns a dictionary to be used by callFunctions() and demoAddToMenu.py
    The dictionary is indexed by a group name which could be used as Tools menu
    entry or to reference the group of functions. For each group there is a tuple
    consisting of the Tools menu type (Font or Glyph) then one tuple per function.
    For each function the tuple contains:
        Function name
        Label for the individual function in dialog box called from Tools menu
        Actual function object'''
    funcList = {
        "Colour Glyphs":("Font",
            ("colLtnAGlyphs","Colour Latin A Glyphs",colLtnAGlyphs),
            ("markOverlaps","Mark Overlaps",markOverlaps),
            ("markScaled","Mark Scaled",markScaled),
            ("clearColours","Clear all colours",clearColours)),
        "Group with single item":("Font",
            ("clearColours","Clear all colours",clearColours))}
    return funcList

def callFunctions(functionGroup,font) :
    funcList=functionList()[functionGroup]
    i=0
    for entry in funcList : # renamed from "tuple", which shadowed the builtin
        if i == 0 :
            pass # Font/Glyph parameter not relevant here
        elif i == 1 :
            functionDescs=[entry[1]]
            functions=[entry[2]]
        else :
            functionDescs.append(entry[1])
            functions.append(entry[2])
        i=i+1

    if i == 2 : # Only one function in the group, so just call the function
        functions[0](font)
    else :
        functionNum=fontforge.ask(functionGroup,"Please choose the function to run",functionDescs)
        functions[functionNum](font)
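The functionList()/callFunctions() dispatch above boils down to a dict of (menuType, (name, label, function), ...) tuples; a minimal sketch with plain functions standing in for the FontForge ones (all names here are made up for illustration):

```python
# Stand-ins for the font-colouring functions; each takes one context argument.
def greet(ctx): return "hello " + ctx
def shout(ctx): return ctx.upper()

funcList = {"Demo group": ("Font",
                           ("greet", "Say hello", greet),
                           ("shout", "Shout it", shout))}

group = funcList["Demo group"]
menuType = group[0]                              # "Font" or "Glyph"
functions = {desc: fn for _, desc, fn in group[1:]}  # label -> function

print(functions["Say hello"]("world"))  # hello world
```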
20
examples/gdl/__init__.py
Normal file
|
@ -0,0 +1,20 @@
# Copyright 2012, SIL International
# All rights reserved.
#
# This library is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published
# by the Free Software Foundation; either version 2.1 of License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should also have received a copy of the GNU Lesser General Public
# License along with this library in the file named "LICENSE".
# If not, write to the Free Software Foundation, 51 Franklin Street,
# suite 500, Boston, MA 02110-1335, USA or visit their web page on the
# internet at https://www.fsf.org/licenses/lgpl.html.

__all__ = ['makegdl', 'psnames']
394
examples/gdl/font.py
Normal file
|
@ -0,0 +1,394 @@
#!/usr/bin/env python
'The main font object for GDL creation. Depends on fonttools'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2012 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'

import os, re, traceback
from silfont.gdl.glyph import Glyph
from silfont.gdl.psnames import Name
from xml.etree.cElementTree import ElementTree, parse, Element
from fontTools.ttLib import TTFont

# A collection of glyphs that have a given attachment point defined
class PointClass(object) :

    def __init__(self, name) :
        self.name = name
        self.glyphs = []
        self.dias = []

    def addBaseGlyph(self, g) :
        self.glyphs.append(g)

    def addDiaGlyph(self, g) :
        self.dias.append(g)
        g.isDia = True

    def hasDias(self) :
        if len(self.dias) and len(self.glyphs) :
            return True
        else :
            return False

    def classGlyphs(self, isDia = False) :
        if isDia :
            return self.dias
        else :
            return self.glyphs

    def isNotInClass(self, g, isDia = False) :
        if not g : return False
        if not g.isDia : return False

        if isDia :
            return g not in self.dias
        else :
            return g not in self.dias and g not in self.glyphs


class FontClass(object) :

    def __init__(self, elements = None, fname = None, lineno = None, generated = False, editable = False) :
        self.elements = elements or []
        self.fname = fname
        self.lineno = lineno
        self.generated = generated
        self.editable = editable

    def append(self, element) :
        self.elements.append(element)


class Font(object) :

    def __init__(self, fontfile) :
        self.glyphs = []
        self.psnames = {}
        self.canons = {}
        self.gdls = {}
        self.anchors = {}
        self.ligs = {}
        self.subclasses = {}
        self.points = {}
        self.classes = {}
        self.aliases = {}
        self.rules = {}
        self.posRules = {}
        if fontfile :
            self.font = TTFont(fontfile)
            for i, n in enumerate(self.font.getGlyphOrder()) :
                self.addGlyph(i, n)
        else :
            self.font = None

    def __len__(self) :
        return len(self.glyphs)

    # [] syntax returns the indicated element of the glyphs array.
    def __getitem__(self, y) :
        try :
            return self.glyphs[y]
        except IndexError :
            return None

    def glyph(self, name) :
        return self.psnames.get(name, None)

    def alias(self, s) :
        return self.aliases.get(s, s)

    def emunits(self) :
        return 0

    def initGlyphs(self, nGlyphs) :
        #print "Font::initGlyphs",nGlyphs
        self.glyphs = [None] * nGlyphs
        self.numRealGlyphs = nGlyphs # does not include pseudo-glyphs
        self.psnames = {}
        self.canons = {}
        self.gdls = {}
        self.classes = {}

    def addGlyph(self, index = None, psName = None, gdlName = None, factory = Glyph) :
        #print "Font::addGlyph",index,psName,gdlName
        if psName in self.psnames :
            return self.psnames[psName]
        if index is not None and index < len(self.glyphs) and self.glyphs[index] :
            g = self.glyphs[index]
            return g
        g = factory(psName, index) # create a new glyph of the given class
        self.renameGlyph(g, psName, gdlName)
        if index is None : # give it the next available index
            index = len(self.glyphs)
            self.glyphs.append(g)
        elif index >= len(self.glyphs) :
            self.glyphs.extend([None] * (index - len(self.glyphs) + 1)) # pad out to the requested index
        self.glyphs[index] = g
        return g

    def renameGlyph(self, g, name, gdlName = None) :
        if g.psname != name :
            for n in g.parseNames() :
                del self.psnames[n.psname]
                del self.canons[n.canonical()]
        if gdlName :
            self.setGDL(g, gdlName)
        else :
            self.setGDL(g, g.GDLName())
        for n in g.parseNames() :
            if n is None : break
            self.psnames[n.psname] = g
            self.canons[n.canonical()] = (n, g)

    def setGDL(self, glyph, name) :
        if not glyph : return
        n = glyph.GDLName()
        if n != name and n in self.gdls : del self.gdls[n]
        if name and name in self.gdls and self.gdls[name] is not glyph :
            count = 1
            index = -2
            name = name + "_1"
            while name in self.gdls :
                if self.gdls[name] is glyph : break
                count = count + 1
                name = name[0:index] + "_" + str(count)
                if count == 10 : index = -3
                if count == 100 : index = -4
        self.gdls[name] = glyph
        glyph.setGDL(name)

    def addClass(self, name, elements, fname = None, lineno = 0, generated = False, editable = False) :
        if name :
            self.classes[name] = FontClass(elements, fname, lineno, generated, editable)

    def addGlyphClass(self, name, gid, editable = False) :
        if name not in self.classes :
            self.classes[name] = FontClass()
        if gid not in self.classes[name].elements :
            self.classes[name].append(gid)

    def addRules(self, rules, index) :
        self.rules[index] = rules

    def addPosRules(self, rules, index) :
        self.posRules[index] = rules

    def classUpdated(self, name, value) :
        c = []
        if name in self.classes :
            for gid in self.classes[name].elements :
                g = self[gid]
                if g : g.removeClass(name)
        if value is None and name in self.classes : # was "name in classes", a NameError
            del self.classes[name]
            return
        for n in value.split() :
            g = self.gdls.get(n, None)
            if g :
                c.append(g.gid)
                g.addClass(name)
        if name in self.classes :
            self.classes[name].elements = c
        else :
            self.classes[name] = FontClass(c)

    # Return the list of classes that should be updated in the AP XML file.
    # This does not include classes that are auto-generated or defined in the hand-crafted GDL code.
    def filterAutoClasses(self, names, autoGdlFile) :
        res = []
        for n in names :
            c = self.classes[n]
            if not c.generated and (not c.fname or c.fname == autoGdlFile) : res.append(n)
        return res

    def loadAlias(self, fname) :
        with open(fname) as f :
            for l in f.readlines() :
                l = l.strip()
                l = re.sub(ur'#.*$', '', l).strip()
                if not len(l) : continue
                try :
                    k, v = re.split(ur'\s*[,;\s]\s*', l, 1)
                except ValueError :
                    k = l
                    v = ''
                self.aliases[k] = v

    # TODO: move this method to GraideFont, or refactor
    def loadAP(self, apFileName) :
        if not os.path.exists(apFileName) : return False
        etree = parse(apFileName)
        self.initGlyphs(len(etree.getroot())) # guess each child is a glyph
        i = 0
        for e in etree.getroot().iterfind("glyph") :
            g = self.addGlyph(i, e.get('PSName'))
            g.readAP(e, self)
            i += 1
        return True

    def saveAP(self, apFileName, autoGdlFile) :
        root = Element('font')
        root.set('upem', str(self.emunits()))
        root.set('producer', 'graide 1.0')
        root.text = "\n\n"
        for g in self.glyphs :
            if g : g.createAP(root, self, autoGdlFile)
        ElementTree(root).write(apFileName, encoding="utf-8", xml_declaration=True)

    def createClasses(self) :
        self.subclasses = {}
        for k, v in self.canons.items() :
            if v[0].ext :
                h = v[0].head()
                o = self.canons.get(h.canonical(), None)
                if o :
                    if v[0].ext not in self.subclasses : self.subclasses[v[0].ext] = {}
                    self.subclasses[v[0].ext][o[1].GDLName()] = v[1].GDLName()
#        for g in self.glyphs :
#            if not g : continue
#            for c in g.classes :
#                if c not in self.classes :
#                    self.classes[c] = []
#                self.classes[c].append(g.gid)

    def calculatePointClasses(self) :
        self.points = {}
        for g in self.glyphs :
            if not g : continue
            for apName in g.anchors.keys() :
                genericName = apName[:-1] # without the M or S
                if genericName not in self.points :
                    self.points[genericName] = PointClass(genericName)
                if apName.endswith('S') :
                    self.points[genericName].addBaseGlyph(g)
                else :
                    self.points[genericName].addDiaGlyph(g)

    def calculateOTLookups(self) :
        if self.font :
            for t in ('GSUB', 'GPOS') :
                if t in self.font :
                    self.font[t].table.LookupList.process(self)

    def getPointClasses(self) :
        if len(self.points) == 0 :
            self.calculatePointClasses()
        return self.points

    def ligClasses(self) :
        self.ligs = {}
        for g in self.glyphs :
            if not g or not g.name : continue
            (h, t) = g.name.split_last()
            if t :
                o = self.canons.get(h.canonical(), None)
                if o and o[0].ext == t.ext :
                    t.ext = None
                    t.cname = None
                    tn = t.canonical(noprefix = True)
                    if tn in self.ligs :
                        self.ligs[tn].append((g.GDLName(), o[0].GDL()))
                    else :
                        self.ligs[tn] = [(g.GDLName(), o[0].GDL())]

    def outGDL(self, fh, args) :
        munits = self.emunits()
        fh.write('table(glyph) {MUnits = ' + str(munits) + '};\n')
        nglyphs = 0
        for g in self.glyphs :
            if not g or not g.psname : continue
            if g.psname == '.notdef' :
                fh.write(g.GDLName() + ' = glyphid(0)')
            else :
                fh.write(g.GDLName() + ' = postscript("' + g.psname + '")')
            outs = []
            if len(g.anchors) :
                for a in g.anchors.keys() :
                    v = g.anchors[a]
                    outs.append(a + "=point(" + str(int(v[0])) + "m, " + str(int(v[1])) + "m)")
            for (p, v) in g.gdl_properties.items() :
                outs.append("%s=%s" % (p, v))
            if len(outs) : fh.write(" {" + "; ".join(outs) + "}")
            fh.write(";\n")
            nglyphs += 1
        fh.write("\n")
        fh.write("\n/* Point Classes */\n")
        for p in sorted(self.points.values(), key=lambda x: x.name) :
            if not p.hasDias() : continue
            n = p.name + "Dia"
            self.outclass(fh, "c" + n, p.classGlyphs(True))
            self.outclass(fh, "cTakes" + n, p.classGlyphs(False))
            self.outclass(fh, 'cn' + n, filter(lambda x : p.isNotInClass(x, True), self.glyphs))
            self.outclass(fh, 'cnTakes' + n, filter(lambda x : p.isNotInClass(x, False), self.glyphs))
        fh.write("\n/* Classes */\n")
        for c in sorted(self.classes.keys()) : # c = class name
            if c not in self.subclasses and not self.classes[c].generated : # don't output the class to the AP file if it was autogenerated
                self.outclass(fh, c, self.classes[c].elements)
        for p in self.subclasses.keys() :
            ins = []
            outs = []
            for k, v in self.subclasses[p].items() :
                ins.append(k)
                outs.append(v)
            n = p.replace('.', '_')
            self.outclass(fh, 'cno_' + n, ins)
            self.outclass(fh, 'c' + n, outs)
        fh.write("/* Ligature Classes */\n")
        for k in sorted(self.ligs.keys()) :
            self.outclass(fh, "clig" + k, map(lambda x: self.gdls[x[0]], self.ligs[k]))
            self.outclass(fh, "cligno_" + k, map(lambda x: self.gdls[x[1]], self.ligs[k]))
        fh.write("\nendtable;\n")
        fh.write("/* Substitution Rules */\n")
        for k, v in sorted(self.rules.items(), key=lambda x:map(int,x[0].split('_'))) :
            fh.write('\n// lookup ' + k + '\n')
            fh.write('// ' + "\n// ".join(v) + "\n")
        fh.write("\n/* Positioning Rules */\n")
        for k, v in sorted(self.posRules.items(), key=lambda x:map(int,x[0].split('_'))) :
            fh.write('\n// lookup ' + k + '\n')
            fh.write('// ' + "\n// ".join(v) + "\n")
        fh.write("\n\n#define MAXGLYPH %d\n\n" % (nglyphs - 1))
        if args.include :
            fh.write("#include \"%s\"\n" % args.include)

    def outPosRules(self, fh, num) :
        fh.write("""
#ifndef opt2
#define opt(x) [x]?
#define opt2(x) [opt(x) x]?
#define opt3(x) [opt2(x) x]?
#define opt4(x) [opt3(x) x]?
#endif
#define posrule(x) c##x##Dia {attach{to=@1; at=x##S; with=x##M}} / cTakes##x##Dia opt4(cnTakes##x##Dia) _;

table(positioning);
pass(%d);
""" % num)
        for p in self.points.values() :
            if p.hasDias() :
                fh.write("posrule(%s);\n" % p.name)
        fh.write("endpass;\nendtable;\n")


    def outclass(self, fh, name, glyphs) :
        fh.write(name + " = (")
        count = 1
        sep = ""
        for g in glyphs :
            if not g : continue

            if isinstance(g, basestring) :
                fh.write(sep + g)
            else :
                if g.GDLName() is None :
                    print "Can't output " + str(g.gid) + " to class " + name
                else :
                    fh.write(sep + g.GDLName())
            if count % 8 == 0 :
                sep = ',\n '
            else :
                sep = ', '
            count += 1
        fh.write(');\n\n')
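The name-collision handling in Font.setGDL above (append _1, _2, ... and widen the slice offset at 10 and 100) can be restated standalone; `taken` here stands in for the font's gdls dict:

```python
# Sketch of the suffixing loop from Font.setGDL, outside the Font class.
def unique_gdl(name, taken):
    if name not in taken:
        return name
    count = 1
    index = -2            # slice that strips the current "_N" suffix
    name = name + "_1"
    while name in taken:
        count += 1
        name = name[0:index] + "_" + str(count)
        if count == 10: index = -3   # suffix is now two digits wide
        if count == 100: index = -4  # ...then three
    return name

print(unique_gdl("gA", {"gA"}))           # gA_1
print(unique_gdl("gA", {"gA", "gA_1"}))   # gA_2
```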
174
examples/gdl/glyph.py
Normal file
|
@ -0,0 +1,174 @@
#!/usr/bin/env python
'Corresponds to a glyph, for analysis purposes, for GDL generation'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2012 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'

import re, traceback
from silfont.gdl.psnames import Name
from xml.etree.cElementTree import SubElement

# Convert from Graphite AP name to the standard name, eg upperM -> _upper
def gr_ap(txt) :
    if txt.endswith('M') :
        return "_" + txt[:-1]
    elif txt.endswith('S') :
        return txt[:-1]
    else :
        return txt

# Convert from standard AP name to the Graphite name, eg _upper -> upperM
def ap_gr(txt) :
    if txt.startswith('_') :
        return txt[1:] + 'M'
    else :
        return txt + 'S'


class Glyph(object) :

    isDia = False

    def __init__(self, name, gid = 0) :
        self.clear()
        self.setName(name)
        self.gdl = None
        self.gid = gid
        self.uid = "" # this is a string!
        self.comment = ""
        self.isDia = False

    def clear(self) :
        self.anchors = {}
        self.classes = set()
        self.gdl_properties = {}
        self.properties = {}

    def setName(self, name) :
        self.psname = name
        self.name = next(self.parseNames())

    def setAnchor(self, name, x, y, t = None) :
        send = True
        if name in self.anchors :
            if x is None and y is None :
                del self.anchors[name]
                return True
            if x is None : x = self.anchors[name][0]
            if y is None : y = self.anchors[name][1]
            send = self.anchors[name] != (x, y)
        self.anchors[name] = (x, y)
        return send
#        if not name.startswith("_") and t != 'basemark' :
#            self.isBase = True

    def parseNames(self) :
        if self.psname :
            for name in self.psname.split("/") :
                res = Name(name)
                yield res
        else :
            yield None

    def GDLName(self) :
        if self.gdl :
            return self.gdl
        elif self.name :
            return self.name.GDL()
        else :
            return None

    def setGDL(self, name) :
        self.gdl = name

    def readAP(self, elem, font) :
        self.uid = elem.get('UID', None)
        for p in elem.iterfind('property') :
            n = p.get('name')
            if n == 'GDLName' :
                self.setGDL(p.get('value'))
            elif n.startswith('GDL_') :
                self.gdl_properties[n[4:]] = p.get('value')
            else :
                self.properties[n] = p.get('value')
        for p in elem.iterfind('point') :
            l = p.find('location')
            self.setAnchor(ap_gr(p.get('type')), int(l.get('x', 0)), int(l.get('y', 0)))
        p = elem.find('note')
        if p is not None and p.text :
            self.comment = p.text
        if 'classes' in self.properties :
            for c in self.properties['classes'].split() :
                if c not in self.classes :
                    self.classes.add(c)
                    font.addGlyphClass(c, self, editable = True)

    def createAP(self, elem, font, autoGdlFile) :
        e = SubElement(elem, 'glyph')
        if self.psname : e.set('PSName', self.psname)
        if self.uid : e.set('UID', self.uid)
        if self.gid is not None : e.set('GID', str(self.gid))
        ce = None
        if 'classes' in self.properties and self.properties['classes'].strip() :
            tempClasses = self.properties['classes']
            self.properties['classes'] = " ".join(font.filterAutoClasses(self.properties['classes'].split(), autoGdlFile))

        for k in sorted(self.anchors.keys()) :
            v = self.anchors[k]
            p = SubElement(e, 'point')
            p.set('type', gr_ap(k))
            p.text = "\n "
            l = SubElement(p, 'location')
            l.set('x', str(v[0]))
            l.set('y', str(v[1]))
            l.tail = "\n "
            if ce is not None : ce.tail = "\n "
            ce = p

        for k in sorted(self.gdl_properties.keys()) :
            if k == "*skipPasses*" : continue # not set in GDL

            v = self.gdl_properties[k]
            if v :
                p = SubElement(e, 'property')
                p.set('name', 'GDL_' + k)
                p.set('value', v)
                if ce is not None : ce.tail = "\n "
                ce = p

        if self.gdl and (not self.name or self.gdl != self.name.GDL()) :
            p = SubElement(e, 'property')
            p.set('name', 'GDLName')
            p.set('value', self.GDLName())
            if ce is not None : ce.tail = "\n "
            ce = p

        for k in sorted(self.properties.keys()) :
            v = self.properties[k]
            if v :
                p = SubElement(e, 'property')
                p.set('name', k)
                p.set('value', v)
                if ce is not None : ce.tail = "\n "
                ce = p

        if self.comment :
            p = SubElement(e, 'note')
            p.text = self.comment
            if ce is not None : ce.tail = "\n "
            ce = p

        if 'classes' in self.properties and self.properties['classes'].strip() :
            self.properties['classes'] = tempClasses
        if ce is not None :
            ce.tail = "\n"
        e.text = "\n "
        e.tail = "\n"
        return e

def isMakeGDLSpecialClass(name) :
#    if re.match(r'^cn?(Takes)?.*?Dia$', name) : return True
#    if name.startswith('clig') : return True
#    if name.startswith('cno_') : return True
    if re.match(r'^\*GC\d+\*$', name) : return True # auto-pseudo glyph with name = *GCXXXX*
    return False
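The gr_ap/ap_gr conversions above are self-contained and can be checked directly; restated here standalone:

```python
# AP-name conversions from glyph.py: Graphite names carry an M (mark) or
# S (stationary/base) suffix; standard names use a leading underscore for marks.
def gr_ap(txt):
    if txt.endswith('M'):
        return "_" + txt[:-1]   # upperM -> _upper
    elif txt.endswith('S'):
        return txt[:-1]         # upperS -> upper
    return txt

def ap_gr(txt):
    if txt.startswith('_'):
        return txt[1:] + 'M'    # _upper -> upperM
    return txt + 'S'            # upper  -> upperS

assert gr_ap('upperM') == '_upper'
assert gr_ap('upperS') == 'upper'
assert ap_gr('_upper') == 'upperM'
assert ap_gr('upper') == 'upperS'
```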
31
examples/gdl/makeGdl.py
Executable file
|
@ -0,0 +1,31 @@
#!/usr/bin/env python
'Analyse a font and generate GDL to help with the creation of graphite fonts'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'

from gdl.font import Font
import gdl.ot
from argparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument('infont')
parser.add_argument('outgdl')
parser.add_argument('-a','--ap')
parser.add_argument('-i','--include')
parser.add_argument('-y','--alias')
args = parser.parse_args()

f = Font(args.infont)
if args.alias : f.loadAlias(args.alias)
if args.ap : f.loadAP(args.ap)

f.createClasses()
f.calculateOTLookups()
f.calculatePointClasses()
f.ligClasses()

outf = open(args.outgdl, "w")
f.outGDL(outf, args)
outf.close()
448
examples/gdl/ot.py
Normal file
|
@@ -0,0 +1,448 @@
#!/usr/bin/env python
'OpenType analysis for GDL conversion'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2012 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'

import re, traceback, logging
from fontTools.ttLib.tables import otTables

def compress_strings(strings) :
    '''If we replace one column in the string with different lists, can we reduce the number
    of strings? Each string is a tuple of the string and a single value that will be put into
    a class as well when list compression occurs'''
    maxlen = max(map(lambda x: len(x[0]), strings))
    scores = []
    for r in range(maxlen) :
        allchars = {}
        count = 0
        for s in strings :
            if r >= len(s[0]) : continue
            c = tuple(s[0][0:r] + (s[0][r+1:] if r < len(s[0]) - 1 else []))
            if c in allchars :
                allchars[c] += 1
            else :
                allchars[c] = 0
            count += 1
        scores.append((max(allchars.values()), len(allchars), count))
    best = maxlen
    bestr = 0
    for r in range(maxlen) :
        score = maxlen - (scores[r][2] - scores[r][1])
        if score < best :
            best = score
            bestr = r
    numstrings = len(strings)
    i = 0
    allchars = {}
    while i < len(strings) :
        s = strings[i][0]
        if bestr >= len(s) :
            i += 1
            continue
        c = tuple(s[0:bestr] + (s[bestr+1:] if bestr < len(s) - 1 else []))
        if c in allchars :
            allchars[c][1].append(s[bestr])
            allchars[c][2].append(strings[i][1])
            strings.pop(i)
        else :
            allchars[c] = [i, [s[bestr]], [strings[i][1]]]
            i += 1
    for v in allchars.values() :
        if len(set(v[1])) != 1 : # if all values in the list identical, don't output list
            strings[v[0]][0][bestr] = v[1]
        if len(v[2]) > 1 : # don't need a list if length 1
            strings[v[0]][1] = v[2]
    return strings

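The core idea behind `compress_strings` — strings that are identical except in one column can collapse to a single pattern carrying a list in that column — can be sketched independently of the full bookkeeping above. `group_on_column` is a made-up toy for illustration, not the function above:

```python
# Toy sketch: group strings that agree everywhere except at column `col`,
# collecting the varying glyphs into one list per shared pattern.
from collections import defaultdict

def group_on_column(strings, col):
    groups = defaultdict(list)
    for s in strings:
        key = tuple(s[:col] + s[col+1:])   # the string with column `col` removed
        groups[key].append(s[col])
    return dict(groups)

pats = [["f", "i"], ["f", "l"], ["f", "f"]]
print(group_on_column(pats, 1))   # {('f',): ['i', 'l', 'f']}
```

Three ligature-like patterns sharing a first glyph reduce to one pattern plus a class of second glyphs, which is exactly the saving `compress_strings` scores column by column.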
def make_rule(left, right = None, before = None, after = None) :
    res = " ".join(map(lambda x: x or "_", left))
    if right :
        res += " > " + " ".join(map(lambda x: x or "_", right))
    if before or after :
        res += " / "
        if before : res += " ".join(map(lambda x: x or 'ANY', before))
        res += " " + "_ " * len(left) + " "
        if after : res += " ".join(map(lambda x: x or 'ANY', after))
    res += ";"
    return res

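A quick sketch of the GDL-style rule strings `make_rule` produces; the function body is copied verbatim from above so the snippet runs standalone, and the class names are made up:

```python
def make_rule(left, right = None, before = None, after = None) :
    res = " ".join(map(lambda x: x or "_", left))
    if right :
        res += " > " + " ".join(map(lambda x: x or "_", right))
    if before or after :
        res += " / "
        if before : res += " ".join(map(lambda x: x or 'ANY', before))
        res += " " + "_ " * len(left) + " "
        if after : res += " ".join(map(lambda x: x or 'ANY', after))
    res += ";"
    return res

# None slots render as "_" placeholders; context goes after the "/" separator.
print(make_rule(["a"], ["b"]))                                      # a > b;
print(make_rule(["cTake", None], ["cGive", None], ["cBack"], ["cAhead"]))
```

The second call shows the context form: backtrack glyphs, one `_` per slot in `left`, then lookahead glyphs.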
def add_class_classes(font, name, ctable) :
    vals = {}
    for k, v in ctable.classDefs.items() :
        if v not in vals : vals[v] = []
        vals[v].append(k)
    numk = max(vals.keys())
    res = [None] * (numk + 1)
    for k, v in vals.items() :
        if len(v) > 1 :
            res[k] = font.alias(name+"{}".format(k))
            font.addClass(res[k], map(font.glyph, v))
        else :
            res[k] = font.glyph(v[0]).GDLName()
    return res

vrgdlmap = {
    'XPlacement' : 'shift.x',
    'YPlacement' : 'shift.y',
    'XAdvance' : 'advance'
}
def valuerectogdl(vr) :
    res = "{"
    for k, v in vrgdlmap.items() :
        if hasattr(vr, k) :
            res += "{}={}; ".format(v, getattr(vr, k))
    res = res[:-1] + "}"
    if len(res) == 1 : return ""
    return res

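`valuerectogdl` can be exercised standalone with any object exposing the fontTools value-record attribute names. `vrgdlmap` and the function are copied from above; the sample record values are made up:

```python
from types import SimpleNamespace

vrgdlmap = {
    'XPlacement' : 'shift.x',
    'YPlacement' : 'shift.y',
    'XAdvance' : 'advance'
}
def valuerectogdl(vr) :
    res = "{"
    for k, v in vrgdlmap.items() :
        if hasattr(vr, k) :
            res += "{}={}; ".format(v, getattr(vr, k))
    res = res[:-1] + "}"
    if len(res) == 1 : return ""
    return res

vr = SimpleNamespace(XPlacement=-50, YPlacement=120)
print(valuerectogdl(vr))                        # {shift.x=-50; shift.y=120;}
print(repr(valuerectogdl(SimpleNamespace())))   # '' - a record with no attributes yields nothing
```

Only the attributes present on the record appear in the output block, so a bare record maps to the empty string rather than `{}`.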
def _add_method(*clazzes):
    """Returns a decorator function that adds a new method to one or
    more classes."""
    def wrapper(method):
        for c in clazzes:
            assert c.__name__ != 'DefaultTable', \
                'Oops, table class not found.'
            assert not hasattr(c, method.__name__), \
                "Oops, class '%s' has method '%s'." % (c.__name__,
                                                       method.__name__)
            setattr(c, method.__name__, method)
        return None
    return wrapper

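The `_add_method` decorator is the same monkey-patching pattern fontTools itself uses: the decorated function is attached to each listed class as an ordinary method, which is how all the `process` methods below end up on the `otTables` classes. A minimal sketch of the pattern, with the safety assertions stripped and stand-in classes:

```python
# Minimal sketch of the _add_method pattern: the decorator attaches the
# decorated function to each listed class as a regular method.
def _add_method(*clazzes):
    def wrapper(method):
        for c in clazzes:
            setattr(c, method.__name__, method)
        return None   # the module-level name is deliberately discarded
    return wrapper

class A: pass
class B: pass

@_add_method(A, B)
def describe(self):
    return self.__class__.__name__

print(A().describe(), B().describe())   # A B
```

Returning `None` from the wrapper means `describe` is not left as a module-level function, only as a method on the target classes.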
@_add_method(otTables.Lookup)
def process(self, font, index) :
    for i, s in enumerate(self.SubTable) :
        if hasattr(s, 'process') :
            s.process(font, index + "_{}".format(i))
        else :
            logging.warning("No processing of {} {}_{}".format(str(s), index, i))

@_add_method(otTables.LookupList)
def process(self, font) :
    for i, s in enumerate(self.Lookup) :
        s.process(font, str(i))

@_add_method(otTables.ExtensionSubst, otTables.ExtensionPos)
def process(self, font, index) :
    x = self.ExtSubTable
    if hasattr(x, 'process') :
        x.process(font, index)
    else :
        logging.warning("No processing of {} {}".format(str(x), index))

@_add_method(otTables.SingleSubst)
def process(self, font, index) :
    cname = "cot_s{}".format(index)
    if not len(font.alias(cname)) : return
    lists = list(zip(*self.mapping.items()))    # list() so the columns are indexable in Python 3
    font.addClass(font.alias(cname+"l"), map(font.glyph, lists[0]))
    font.addClass(font.alias(cname+"r"), map(font.glyph, lists[1]))

@_add_method(otTables.MultipleSubst)
def process(self, font, index) :
    cname = "cot_m{}".format(index)
    if not len(font.alias(cname)) : return
    nums = len(self.Coverage.glyphs)
    strings = []
    for i in range(nums) :
        strings.append([self.Sequence[i].Substitute, self.Coverage.glyphs[i]])
    res = compress_strings(strings)
    count = 0
    rules = []
    for r in res :
        if hasattr(r[1], '__iter__') :
            lname = font.alias(cname+"l{}".format(count))
            font.addClass(lname, map(font.glyph, r[1]))
            rule = lname
        else :
            rule = font.glyph(r[1]).GDLName()
        rule += " _" * (len(r[0]) - 1) + " >"
        for c in r[0] :
            if hasattr(c, '__iter__') :
                rname = font.alias(cname+"r{}".format(count))
                font.addClass(rname, map(font.glyph, c))
                rule += " " + rname
                count += 1
            else :
                rule += " " + font.glyph(c).GDLName()
        rule += ';'
        rules.append(rule)
    font.addRules(rules, index)

@_add_method(otTables.LigatureSubst)
def process(self, font, index) :
    cname = "cot_l{}".format(index)
    if not len(font.alias(cname)) : return
    strings = []
    for lg, ls in self.ligatures.items() :
        for l in ls :
            strings.append([[lg] + l.Component, l.LigGlyph])
    res = compress_strings(strings)
    count = 0
    rules = []
    for r in res :
        rule = ""
        besti = 0
        for i, c in enumerate(r[0]) :
            if hasattr(c, '__iter__') :
                lname = font.alias(cname+"l{}".format(count))
                font.addClass(lname, map(font.glyph, c))
                rule += lname + " "
                besti = i
            else :
                rule += font.glyph(c).GDLName() + " "
        rule += "> " + "_ " * besti
        if hasattr(r[1], '__iter__') :
            rname = font.alias(cname+"r{}".format(count))
            font.addClass(rname, map(font.glyph, r[1]))
            rule += rname
            count += 1
        else :
            rule += font.glyph(r[1]).GDLName()
        rule += " _" * (len(r[0]) - 1 - besti) + ";"
        rules.append(rule)
    font.addRules(rules, index)

@_add_method(otTables.ChainContextSubst)
def process(self, font, index) :

    def procsubst(rule, action) :
        for s in rule.SubstLookupRecord :
            action[s.SequenceIndex] += "/*{}*/".format(s.LookupListIndex)
    def procCover(cs, name) :
        res = []
        for i, c in enumerate(cs) :
            if len(c.glyphs) > 1 :
                n = font.alias(name+"{}".format(i))
                font.addClass(n, map(font.glyph, c.glyphs))
                res.append(n)
            else :
                res.append(font.glyph(c.glyphs[0]).GDLName())
        return res

    cname = "cot_c{}".format(index)
    if not len(font.alias(cname)) : return
    rules = []
    if self.Format == 1 :
        for i in range(len(self.ChainSubRuleSet)) :
            for r in self.ChainSubRuleSet[i].ChainSubRule :
                action = [self.Coverage.glyphs[i]] + r.Input
                procsubst(r, action)
                rules.append(make_rule(action, None, r.Backtrack, r.LookAhead))
    elif self.Format == 2 :
        ilist = add_class_classes(font, cname+"i", self.InputClassDef)
        if self.BacktrackClassDef :
            blist = add_class_classes(font, cname+"b", self.BacktrackClassDef)
        if self.LookAheadClassDef :
            alist = add_class_classes(font, cname+"a", self.LookAheadClassDef)
        for i, s in enumerate(self.ChainSubClassSet) :
            if s is None : continue
            for r in s.ChainSubClassRule :
                action = map(lambda x:ilist[x], [i]+r.Input)
                procsubst(r, action)
                rules.append(make_rule(action, None,
                    map(lambda x:blist[x], r.Backtrack or []),
                    map(lambda x:alist[x], r.LookAhead or [])))
    elif self.Format == 3 :
        backs = procCover(self.BacktrackCoverage, cname+"b")
        aheads = procCover(self.LookAheadCoverage, cname+"a")
        actions = procCover(self.InputCoverage, cname+"i")
        procsubst(self, actions)
        rules.append(make_rule(actions, None, backs, aheads))
    font.addRules(rules, index)

@_add_method(otTables.SinglePos)
def process(self, font, index) :
    cname = "cot_p{}".format(index)
    if self.Format == 1 :
        font.addClass(font.alias(cname), map(font.glyph, self.Coverage.glyphs))
        rule = cname + " " + valuerectogdl(self.Value)
        font.addPosRules([rule], index)
    elif self.Format == 2 :
        rules = []
        for i, g in enumerate(map(font.glyph, self.Coverage.glyphs)) :
            rule = font.glyph(g).GDLName()
            rule += " " + valuerectogdl(self.Value[i])
            rules.append(rule)
        font.addPosRules(rules, index)

@_add_method(otTables.PairPos)
def process(self, font, index) :
    pass

@_add_method(otTables.CursivePos)
def process(self, font, index) :
    apname = "P{}".format(index)
    if not len(font.alias(apname)) : return
    if self.Format == 1 :
        mark_names = self.Coverage.glyphs
        for i, g in enumerate(map(font.glyph, mark_names)) :
            rec = self.EntryExitRecord[i]
            if rec.EntryAnchor is not None :
                g.setAnchor(font.alias(apname+"_{}M".format(rec.EntryAnchor)),
                            rec.EntryAnchor.XCoordinate, rec.EntryAnchor.YCoordinate)
            if rec.ExitAnchor is not None :
                g.setAnchor(font.alias(apname+"_{}S".format(rec.ExitAnchor)),
                            rec.ExitAnchor.XCoordinate, rec.ExitAnchor.YCoordinate)

@_add_method(otTables.MarkBasePos)
def process(self, font, index) :
    apname = "P{}".format(index)
    if not len(font.alias(apname)) : return
    if self.Format == 1 :
        mark_names = self.MarkCoverage.glyphs
        for i, g in enumerate(map(font.glyph, mark_names)) :
            rec = self.MarkArray.MarkRecord[i]
            g.setAnchor(font.alias(apname+"_{}M".format(rec.Class)),
                        rec.MarkAnchor.XCoordinate, rec.MarkAnchor.YCoordinate)
        base_names = self.BaseCoverage.glyphs
        for i, g in enumerate(map(font.glyph, base_names)) :
            for j,b in enumerate(self.BaseArray.BaseRecord[i].BaseAnchor) :
                if b : g.setAnchor(font.alias(apname+"_{}S".format(j)),
                                   b.XCoordinate, b.YCoordinate)

@_add_method(otTables.MarkMarkPos)
def process(self, font, index) :
    apname = "P{}".format(index)
    if not len(font.alias(apname)) : return
    if self.Format == 1 :
        mark_names = self.Mark1Coverage.glyphs
        for i, g in enumerate(map(font.glyph, mark_names)) :
            rec = self.Mark1Array.MarkRecord[i]
            g.setAnchor(font.alias(apname+"_{}M".format(rec.Class)),
                        rec.MarkAnchor.XCoordinate, rec.MarkAnchor.YCoordinate)
        base_names = self.Mark2Coverage.glyphs
        for i, g in enumerate(map(font.glyph, base_names)) :
            for j,b in enumerate(self.Mark2Array.Mark2Record[i].Mark2Anchor) :
                if b : g.setAnchor(font.alias(apname+"_{}S".format(j)),
                                   b.XCoordinate, b.YCoordinate)

@_add_method(otTables.ContextSubst)
def process(self, font, index) :

    def procsubst(rule, action) :
        for s in rule.SubstLookupRecord :
            action[s.SequenceIndex] += "/*{}*/".format(s.LookupListIndex)
    def procCover(cs, name) :
        res = []
        for i, c in enumerate(cs) :
            if len(c.glyphs) > 1 :
                n = font.alias(name+"{}".format(i))
                font.addClass(n, map(font.glyph, c.glyphs))
                res.append(n)
            else :
                res.append(font.glyph(c.glyphs[0]).GDLName())
        return res

    cname = "cot_cs{}".format(index)
    if not len(font.alias(cname)) : return
    rules = []
    if self.Format == 1 :
        for i in range(len(self.SubRuleSet)) :
            for r in self.SubRuleSet[i].SubRule :
                action = [self.Coverage.glyphs[i]] + r.Input
                procsubst(r, action)
                rules.append(make_rule(action, None, None, None))
    elif self.Format == 2 :
        ilist = add_class_classes(font, cname+"i", self.ClassDef)
        for i, s in enumerate(self.SubClassSet) :
            if s is None : continue
            for r in s.SubClassRule :
                action = map(lambda x:ilist[x], [i]+r.Class)
                procsubst(r, action)
                rules.append(make_rule(action, None, None, None))
    elif self.Format == 3 :
        actions = procCover(self.Coverage, cname+"i")
        procsubst(self, actions)
        rules.append(make_rule(actions, None, None, None))
    font.addRules(rules, index)

@_add_method(otTables.ContextPos)
def process(self, font, index) :

    def procsubst(rule, action) :
        for s in rule.PosLookupRecord :
            action[s.SequenceIndex] += "/*{}*/".format(s.LookupListIndex)
    def procCover(cs, name) :
        res = []
        for i, c in enumerate(cs) :
            if len(c.glyphs) > 1 :
                n = font.alias(name+"{}".format(i))
                font.addClass(n, map(font.glyph, c.glyphs))
                res.append(n)
            else :
                res.append(font.glyph(c.glyphs[0]).GDLName())
        return res

    cname = "cot_cp{}".format(index)
    if not len(font.alias(cname)) : return
    rules = []
    if self.Format == 1 :
        for i in range(len(self.PosRuleSet)) :
            for r in self.PosRuleSet[i].PosRule :
                action = [self.Coverage.glyphs[i]] + r.Input
                procsubst(r, action)
                rules.append(make_rule(action, None, None, None))
    elif self.Format == 2 :
        ilist = add_class_classes(font, cname+"i", self.ClassDef)
        for i, s in enumerate(self.PosClassSet) :
            if s is None : continue
            for r in s.PosClassRule :
                action = map(lambda x:ilist[x], [i]+r.Class)
                procsubst(r, action)
                rules.append(make_rule(action, None, None, None))
    elif self.Format == 3 :
        actions = procCover(self.Coverage, cname+"i")
        procsubst(self, actions)
        rules.append(make_rule(actions, None, None, None))
    font.addPosRules(rules, index)

@_add_method(otTables.ChainContextPos)
def process(self, font, index) :

    def procsubst(rule, action) :
        for s in rule.PosLookupRecord :
            action[s.SequenceIndex] += "/*{}*/".format(s.LookupListIndex)
    def procCover(cs, name) :
        res = []
        for i, c in enumerate(cs) :
            if len(c.glyphs) > 1 :
                n = font.alias(name+"{}".format(i))
                font.addClass(n, map(font.glyph, c.glyphs))
                res.append(n)
            else :
                res.append(font.glyph(c.glyphs[0]).GDLName())
        return res

    cname = "cot_c{}".format(index)
    if not len(font.alias(cname)) : return
    rules = []
    if self.Format == 1 :
        for i in range(len(self.ChainPosRuleSet)) :
            for r in self.ChainPosRuleSet[i].ChainPosRule :
                action = [self.Coverage.glyphs[i]] + r.Input
                procsubst(r, action)
                rules.append(make_rule(action, None, r.Backtrack, r.LookAhead))
    elif self.Format == 2 :
        ilist = add_class_classes(font, cname+"i", self.InputClassDef)
        if self.BacktrackClassDef :
            blist = add_class_classes(font, cname+"b", self.BacktrackClassDef)
        if self.LookAheadClassDef :
            alist = add_class_classes(font, cname+"a", self.LookAheadClassDef)
        for i, s in enumerate(self.ChainPosClassSet) :
            if s is None : continue
            for r in s.ChainPosClassRule :
                action = map(lambda x:ilist[x], [i]+r.Input)
                procsubst(r, action)
                rules.append(make_rule(action, None,
                    map(lambda x:blist[x], r.Backtrack or []),
                    map(lambda x:alist[x], r.LookAhead or [])))
    elif self.Format == 3 :
        backs = procCover(self.BacktrackCoverage, cname+"b")
        aheads = procCover(self.LookAheadCoverage, cname+"a")
        actions = procCover(self.InputCoverage, cname+"i")
        procsubst(self, actions)
        rules.append(make_rule(actions, None, backs, aheads))
    font.addPosRules(rules, index)

4506
examples/gdl/psnames.py
Normal file
File diff suppressed because it is too large
9
examples/preflight
Executable file
@@ -0,0 +1,9 @@
#!/bin/sh
# Sample script for calling multiple routines on a project, typically prior to committing to a repository.
# Place this in the root of a project, adjust the font paths, then set it to be executable by typing:
# chmod +x preflight

psfnormalize -p checkfix=fix source/font-Regular.ufo
psfnormalize -p checkfix=fix source/font-Bold.ufo

psfsyncmasters source/font-RB.designspace
53
examples/psfaddGlyphDemo.py
Executable file
@@ -0,0 +1,53 @@
#!/usr/bin/env python3
'''Demo script for UFOlib to add a glyph to a UFO font'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute
import silfont.ufo as ufo
from xml.etree import cElementTree as ET

suffix = '_addGlyph'
argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'infont'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'outfont'}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': suffix+'.log'})]

def doit(args) :
    ''' This will add the following glyph to the font

    <?xml version="1.0" encoding="UTF-8"?>
    <glyph name="Test" format="1">
      <unicode hex="007D"/>
      <outline>
        <contour>
          <point x="275" y="1582" type="line"/>
          <point x="275" y="-493" type="line"/>
        </contour>
      </outline>
    </glyph>
    '''

    font = args.ifont

    # Create basic glyph
    newglyph = ufo.Uglif(layer = font.deflayer, name = "Test")
    newglyph.add("unicode", {"hex": "007D"})
    # Add an outline
    newglyph.add("outline")
    # Create a contour and add to outline
    element = ET.Element("contour")
    ET.SubElement(element, "point", {"x": "275", "y": "1582", "type": "line"})
    ET.SubElement(element, "point", {"x": "275", "y": "-493", "type": "line"})
    contour = ufo.Ucontour(newglyph["outline"], element)
    newglyph["outline"].appendobject(contour, "contour")

    font.deflayer.addGlyph(newglyph)

    return args.ifont

def cmd() : execute("UFO",doit,argspec)
if __name__ == "__main__": cmd()
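The contour the demo constructs is plain ElementTree XML, so the `.glif` fragment it corresponds to can be inspected with the stdlib alone (a standalone sketch using the same coordinates; no pysilfont required):

```python
# Build the same <contour> element the demo creates and serialize it,
# showing the .glif fragment that ends up inside the glyph's <outline>.
from xml.etree import ElementTree as ET

contour = ET.Element("contour")
ET.SubElement(contour, "point", {"x": "275", "y": "1582", "type": "line"})
ET.SubElement(contour, "point", {"x": "275", "y": "-493", "type": "line"})
print(ET.tostring(contour, encoding="unicode"))
```

Since Python 3.8 ElementTree preserves attribute insertion order, so the serialized points keep the `x`, `y`, `type` ordering used in UFO glif files.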
641
examples/psfexpandstroke.py
Executable file
@@ -0,0 +1,641 @@
#!/usr/bin/env python3
from __future__ import unicode_literals
'''Expands an unclosed UFO stroke font into monoline forms with a fixed width'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2017 SIL International (https://www.sil.org), based on outlinerRoboFontExtension Copyright (c) 2016 Frederik Berlaen'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Victor Gaultney'

# Usage: psfexpandstroke ifont ofont expansion
# expansion is the number of units added to each side of the stroke

# To Do
# - Simplify to assume round caps and corners

# main input, output, and execution handled by pysilfont framework
from silfont.core import execute

from fontTools.pens.basePen import BasePen
from fontTools.misc.bezierTools import splitCubicAtT
from robofab.world import OpenFont
from robofab.pens.pointPen import AbstractPointPen
from robofab.pens.reverseContourPointPen import ReverseContourPointPen
from robofab.pens.adapterPens import PointToSegmentPen

from defcon import Glyph

from math import sqrt, cos, sin, acos, asin, degrees, radians, pi

suffix = '_expanded'
argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'filename'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'filename', 'def': "_"+suffix}),
    ('thickness',{'help': 'Stroke thickness'}, {}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': suffix+'.log'})]

# The following functions are straight from outlinerRoboFontExtension

def roundFloat(f):
    error = 1000000.
    return round(f*error)/error

def checkSmooth(firstAngle, lastAngle):
    if firstAngle is None or lastAngle is None:
        return True
    error = 4
    firstAngle = degrees(firstAngle)
    lastAngle = degrees(lastAngle)

    if int(firstAngle) + error >= int(lastAngle) >= int(firstAngle) - error:
        return True
    return False

def checkInnerOuter(firstAngle, lastAngle):
    if firstAngle is None or lastAngle is None:
        return True
    dirAngle = degrees(firstAngle) - degrees(lastAngle)

    if dirAngle > 180:
        dirAngle = 180 - dirAngle
    elif dirAngle < -180:
        dirAngle = -180 - dirAngle

    if dirAngle > 0:
        return True

    if dirAngle <= 0:
        return False


def interSect(seg1, seg2):
    # Python 3 removed tuple parameters in a def, so unpack the segments explicitly
    seg1s, seg1e = seg1
    seg2s, seg2e = seg2
    denom = (seg2e.y - seg2s.y)*(seg1e.x - seg1s.x) - (seg2e.x - seg2s.x)*(seg1e.y - seg1s.y)
    if roundFloat(denom) == 0:
        # print 'parallel: %s' % denom
        return None
    uanum = (seg2e.x - seg2s.x)*(seg1s.y - seg2s.y) - (seg2e.y - seg2s.y)*(seg1s.x - seg2s.x)
    ubnum = (seg1e.x - seg1s.x)*(seg1s.y - seg2s.y) - (seg1e.y - seg1s.y)*(seg1s.x - seg2s.x)
    ua = uanum / denom
    # ub = ubnum / denom
    x = seg1s.x + ua*(seg1e.x - seg1s.x)
    y = seg1s.y + ua*(seg1e.y - seg1s.y)
    return MathPoint(x, y)


def pointOnACurve(pt1, ctl1, ctl2, pt2, value):
    # Python 3 removed tuple parameters in a def, so unpack the points explicitly
    (x1, y1), (cx1, cy1), (cx2, cy2), (x2, y2) = pt1, ctl1, ctl2, pt2
    dx = x1
    cx = (cx1 - dx) * 3.0
    bx = (cx2 - cx1) * 3.0 - cx
    ax = x2 - dx - cx - bx
    dy = y1
    cy = (cy1 - dy) * 3.0
    by = (cy2 - cy1) * 3.0 - cy
    ay = y2 - dy - cy - by
    mx = ax*(value)**3 + bx*(value)**2 + cx*(value) + dx
    my = ay*(value)**3 + by*(value)**2 + cy*(value) + dy
    return MathPoint(mx, my)

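What `pointOnACurve` computes is ordinary cubic Bézier evaluation: expand the control points into polynomial coefficients, then evaluate at parameter `t`. A self-contained sketch with the same expansion, returning a plain tuple instead of a `MathPoint` (the sample curve is made up):

```python
def cubic_point(pt1, ctl1, ctl2, pt2, t):
    """Evaluate a cubic Bezier at parameter t via its polynomial form."""
    (x1, y1), (cx1, cy1), (cx2, cy2), (x2, y2) = pt1, ctl1, ctl2, pt2
    dx, dy = x1, y1
    cx = (cx1 - dx) * 3.0
    bx = (cx2 - cx1) * 3.0 - cx
    ax = x2 - dx - cx - bx
    cy = (cy1 - dy) * 3.0
    by = (cy2 - cy1) * 3.0 - cy
    ay = y2 - dy - cy - by
    return (ax*t**3 + bx*t**2 + cx*t + dx,
            ay*t**3 + by*t**2 + cy*t + dy)

# The midpoint of a symmetric arch lands on its axis of symmetry.
print(cubic_point((0, 0), (0, 100), (100, 100), (100, 0), 0.5))   # (50.0, 75.0)
```

`OutlinePen` uses this near `t=0.01` and `t=0.99` to nudge degenerate control points off their anchors before measuring angles.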
class MathPoint(object):

    def __init__(self, x, y=None):
        if y is None:
            x, y = x
        self.x = x
        self.y = y

    def __repr__(self):
        return "<MathPoint x:%s y:%s>" % (self.x, self.y)

    def __getitem__(self, index):
        if index == 0:
            return self.x
        if index == 1:
            return self.y
        raise IndexError

    def __iter__(self):
        for value in [self.x, self.y]:
            yield value

    def __add__(self, p):  # p + p
        if not isinstance(p, self.__class__):
            return self.__class__(self.x + p, self.y + p)
        return self.__class__(self.x + p.x, self.y + p.y)

    def __sub__(self, p):  # p - p
        if not isinstance(p, self.__class__):
            return self.__class__(self.x - p, self.y - p)
        return self.__class__(self.x - p.x, self.y - p.y)

    def __mul__(self, p):  # p * p
        if not isinstance(p, self.__class__):
            return self.__class__(self.x * p, self.y * p)
        return self.__class__(self.x * p.x, self.y * p.y)

    def __div__(self, p):  # p / p
        if not isinstance(p, self.__class__):
            return self.__class__(self.x / p, self.y / p)
        return self.__class__(self.x / p.x, self.y / p.y)

    __truediv__ = __div__  # Python 3 name for the division hook

    def __eq__(self, p):  # if p == p
        if not isinstance(p, self.__class__):
            return False
        return roundFloat(self.x) == roundFloat(p.x) and roundFloat(self.y) == roundFloat(p.y)

    def __ne__(self, p):  # if p != p
        return not self.__eq__(p)

    def copy(self):
        return self.__class__(self.x, self.y)

    def round(self):
        self.x = round(self.x)
        self.y = round(self.y)

    def distance(self, p):
        return sqrt((p.x - self.x)**2 + (p.y - self.y)**2)

    def angle(self, other, add=90):
        # returns the angle of a Line in radians
        b = other.x - self.x
        a = other.y - self.y
        c = sqrt(a**2 + b**2)
        if c == 0:
            return None
        if add is None:
            return b/c
        cosAngle = degrees(acos(b/c))
        sinAngle = degrees(asin(a/c))
        if sinAngle < 0:
            cosAngle = 360 - cosAngle
        return radians(cosAngle + add)

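The `add=90` default in `MathPoint.angle` is what makes the offset pens work: the returned direction is perpendicular to the segment, pointing toward the "outer" side. A minimal sketch of the same computation on bare tuples:

```python
# Minimal sketch of MathPoint.angle's convention: with the default add=90
# the result is the segment's normal direction, in radians.
from math import sqrt, acos, asin, degrees, radians, pi

def angle(p, q, add=90):
    b, a = q[0] - p[0], q[1] - p[1]
    c = sqrt(a**2 + b**2)
    if c == 0:
        return None
    cosAngle = degrees(acos(b/c))
    if degrees(asin(a/c)) < 0:
        cosAngle = 360 - cosAngle
    return radians(cosAngle + add)

print(angle((0, 0), (10, 0)))   # pi/2: the normal of a rightward segment points straight up
```

Offsetting a point by `(cos(angle), sin(angle)) * thickness` therefore moves it sideways off the stroke, which is exactly what `_lineTo` below does for the inner and outer contours.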
class CleanPointPen(AbstractPointPen):

    def __init__(self, pointPen):
        self.pointPen = pointPen
        self.currentContour = None

    def processContour(self):
        pointPen = self.pointPen
        contour = self.currentContour

        index = 0
        prevAngle = None
        toRemove = []
        for data in contour:
            if data["segmentType"] in ["line", "move"]:
                prevPoint = contour[index-1]
                if prevPoint["segmentType"] in ["line", "move"]:
                    angle = MathPoint(data["point"]).angle(MathPoint(prevPoint["point"]))
                    if prevAngle is not None and angle is not None and roundFloat(prevAngle) == roundFloat(angle):
                        prevPoint["uniqueID"] = id(prevPoint)
                        toRemove.append(prevPoint)
                    prevAngle = angle
                else:
                    prevAngle = None
            else:
                prevAngle = None
            index += 1

        for data in toRemove:
            contour.remove(data)

        pointPen.beginPath()
        for data in contour:
            pointPen.addPoint(data["point"], **data)
        pointPen.endPath()

    def beginPath(self):
        assert self.currentContour is None
        self.currentContour = []
        self.onCurve = []

    def endPath(self):
        assert self.currentContour is not None
        self.processContour()
        self.currentContour = None

    def addPoint(self, pt, segmentType=None, smooth=False, name=None, **kwargs):
        data = dict(point=pt, segmentType=segmentType, smooth=smooth, name=name)
        data.update(kwargs)
        self.currentContour.append(data)

    def addComponent(self, glyphName, transform):
        assert self.currentContour is None
        self.pointPen.addComponent(glyphName, transform)

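The cleanup rule `CleanPointPen.processContour` applies is simple in geometric terms: a point sitting between two line segments with the same direction carries no information and can be dropped. A toy sketch of that rule on bare coordinate tuples (`drop_collinear` is a made-up illustration, not the pen's code, and it ignores the angle-rounding and contour-wrapping details):

```python
# Toy sketch of CleanPointPen's cleanup rule: a point between two line
# segments with the same direction is redundant and is dropped.
def drop_collinear(points):
    out = []
    for i, p in enumerate(points):
        if 0 < i < len(points) - 1:
            (x0, y0), (x1, y1), (x2, y2) = points[i-1], p, points[i+1]
            if (x1-x0)*(y2-y1) == (y1-y0)*(x2-x1):   # cross product == 0 means collinear
                continue
        out.append(p)
    return out

print(drop_collinear([(0, 0), (5, 0), (10, 0), (10, 8)]))   # [(0, 0), (10, 0), (10, 8)]
```

The pen itself compares `angle()` values rounded through `roundFloat` instead of exact cross products, so nearly-collinear points produced by the offsetting math are also merged.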
# The following class has been been adjusted to work around how outline types use closePath() and endPath(),
|
||||
# to remove unneeded bits, and hard-code some assumptions.
|
||||
|
||||
class OutlinePen(BasePen):
|
||||
|
||||
pointClass = MathPoint
|
||||
magicCurve = 0.5522847498
|
||||
|
||||
def __init__(self, glyphSet, offset=10, contrast=0, contrastAngle=0, connection="round", cap="round", miterLimit=None, optimizeCurve=True):
|
||||
BasePen.__init__(self, glyphSet)
|
||||
|
||||
self.offset = abs(offset)
|
||||
self.contrast = abs(contrast)
|
||||
self.contrastAngle = contrastAngle
|
||||
self._inputmiterLimit = miterLimit
|
||||
if miterLimit is None:
|
||||
miterLimit = self.offset * 2
|
||||
self.miterLimit = abs(miterLimit)
|
||||
|
||||
self.optimizeCurve = optimizeCurve
|
||||
|
||||
self.connectionCallback = getattr(self, "connection%s" % (connection.title()))
|
||||
self.capCallback = getattr(self, "cap%s" % (cap.title()))
|
||||
|
||||
self.originalGlyph = Glyph()
|
||||
self.originalPen = self.originalGlyph.getPen()
|
||||
|
||||
self.outerGlyph = Glyph()
|
||||
self.outerPen = self.outerGlyph.getPen()
|
||||
self.outerCurrentPoint = None
|
||||
self.outerFirstPoint = None
|
||||
self.outerPrevPoint = None
|
||||
|
||||
self.innerGlyph = Glyph()
|
||||
self.innerPen = self.innerGlyph.getPen()
|
||||
self.innerCurrentPoint = None
|
||||
self.innerFirstPoint = None
|
||||
self.innerPrevPoint = None
|
||||
|
||||
self.prevPoint = None
|
||||
self.firstPoint = None
|
||||
self.firstAngle = None
|
||||
self.prevAngle = None
|
||||
|
||||
self.shouldHandleMove = True
|
||||
|
||||
self.components = []
|
||||
|
||||
self.drawSettings()
|
||||
|
||||
def _moveTo(self, (x, y)):
|
||||
if self.offset == 0:
|
||||
self.outerPen.moveTo((x, y))
|
||||
self.innerPen.moveTo((x, y))
|
||||
return
|
||||
self.originalPen.moveTo((x, y))
|
||||
|
||||
p = self.pointClass(x, y)
|
||||
self.prevPoint = p
|
||||
self.firstPoint = p
|
||||
self.shouldHandleMove = True
|
||||
|
||||
def _lineTo(self, (x, y)):
|
||||
if self.offset == 0:
|
||||
self.outerPen.lineTo((x, y))
|
||||
self.innerPen.lineTo((x, y))
|
||||
return
|
||||
self.originalPen.lineTo((x, y))
|
||||
|
||||
currentPoint = self.pointClass(x, y)
|
||||
if currentPoint == self.prevPoint:
|
||||
return
|
||||
|
||||
self.currentAngle = self.prevPoint.angle(currentPoint)
|
||||
thickness = self.getThickness(self.currentAngle)
|
||||
self.innerCurrentPoint = self.prevPoint - self.pointClass(cos(self.currentAngle), sin(self.currentAngle)) * thickness
|
||||
self.outerCurrentPoint = self.prevPoint + self.pointClass(cos(self.currentAngle), sin(self.currentAngle)) * thickness
|
||||
|
||||
if self.shouldHandleMove:
|
||||
self.shouldHandleMove = False
|
||||
|
||||
self.innerPen.moveTo(self.innerCurrentPoint)
|
||||
self.innerFirstPoint = self.innerCurrentPoint
|
||||
|
||||
self.outerPen.moveTo(self.outerCurrentPoint)
|
||||
self.outerFirstPoint = self.outerCurrentPoint
|
||||
|
||||
self.firstAngle = self.currentAngle
|
||||
else:
|
||||
self.buildConnection()
|
||||
|
||||
self.innerCurrentPoint = currentPoint - self.pointClass(cos(self.currentAngle), sin(self.currentAngle)) * thickness
|
||||
self.innerPen.lineTo(self.innerCurrentPoint)
|
||||
self.innerPrevPoint = self.innerCurrentPoint
|
||||
|
||||
self.outerCurrentPoint = currentPoint + self.pointClass(cos(self.currentAngle), sin(self.currentAngle)) * thickness
|
||||
self.outerPen.lineTo(self.outerCurrentPoint)
|
||||
self.outerPrevPoint = self.outerCurrentPoint
|
||||
|
||||
self.prevPoint = currentPoint
|
||||
self.prevAngle = self.currentAngle
|
||||
|
||||
    def _curveToOne(self, pt1, pt2, pt3):
        if self.optimizeCurve:
            curves = splitCubicAtT(self.prevPoint, pt1, pt2, pt3, .5)
        else:
            curves = [(self.prevPoint, pt1, pt2, pt3)]
        for curve in curves:
            p1, h1, h2, p2 = curve
            self._processCurveToOne(h1, h2, p2)

    def _processCurveToOne(self, pt1, pt2, pt3):
        x1, y1 = pt1
        x2, y2 = pt2
        x3, y3 = pt3
        if self.offset == 0:
            self.outerPen.curveTo((x1, y1), (x2, y2), (x3, y3))
            self.innerPen.curveTo((x1, y1), (x2, y2), (x3, y3))
            return
        self.originalPen.curveTo((x1, y1), (x2, y2), (x3, y3))

        p1 = self.pointClass(x1, y1)
        p2 = self.pointClass(x2, y2)
        p3 = self.pointClass(x3, y3)

        if p1 == self.prevPoint:
            p1 = pointOnACurve(self.prevPoint, p1, p2, p3, 0.01)
        if p2 == p3:
            p2 = pointOnACurve(self.prevPoint, p1, p2, p3, 0.99)

        a1 = self.prevPoint.angle(p1)
        a2 = p2.angle(p3)

        self.currentAngle = a1
        thickness1 = self.getThickness(a1)
        thickness2 = self.getThickness(a2)

        a1bis = self.prevPoint.angle(p1, 0)
        a2bis = p3.angle(p2, 0)
        intersectPoint = interSect((self.prevPoint, self.prevPoint + self.pointClass(cos(a1), sin(a1)) * 100),
                                   (p3, p3 + self.pointClass(cos(a2), sin(a2)) * 100))
        self.innerCurrentPoint = self.prevPoint - self.pointClass(cos(a1), sin(a1)) * thickness1
        self.outerCurrentPoint = self.prevPoint + self.pointClass(cos(a1), sin(a1)) * thickness1

        if self.shouldHandleMove:
            self.shouldHandleMove = False

            self.innerPen.moveTo(self.innerCurrentPoint)
            self.innerFirstPoint = self.innerPrevPoint = self.innerCurrentPoint

            self.outerPen.moveTo(self.outerCurrentPoint)
            self.outerFirstPoint = self.outerPrevPoint = self.outerCurrentPoint

            self.firstAngle = a1
        else:
            self.buildConnection()

        h1 = None
        if intersectPoint is not None:
            h1 = interSect((self.innerCurrentPoint, self.innerCurrentPoint + self.pointClass(cos(a1bis), sin(a1bis)) * thickness1), (intersectPoint, p1))
        if h1 is None:
            h1 = p1 - self.pointClass(cos(a1), sin(a1)) * thickness1

        self.innerCurrentPoint = p3 - self.pointClass(cos(a2), sin(a2)) * thickness2

        h2 = None
        if intersectPoint is not None:
            h2 = interSect((self.innerCurrentPoint, self.innerCurrentPoint + self.pointClass(cos(a2bis), sin(a2bis)) * thickness2), (intersectPoint, p2))
        if h2 is None:
            h2 = p2 - self.pointClass(cos(a1), sin(a1)) * thickness1

        self.innerPen.curveTo(h1, h2, self.innerCurrentPoint)
        self.innerPrevPoint = self.innerCurrentPoint

        ########
        h1 = None
        if intersectPoint is not None:
            h1 = interSect((self.outerCurrentPoint, self.outerCurrentPoint + self.pointClass(cos(a1bis), sin(a1bis)) * thickness1), (intersectPoint, p1))
        if h1 is None:
            h1 = p1 + self.pointClass(cos(a1), sin(a1)) * thickness1

        self.outerCurrentPoint = p3 + self.pointClass(cos(a2), sin(a2)) * thickness2

        h2 = None
        if intersectPoint is not None:
            h2 = interSect((self.outerCurrentPoint, self.outerCurrentPoint + self.pointClass(cos(a2bis), sin(a2bis)) * thickness2), (intersectPoint, p2))
        if h2 is None:
            h2 = p2 + self.pointClass(cos(a1), sin(a1)) * thickness1
        self.outerPen.curveTo(h1, h2, self.outerCurrentPoint)
        self.outerPrevPoint = self.outerCurrentPoint

        self.prevPoint = p3
        self.currentAngle = a2
        self.prevAngle = a2
    def _closePath(self):
        if self.shouldHandleMove:
            return

        self.originalPen.endPath()
        self.innerPen.endPath()
        self.outerPen.endPath()

        innerContour = self.innerGlyph[-1]
        outerContour = self.outerGlyph[-1]

        innerContour.reverse()

        innerContour[0].segmentType = "line"
        outerContour[0].segmentType = "line"

        self.buildCap(outerContour, innerContour)

        for point in innerContour:
            outerContour.addPoint((point.x, point.y), segmentType=point.segmentType, smooth=point.smooth)

        self.innerGlyph.removeContour(innerContour)
    def _endPath(self):
        # The current way glyph outlines are processed means that _endPath() would not be called;
        # _closePath() is used instead
        pass

    def addComponent(self, glyphName, transform):
        self.components.append((glyphName, transform))
    # thickness

    def getThickness(self, angle):
        a2 = angle + pi * .5
        f = abs(sin(a2 + radians(self.contrastAngle)))
        f = f ** 5
        return self.offset + self.contrast * f
    # connections

    def buildConnection(self, close=False):
        if not checkSmooth(self.prevAngle, self.currentAngle):
            if checkInnerOuter(self.prevAngle, self.currentAngle):
                self.connectionCallback(self.outerPrevPoint, self.outerCurrentPoint, self.outerPen, close)
                self.connectionInnerCorner(self.innerPrevPoint, self.innerCurrentPoint, self.innerPen, close)
            else:
                self.connectionCallback(self.innerPrevPoint, self.innerCurrentPoint, self.innerPen, close)
                self.connectionInnerCorner(self.outerPrevPoint, self.outerCurrentPoint, self.outerPen, close)
    def connectionRound(self, first, last, pen, close):
        angle_1 = radians(degrees(self.prevAngle) + 90)
        angle_2 = radians(degrees(self.currentAngle) + 90)

        tempFirst = first - self.pointClass(cos(angle_1), sin(angle_1)) * self.miterLimit
        tempLast = last + self.pointClass(cos(angle_2), sin(angle_2)) * self.miterLimit

        newPoint = interSect((first, tempFirst), (last, tempLast))
        if newPoint is None:
            pen.lineTo(last)
            return
        distance1 = newPoint.distance(first)
        distance2 = newPoint.distance(last)
        if roundFloat(distance1) > self.miterLimit + self.contrast:
            distance1 = self.miterLimit + tempFirst.distance(tempLast) * .7
        if roundFloat(distance2) > self.miterLimit + self.contrast:
            distance2 = self.miterLimit + tempFirst.distance(tempLast) * .7

        distance1 *= self.magicCurve
        distance2 *= self.magicCurve

        bcp1 = first - self.pointClass(cos(angle_1), sin(angle_1)) * distance1
        bcp2 = last + self.pointClass(cos(angle_2), sin(angle_2)) * distance2
        pen.curveTo(bcp1, bcp2, last)
    def connectionInnerCorner(self, first, last, pen, close):
        if not close:
            pen.lineTo(last)
    # caps

    def buildCap(self, firstContour, lastContour):
        first = firstContour[-1]
        last = lastContour[0]
        first = self.pointClass(first.x, first.y)
        last = self.pointClass(last.x, last.y)

        self.capCallback(firstContour, lastContour, first, last, self.prevAngle)

        first = lastContour[-1]
        last = firstContour[0]
        first = self.pointClass(first.x, first.y)
        last = self.pointClass(last.x, last.y)

        angle = radians(degrees(self.firstAngle) + 180)
        self.capCallback(lastContour, firstContour, first, last, angle)
    def capRound(self, firstContour, lastContour, first, last, angle):
        hookedAngle = radians(degrees(angle) + 90)

        p1 = first - self.pointClass(cos(hookedAngle), sin(hookedAngle)) * self.offset

        p2 = last - self.pointClass(cos(hookedAngle), sin(hookedAngle)) * self.offset

        oncurve = p1 + (p2 - p1) * .5

        roundness = .54

        h1 = first - self.pointClass(cos(hookedAngle), sin(hookedAngle)) * self.offset * roundness
        h2 = oncurve + self.pointClass(cos(angle), sin(angle)) * self.offset * roundness

        firstContour[-1].smooth = True

        firstContour.addPoint((h1.x, h1.y))
        firstContour.addPoint((h2.x, h2.y))
        firstContour.addPoint((oncurve.x, oncurve.y), smooth=True, segmentType="curve")

        h1 = oncurve - self.pointClass(cos(angle), sin(angle)) * self.offset * roundness
        h2 = last - self.pointClass(cos(hookedAngle), sin(hookedAngle)) * self.offset * roundness

        firstContour.addPoint((h1.x, h1.y))
        firstContour.addPoint((h2.x, h2.y))

        lastContour[0].segmentType = "curve"
        lastContour[0].smooth = True
    def drawSettings(self, drawOriginal=False, drawInner=False, drawOuter=True):
        self.drawOriginal = drawOriginal
        self.drawInner = drawInner
        self.drawOuter = drawOuter
    def drawPoints(self, pointPen):
        if self.drawInner:
            reversePen = ReverseContourPointPen(pointPen)
            self.innerGlyph.drawPoints(CleanPointPen(reversePen))
        if self.drawOuter:
            self.outerGlyph.drawPoints(CleanPointPen(pointPen))

        if self.drawOriginal:
            if self.drawOuter:
                pointPen = ReverseContourPointPen(pointPen)
            self.originalGlyph.drawPoints(CleanPointPen(pointPen))

        for glyphName, transform in self.components:
            pointPen.addComponent(glyphName, transform)
    def draw(self, pen):
        pointPen = PointToSegmentPen(pen)
        self.drawPoints(pointPen)

    def getGlyph(self):
        glyph = Glyph()
        pointPen = glyph.getPointPen()
        self.drawPoints(pointPen)
        return glyph
# The following functions have been decoupled from the outlinerRoboFontExtension and
# effectively de-parameterized, with built-in assumptions


def calculate(glyph, strokewidth):
    thickness = strokewidth
    contrast = 0
    contrastAngle = 0
    keepBounds = False
    optimizeCurve = True
    miterLimit = None  # assumed

    corner = "round"  # assumed - other options not supported
    cap = "round"     # assumed - other options not supported

    drawOriginal = False
    drawInner = True
    drawOuter = True

    pen = OutlinePen(glyph.getParent(),
                     thickness,
                     contrast,
                     contrastAngle,
                     connection=corner,
                     cap=cap,
                     miterLimit=miterLimit,
                     optimizeCurve=optimizeCurve)

    glyph.draw(pen)

    pen.drawSettings(drawOriginal=drawOriginal,
                     drawInner=drawInner,
                     drawOuter=drawOuter)

    result = pen.getGlyph()

    return result

def expandGlyph(glyph, strokewidth):
    defconGlyph = glyph
    outline = calculate(defconGlyph, strokewidth)

    glyph.clearContours()
    outline.drawPoints(glyph.getPointPen())

    glyph.round()
def expandFont(targetfont, strokewidth):
    font = targetfont
    for glyph in font:
        expandGlyph(glyph, strokewidth)
def doit(args):
    infont = OpenFont(args.ifont)
    outfont = args.ofont
    # add try to catch bad input
    strokewidth = int(args.thickness)
    expandFont(infont, strokewidth)
    infont.save(outfont)

    return infont

def cmd() : execute(None,doit,argspec)
if __name__ == "__main__": cmd()
|
30
examples/psfexportnamesunicodesfp.py
Normal file
@@ -0,0 +1,30 @@
#!/usr/bin/env python3
'''Outputs an unsorted csv file containing the names of all the glyphs in the default layer
and their primary unicode values. Format name,usv'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2018 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Victor Gaultney'

from silfont.core import execute

suffix = "_namesunicodes"

argspec = [
    ('ifont', {'help': 'Input font file'}, {'type': 'infont'}),
    ('-o','--output',{'help': 'Output csv file'}, {'type': 'outfile', 'def': suffix+'.csv'})]

def doit(args) :
    font = args.ifont
    outfile = args.output

    for glyph in font:
        unival = ""
        if glyph.unicode:
            unival = str.upper(hex(glyph.unicode))[2:7].zfill(4)
        outfile.write(glyph.name + "," + unival + "\n")

    print("Done")

def cmd() : execute("FP",doit,argspec)
if __name__ == "__main__": cmd()
189
examples/psfgenftml.py
Normal file
@@ -0,0 +1,189 @@
#!/usr/bin/env python3
'''
Example script to generate an ftml document from glyph_data.csv and a UFO.

To try this with the Harmattan font project:
 1) clone and build Harmattan:
      clone https://github.com/silnrsi/font-harmattan
      cd font-harmattan
      smith configure
      smith build ftml
 2) run psfgenftml as follows:
      python3 psfgenftml.py \
        -t "AllChars" \
        --ap "_?dia[AB]$" \
        --xsl ../tools/lib/ftml.xsl \
        --scale 200 \
        -i source/glyph_data.csv \
        -s "url(../references/Harmattan-Regular-v1.ttf)=ver 1" \
        -s "url(../results/Harmattan-Regular.ttf)=Reg-GR" \
        -s "url(../results/tests/ftml/fonts/Harmattan-Regular_ot_arab.ttf)=Reg-OT" \
        source/Harmattan-Regular.ufo tests/AllChars-dev.ftml
 3) launch the resulting output file, tests/AllChars-dev.ftml, in a browser.
    (see https://silnrsi.github.io/FDBP/en-US/Browsers%20as%20a%20font%20test%20platform.html)
    NB: Using Firefox will allow simultaneous display of both Graphite and OpenType rendering
 4) As above, but substitute:
      -t "Diac Test" for the -t parameter
      tests/DiacTest-dev.ftml for the final parameter
    and launch tests/DiacTest-dev.ftml in a browser.
'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2018,2021 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Bob Hallissy'

import re
from silfont.core import execute
import silfont.ftml_builder as FB
argspec = [
    ('ifont', {'help': 'Input UFO'}, {'type': 'infont'}),
    ('output', {'help': 'Output file ftml in XML format', 'nargs': '?'}, {'type': 'outfile', 'def': '_out.ftml'}),
    ('-i','--input', {'help': 'Glyph info csv file'}, {'type': 'incsv', 'def': 'glyph_data.csv'}),
    ('-f','--fontcode', {'help': 'letter to filter for glyph_data'},{}),
    ('-l','--log', {'help': 'Set log file name'}, {'type': 'outfile', 'def': '_ftml.log'}),
    ('--langs', {'help':'List of bcp47 language tags', 'default': None}, {}),
    ('--rtl', {'help': 'enable right-to-left features', 'action': 'store_true'}, {}),
    ('--norendercheck', {'help': 'do not include the RenderingUnknown check', 'action': 'store_true'}, {}),
    ('-t', '--test', {'help': 'name of the test to generate', 'default': None}, {}),
    ('-s','--fontsrc', {'help': 'font source: "url()" or "local()" optionally followed by "=label"', 'action': 'append'}, {}),
    ('--scale', {'help': 'percentage to scale rendered text (default 100)'}, {}),
    ('--ap', {'help': 'regular expression describing APs to examine', 'default': '.'}, {}),
    ('-w', '--width', {'help': 'total width of all <string> columns (default automatic)'}, {}),
    ('--xsl', {'help': 'XSL stylesheet to use'}, {}),
]
def doit(args):
    logger = args.logger

    # Read input csv
    builder = FB.FTMLBuilder(logger, incsv=args.input, fontcode=args.fontcode, font=args.ifont, ap=args.ap,
                             rtlenable=True, langs=args.langs)

    # Override default base (25CC) for displaying combining marks:
    builder.diacBase = 0x0628   # beh

    # Initialize FTML document:
    # Default name for test: AllChars or something based on the csv data file:
    test = args.test or 'AllChars (NG)'
    widths = None
    if args.width:
        try:
            width, units = re.match(r'(\d+)(.*)$', args.width).groups()
            if len(args.fontsrc):
                width = int(round(int(width)/len(args.fontsrc)))
            widths = {'string': f'{width}{units}'}
            logger.log(f'width: {args.width} --> {widths["string"]}', 'I')
        except:
            logger.log(f'Unable to parse width argument "{args.width}"', 'W')
    # split labels from fontsource parameter
    fontsrc = []
    labels = []
    for sl in args.fontsrc:
        try:
            s, l = sl.split('=',1)
            fontsrc.append(s)
            labels.append(l)
        except ValueError:
            fontsrc.append(sl)
            labels.append(None)
    ftml = FB.FTML(test, logger, rendercheck=not args.norendercheck, fontscale=args.scale,
                   widths=widths, xslfn=args.xsl, fontsrc=fontsrc, fontlabel=labels, defaultrtl=args.rtl)

    if test.lower().startswith("allchars"):
        # all chars that should be in the font:
        ftml.startTestGroup('Encoded characters')
        for uid in sorted(builder.uids()):
            if uid < 32: continue
            c = builder.char(uid)
            # iterate over all permutations of feature settings that might affect this character:
            for featlist in builder.permuteFeatures(uids=(uid,)):
                ftml.setFeatures(featlist)
                builder.render((uid,), ftml)
                # Don't close test -- collect consecutive encoded chars in a single row
            ftml.clearFeatures()
            for langID in sorted(c.langs):
                ftml.setLang(langID)
                builder.render((uid,), ftml)
            ftml.clearLang()

        # Add unencoded specials and ligatures -- i.e., things with a sequence of USVs in the glyph_data:
        ftml.startTestGroup('Specials & ligatures from glyph_data')
        for basename in sorted(builder.specials()):
            special = builder.special(basename)
            # iterate over all permutations of feature settings that might affect this special
            for featlist in builder.permuteFeatures(uids=special.uids):
                ftml.setFeatures(featlist)
                builder.render(special.uids, ftml)
                # close test so each special is on its own row:
                ftml.closeTest()
            ftml.clearFeatures()
            if len(special.langs):
                for langID in sorted(special.langs):
                    ftml.setLang(langID)
                    builder.render(special.uids, ftml)
                    ftml.closeTest()
                ftml.clearLang()

        # Add Lam-Alef data manually
        ftml.startTestGroup('Lam-Alef')
        # generate list of lam and alef characters that should be in the font:
        lamlist = list(filter(lambda x: x in builder.uids(), (0x0644, 0x06B5, 0x06B6, 0x06B7, 0x06B8, 0x076A, 0x08A6)))
        aleflist = list(filter(lambda x: x in builder.uids(), (0x0627, 0x0622, 0x0623, 0x0625, 0x0671, 0x0672, 0x0673, 0x0675, 0x0773, 0x0774)))
        # iterate over all combinations:
        for lam in lamlist:
            for alef in aleflist:
                for featlist in builder.permuteFeatures(uids=(lam, alef)):
                    ftml.setFeatures(featlist)
                    builder.render((lam, alef), ftml)
                    # close test so each combination is on its own row:
                    ftml.closeTest()
                ftml.clearFeatures()

    if test.lower().startswith("diac"):
        # Diac attachment:

        # Representative base and diac chars:
        repDiac = list(filter(lambda x: x in builder.uids(), (0x064E, 0x0650, 0x065E, 0x0670, 0x0616, 0x06E3, 0x08F0, 0x08F2)))
        repBase = list(filter(lambda x: x in builder.uids(), (0x0627, 0x0628, 0x062B, 0x0647, 0x064A, 0x77F, 0x08AC)))

        ftml.startTestGroup('Representative diacritics on all bases that take diacritics')
        for uid in sorted(builder.uids()):
            # ignore some I don't care about:
            if uid < 32 or uid in (0xAA, 0xBA): continue
            c = builder.char(uid)
            # Always process Lo, but others only if they take marks:
            if c.general == 'Lo' or c.isBase:
                for diac in repDiac:
                    for featlist in builder.permuteFeatures(uids=(uid, diac)):
                        ftml.setFeatures(featlist)
                        # Don't automatically separate connecting or mirrored forms into separate lines:
                        builder.render((uid, diac), ftml, addBreaks=False)
                    ftml.clearFeatures()
                ftml.closeTest()

        ftml.startTestGroup('All diacritics on representative bases')
        for uid in sorted(builder.uids()):
            # ignore non-ABS marks
            if uid < 0x600 or uid in range(0xFE00, 0xFE10): continue
            c = builder.char(uid)
            if c.general == 'Mn':
                for base in repBase:
                    for featlist in builder.permuteFeatures(uids=(uid, base)):
                        ftml.setFeatures(featlist)
                        builder.render((base, uid), ftml, keyUID=uid, addBreaks=False)
                    ftml.clearFeatures()
                ftml.closeTest()

        ftml.startTestGroup('Special cases')
        builder.render((0x064A, 0x065E), ftml, comment="Yeh + Fatha should keep dots")
        builder.render((0x064A, 0x0654), ftml, comment="Yeh + Hamza should lose dots")
        ftml.closeTest()

    # Write the output ftml file
    ftml.writeFile(args.output)


def cmd() : execute("UFO",doit,argspec)
if __name__ == "__main__": cmd()
140
examples/psftidyfontlabufo.py
Executable file
@@ -0,0 +1,140 @@
#!/usr/bin/env python3
__doc__ = '''Make changes to a backup UFO to match some changes made to another UFO by FontLab.
When a UFO is first round-tripped through FontLab 7, many changes are made, including adding 'smooth="yes"' to many points
in glifs and removing it from others. Also, if components come after contours in a glif, they get moved to before them.
These changes make initial comparisons hard and can mask other changes.
This script takes the backup of the original font that FontLab made and writes out a new version with contours changed
to match those in the round-tripped UFO so a diff can then be done to look for other differences.
A glif is only changed if there are no other changes to contours.
It also moves components to match.
'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2021 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute, splitfn
from xml.etree import ElementTree as ET
from silfont.ufo import Ufont
import os, glob
from difflib import ndiff

argspec = [
    ('ifont',{'help': 'post-fontlab ufo'}, {'type': 'infont'}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': '_tidyfontlab.log'})]
def doit(args) :

    flfont = args.ifont
    logger = args.logger
    params = args.paramsobj
    fontname = args.ifont.ufodir

    # Locate the oldest backup
    (path, base, ext) = splitfn(fontname)
    backuppath = os.path.join(path, base + ".*-*" + ext)  # Backup has date/time added in format .yymmdd-hhmm
    backups = glob.glob(backuppath)
    if len(backups) == 0:
        logger.log("No backups found matching %s so aborting..." % backuppath, "P")
        return
    backupname = sorted(backups)[0]  # Choose the oldest backup - date/time format sorts alphabetically
    logger.log(f"Opening backup font {backupname}", "P")
    bfont = Ufont(backupname, params=params)
    outufoname = os.path.join(path, base + ".tidied.ufo")

    fllayers = {}  # Dictionary of flfont layers by layer name
    for layer in flfont.layers: fllayers[layer.layername] = layer

    for layer in bfont.layers:
        if layer.layername not in fllayers:
            logger.log(f"layer {layer.layername} missing", "E")
            continue
        fllayer = fllayers[layer.layername]
        glifchangecount = 0
        smoothchangecount = 0
        duplicatenodecount = 0
        compchangecount = 0
        for gname in layer:
            glif = layer[gname]
            glifchange = False
            flglif = fllayer[gname]
            if "outline" in glif and "outline" in flglif:
                changestomake = []
                otherchange = False
                outline = glif["outline"]
                floutline = flglif["outline"]
                contours = outline.contours
                if len(contours) != len(floutline.contours): break  # Different number so can't all be identical!
                flcontours = iter(floutline.contours)
                for contour in contours:
                    flc = next(flcontours)
                    points = contour["point"]
                    flpoints = flc["point"]
                    duplicatenode = False
                    smoothchanges = True
                    if len(points) != len(flpoints):  # Contours must be different!
                        if len(flpoints) - len(points) == 1:  # Look for duplicate node issue
                            (different, plus, minus) = sdiff(str(ET.tostring(points[0]).strip()), str(ET.tostring(flpoints[0]).strip()))
                            if ET.tostring(points[0]).strip() == ET.tostring(flpoints[-1]).strip():  # With duplicate node issue first point is appended to the end
                                if plus == "lin" and minus == "curv":  # On first point curve changed to line.
                                    duplicatenode = True  # Also still need to check all the remaining points are the same
                                    break                 # but the next check does that
                        otherchange = True  # Duplicate node issue above is the only case where the contour count can be different
                        break

                    firstpoint = True
                    for point in points:
                        flp = flpoints.pop(0)
                        if firstpoint and duplicatenode:  # Ignore the first point since that will be different
                            firstpoint = False
                            continue
                        firstpoint = False
                        (different, plus, minus) = sdiff(str(ET.tostring(point).strip()), str(ET.tostring(flp).strip()))
                        if different:  # points are different
                            if plus.strip() + minus.strip() == 'smooth="yes"':
                                smoothchanges = True  # Only difference is addition or removal of smooth="yes"
                            else:  # Other change to glif, so can't safely make changes
                                otherchange = True

                if (smoothchanges or duplicatenode) and not otherchange:  # Only changes to contours in glif are known issues that should be reset
                    flcontours = iter(floutline.contours)
                    for contour in list(contours):
                        flcontour = next(flcontours)
                        outline.replaceobject(contour, flcontour, "contour")
                    if smoothchanges:
                        logger.log(f'Smooth changes made to {gname}', "I")
                        smoothchangecount += 1
                    if duplicatenode:
                        logger.log(f'Duplicate node changes made to {gname}', "I")
                        duplicatenodecount += 1
                    glifchange = True

                # Now need to move components to the front...
                components = outline.components
                if len(components) > 0 and len(contours) > 0 and list(outline)[0] == "contour":
                    oldcontours = list(contours)  # Easiest way to 'move' components is to delete contours then append them back at the end
                    for contour in oldcontours: outline.removeobject(contour, "contour")
                    for contour in oldcontours: outline.appendobject(contour, "contour")
                    logger.log(f'Component position changes made to {gname}', "I")
                    compchangecount += 1
                    glifchange = True
            if glifchange: glifchangecount += 1

        logger.log(f'{layer.layername}: {glifchangecount} glifs changed', 'P')
        logger.log(f'{layer.layername}: {smoothchangecount} changes due to smooth, {duplicatenodecount} due to duplicate nodes and {compchangecount} due to components position', "P")

    bfont.write(outufoname)
    return
def sdiff(before, after):  # Returns strings with the differences between the supplied strings
    if before == after: return(False,"","")  # First returned value is True if the strings are different
    diff = ndiff(before, after)
    plus = ""   # Plus will have the extra characters that are only in after
    minus = ""  # Minus will have the characters missing from after
    for d in diff:
        if d[0] == "+": plus += d[2]
        if d[0] == "-": minus += d[2]
    return(True, plus, minus)

def cmd() : execute("UFO",doit, argspec)
if __name__ == "__main__": cmd()
327
examples/psftoneletters.py
Normal file
@@ -0,0 +1,327 @@
#!/usr/bin/env python3
from __future__ import unicode_literals
'''Creates Latin script tone letters (pitch contours)'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2017 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Victor Gaultney'

# Usage: psftoneletters ifont ofont
# The assumption is that the named tone letters already exist in the font,
# so this script is only to update (rebuild) them. New tone letter spaces
# in the font can be created with psfbuildcomp.py

# To Do
# Get parameters from lib.plist org.sil.lcg.toneLetters

# main input, output, and execution handled by pysilfont framework
from silfont.core import execute
import silfont.ufo as UFO

from robofab.world import OpenFont

from math import tan, radians, sqrt

suffix = '_toneletters'
argspec = [
    ('ifont',{'help': 'Input font file'}, {'type': 'filename'}),
    ('ofont',{'help': 'Output font file','nargs': '?' }, {'type': 'filename', 'def': "_"+suffix}),
    ('-l','--log',{'help': 'Log file'}, {'type': 'outfile', 'def': suffix+'log'})]
def getParameters(font):
    global glyphHeight, marginFlatLeft, marginPointLeft, marginFlatRight, marginPointRight, contourWidth, marginDotLeft, marginDotRight, dotSpacing, italicAngle, radius, strokeHeight, strokeDepth, contourGap, fakeBottom, dotRadius, dotBCP, contourGapDot, fakeBottomDot, anchorHeight, anchorOffset

    source = font.lib.getval("org.sil.lcg.toneLetters")

    strokeThickness = int(source["strokeThickness"])    # total width of stroke (ideally an even number)
    glyphHeight = int(source["glyphHeight"])            # height, including overshoot
    glyphDepth = int(source["glyphDepth"])              # depth - essentially overshoot (typically negative)
    marginFlatLeft = int(source["marginFlatLeft"])      # left sidebearing for straight bar
    marginPointLeft = int(source["marginPointLeft"])    # left sidebearing for endpoints
    marginFlatRight = int(source["marginFlatRight"])    # right sidebearing for straight bar
    marginPointRight = int(source["marginPointRight"])  # right sidebearing for endpoints
    contourWidth = int(source["contourWidth"])          # this is how wide the contour portions are, from the middle
                                                        # of one end to the other, in the horizontal axis. The actual
                                                        # bounding box of the contours would then be this plus the
                                                        # strokeThickness.
    marginDotLeft = int(source["marginDotLeft"])        # left sidebearing for dots
    marginDotRight = int(source["marginDotRight"])      # right sidebearing for dots
    dotSize = int(source["dotSize"])                    # the diameter of the dot, normally 150% of the stroke weight
                                                        # (ideally an even number)
    dotSpacing = int(source["dotSpacing"])              # the space between the edge of the dot and the
                                                        # edge of the expanded stroke
    italicAngle = float(source["italicAngle"])          # angle of italic slant, 0 for upright

    radius = round(strokeThickness / 2)
    strokeHeight = glyphHeight - radius                 # for the unexpanded stroke
    strokeDepth = glyphDepth + radius
    strokeLength = strokeHeight - strokeDepth
    contourGap = round(strokeLength / 4)                # gap between contour levels
    fakeBottom = strokeDepth - contourGap               # a false 'bottom' for building contours

    dotRadius = round(dotSize / 2)                      # this gets redefined during the nine tone process
    dotBCP = round((dotSize / 2) * .55)                 # this gets redefined during the nine tone process
    contourGapDot = round(( (glyphHeight - dotRadius) - (glyphDepth + dotRadius) ) / 4)
    fakeBottomDot = (glyphDepth + dotRadius) - contourGapDot

    anchorHeight = [ 0 , strokeDepth , (strokeDepth + contourGap) , (strokeDepth + contourGap * 2) , (strokeHeight - contourGap) , strokeHeight ]
    anchorOffset = 20                                   # hardcoded for now
# drawing functions

def drawLine(glyph,startX,startY,endX,endY):

    dx = (endX - startX)                  # dx of original stroke
    dy = (endY - startY)                  # dy of original stroke
    length = sqrt( dx * dx + dy * dy )    # length of original stroke
    opp = round(dy * (radius / length))   # offsets for on-curve points
    adj = round(dx * (radius / length))
    oppOff = round(opp * .55)             # offsets for off-curve from on-curve
    adjOff = round(adj * .55)

    glyph.clearContours()

    pen = glyph.getPen()

    pen.moveTo((startX + opp, startY - adj))
    pen.lineTo((endX + opp, endY - adj))    # first straight line

    bcp1x = endX + opp + adjOff
    bcp1y = endY - adj + oppOff
    bcp2x = endX + adj + oppOff
    bcp2y = endY + opp - adjOff
    pen.curveTo((bcp1x, bcp1y), (bcp2x, bcp2y), (endX + adj, endY + opp))

    bcp1x = endX + adj - oppOff
    bcp1y = endY + opp + adjOff
    bcp2x = endX - opp + adjOff
    bcp2y = endY + adj + oppOff
    pen.curveTo((bcp1x, bcp1y), (bcp2x, bcp2y), (endX - opp, endY + adj))

    pen.lineTo((startX - opp, startY + adj))    # second straight line

    bcp1x = startX - opp - adjOff
    bcp1y = startY + adj - oppOff
    bcp2x = startX - adj - oppOff
    bcp2y = startY - opp + adjOff
    pen.curveTo((bcp1x, bcp1y), (bcp2x, bcp2y), (startX - adj, startY - opp))

    bcp1x = startX - adj + oppOff
    bcp1y = startY - opp - adjOff
    bcp2x = startX + opp - adjOff
    bcp2y = startY - adj - oppOff
    pen.curveTo((bcp1x, bcp1y), (bcp2x, bcp2y), (startX + opp, startY - adj))

    pen.closePath()
def drawDot(glyph,dotX,dotY):
|
||||
|
||||
glyph.clearContours()
|
||||
|
||||
pen = glyph.getPen()
|
||||
|
||||
pen.moveTo((dotX, dotY - dotRadius))
|
||||
pen.curveTo((dotX + dotBCP, dotY - dotRadius), (dotX + dotRadius, dotY - dotBCP), (dotX + dotRadius, dotY))
|
||||
pen.curveTo((dotX + dotRadius, dotY + dotBCP), (dotX + dotBCP, dotY + dotRadius), (dotX, dotY + dotRadius))
|
||||
pen.curveTo((dotX - dotBCP, dotY + dotRadius), (dotX - dotRadius, dotY + dotBCP), (dotX - dotRadius, dotY))
|
||||
pen.curveTo((dotX - dotRadius, dotY - dotBCP), (dotX - dotBCP, dotY - dotRadius), (dotX, dotY - dotRadius))
|
||||
pen.closePath()
|
||||
|
||||
|
||||
def adjItalX(aiX,aiY):
|
||||
newX = aiX + round(tan(radians(italicAngle)) * aiY)
|
||||
return newX
|
||||
|
||||
|
||||
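adjItalX shears an x coordinate in proportion to its height so that anchors stay on the slanted stem of an italic. A minimal standalone sketch of the same formula (the 10-degree angle is an assumed example value, not from the source):

```python
from math import tan, radians

italicAngle = 10  # assumed example slant, in degrees

def adjItalX(aiX, aiY):
    # shift x by tan(angle) * y, rounded to font units
    return aiX + round(tan(radians(italicAngle)) * aiY)

print(adjItalX(0, 0))    # a baseline point is unmoved -> 0
print(adjItalX(0, 700))  # a point 700 units up shifts right -> 123
```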
def buildComp(f,g,pieces,ancLevelLeft,ancLevelMidLeft,ancLevelMidRight,ancLevelRight):

    g.clear()
    g.width = 0

    for p in pieces:
        g.appendComponent(p, (g.width, 0))
        g.width += f[p].width

    if ancLevelLeft > 0:
        anc_nm = "_TL"
        anc_x = adjItalX(0,anchorHeight[ancLevelLeft])
        if g.name[0:7] == 'TnStaff':
            anc_x = anc_x - anchorOffset
        anc_y = anchorHeight[ancLevelLeft]
        g.appendAnchor(anc_nm, (anc_x, anc_y))

    if ancLevelMidLeft > 0:
        anc_nm = "_TL"
        anc_x = adjItalX(marginPointLeft + radius,anchorHeight[ancLevelMidLeft])
        anc_y = anchorHeight[ancLevelMidLeft]
        g.appendAnchor(anc_nm, (anc_x, anc_y))

    if ancLevelMidRight > 0:
        anc_nm = "TL"
        anc_x = adjItalX(g.width - marginPointRight - radius,anchorHeight[ancLevelMidRight])
        anc_y = anchorHeight[ancLevelMidRight]
        g.appendAnchor(anc_nm, (anc_x, anc_y))

    if ancLevelRight > 0:
        anc_nm = "TL"
        anc_x = adjItalX(g.width,anchorHeight[ancLevelRight])
        if g.name[0:7] == 'TnStaff':
            anc_x = anc_x + anchorOffset
        anc_y = anchorHeight[ancLevelRight]
        g.appendAnchor(anc_nm, (anc_x, anc_y))


# updating functions

def updateTLPieces(targetfont):

    f = targetfont

    # set spacer widths
    f["TnLtrSpcFlatLeft"].width = marginFlatLeft + radius
    f["TnLtrSpcPointLeft"].width = marginPointLeft + radius - 1    # -1 corrects final sidebearing
    f["TnLtrSpcFlatRight"].width = marginFlatRight + radius
    f["TnLtrSpcPointRight"].width = marginPointRight + radius - 1  # -1 corrects final sidebearing
    f["TnLtrSpcDotLeft"].width = marginDotLeft + dotRadius
    f["TnLtrSpcDotMiddle"].width = dotRadius + dotSpacing + radius
    f["TnLtrSpcDotRight"].width = dotRadius + marginDotRight

    # redraw bar
    g = f["TnLtrBar"]
    drawLine(g,adjItalX(0,strokeDepth),strokeDepth,adjItalX(0,strokeHeight),strokeHeight)
    g.width = 0

    # redraw contours
    namePre = 'TnLtrSeg'
    for i in range(1,6):
        for j in range(1,6):

            nameFull = namePre + str(i) + str(j)

            if i == 5:    # this deals with round off errors
                startLevel = strokeHeight
            else:
                startLevel = fakeBottom + i * contourGap
            if j == 5:
                endLevel = strokeHeight
            else:
                endLevel = fakeBottom + j * contourGap

            g = f[nameFull]
            g.width = contourWidth
            drawLine(g,adjItalX(1,startLevel),startLevel,adjItalX(contourWidth-1,endLevel),endLevel)

    # redraw dots
    namePre = 'TnLtrDot'
    for i in range(1,6):

        nameFull = namePre + str(i)

        if i == 5:    # this deals with round off errors
            dotLevel = glyphHeight - dotRadius
        else:
            dotLevel = fakeBottomDot + i * contourGapDot

        g = f[nameFull]
        drawDot(g,adjItalX(0,dotLevel),dotLevel)


def rebuildTLComps(targetfont):

    f = targetfont

    # staff right
    for i in range(1,6):
        nameFull = 'TnStaffRt' + str(i)
        buildComp(f,f[nameFull],['TnLtrBar','TnLtrSpcFlatRight'],i,0,0,0)

    # staff right no outline
    for i in range(1,6):
        nameFull = 'TnStaffRt' + str(i) + 'no'
        buildComp(f,f[nameFull],['TnLtrSpcFlatRight'],i,0,0,0)

    # staff left
    for i in range(1,6):
        nameFull = 'TnStaffLft' + str(i)
        buildComp(f,f[nameFull],['TnLtrSpcFlatLeft','TnLtrBar'],0,0,0,i)

    # staff left no outline
    for i in range(1,6):
        nameFull = 'TnStaffLft' + str(i) + 'no'
        buildComp(f,f[nameFull],['TnLtrSpcFlatLeft'],0,0,0,i)

    # contours right
    for i in range(1,6):
        for j in range(1,6):
            nameFull = 'TnContRt' + str(i) + str(j)
            segment = 'TnLtrSeg' + str(i) + str(j)
            buildComp(f,f[nameFull],['TnLtrSpcPointLeft',segment],0,i,0,j)

    # contours left
    for i in range(1,6):
        for j in range(1,6):
            nameFull = 'TnContLft' + str(i) + str(j)
            segment = 'TnLtrSeg' + str(i) + str(j)
            buildComp(f,f[nameFull],[segment,'TnLtrSpcPointRight'],i,0,j,0)

    # basic tone letters
    for i in range(1,6):
        nameFull = 'TnLtr' + str(i)
        segment = 'TnLtrSeg' + str(i) + str(i)
        buildComp(f,f[nameFull],['TnLtrSpcPointLeft',segment,'TnLtrBar','TnLtrSpcFlatRight'],0,0,0,0)

    # basic tone letters no outline
    for i in range(1,6):
        nameFull = 'TnLtr' + str(i) + 'no'
        segment = 'TnLtrSeg' + str(i) + str(i)
        buildComp(f,f[nameFull],['TnLtrSpcPointLeft',segment,'TnLtrSpcFlatRight'],0,i,0,0)

    # left stem tone letters
    for i in range(1,6):
        nameFull = 'LftStemTnLtr' + str(i)
        segment = 'TnLtrSeg' + str(i) + str(i)
        buildComp(f,f[nameFull],['TnLtrSpcFlatLeft','TnLtrBar',segment,'TnLtrSpcPointRight'],0,0,0,0)

    # left stem tone letters no outline
    for i in range(1,6):
        nameFull = 'LftStemTnLtr' + str(i) + 'no'
        segment = 'TnLtrSeg' + str(i) + str(i)
        buildComp(f,f[nameFull],['TnLtrSpcFlatLeft',segment,'TnLtrSpcPointRight'],0,0,i,0)

    # dotted tone letters
    for i in range(1,6):
        nameFull = 'DotTnLtr' + str(i)
        dot = 'TnLtrDot' + str(i)
        buildComp(f,f[nameFull],['TnLtrSpcDotLeft',dot,'TnLtrSpcDotMiddle','TnLtrBar','TnLtrSpcFlatRight'],0,0,0,0)

    # dotted left stem tone letters
    for i in range(1,6):
        nameFull = 'DotLftStemTnLtr' + str(i)
        dot = 'TnLtrDot' + str(i)
        buildComp(f,f[nameFull],['TnLtrSpcFlatLeft','TnLtrBar','TnLtrSpcDotMiddle',dot,'TnLtrSpcDotRight'],0,0,0,0)


def doit(args):

    psffont = UFO.Ufont(args.ifont, params = args.paramsobj)
    rffont = OpenFont(args.ifont)
    outfont = args.ofont

    getParameters(psffont)

    updateTLPieces(rffont)
    rebuildTLComps(rffont)

    rffont.save(outfont)

    return

def cmd() : execute(None,doit,argspec)
if __name__ == "__main__": cmd()
54 examples/xmlDemo.py Executable file
@ -0,0 +1,54 @@
#!/usr/bin/env python3
'Demo script for use of ETWriter'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute
import silfont.etutil as etutil
from xml.etree import cElementTree as ET

argspec = [('outfile1',{'help': 'output file 1','default': './xmlDemo.xml','nargs': '?'}, {'type': 'outfile'}),
           ('outfile2',{'help': 'output file 2','nargs': '?'}, {'type': 'outfile', 'def':'_2.xml'}),
           ('outfile3',{'help': 'output file 3','nargs': '?'}, {'type': 'outfile', 'def':'_3.xml'})]

def doit(args) :
    ofile1 = args.outfile1
    ofile2 = args.outfile2
    ofile3 = args.outfile3

    xmlstring = "<item>\n<subitem hello='world'>\n<subsub name='moon'>\n<value>lunar</value>\n</subsub>\n</subitem>"
    xmlstring += "<subitem hello='jupiter'>\n<subsub name='moon'>\n<value>IO</value>\n</subsub>\n</subitem>\n</item>"

    # Using etutil's xmlitem class

    xmlobj = etutil.xmlitem()
    xmlobj.etree = ET.fromstring(xmlstring)

    etwobj = etutil.ETWriter(xmlobj.etree)
    xmlobj.outxmlstr = etwobj.serialize_xml()

    ofile1.write(xmlobj.outxmlstr)

    # Just using ETWriter

    etwobj = etutil.ETWriter( ET.fromstring(xmlstring) )
    xmlstr = etwobj.serialize_xml()
    ofile2.write(xmlstr)

    # Changing parameters

    etwobj = etutil.ETWriter( ET.fromstring(xmlstring) )
    etwobj.indentIncr = "  "
    etwobj.indentFirst = ""
    xmlstr = etwobj.serialize_xml()
    ofile3.write(xmlstr)

    # Close files and exit
    ofile1.close()
    ofile2.close()
    ofile3.close()
    return

def cmd() : execute("",doit,argspec)
if __name__ == "__main__": cmd()
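The demo feeds ETWriter an element tree built from the string above. Independent of silfont.etutil, the same string parses with the stdlib alone, which is a quick way to confirm its shape:

```python
from xml.etree import ElementTree as ET

# Same test document as in xmlDemo.py above.
xmlstring = "<item>\n<subitem hello='world'>\n<subsub name='moon'>\n<value>lunar</value>\n</subsub>\n</subitem>"
xmlstring += "<subitem hello='jupiter'>\n<subsub name='moon'>\n<value>IO</value>\n</subsub>\n</subitem>\n</item>"

root = ET.fromstring(xmlstring)
# Two <subitem> children, each holding one <value> under <subsub name='moon'>
values = [v.text for v in root.iter('value')]
print(values)  # ['lunar', 'IO']
```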
52 preflight/update-preflight-libs-pyenv Executable file
@ -0,0 +1,52 @@
#!/bin/sh
# Update preflight libs (assumes a pyenv approach)

# Copyright (c) 2023, SIL International (https://www.sil.org)
# Released under the MIT License (https://opensource.org/licenses/MIT)
# maintained by Nicolas Spalinger

echo "Update preflight libs pyenv - version 2023-10-19"

# checking we have pyenv installed
if ! [ -x "$(command -v pyenv)" ]; then
  echo 'Error: pyenv is not installed. Check the workflow doc for details.'
  exit 0
fi

echo ""
echo "Active python version and location (via pyenv):"
pyenv version
which python3
echo ""

echo "Installing/Updating pip"
python3 -m pip install --upgrade pip setuptools wheel setuptools_scm
echo ""

echo "Populating/updating the preflight dependencies for the active pyenv interpreter"

# components in editable mode:
# (with source at the root of the user's home directory so that src/ folders don't appear anywhere else)
python3 -m pip install -e git+https://github.com/silnrsi/pysilfont.git#egg=silfont --src "$HOME"/src

# components from main/master directly from upstream git repositories
python3 -m pip install git+https://github.com/silnrsi/palaso-python.git git+https://github.com/googlefonts/GlyphsLib.git git+https://github.com/fonttools/ufoLib2.git git+https://github.com/fonttools/fonttools.git git+https://github.com/typemytype/glyphConstruction.git git+https://github.com/robotools/fontParts.git --use-pep517

# components from stable releases on pypi
python3 -m pip install fs mutatorMath defcon fontMath lxml

# reload the config file to rehash the path for either bash or zsh
if [ -n "$ZSH_VERSION" ]; then
  SHELL_PROFILE="$HOME/.zshrc"
else
  SHELL_PROFILE="$HOME/.bash_profile"
fi
if [ -n "$ZSH_VERSION" ]; then
  . $SHELL_PROFILE
fi

# conditional to only run psfpreflightversion if the command is really available otherwise output a guidance message
if [ -n "$(command -v psfpreflightversion)" ]; then
  psfpreflightversion
else
  echo "Open a new Terminal and type psfpreflightversion to check paths and versions of the preflight modules"
fi
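The script guards both pyenv and psfpreflightversion with `command -v` before calling them, so a missing tool produces guidance instead of a hard failure. The pattern in isolation (using `ls` as a stand-in for the guarded command):

```shell
#!/bin/sh
# Run a tool only when it is on PATH; otherwise print guidance instead of failing.
tool=ls   # stand-in; the script above checks pyenv and psfpreflightversion this way
if [ -n "$(command -v "$tool")" ]; then
  echo "found"
else
  echo "missing: $tool"
fi
```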
124 pyproject.toml Normal file
@ -0,0 +1,124 @@
[build-system]
requires = ["setuptools>=62.0", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "silfont"
version = "1.7.1.dev1"
# (also manually bump version in src/silfont/__init__.py)
authors = [{name = "SIL International", email = "fonts@sil.org"}]
readme = "README.md"
license = {file = "LICENSE"}
description = "A growing collection of font utilities in Python to help with font design and production. Developed and maintained by SIL International's WSTech team (formerly NRSI)."
classifiers = [
    "Environment :: Console",
    "Programming Language :: Python :: 3.8",
    "Intended Audience :: Developers",
    "License :: OSI Approved :: MIT License",
    "Topic :: Text Processing :: Fonts"
]
requires-python = ">=3.8"

dependencies = [
    "MutatorMath",
    "odfpy",
    "defcon",
    "fontMath",
    "fontParts",
    "fonttools",
    "glyphsLib",
    "ufo2ft",
    "tabulate",
    "lxml",
    "lz4",
]

[project.optional-dependencies]
git = [
    "MutatorMath @ git+https://github.com/LettError/MutatorMath",
    "odfpy @ git+https://github.com/eea/odfpy",
    "palaso @ git+https://github.com/silnrsi/palaso-python",
    "defcon @ git+https://github.com/robotools/defcon",
    "fontMath @ git+https://github.com/robotools/fontMath",
    "fontParts @ git+https://github.com/robotools/fontParts",
    "fonttools @ git+https://github.com/fonttools/fonttools",
    "fontbakery @ git+https://github.com/fonttools/fontbakery",
    "glyphsLib @ git+https://github.com/googlefonts/GlyphsLib",
    "ufo2ft @ git+https://github.com/googlei18n/ufo2ft",
    "tabulate",
    "lxml",
    "lz4",
]

[project.urls]
Home-Page = "https://github.com/silnrsi/pysilfont"

[tool.setuptools.packages.find]
where = ["src"]

[tool.setuptools.package-data]
"silfont.data" = ["*.*"]

[tool.bdist_wheel]
universal = true

[project.scripts]
psfaddanchors = "silfont.scripts.psfaddanchors:cmd"
psfbuildcomp = "silfont.scripts.psfbuildcomp:cmd"
psfbuildcompgc = "silfont.scripts.psfbuildcompgc:cmd"
psfbuildfea = "silfont.scripts.psfbuildfea:cmd"
psfchangegdlnames = "silfont.scripts.psfchangegdlnames:cmd"
psfchangettfglyphnames = "silfont.scripts.psfchangettfglyphnames:cmd"
psfcheckbasicchars = "silfont.scripts.psfcheckbasicchars:cmd"
psfcheckclassorders = "silfont.scripts.psfcheckclassorders:cmd"
psfcheckftml = "silfont.scripts.psfcheckftml:cmd"
psfcheckglyphinventory = "silfont.scripts.psfcheckglyphinventory:cmd"
psfcheckinterpolatable = "silfont.scripts.psfcheckinterpolatable:cmd"
psfcheckproject = "silfont.scripts.psfcheckproject:cmd"
psfcompdef2xml = "silfont.scripts.psfcompdef2xml:cmd"
psfcompressgr = "silfont.scripts.psfcompressgr:cmd"
psfcopyglyphs = "silfont.scripts.psfcopyglyphs:cmd"
psfcopymeta = "silfont.scripts.psfcopymeta:cmd"
psfcreateinstances = "silfont.scripts.psfcreateinstances:cmd"
psfcsv2comp = "silfont.scripts.psfcsv2comp:cmd"
psfdeflang = "silfont.scripts.psfdeflang:cmd"
psfdeleteglyphs = "silfont.scripts.psfdeleteglyphs:cmd"
psfdupglyphs = "silfont.scripts.psfdupglyphs:cmd"
psfexportanchors = "silfont.scripts.psfexportanchors:cmd"
psfexportmarkcolors = "silfont.scripts.psfexportmarkcolors:cmd"
psfexportpsnames = "silfont.scripts.psfexportpsnames:cmd"
psfexportunicodes = "silfont.scripts.psfexportunicodes:cmd"
psffixffglifs = "silfont.scripts.psffixffglifs:cmd"
psffixfontlab = "silfont.scripts.psffixfontlab:cmd"
psfftml2TThtml = "silfont.scripts.psfftml2TThtml:cmd"
psfftml2odt = "silfont.scripts.psfftml2odt:cmd"
psfgetglyphnames = "silfont.scripts.psfgetglyphnames:cmd"
psfglyphs2ufo = "silfont.scripts.psfglyphs2ufo:cmd"
psfmakedeprecated = "silfont.scripts.psfmakedeprecated:cmd"
psfmakefea = "silfont.scripts.psfmakefea:cmd"
psfmakescaledshifted = "silfont.scripts.psfmakescaledshifted:cmd"
psfmakewoffmetadata = "silfont.scripts.psfmakewoffmetadata:cmd"
psfnormalize = "silfont.scripts.psfnormalize:cmd"
psfremovegliflibkeys = "silfont.scripts.psfremovegliflibkeys:cmd"
psfrenameglyphs = "silfont.scripts.psfrenameglyphs:cmd"
psfrunfbchecks = "silfont.scripts.psfrunfbchecks:cmd"
psfsetassocfeat = "silfont.scripts.psfsetassocfeat:cmd"
psfsetassocuids = "silfont.scripts.psfsetassocuids:cmd"
psfsetdummydsig = "silfont.scripts.psfsetdummydsig:cmd"
psfsetglyphdata = "silfont.scripts.psfsetglyphdata:cmd"
psfsetglyphorder = "silfont.scripts.psfsetglyphorder:cmd"
psfsetkeys = "silfont.scripts.psfsetkeys:cmd"
psfsetmarkcolors = "silfont.scripts.psfsetmarkcolors:cmd"
psfsetpsnames = "silfont.scripts.psfsetpsnames:cmd"
psfsetunicodes = "silfont.scripts.psfsetunicodes:cmd"
psfsetversion = "silfont.scripts.psfsetversion:cmd"
psfshownames = "silfont.scripts.psfshownames:cmd"
psfsubset = "silfont.scripts.psfsubset:cmd"
psfsyncmasters = "silfont.scripts.psfsyncmasters:cmd"
psfsyncmeta = "silfont.scripts.psfsyncmeta:cmd"
psftuneraliases = "silfont.scripts.psftuneraliases:cmd"
psfufo2glyphs = "silfont.scripts.psfufo2glyphs:cmd"
psfufo2ttf = "silfont.scripts.psfufo2ttf:cmd"
psfversion = "silfont.scripts.psfversion:cmd"
psfwoffit = "silfont.scripts.psfwoffit:cmd"
psfxml2compdef = "silfont.scripts.psfxml2compdef:cmd"
3 pytest.ini Normal file
@ -0,0 +1,3 @@
[pytest]
testpaths = tests
filterwarnings = ignore::DeprecationWarning
5 src/silfont/__init__.py Normal file
@ -0,0 +1,5 @@
#!/usr/bin/env python3
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2014-2023 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__version__ = '1.8.0'
358
src/silfont/comp.py
Normal file
358
src/silfont/comp.py
Normal file
|
@ -0,0 +1,358 @@
|
|||
#!/usr/bin/env python3
|
||||
'Composite glyph definition'
|
||||
__url__ = 'https://github.com/silnrsi/pysilfont'
|
||||
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
|
||||
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
|
||||
__author__ = 'David Rowe'
|
||||
|
||||
import re
|
||||
from xml.etree import ElementTree as ET
|
||||
|
||||
# REs to parse (from right to left) comment, SIL extension parameters, markinfo, UID, metrics,
|
||||
# and (from left) glyph name
|
||||
|
||||
# Extract comment from end of line (NB: Doesn't use re.VERBOSE because it contains #.)
|
||||
# beginning of line, optional whitespace, remainder, optional whitespace, comment to end of line
|
||||
inputline=re.compile(r"""^\s*(?P<remainder>.*?)(\s*#\s*(?P<commenttext>.*))?$""")
|
||||
|
||||
# Parse SIL extension parameters in [...], but only after |
|
||||
paraminfo=re.compile(r"""^\s*
|
||||
(?P<remainder>[^|]*
|
||||
($|
|
||||
\|[^[]*$|
|
||||
\|[^[]*\[(?P<paraminfo>[^]]*)\]))
|
||||
\s*$""",re.VERBOSE)
|
||||
|
||||
# Parse markinfo
|
||||
markinfo=re.compile(r"""^\s*
|
||||
(?P<remainder>[^!]*?)
|
||||
\s*
|
||||
(?:!\s*(?P<markinfo>[.0-9]+(?:,[ .0-9]+){3}))? # ! markinfo
|
||||
(?P<remainder2>[^!]*?)
|
||||
\s*$""",re.VERBOSE)
|
||||
|
||||
# Parse uid
|
||||
uidinfo=re.compile(r"""^\s*
|
||||
(?P<remainder>[^|]*?)
|
||||
\s*
|
||||
(?:\|\s*(?P<UID>[^^!]*)?)? # | followed by nothing, or 4- to 6-digit UID
|
||||
(?P<remainder2>[^|]*?)
|
||||
\s*$""",re.VERBOSE)
|
||||
|
||||
# Parse metrics
|
||||
metricsinfo=re.compile(r"""^\s*
|
||||
(?P<remainder>[^^]*?)
|
||||
\s*
|
||||
(?:\^\s*(?P<metrics>[-0-9]+\s*(?:,\s*[-0-9]+)?))? # metrics (either ^x,y or ^a)
|
||||
(?P<remainder2>[^^]*?)
|
||||
\s*$""",re.VERBOSE)
|
||||
|
||||
# Parse glyph information (up to =)
|
||||
glyphdef=re.compile(r"""^\s*
|
||||
(?P<PSName>[._A-Za-z][._A-Za-z0-9-]*) # glyphname
|
||||
\s*=\s*
|
||||
(?P<remainder>.*?)
|
||||
\s*$""",re.VERBOSE)
|
||||
|
||||
# break tokens off the right hand side from right to left and finally off left hand side (up to =)
|
||||
initialtokens=[ (inputline, 'commenttext', ""),
|
||||
(paraminfo, 'paraminfo', "Error parsing parameters in [...]"),
|
||||
(markinfo, 'markinfo', "Error parsing information after !"),
|
||||
(uidinfo, 'UID', "Error parsing information after |"),
|
||||
(metricsinfo, 'metrics', "Error parsing information after ^"),
|
||||
(glyphdef, 'PSName', "Error parsing glyph name before =") ]
|
||||
|
||||
# Parse base and diacritic information
|
||||
compdef=re.compile(r"""^\s*
|
||||
(?P<compname>[._A-Za-z][._A-Za-z0-9-]*) # name of base or diacritic in composite definition
|
||||
(?:@ # @ precedes position information
|
||||
(?:(?:\s*(?P<base>[^: ]+)):)? # optional base glyph followed by :
|
||||
\s*
|
||||
(?P<position>(?:[^ +&[])+) # position information (delimited by space + & [ or end of line)
|
||||
\s*)? # end of @ clause
|
||||
\s*
|
||||
(?:\[(?P<params>[^]]*)\])? # parameters inside [..]
|
||||
\s*
|
||||
(?P<remainder>.*)$
|
||||
""",re.VERBOSE)
|
||||
|
||||
# Parse metrics
|
||||
lsb_rsb=re.compile(r"""^\s*
|
||||
(?P<lsb>[-0-9]+)\s*(?:,\s*(?P<rsb>[-0-9]+))? # optional metrics (either ^lsb,rsb or ^adv)
|
||||
\s*$""",re.VERBOSE)
|
||||
|
||||
# RE to break off one key=value parameter from text inside [key=value;key=value;key=value]
|
||||
paramdef=re.compile(r"""^\s*
|
||||
(?P<paramname>[a-z0-9]+) # paramname
|
||||
\s*=\s* # = (with optional white space before/after)
|
||||
(?P<paramval>[^;]+?) # any text up to ; or end of string
|
||||
\s* # optional whitespace
|
||||
(?:;\s*(?P<rest>.+)$|\s*$) # either ; and (non-empty) rest of parameters, or end of line
|
||||
""",re.VERBOSE)
|
||||
|
||||
class CompGlyph(object):
|
||||
|
||||
def __init__(self, CDelement=None, CDline=None):
|
||||
self.CDelement = CDelement
|
||||
self.CDline = CDline
|
||||
|
||||
def _parseparams(self, rest):
|
||||
"""Parse a parameter line such as:
|
||||
key1=value1;key2=value2
|
||||
and return a dictionary with key:value pairs.
|
||||
"""
|
||||
params = {}
|
||||
while rest:
|
||||
matchparam=re.match(paramdef,rest)
|
||||
if matchparam == None:
|
||||
raise ValueError("Parameter error: " + rest)
|
||||
params[matchparam.group('paramname')] = matchparam.group('paramval')
|
||||
rest = matchparam.group('rest')
|
||||
return(params)
|
||||
|
||||
def parsefromCDline(self):
|
||||
"""Parse the composite glyph information (in self.CDline) such as:
|
||||
LtnCapADiear = LtnCapA + CombDiaer@U |00C4 ! 1, 0, 0, 1 # comment
|
||||
and return a <glyph> element (in self.CDelement)
|
||||
<glyph PSName="LtnCapADiear" UID="00C4">
|
||||
<note>comment</note>
|
||||
<property name="mark" value="1, 0, 0, 1"/>
|
||||
<base PSName="LtnCapA">
|
||||
<attach PSName="CombDiaer" with="_U" at="U"/>
|
||||
</base>
|
||||
</glyph>
|
||||
Position info after @ can include optional base glyph name followed by colon.
|
||||
"""
|
||||
line = self.CDline
|
||||
results = {}
|
||||
for parseinfo in initialtokens:
|
||||
if len(line) > 0:
|
||||
regex, groupname, errormsg = parseinfo
|
||||
matchresults = re.match(regex,line)
|
||||
if matchresults == None:
|
||||
raise ValueError(errormsg)
|
||||
line = matchresults.group('remainder')
|
||||
resultsval = matchresults.group(groupname)
|
||||
if resultsval != None:
|
||||
results[groupname] = resultsval.strip()
|
||||
if groupname == 'paraminfo': # paraminfo match needs to be removed from remainder
|
||||
line = line.rstrip('['+resultsval+']')
|
||||
if 'remainder2' in matchresults.groupdict().keys(): line += ' ' + matchresults.group('remainder2')
|
||||
# At this point results optionally may contain entries for any of 'commenttext', 'paraminfo', 'markinfo', 'UID', or 'metrics',
|
||||
# but it must have 'PSName' if any of 'paraminfo', 'markinfo', 'UID', or 'metrics' present
|
||||
note = results.pop('commenttext', None)
|
||||
if 'PSName' not in results:
|
||||
if len(results) > 0:
|
||||
raise ValueError("Missing glyph name")
|
||||
else: # comment only, or blank line
|
||||
return None
|
||||
dic = {}
|
||||
UIDpresent = 'UID' in results
|
||||
if UIDpresent and results['UID'] == '':
|
||||
results.pop('UID')
|
||||
if 'paraminfo' in results:
|
||||
paramdata = results.pop('paraminfo')
|
||||
if UIDpresent:
|
||||
dic = self._parseparams(paramdata)
|
||||
else:
|
||||
line += " [" + paramdata + "]"
|
||||
mark = results.pop('markinfo', None)
|
||||
if 'metrics' in results:
|
||||
m = results.pop('metrics')
|
||||
matchmetrics = re.match(lsb_rsb,m)
|
||||
if matchmetrics == None:
|
||||
raise ValueError("Error in parameters: " + m)
|
||||
elif matchmetrics.group('rsb'):
|
||||
metricdic = {'lsb': matchmetrics.group('lsb'), 'rsb': matchmetrics.group('rsb')}
|
||||
else:
|
||||
metricdic = {'advance': matchmetrics.group('lsb')}
|
||||
else:
|
||||
metricdic = None
|
||||
|
||||
# Create <glyph> element and assign attributes
|
||||
g = ET.Element('glyph',attrib=results)
|
||||
if note: # note from commenttext becomes <note> subelement
|
||||
n = ET.SubElement(g,'note')
|
||||
n.text = note.rstrip()
|
||||
# markinfo becomes <property> subelement
|
||||
if mark:
|
||||
p = ET.SubElement(g, 'property', name = 'mark', value = mark)
|
||||
# paraminfo parameters (now in dic) become <property> subelements
|
||||
if dic:
|
||||
for key in dic:
|
||||
p = ET.SubElement(g, 'property', name = key, value = dic[key])
|
||||
# metrics parameters (now in metricdic) become subelements
|
||||
if metricdic:
|
||||
for key in metricdic:
|
||||
k = ET.SubElement(g, key, width=metricdic[key])
|
||||
|
||||
# Prepare to parse remainder of line
|
||||
prevbase = None
|
||||
prevdiac = None
|
||||
remainder = line
|
||||
expectingdiac = False
|
||||
|
||||
# top of loop to process remainder of line, breaking off base or diacritics from left to right
|
||||
while remainder != "":
|
||||
matchresults=re.match(compdef,remainder)
|
||||
if matchresults == None or matchresults.group('compname') == "" :
|
||||
raise ValueError("Error parsing glyph name: " + remainder)
|
||||
propdic = {}
|
||||
if matchresults.group('params'):
|
||||
propdic = self._parseparams(matchresults.group('params'))
|
||||
base = matchresults.group('base')
|
||||
position = matchresults.group('position')
|
||||
if expectingdiac:
|
||||
# Determine parent element, based on previous base and diacritic glyphs and optional
|
||||
# matchresults.group('base'), indicating diacritic attaches to a different glyph
|
||||
if base == None:
|
||||
if prevdiac != None:
|
||||
parent = prevdiac
|
||||
else:
|
||||
parent = prevbase
|
||||
elif base != prevbase.attrib['PSName']:
|
||||
raise ValueError("Error in diacritic alternate base glyph: " + base)
|
||||
else:
|
||||
parent = prevbase
|
||||
if prevdiac == None:
|
||||
raise ValueError("Unnecessary diacritic alternate base glyph: " + base)
|
||||
# Because 'with' is Python reserved word, passing it directly as a parameter
|
||||
# causes Python syntax error, so build dictionary to pass to SubElement
|
||||
att = {'PSName': matchresults.group('compname')}
|
||||
if position:
|
||||
if 'with' in propdic:
|
||||
withval = propdic.pop('with')
|
||||
else:
|
||||
withval = "_" + position
|
||||
att['at'] = position
|
||||
att['with'] = withval
|
||||
# Create <attach> subelement
|
||||
e = ET.SubElement(parent, 'attach', attrib=att)
|
||||
prevdiac = e
|
||||
elif (base or position):
|
||||
raise ValueError("Position information on base glyph not supported")
|
||||
else:
|
||||
# Create <base> subelement
|
||||
e = ET.SubElement(g, 'base', PSName=matchresults.group('compname'))
|
||||
prevbase = e
|
||||
prevdiac = None
|
||||
if 'shift' in propdic:
|
||||
xval, yval = propdic.pop('shift').split(',')
|
||||
s = ET.SubElement(e, 'shift', x=xval, y=yval)
|
||||
# whatever parameters are left in propdic become <property> subelements
|
||||
for key, val in propdic.items():
|
||||
p = ET.SubElement(e, 'property', name=key, value=val)
|
||||
|
||||
remainder = matchresults.group('remainder').lstrip()
|
||||
nextchar = remainder[:1]
|
||||
remainder = remainder[1:].lstrip()
|
||||
expectingdiac = nextchar == '+'
|
||||
if nextchar == '&' or nextchar == '+':
|
||||
if len(remainder) == 0:
|
||||
raise ValueError("Expecting glyph name after & or +")
|
||||
elif len(nextchar) > 0:
|
||||
raise ValueError("Expecting & or + and found " + nextchar)
|
||||
self.CDelement = g
|
||||
|
||||
def _diacinfo(self, node, parent, lastglyph):
|
||||
"""receives attach element, PSName of its parent, PSName of most recent glyph
|
||||
returns a string equivalent of this node (and all its descendants)
|
||||
and a string with the name of the most recent glyph
|
||||
"""
|
||||
diacname = node.get('PSName')
|
||||
atstring = node.get('at')
|
||||
withstring = node.get('with')
|
||||
propdic = {}
|
||||
if withstring != "_" + atstring:
|
||||
propdic['with'] = withstring
|
||||
subattachlist = []
|
||||
attachglyph = ""
|
||||
if parent != lastglyph:
|
||||
attachglyph = parent + ":"
|
||||
for subelement in node:
|
||||
if subelement.tag == 'property':
|
||||
propdic[subelement.get('name')] = subelement.get('value')
|
||||
elif subelement.tag == 'attach':
|
||||
subattachlist.append(subelement)
|
||||
elif subelement.tag == 'shift':
|
||||
propdic['shift'] = subelement.get('x') + "," + subelement.get('y')
|
||||
# else flag error/warning?
|
||||
propstring = ""
|
||||
if propdic:
|
||||
propstring += " [" + ";".join( [k + "=" + v for k,v in propdic.items()] ) + "]"
|
||||
returnstring = " + " + diacname + "@" + attachglyph + atstring + propstring
|
||||
prevglyph = diacname
|
||||
for s in subattachlist:
|
||||
string, prevglyph = self._diacinfo(s, diacname, prevglyph)
|
||||
returnstring += string
|
||||
return returnstring, prevglyph
|
||||
|
||||
def _basediacinfo(self, baseelement):
|
||||
"""receives base element and returns a string equivalent of this node (and all its desendants)"""
|
||||
basename = baseelement.get('PSName')
|
||||
returnstring = basename
|
||||
prevglyph = basename
|
||||
bpropdic = {}
|
||||
for child in baseelement:
|
||||
if child.tag == 'attach':
|
||||
string, prevglyph = self._diacinfo(child, basename, prevglyph)
|
||||
returnstring += string
|
||||
elif child.tag == 'shift':
|
||||
bpropdic['shift'] = child.get('x') + "," + child.get('y')
|
||||
if bpropdic:
|
||||
returnstring += " [" + ";".join( [k + "=" + v for k,v in bpropdic.items()] ) + "]"
|
||||
return returnstring
|
||||
|
||||
    def parsefromCDelement(self):
        """Parse a glyph element such as:
            <glyph PSName="LtnSmITildeGraveDotBlw" UID="E000">
                <note>i tilde grave dot-below</note>
                <base PSName="LtnSmDotlessI">
                    <attach PSName="CombDotBlw" at="L" with="_L" />
                    <attach PSName="CombTilde" at="U" with="_U">
                        <attach PSName="CombGrave" at="U" with="_U" />
                    </attach>
                </base>
            </glyph>
        and produce the equivalent CDline in format:
            LtnSmITildeGraveDotBlw = LtnSmDotlessI + CombDotBlw@L + CombTilde@LtnSmDotlessI:U + CombGrave@U | E000 # i tilde grave dot-below
        """
        g = self.CDelement
        lsb = None
        rsb = None
        adv = None
        markinfo = None
        note = None
        paramdic = {}
        outputline = [g.get('PSName')]
        resultUID = g.get('UID')
        basesep = " = "

        for child in g:
            if child.tag == 'note': note = child.text
            elif child.tag == 'property':
                if child.get('name') == 'mark': markinfo = child.get('value')
                else: paramdic[child.get('name')] = child.get('value')
            elif child.tag == 'lsb': lsb = child.get('width')
            elif child.tag == 'rsb': rsb = child.get('width')
            elif child.tag == 'advance': adv = child.get('width')
            elif child.tag == 'base':
                outputline.extend([basesep, self._basediacinfo(child)])
                basesep = " & "

        if paramdic and resultUID is None:
            resultUID = " "  # to force output of the "|" separator
        if adv: outputline.extend([' ^', adv])
        if lsb and rsb: outputline.extend([' ^', lsb, ',', rsb])
        if resultUID: outputline.extend([' |', resultUID])
        if markinfo: outputline.extend([' !', markinfo])
        if paramdic:
            paramsep = " ["
            for k in paramdic:
                outputline.extend([paramsep, k, "=", paramdic[k]])
                paramsep = ";"
            outputline.append("]")
        if note:
            outputline.extend([" # ", note])
        self.CDline = "".join(outputline)
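As an editorial aside (not part of the source file above): the "[k=v;k=v]" property-bracket formatting that `_diacinfo` and `_basediacinfo` both build can be sketched standalone. `format_props` and the sample `propdic` below are hypothetical stand-ins for the dict assembled from `property`/`shift` subelements.

```python
# Standalone sketch of the "[k=v;k=v]" property formatting used above.
# format_props is a hypothetical helper; propdic stands in for the dict
# built from property/shift subelements in the CD element tree.
def format_props(propdic):
    if not propdic:
        return ""
    return " [" + ";".join(k + "=" + v for k, v in propdic.items()) + "]"

print(format_props({"with": "_U", "shift": "10,20"}))  # → " [with=_U;shift=10,20]"
print(format_props({}))                                # → ""
```

Since Python 3.7 dicts preserve insertion order, so the joined key order matches the order the attributes were encountered, mirroring the real code's behaviour.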
754  src/silfont/core.py  Normal file

@@ -0,0 +1,754 @@
#!/usr/bin/env python3
'General classes and functions for use in pysilfont scripts'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2014-2023 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from glob import glob
from collections import OrderedDict
import sys, os, argparse, datetime, shutil, csv, configparser

import silfont

class loggerobj(object):
    # For handling log messages.
    # Use S for severe errors caused by data, parameters supplied by user etc
    # Use X for severe errors caused by bad code to get traceback exception

    def __init__(self, logfile=None, loglevels="", leveltext="", loglevel="W", scrlevel="P"):
        self.logfile = logfile
        self.loglevels = loglevels
        self.leveltext = leveltext
        self.errorcount = 0
        self.warningcount = 0
        if not self.loglevels: self.loglevels = {'X': 0, 'S': 1, 'E': 2, 'P': 3, 'W': 4, 'I': 5, 'V': 6}
        if not self.leveltext: self.leveltext = ('Exception ', 'Severe: ', 'Error: ', 'Progress: ', 'Warning: ', 'Info: ', 'Verbose: ')
        super(loggerobj, self).__setattr__("loglevel", "E")  # Temp values so invalid log levels can be reported
        super(loggerobj, self).__setattr__("scrlevel", "E")  #
        self.loglevel = loglevel
        self.scrlevel = scrlevel

    def __setattr__(self, name, value):
        if name in ("loglevel", "scrlevel"):
            if value in self.loglevels:
                (minlevel, minnum) = ("E", 2) if name == "loglevel" else ("S", 1)
                if self.loglevels[value] < minnum:
                    value = minlevel
                    self.log(name + " increased to minimum level of " + minlevel, "E")
            else:
                self.log("Invalid " + name + " value: " + value, "S")
        super(loggerobj, self).__setattr__(name, value)
        if name == "scrlevel": self._basescrlevel = value  # Used by resetscrlevel

    def log(self, logmessage, msglevel="W"):
        levelval = self.loglevels[msglevel]
        message = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S ") + self.leveltext[levelval] + str(logmessage)
        # message = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f")[0:22] + " " + self.leveltext[levelval] + logmessage  ## added milliseconds for timing tests
        if levelval <= self.loglevels[self.scrlevel]: print(message)
        if self.logfile and levelval <= self.loglevels[self.loglevel]: self.logfile.write(message + "\n")
        if msglevel == "S":
            print("\n **** Fatal error - exiting ****")
            sys.exit(1)
        if msglevel == "X": assert False, message
        if msglevel == "E": self.errorcount += 1
        if msglevel == "W": self.warningcount += 1

    def raisescrlevel(self, level):  # Temporarily increase screen logging
        if level not in self.loglevels or level == "X": self.log("Invalid scrlevel: " + level, "X")
        if self.loglevels[level] > self.loglevels[self.scrlevel]:
            current = self.scrlevel
            self.scrlevel = level
            self._basescrlevel = current
            self.log("scrlevel raised to " + level, "I")

    def resetscrlevel(self):
        self.scrlevel = self._basescrlevel

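As an editorial aside (not part of core.py): `loggerobj.log` decides visibility by comparing numeric level values, where a lower number means more severe. A minimal standalone sketch of that threshold rule, using the same level dict as the class:

```python
# Standalone sketch of loggerobj's threshold rule: a message is emitted
# when its level number is <= the configured level's number
# (lower number = more severe). `shown` is a hypothetical helper.
LOGLEVELS = {'X': 0, 'S': 1, 'E': 2, 'P': 3, 'W': 4, 'I': 5, 'V': 6}

def shown(msglevel, scrlevel):
    return LOGLEVELS[msglevel] <= LOGLEVELS[scrlevel]

print(shown("E", "P"))  # → True  (errors pass a Progress-level screen filter)
print(shown("I", "P"))  # → False (info is suppressed at Progress level)
```

This is why raising `scrlevel` to "V" shows everything while "S" shows only severe errors and exceptions.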
class parameters(object):
    # Object for holding parameters information, organised by class (eg logging)

    # Default parameters for use in pysilfont modules
    #   Names must be case-insensitively unique across all parameter classes
    #   Parameter types are deduced from the default values

    def __init__(self):
        # Default parameters for all modules
        defparams = {}
        defparams['system'] = {'version': silfont.__version__, 'copyright': silfont.__copyright__}  # Code treats these as read-only
        defparams['logging'] = {'scrlevel': 'P', 'loglevel': 'W'}
        defparams['backups'] = {'backup': True, 'backupdir': 'backups', 'backupkeep': 5}
        # Default parameters for UFO module
        defparams['outparams'] = OrderedDict([  # Use ordered dict so parameters show in logical order with -h p
            ("UFOversion", ""),  # UFOversion - defaults to existing unless a value is supplied
            ("indentIncr", "  "),  # XML indent increment
            ("indentFirst", "  "),  # First XML indent
            ("indentML", False),  # Should multi-line string values be indented?
            ("plistIndentFirst", ""),  # First indent amount for plists
            ('precision', 6),  # Decimal precision to use in XML output - both for real values and for attributes if float
            ("floatAttribs", ['xScale', 'xyScale', 'yxScale', 'yScale', 'angle']),  # Used with precision above
            ("intAttribs", ['pos', 'width', 'height', 'xOffset', 'yOffset', 'x', 'y']),
            ("sortDicts", True),  # Should dict elements be sorted alphabetically?
            ("renameGlifs", True),  # Rename glifs based on UFO3 suggested algorithm
            ("format1Glifs", False),  # Force output of format 1 glifs including UFO2-style anchors (for use with FontForge)
            ("glifElemOrder", ['advance', 'unicode', 'note', 'image', 'guideline', 'anchor', 'outline', 'lib']),  # Order to output glif elements
            ("attribOrders.glif", ['pos', 'width', 'height', 'fileName', 'base', 'xScale', 'xyScale', 'yxScale', 'yScale', 'xOffset', 'yOffset',
                                   'x', 'y', 'angle', 'type', 'smooth', 'name', 'format', 'color', 'identifier'])
            ])
        defparams['ufometadata'] = {"checkfix": "check"}  # Apply metadata fixes when reading UFOs

        self.paramshelp = {}  # Info used when outputting help about params options
        self.paramshelp["classdesc"] = {
            "logging": "controls the level of log messages that go to screen or log files.",
            "backups": "controls backup settings for scripts that output fonts - by default backups are made if the output font is overwriting the input font",
            "outparams": "Output options for UFOs - cover UFO version and normalization",
            "ufometadata": "controls if UFO metadata should be checked, or checked and fixed"
        }
        self.paramshelp["paramsdesc"] = {
            "scrlevel": "Logging level for screen messages - one of S, E, P, W, I or V",
            "loglevel": "Logging level for log file messages - one of E, P, W, I or V",
            "backup": "Should font backups be made?",
            "backupdir": "Directory to use for font backups",
            "backupkeep": "How many backups to keep",
            "indentIncr": "XML indent increment",
            "indentFirst": "First XML indent",
            "indentML": "Should multi-line string values be indented?",
            "plistIndentFirst": "First indent amount for plists",
            "sortDicts": "Should dict elements be sorted alphabetically?",
            "precision": "Decimal precision to use in XML output - both for real values and for attributes if numeric",
            "renameGlifs": "Rename glifs based on UFO3 suggested algorithm",
            "UFOversion": "UFO version to output - defaults to version of the input UFO",
            "format1Glifs": "Force output of format 1 glifs including UFO2-style anchors (was used with FontForge; no longer needed)",
            "glifElemOrder": "Order to output glif elements",
            "floatAttribs": "List of float attributes - used when setting decimal precision",
            "intAttribs": "List of attributes that should be integers",
            "attribOrders.glif": "Order in which to output glif attributes",
            "checkfix": "Should check & fix tests be done - one of None, Check or Fix"
        }
        self.paramshelp["defaultsdesc"] = {  # For use where default needs clarifying with text
            "indentIncr": "<two spaces>",
            "indentFirst": "<two spaces>",
            "plistIndentFirst": "<No indent>",
            "UFOversion": "<Existing version>"
        }

        self.classes = {}  # Dictionary containing a list of parameters in each class
        self.paramclass = {}  # Dictionary of class name for each parameter name
        self.types = {}  # Python type for each parameter deduced from initial values supplied
        self.listtypes = {}  # If type is list, the type of values in the list
        self.logger = loggerobj()
        defset = _paramset(self, "default", "defaults")
        self.sets = {"default": defset}
        self.lcase = {}  # Lower case index of parameter names
        for classn in defparams:
            self.classes[classn] = []
            for parn in defparams[classn]:
                value = defparams[classn][parn]
                self.classes[classn].append(parn)
                self.paramclass[parn] = classn
                self.types[parn] = type(value)
                if type(value) is list: self.listtypes[parn] = type(value[0])
                super(_paramset, defset).__setitem__(parn, value)  # __setitem__ in paramset does not allow new values!
                self.lcase[parn.lower()] = parn

    def addset(self, name, sourcedesc=None, inputdict=None, configfile=None, copyset=None):
        # Create a subset from one of a dict, config file or existing set
        # Only one option should be used per call
        # sourcedesc should be added for user-supplied data (eg config file) for reporting purposes
        dict = {}
        if configfile:
            config = configparser.ConfigParser()
            config.read_file(open(configfile, encoding="utf-8"))
            if sourcedesc is None: sourcedesc = configfile
            for classn in config.sections():
                for item in config.items(classn):
                    parn = item[0]
                    if self.paramclass[parn] == "system":
                        self.logger.log("Can't change " + parn + " parameter via config file", "S")
                    val = item[1].strip('"').strip("'")
                    dict[parn] = val
        elif copyset:
            if sourcedesc is None: sourcedesc = "Copy of " + copyset
            for parn in self.sets[copyset]:
                dict[parn] = self.sets[copyset][parn]
        elif inputdict:
            dict = inputdict
            if sourcedesc is None: sourcedesc = "unspecified source"
        self.sets[name] = _paramset(self, name, sourcedesc, dict)

    def printhelp(self):
        phelp = self.paramshelp
        print("\nMost pysilfont scripts have -p, --params options which can be used to change the default behaviour of scripts.  For example '-p scrlevel=w' will log warning messages to screen\n")
        print("Listed below are all such parameters, grouped by purpose.  Not all apply to all scripts - "
              "in particular outparams and ufometadata only apply to scripts using pysilfont's own UFO code")
        for classn in ("logging", "backups", "ufometadata", "outparams"):
            print("\n" + classn[0].upper() + classn[1:] + " - " + phelp["classdesc"][classn])
            for param in self.classes[classn]:
                if param == "format1Glifs": continue  # Param due to be phased out
                paramdesc = phelp["paramsdesc"][param]
                paramtype = self.types[param].__name__
                defaultdesc = phelp["defaultsdesc"][param] if param in phelp["defaultsdesc"] else self.sets["default"][param]
                print('  {:<20}: {}'.format(param, paramdesc))
                print('     (Type: {:<6} Default: {})'.format(paramtype + ",", defaultdesc))
        print("\nNote parameter names are case-insensitive\n")
        print("For more help see https://github.com/silnrsi/pysilfont/blob/master/docs/parameters.md\n")

class _paramset(dict):
    # Set of parameter values
    def __init__(self, params, name, sourcedesc, inputdict=None):
        if inputdict is None: inputdict = {}
        self.name = name
        self.sourcedesc = sourcedesc  # Description of source for reporting
        self.params = params  # Parent parameters object
        for parn in inputdict:
            if params.paramclass[parn] == "system":  # system values can't be changed
                if inputdict[parn] != params.sets["default"][parn]:
                    self.params.logger.log("Can't change " + parn + " - system parameters can't be changed", "X")
                else:
                    super(_paramset, self).__setitem__(parn, inputdict[parn])
            else:
                self[parn] = inputdict[parn]

    def __setitem__(self, parn, value):
        origvalue = value
        origparn = parn
        parn = parn.lower()
        if self.params.paramclass[origparn] == "system":
            self.params.logger.log("Can't change " + parn + " - system parameters are read-only", "X")
        if parn not in self.params.lcase:
            self.params.logger.log("Invalid parameter " + origparn + " from " + self.sourcedesc, "S")
        else:
            parn = self.params.lcase[parn]
        ptyp = self.params.types[parn]
        if ptyp is bool:
            value = str2bool(value)
            if value is None: self.params.logger.log(self.sourcedesc + " parameter " + origparn + " must be boolean: " + origvalue, "S")
        if ptyp is list:
            if type(value) is not list: value = value.split(",")  # Convert csv string into list
            if len(value) < 2: self.params.logger.log(self.sourcedesc + " parameter " + origparn + " must have a list of values: " + origvalue, "S")
            valuesOK = True
            listtype = self.params.listtypes[parn]
            for i, val in enumerate(value):
                if listtype is bool:
                    val = str2bool(val)
                    if val is None: self.params.logger.log(self.sourcedesc + " parameter " + origparn + " must contain boolean values: " + origvalue, "S")
                    value[i] = val
                if type(val) != listtype:
                    valuesOK = False
                    badtype = str(type(val))
            if not valuesOK: self.params.logger.log("Invalid " + badtype + " parameter type for " + origparn + ": " + self.params.types[parn], "S")
        if parn in ("loglevel", "scrlevel"):  # Need to check log level is valid before setting it since otherwise logging will fail
            value = value.upper()
            if value not in self.params.logger.loglevels: self.params.logger.log(self.sourcedesc + " parameter " + parn + " invalid", "S")
        super(_paramset, self).__setitem__(parn, value)

    def updatewith(self, update, sourcedesc=None, log=True):
        # Update a set with values from another set
        if sourcedesc is None: sourcedesc = self.params.sets[update].sourcedesc
        for parn in self.params.sets[update]:
            oldval = self[parn] if parn in self else ""
            self[parn] = self.params.sets[update][parn]
            if log and oldval != "" and self[parn] != oldval:
                old = str(oldval)
                new = str(self[parn])
                if old != old.strip() or new != new.strip():  # Add quotes if there are leading or trailing spaces
                    old = '"' + old + '"'
                    new = '"' + new + '"'
                self.params.logger.log(sourcedesc + " parameters: changing " + parn + " from " + old + " to " + new, "I")

class csvreader(object):  # Iterator for csv files, skipping comments and checking number of fields
    def __init__(self, filename, minfields=0, maxfields=999, numfields=None, logger=None):
        self.filename = filename
        self.minfields = minfields
        self.maxfields = maxfields
        self.numfields = numfields
        self.logger = logger if logger else loggerobj()  # If no logger supplied, will just log to screen
        # Open the file and create reader
        try:
            file = open(filename, "rt", encoding="utf-8")
        except Exception as e:
            print(e)
            sys.exit(1)
        self.file = file
        self.reader = csv.reader(file)
        # Find the first non-comment line then reset so __iter__ still returns the first line
        # This is so scripts can analyse the first line (eg to look for headers) before starting iterating
        self.firstline = None
        self._commentsbeforefirstline = -1
        while not self.firstline:
            row = next(self.reader, None)
            if row is None: self.logger.log("Input csv is empty or all lines are comments or blank", "S")
            self._commentsbeforefirstline += 1
            if row == []: continue  # Skip blank lines
            if row[0].lstrip().startswith("#"): continue  # Skip comments - ie lines starting with #
            self.firstline = row
        file.seek(0)  # Reset the csv and skip comments
        for i in range(self._commentsbeforefirstline): next(self.reader, None)

    def __setattr__(self, name, value):
        if name == "numfields" and value is not None:  # If numfields is changed, reset min and max fields
            self.minfields = value
            self.maxfields = value
        super(csvreader, self).__setattr__(name, value)

    def __iter__(self):
        for row in self.reader:
            self.line_num = self.reader.line_num - 1 - self._commentsbeforefirstline  # Count is out due to reading first line in __init__
            if row == []: continue  # Skip blank lines
            if row[0].lstrip().startswith("#"): continue  # Skip comments - ie lines starting with #
            if len(row) < self.minfields or len(row) > self.maxfields:
                self.logger.log("Invalid number of fields on line " + str(self.line_num) + " in " + self.filename, "E")
                continue
            yield row

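As an editorial aside (not part of core.py): the skip-and-validate rules `csvreader` applies per row can be sketched standalone with the stdlib `csv` module. `clean_rows` below is a hypothetical helper, not part of pysilfont.

```python
# Standalone sketch of csvreader's per-row rules: blank rows and rows whose
# first field starts with "#" are skipped; rows with a bad field count are
# rejected (the real class logs an "E" message instead of silently dropping).
import csv
import io

def clean_rows(text, minfields=2, maxfields=2):
    rows = []
    for row in csv.reader(io.StringIO(text)):
        if row == []:
            continue  # skip blank lines
        if row[0].lstrip().startswith("#"):
            continue  # skip comment lines
        if not (minfields <= len(row) <= maxfields):
            continue  # invalid field count
        rows.append(row)
    return rows

data = "# header comment\n\na,1\nbad-line\nb,2\n"
print(clean_rows(data))  # → [['a', '1'], ['b', '2']]
```

Note the real class also remembers the first non-comment row (`firstline`) before iteration starts, so callers can inspect headers up front.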
def execute(tool, fn, scriptargspec, chain = None):
|
||||
# Function to handle parameter parsing, font and file opening etc in command-line scripts
|
||||
# Supports opening (and saving) fonts using PysilFont UFO (UFO), fontParts (FP) or fontTools (FT)
|
||||
# Special handling for:
|
||||
# -d variation on -h to print extra info about defaults
|
||||
# -q quiet mode - only output a single line with count of errors (if there are any)
|
||||
# -l opens log file and also creates a logger function to write to the log file
|
||||
# -p other parameters. Includes backup settings and loglevel/scrlevel settings for logger
|
||||
# for UFOlib scripts, also includes all outparams keys and ufometadata settings
|
||||
|
||||
argspec = list(scriptargspec)
|
||||
|
||||
chainfirst = False
|
||||
if chain == "first": # If first call to execute has this set, only do the final return part of chaining
|
||||
chainfirst = True
|
||||
chain = None
|
||||
|
||||
params = chain["params"] if chain else parameters()
|
||||
logger = chain["logger"] if chain else params.logger # paramset has already created a basic logger
|
||||
argv = chain["argv"] if chain else sys.argv
|
||||
|
||||
if tool == "UFO":
|
||||
from silfont.ufo import Ufont
|
||||
elif tool == "FT":
|
||||
from fontTools import ttLib
|
||||
elif tool == "FP":
|
||||
from fontParts.world import OpenFont
|
||||
elif tool == "" or tool is None:
|
||||
tool = None
|
||||
else:
|
||||
logger.log("Invalid tool in call to execute()", "X")
|
||||
return
|
||||
basemodule = sys.modules[fn.__module__]
|
||||
poptions = {}
|
||||
poptions['prog'] = splitfn(argv[0])[1]
|
||||
poptions['description'] = basemodule.__doc__
|
||||
poptions['formatter_class'] = argparse.RawDescriptionHelpFormatter
|
||||
epilog = "For more help options use -h ?. For more documentation see https://github.com/silnrsi/pysilfont/blob/master/docs/scripts.md#" + poptions['prog'] + "\n\n"
|
||||
poptions['epilog'] = epilog + "Version: " + params.sets['default']['version'] + "\n" + params.sets['default']['copyright']
|
||||
|
||||
parser = argparse.ArgumentParser(**poptions)
|
||||
parser._optionals.title = "other arguments"
|
||||
|
||||
|
||||
# Add standard arguments
|
||||
standardargs = {
|
||||
'quiet': ('-q', '--quiet', {'help': 'Quiet mode - only display severe errors', 'action': 'store_true'}, {}),
|
||||
'log': ('-l', '--log', {'help': 'Log file'}, {'type': 'outfile'}),
|
||||
'params': ('-p', '--params', {'help': 'Other parameters - see parameters.md for details', 'action': 'append'}, {'type': 'optiondict'}),
|
||||
'nq': ('--nq', {'help': argparse.SUPPRESS, 'action': 'store_true'}, {})}
|
||||
|
||||
suppliedargs = []
|
||||
for a in argspec:
|
||||
argn = a[:-2][-1] # [:-2] will give either 1 or 2, the last of which is the full argument name
|
||||
if argn[0:2] == "--": argn = argn[2:] # Will start with -- for options
|
||||
suppliedargs.append(argn)
|
||||
for arg in sorted(standardargs):
|
||||
if arg not in suppliedargs: argspec.append(standardargs[arg])
|
||||
|
||||
defhelp = False
|
||||
if "-h" in argv: # Look for help option supplied
|
||||
pos = argv.index("-h")
|
||||
if pos < len(argv)-1: # There is something following -h!
|
||||
opt = argv[pos+1]
|
||||
if opt in ("d", "defaults"):
|
||||
defhelp = True # Normal help will be displayed with default info displayed by the epilog
|
||||
deffiles = []
|
||||
defother = []
|
||||
elif opt in ("p", "params"):
|
||||
params.printhelp()
|
||||
sys.exit(0)
|
||||
else:
|
||||
if opt != "?":
|
||||
print("Invalid -h value")
|
||||
print("-h ? displays help options")
|
||||
print("-h d (or -h defaults) lists details of default values for arguments and parameters")
|
||||
print("-h p (or -h params) gives help on parameters that can be set with -p or --params")
|
||||
sys.exit(0)
|
||||
|
||||
quiet = True if "-q" in argv and '--nq' not in argv else False
|
||||
if quiet: logger.scrlevel = "S"
|
||||
|
||||
# Process the supplied argument specs, add args to parser, store other info in arginfo
|
||||
arginfo = []
|
||||
logdef = None
|
||||
for a in argspec:
|
||||
# Process all but last tuple entry as argparse arguments
|
||||
nonkwds = a[:-2]
|
||||
kwds = a[-2]
|
||||
try:
|
||||
parser.add_argument(*nonkwds, **kwds)
|
||||
except Exception as e:
|
||||
print(f'nonkwds: {nonkwds}, kwds: {kwds}')
|
||||
print(e)
|
||||
sys.exit(1)
|
||||
|
||||
# Create ainfo, a dict of framework keywords using argument name
|
||||
argn = nonkwds[-1] # Find the argument name from first 1 or 2 tuple entries
|
||||
if argn[0:2] == "--": # Will start with -- for options
|
||||
argn = argn[2:].replace("-", "_") # Strip the -- and replace any - in name with _
|
||||
ainfo=dict(a[-1]) #Make a copy so original argspec is not changed
|
||||
for key in ainfo: # Check all keys are valid
|
||||
if key not in ("def", "type", "optlog") : logger.log("Invalid argspec framework key: " + key, "X")
|
||||
ainfo['name']=argn
|
||||
if argn == 'log':
|
||||
logdef = ainfo['def'] if 'def' in ainfo else None
|
||||
optlog = ainfo['optlog'] if 'optlog' in ainfo else False
|
||||
arginfo.append(ainfo)
|
||||
if defhelp:
|
||||
arg = nonkwds[0]
|
||||
if 'def' in ainfo:
|
||||
defval = ainfo['def']
|
||||
if argn == 'log' and logdef: defval += " in logs subdirectory"
|
||||
deffiles.append([arg, defval])
|
||||
elif 'default' in kwds:
|
||||
defother.append([arg, kwds['default']])
|
||||
|
||||
# if -h d specified, change the help epilog to info about argument defaults
|
||||
if defhelp:
|
||||
if not (deffiles or defother):
|
||||
deftext = "No defaults for parameters/options"
|
||||
else:
|
||||
deftext = "Defaults for parameters/options - see user docs for details\n"
|
||||
if deffiles:
|
||||
deftext = deftext + "\n Font/file names\n"
|
||||
for (param, defv) in deffiles:
|
||||
deftext = deftext + ' {:<20}{}\n'.format(param, defv)
|
||||
if defother:
|
||||
deftext = deftext + "\n Other parameters\n"
|
||||
for (param, defv) in defother:
|
||||
deftext = deftext + ' {:<20}{}\n'.format(param, defv)
|
||||
parser.epilog = deftext + "\n\n" + parser.epilog
|
||||
|
||||
# Parse the command-line arguments. If errors or -h used, procedure will exit here
|
||||
args = parser.parse_args(argv[1:])
|
||||
|
||||
# Process the first positional parameter to get defaults for file names
|
||||
fppval = getattr(args, arginfo[0]['name'])
|
||||
if isinstance(fppval, list): # When nargs="+" or nargs="*" is used a list is returned
|
||||
(fppath, fpbase, fpext) = splitfn(fppval[0])
|
||||
if len(fppval) > 1 : fpbase = "wildcard"
|
||||
else:
|
||||
if fppval is None: fppval = "" # For scripts that can be run with no positional parameters
|
||||
(fppath, fpbase, fpext) = splitfn(fppval) # First pos param use for defaulting
|
||||
|
||||
# Process parameters
|
||||
if chain:
|
||||
execparams = params.sets["main"]
|
||||
args.params = {} # clparams not used when chaining
|
||||
else:
|
||||
# Read config file from disk if it exists
|
||||
configname = os.path.join(fppath, "pysilfont.cfg")
|
||||
if os.path.exists(configname):
|
||||
params.addset("config file", configname, configfile=configname)
|
||||
else:
|
||||
params.addset("config file") # Create empty set
|
||||
if not quiet and "scrlevel" in params.sets["config file"]: logger.scrlevel = params.sets["config file"]["scrlevel"]
|
||||
|
||||
# Process command-line parameters
|
||||
clparams = {}
|
||||
if 'params' in args.__dict__:
|
||||
if args.params is not None:
|
||||
for param in args.params:
|
||||
x = param.split("=", 1)
|
||||
if len(x) != 2:
|
||||
logger.log("params must be of the form 'param=value'", "S")
|
||||
if x[1] == "\\t": x[1] = "\t" # Special handling for tab characters
|
||||
clparams[x[0]] = x[1]
|
||||
|
||||
args.params = clparams
|
||||
params.addset("command line", "command line", inputdict=clparams)
|
||||
if not quiet and "scrlevel" in params.sets["command line"]: logger.scrlevel = params.sets["command line"]["scrlevel"]
|
||||
|
||||
# Create main set of parameters based on defaults then update with config file values and command line values
|
||||
params.addset("main", copyset="default")
|
||||
params.sets["main"].updatewith("config file")
|
||||
params.sets["main"].updatewith("command line")
|
||||
execparams = params.sets["main"]
|
||||
|
||||
# Set up logging
|
||||
if chain:
|
||||
setattr(args, 'logger', logger)
|
||||
args.logfile = logger.logfile
|
||||
else:
|
||||
logfile = None
|
||||
logname = args.log if 'log' in args.__dict__ and args.log is not None else ""
|
||||
if 'log' in args.__dict__:
|
||||
if logdef is not None and (logname != "" or optlog == False):
|
||||
(path, base, ext) = splitfn(logname)
|
||||
(dpath, dbase, dext) = splitfn(logdef)
|
||||
if not path:
|
||||
if base and ext: # If both specified then use cwd, ie no path
|
||||
path = ""
|
||||
else:
|
||||
path = (fppath if dpath == "" else os.path.join(fppath, dpath))
|
||||
path = os.path.join(path, "logs")
|
||||
if not base:
|
||||
if dbase == "":
|
||||
base = fpbase
|
||||
elif dbase[0] == "_": # Append to font name if starts with _
|
||||
base = fpbase + dbase
|
||||
else:
|
||||
base = dbase
|
||||
if not ext and dext: ext = dext
|
||||
logname = os.path.join(path, base+ext)
|
||||
if logname == "":
|
||||
logfile = None
|
||||
else:
|
||||
(logname, logpath, exists) = fullpath(logname)
|
||||
if not exists:
|
||||
(parent,subd) = os.path.split(logpath)
|
||||
if subd == "logs" and os.path.isdir(parent): # Create directory if just logs subdir missing
|
||||
logger.log("Creating logs subdirectory in " + parent, "P")
|
||||
os.makedirs(logpath, exist_ok=True)
|
||||
else: # Fails, since missing dir is probably a typo!
|
||||
logger.log("Directory " + parent + " does not exist", "S")
|
||||
logger.log('Opening log file for output: ' + logname, "P")
|
||||
try:
|
||||
logfile = open(logname, "w", encoding="utf-8")
|
||||
except Exception as e:
|
||||
print(e)
|
||||
sys.exit(1)
|
||||
args.log = logfile
|
||||
# Set up logger details
|
||||
logger.loglevel = execparams['loglevel'].upper()
|
||||
logger.logfile = logfile
|
||||
if not quiet: logger.scrlevel = "E" # suppress next log message from screen
|
||||
logger.log("Running: " + " ".join(argv), "P")
|
||||
if not quiet: logger.scrlevel = execparams['scrlevel'].upper()
|
||||
setattr(args, 'logger', logger)
|
||||
|
||||
# Process the argument values returned from argparse
|
||||
|
||||
outfont = None
|
||||
infontlist = []
|
||||
for c, ainfo in enumerate(arginfo):
|
||||
aval = getattr(args, ainfo['name'])
|
||||
if ainfo['name'] in ('params', 'log'): continue # params and log already processed
|
||||
atype = None
|
||||
adef = None
|
||||
if 'type' in ainfo:
|
||||
atype = ainfo['type']
|
||||
if atype not in ('infont', 'outfont', 'infile', 'outfile', 'incsv', 'filename', 'optiondict'):
|
||||
logger.log("Invalid type of " + atype + " supplied in argspec", "X")
|
||||
if atype != 'optiondict': # All other types are file types, so adef must be set, even if just to ""
|
||||
adef = ainfo['def'] if 'def' in ainfo else ""
|
||||
if adef is None and aval is None: # If def explicitly set to None then this is optional
|
||||
setattr(args, ainfo['name'], None)
|
||||
continue
|
||||
|
||||
if c == 0:
|
||||
if aval is None : logger.log("Invalid first positional parameter spec", "X")
|
||||
if aval[-1] in ("\\","/"): aval = aval[0:-1] # Remove trailing slashes
|
||||
else: #Handle defaults for all but first positional parameter
|
||||
if adef is not None:
|
||||
if not aval: aval = ""
|
||||
# if aval == "" and adef == "": # Only valid for output font parameter
|
||||
# if atype != "outfont":
|
||||
# logger.log("No value suppiled for " + ainfo['name'], "S")
|
||||
# ## Not sure why this needs to fail - we need to cope with other optional file or filename parameters
|
||||
(apath, abase, aext) = splitfn(aval)
|
||||
(dpath, dbase, dext) = splitfn(adef) # dpath should be None
|
||||
if not apath:
|
||||
if abase and aext: # If both specified then use cwd, ie no path
|
||||
apath = ""
|
||||
else:
|
||||
apath = fppath
|
||||
if not abase:
|
||||
if dbase == "":
|
||||
abase = fpbase
|
||||
elif dbase[0] == "_": # Append to font name if starts with _
|
||||
abase = fpbase + dbase
|
||||
else:
|
||||
abase = dbase
|
||||
if not aext:
|
||||
if dext:
|
||||
aext = dext
|
||||
elif (atype == 'outfont' or atype == 'infont'): aext = fpext
|
||||
aval = os.path.join(apath, abase+aext)
|
||||
|
||||
        # Open files/fonts
        if atype == 'infont':
            if tool is None:
                logger.log("Can't specify a font without a font tool", "X")
            infontlist.append((ainfo['name'], aval))  # Build list of fonts to open when other args processed
        elif atype == 'infile':
            logger.log('Opening file for input: ' + aval, "P")
            try:
                aval = open(aval, "r", encoding="utf-8")
            except Exception as e:
                print(e)
                sys.exit(1)
        elif atype == 'incsv':
            logger.log('Opening file for input: ' + aval, "P")
            aval = csvreader(aval, logger=logger)
        elif atype == 'outfile':
            (aval, path, exists) = fullpath(aval)
            if not exists:
                logger.log("Output file directory " + path + " does not exist", "S")
            logger.log('Opening file for output: ' + aval, "P")
            try:
                aval = open(aval, 'w', encoding="utf-8")
            except Exception as e:
                print(e)
                sys.exit(1)
        elif atype == 'outfont':
            if tool is None:
                logger.log("Can't specify a font without a font tool", "X")
            outfont = aval
            outfontpath = apath
            outfontbase = abase
            outfontext = aext

        elif atype == 'optiondict':  # Turn multiple options in the form ['opt1=a', 'opt2=b'] into a dictionary
            avaldict = {}
            if aval is not None:
                for option in aval:
                    x = option.split("=", 1)
                    if len(x) != 2:
                        logger.log("options must be of the form 'param=value'", "S")
                    if x[1] == "\\t": x[1] = "\t"  # Special handling for tab characters
                    avaldict[x[0]] = x[1]
            aval = avaldict

        setattr(args, ainfo['name'], aval)

    # Open fonts - needs to be done after processing other arguments so logger and params are defined

    for name, aval in infontlist:
        if chain and name == 'ifont':
            aval = chain["font"]
        else:
            if tool == "UFO": aval = Ufont(aval, params=params)
            if tool == "FT": aval = ttLib.TTFont(aval)
            if tool == "FP": aval = OpenFont(aval)
        setattr(args, name, aval)  # Assign the font object to args attribute

    # All arguments processed, now call the main function
    setattr(args, "paramsobj", params)
    setattr(args, "cmdlineargs", argv)
    newfont = fn(args)
    # If an output font is expected and one is returned, output the font
    if chainfirst: chain = True  # Special handling for first call of chaining
    if newfont:
        if chain:  # return font to be handled by chain()
            return (args, newfont)
        else:
            if outfont:
                # Backup the font if output is overwriting original input font
                if outfont == infontlist[0][1]:
                    backupdir = os.path.join(outfontpath, execparams['backupdir'])
                    backupmax = int(execparams['backupkeep'])
                    backup = str2bool(execparams['backup'])

                    if backup:
                        if not os.path.isdir(backupdir):  # Create backup directory if not present
                            try:
                                os.mkdir(backupdir)
                            except Exception as e:
                                print(e)
                                sys.exit(1)
                        backupbase = os.path.join(backupdir, outfontbase + outfontext)
                        # Work out backup name based on existing backups
                        nums = sorted([int(i[len(backupbase) + 1 - len(i):-1]) for i in glob(backupbase + ".*~")])  # Extract list of backup numbers from existing backups
                        newnum = max(nums) + 1 if nums else 1
                        backupname = backupbase + "." + str(newnum) + "~"
                        # Backup the font
                        logger.log("Backing up input font to " + backupname, "P")
                        shutil.copytree(outfont, backupname)
                        # Purge old backups
                        for i in range(0, len(nums) - backupmax + 1):
                            backupname = backupbase + "." + str(nums[i]) + "~"
                            logger.log("Purging old backup " + backupname, "I")
                            shutil.rmtree(backupname)
                    else:
                        logger.log("No font backup done due to backup parameter setting", "I")
                # Output the font
                if tool in ("FT", "FP"):
                    logger.log("Saving font to " + outfont, "P")
                    newfont.save(outfont)
                else:  # Must be a Pysilfont Ufont
                    newfont.write(outfont)
            else:
                logger.log("Font returned to execute() but no output font is specified in arg spec", "X")
    elif chain:  # ) When chaining return just args - the font can be accessed by args.ifont
        return (args, None)  # ) assuming that the script has not changed the input font

    if logger.errorcount or logger.warningcount:
        message = "Command completed with " + str(logger.errorcount) + " errors and " + str(logger.warningcount) + " warnings"
        if logger.scrlevel in ("S", "E") and logname != "":
            if logger.scrlevel == "S" or logger.warningcount: message = message + " - see " + logname
        if logger.errorcount:
            if quiet: logger.raisescrlevel("E")
            logger.log(message, "E")
            logger.resetscrlevel()
        else:
            logger.log(message, "P")
            if logger.scrlevel == "P" and logger.warningcount: logger.log("See log file for warning messages or rerun with '-p scrlevel=w'", "P")
    else:
        logger.log("Command completed with no warnings", "P")

    return (args, newfont)


def chain(argv, function, argspec, font, params, logger, quiet):  # Chain multiple command-line scripts using the UFO module together without writing the font to disk
    ''' argv is a command-line call to a script in sys.argv format. function and argspec are from the script being called.
    Although an input font name must be supplied for the command line to be parsed correctly by execute(), it is not used - instead the supplied
    font object is used. Similarly the -params, logfile and quiet settings in argv are not used by execute() when chaining'''
    if quiet and "-q" not in argv: argv.append("-q")
    logger.log("Chaining to " + argv[0], "P")
    font = execute("UFO", function, argspec,
                   {'argv': argv,
                    'font': font,
                    'params': params,
                    'logger': logger,
                    'quiet': quiet})
    logger.log("Returning from " + argv[0], "P")
    return font

def splitfn(fn):  # Split filename into path, base and extension
    if fn:  # Remove trailing slashes
        if fn[-1] in ("\\", "/"): fn = fn[0:-1]
    (path, base) = os.path.split(fn)
    (base, ext) = os.path.splitext(base)
    # Handle special case where just a directory is supplied
    if ext == "":  # Only check for a directory when there is no extension, so eg a .ufo directory is treated as a file name
        if os.path.isdir(fn):
            path = fn
            base = ""
    return (path, base, ext)


def str2bool(v):  # If v is not already a boolean, convert from string to boolean
    if type(v) == bool: return v
    v = v.lower()
    if v in ("yes", "y", "true", "t", "1"):
        v = True
    elif v in ("no", "n", "false", "f", "0"):
        v = False
    else:
        v = None
    return v

def fullpath(filen):  # Change file name to one with full path and check the directory exists
    fullname = os.path.abspath(filen)
    (fpath, dummy) = os.path.split(fullname)
    return fullname, fpath, os.path.isdir(fpath)
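As a quick illustration of the helper semantics, the snippet below reimplements `splitfn` and `str2bool` standalone (condensed but with the same logic as the functions above) and shows sample results; the file name used is purely illustrative:

```python
import os

def splitfn(fn):  # Same logic as silfont.core.splitfn, standalone for illustration
    if fn:
        if fn[-1] in ("\\", "/"): fn = fn[0:-1]  # Remove trailing slashes
    (path, base) = os.path.split(fn)
    (base, ext) = os.path.splitext(base)
    if ext == "" and os.path.isdir(fn):  # A bare directory has no extension
        path, base = fn, ""
    return (path, base, ext)

def str2bool(v):  # Same logic as silfont.core.str2bool
    if type(v) == bool: return v
    v = v.lower()
    if v in ("yes", "y", "true", "t", "1"): return True
    if v in ("no", "n", "false", "f", "0"): return False
    return None  # Unrecognised value

print(splitfn("fonts/MyFont-Regular.ufo"))  # ('fonts', 'MyFont-Regular', '.ufo')
print(str2bool("Yes"), str2bool("0"), str2bool("maybe"))  # True False None
```

Note how a `.ufo` is treated as a file name even though on disk it is a directory - only extension-less paths are checked with `os.path.isdir`.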
308
src/silfont/data/required_chars.csv
Normal file
@ -0,0 +1,308 @@
USV,ps_name,glyph_name,sil_set,rationale,additional_notes
U+0020,space,space,basic,A,
U+0021,exclam,exclam,basic,A,
U+0022,quotedbl,quotedbl,basic,A,
U+0023,numbersign,numbersign,basic,A,
U+0024,dollar,dollar,basic,A,
U+0025,percent,percent,basic,A,
U+0026,ampersand,ampersand,basic,A,
U+0027,quotesingle,quotesingle,basic,A,
U+0028,parenleft,parenleft,basic,A,
U+0029,parenright,parenright,basic,A,
U+002A,asterisk,asterisk,basic,A,
U+002B,plus,plus,basic,A,
U+002C,comma,comma,basic,A,
U+002D,hyphen,hyphen,basic,A,
U+002E,period,period,basic,A,
U+002F,slash,slash,basic,A,
U+0030,zero,zero,basic,A,
U+0031,one,one,basic,A,
U+0032,two,two,basic,A,
U+0033,three,three,basic,A,
U+0034,four,four,basic,A,
U+0035,five,five,basic,A,
U+0036,six,six,basic,A,
U+0037,seven,seven,basic,A,
U+0038,eight,eight,basic,A,
U+0039,nine,nine,basic,A,
U+003A,colon,colon,basic,A,
U+003B,semicolon,semicolon,basic,A,
U+003C,less,less,basic,A,
U+003D,equal,equal,basic,A,
U+003E,greater,greater,basic,A,
U+003F,question,question,basic,A,
U+0040,at,at,basic,A,
U+0041,A,A,basic,A,
U+0042,B,B,basic,A,
U+0043,C,C,basic,A,
U+0044,D,D,basic,A,
U+0045,E,E,basic,A,
U+0046,F,F,basic,A,
U+0047,G,G,basic,A,
U+0048,H,H,basic,A,
U+0049,I,I,basic,A,
U+004A,J,J,basic,A,
U+004B,K,K,basic,A,
U+004C,L,L,basic,A,
U+004D,M,M,basic,A,
U+004E,N,N,basic,A,
U+004F,O,O,basic,A,
U+0050,P,P,basic,A,
U+0051,Q,Q,basic,A,
U+0052,R,R,basic,A,
U+0053,S,S,basic,A,
U+0054,T,T,basic,A,
U+0055,U,U,basic,A,
U+0056,V,V,basic,A,
U+0057,W,W,basic,A,
U+0058,X,X,basic,A,
U+0059,Y,Y,basic,A,
U+005A,Z,Z,basic,A,
U+005B,bracketleft,bracketleft,basic,A,
U+005C,backslash,backslash,basic,A,
U+005D,bracketright,bracketright,basic,A,
U+005E,asciicircum,asciicircum,basic,A,
U+005F,underscore,underscore,basic,A,
U+0060,grave,grave,basic,A,
U+0061,a,a,basic,A,
U+0062,b,b,basic,A,
U+0063,c,c,basic,A,
U+0064,d,d,basic,A,
U+0065,e,e,basic,A,
U+0066,f,f,basic,A,
U+0067,g,g,basic,A,
U+0068,h,h,basic,A,
U+0069,i,i,basic,A,
U+006A,j,j,basic,A,
U+006B,k,k,basic,A,
U+006C,l,l,basic,A,
U+006D,m,m,basic,A,
U+006E,n,n,basic,A,
U+006F,o,o,basic,A,
U+0070,p,p,basic,A,
U+0071,q,q,basic,A,
U+0072,r,r,basic,A,
U+0073,s,s,basic,A,
U+0074,t,t,basic,A,
U+0075,u,u,basic,A,
U+0076,v,v,basic,A,
U+0077,w,w,basic,A,
U+0078,x,x,basic,A,
U+0079,y,y,basic,A,
U+007A,z,z,basic,A,
U+007B,braceleft,braceleft,basic,A,
U+007C,bar,bar,basic,A,
U+007D,braceright,braceright,basic,A,
U+007E,asciitilde,asciitilde,basic,A,
U+00A0,uni00A0,nbspace,basic,A,
U+00A1,exclamdown,exclamdown,basic,A,
U+00A2,cent,cent,basic,A,
U+00A3,sterling,sterling,basic,A,
U+00A4,currency,currency,basic,A,
U+00A5,yen,yen,basic,A,
U+00A6,brokenbar,brokenbar,basic,A,
U+00A7,section,section,basic,A,
U+00A8,dieresis,dieresis,basic,A,
U+00A9,copyright,copyright,basic,A,
U+00AA,ordfeminine,ordfeminine,basic,A,
U+00AB,guillemotleft,guillemetleft,basic,A,
U+00AC,logicalnot,logicalnot,basic,A,
U+00AD,uni00AD,softhyphen,basic,A,
U+00AE,registered,registered,basic,A,
U+00AF,macron,macron,basic,A,
U+00B0,degree,degree,basic,A,
U+00B1,plusminus,plusminus,basic,A,
U+00B2,uni00B2,twosuperior,basic,A,
U+00B3,uni00B3,threesuperior,basic,A,
U+00B4,acute,acute,basic,A,
U+00B5,mu,micro,basic,A,
U+00B6,paragraph,paragraph,basic,A,
U+00B7,periodcentered,periodcentered,basic,A,
U+00B8,cedilla,cedilla,basic,A,
U+00B9,uni00B9,onesuperior,basic,A,
U+00BA,ordmasculine,ordmasculine,basic,A,
U+00BB,guillemotright,guillemetright,basic,A,
U+00BC,onequarter,onequarter,basic,A,
U+00BD,onehalf,onehalf,basic,A,
U+00BE,threequarters,threequarters,basic,A,
U+00BF,questiondown,questiondown,basic,A,
U+00C0,Agrave,Agrave,basic,A,
U+00C1,Aacute,Aacute,basic,A,
U+00C2,Acircumflex,Acircumflex,basic,A,
U+00C3,Atilde,Atilde,basic,A,
U+00C4,Adieresis,Adieresis,basic,A,
U+00C5,Aring,Aring,basic,A,
U+00C6,AE,AE,basic,A,
U+00C7,Ccedilla,Ccedilla,basic,A,
U+00C8,Egrave,Egrave,basic,A,
U+00C9,Eacute,Eacute,basic,A,
U+00CA,Ecircumflex,Ecircumflex,basic,A,
U+00CB,Edieresis,Edieresis,basic,A,
U+00CC,Igrave,Igrave,basic,A,
U+00CD,Iacute,Iacute,basic,A,
U+00CE,Icircumflex,Icircumflex,basic,A,
U+00CF,Idieresis,Idieresis,basic,A,
U+00D0,Eth,Eth,basic,A,
U+00D1,Ntilde,Ntilde,basic,A,
U+00D2,Ograve,Ograve,basic,A,
U+00D3,Oacute,Oacute,basic,A,
U+00D4,Ocircumflex,Ocircumflex,basic,A,
U+00D5,Otilde,Otilde,basic,A,
U+00D6,Odieresis,Odieresis,basic,A,
U+00D7,multiply,multiply,basic,A,
U+00D8,Oslash,Oslash,basic,A,
U+00D9,Ugrave,Ugrave,basic,A,
U+00DA,Uacute,Uacute,basic,A,
U+00DB,Ucircumflex,Ucircumflex,basic,A,
U+00DC,Udieresis,Udieresis,basic,A,
U+00DD,Yacute,Yacute,basic,A,
U+00DE,Thorn,Thorn,basic,A,
U+00DF,germandbls,germandbls,basic,A,
U+00E0,agrave,agrave,basic,A,
U+00E1,aacute,aacute,basic,A,
U+00E2,acircumflex,acircumflex,basic,A,
U+00E3,atilde,atilde,basic,A,
U+00E4,adieresis,adieresis,basic,A,
U+00E5,aring,aring,basic,A,
U+00E6,ae,ae,basic,A,
U+00E7,ccedilla,ccedilla,basic,A,
U+00E8,egrave,egrave,basic,A,
U+00E9,eacute,eacute,basic,A,
U+00EA,ecircumflex,ecircumflex,basic,A,
U+00EB,edieresis,edieresis,basic,A,
U+00EC,igrave,igrave,basic,A,
U+00ED,iacute,iacute,basic,A,
U+00EE,icircumflex,icircumflex,basic,A,
U+00EF,idieresis,idieresis,basic,A,
U+00F0,eth,eth,basic,A,
U+00F1,ntilde,ntilde,basic,A,
U+00F2,ograve,ograve,basic,A,
U+00F3,oacute,oacute,basic,A,
U+00F4,ocircumflex,ocircumflex,basic,A,
U+00F5,otilde,otilde,basic,A,
U+00F6,odieresis,odieresis,basic,A,
U+00F7,divide,divide,basic,A,
U+00F8,oslash,oslash,basic,A,
U+00F9,ugrave,ugrave,basic,A,
U+00FA,uacute,uacute,basic,A,
U+00FB,ucircumflex,ucircumflex,basic,A,
U+00FC,udieresis,udieresis,basic,A,
U+00FD,yacute,yacute,basic,A,
U+00FE,thorn,thorn,basic,A,
U+00FF,ydieresis,ydieresis,basic,A,
U+0131,dotlessi,idotless,basic,B,
U+0152,OE,OE,basic,A,
U+0153,oe,oe,basic,A,
U+0160,Scaron,Scaron,basic,A,
U+0161,scaron,scaron,basic,A,
U+0178,Ydieresis,Ydieresis,basic,A,
U+017D,Zcaron,Zcaron,basic,A,
U+017E,zcaron,zcaron,basic,A,
U+0192,florin,florin,basic,A,
U+02C6,circumflex,circumflex,basic,A,
U+02C7,caron,caron,basic,B,
U+02D8,breve,breve,basic,B,
U+02D9,dotaccent,dotaccent,basic,B,
U+02DA,ring,ring,basic,B,
U+02DB,ogonek,ogonek,basic,B,
U+02DC,tilde,tilde,basic,A,
U+02DD,hungarumlaut,hungarumlaut,basic,B,
U+034F,uni034F,graphemejoinercomb,basic,D,
U+03C0,pi,pi,basic,B,
U+2000,uni2000,enquad,basic,C,
U+2001,uni2001,emquad,basic,C,
U+2002,uni2002,enspace,basic,C,
U+2003,uni2003,emspace,basic,C,
U+2004,uni2004,threeperemspace,basic,C,
U+2005,uni2005,fourperemspace,basic,C,
U+2006,uni2006,sixperemspace,basic,C,
U+2007,uni2007,figurespace,basic,C,
U+2008,uni2008,punctuationspace,basic,C,
U+2009,uni2009,thinspace,basic,C,
U+200A,uni200A,hairspace,basic,C,
U+200B,uni200B,zerowidthspace,basic,C,
U+200C,uni200C,zerowidthnonjoiner,basic,D,
U+200D,uni200D,zerowidthjoiner,basic,D,
U+200E,uni200E,lefttorightmark,rtl,D,
U+200F,uni200F,righttoleftmark,rtl,D,
U+2010,uni2010,hyphentwo,basic,C,
U+2011,uni2011,nonbreakinghyphen,basic,C,
U+2012,figuredash,figuredash,basic,C,
U+2013,endash,endash,basic,A,
U+2014,emdash,emdash,basic,A,
U+2015,uni2015,horizontalbar,basic,C,
U+2018,quoteleft,quoteleft,basic,A,
U+2019,quoteright,quoteright,basic,A,
U+201A,quotesinglbase,quotesinglbase,basic,A,
U+201C,quotedblleft,quotedblleft,basic,A,
U+201D,quotedblright,quotedblright,basic,A,
U+201E,quotedblbase,quotedblbase,basic,A,
U+2020,dagger,dagger,basic,A,
U+2021,daggerdbl,daggerdbl,basic,A,
U+2022,bullet,bullet,basic,A,
U+2026,ellipsis,ellipsis,basic,A,
U+2027,uni2027,hyphenationpoint,basic,C,
U+2028,uni2028,lineseparator,basic,C,
U+2029,uni2029,paragraphseparator,basic,C,
U+202A,uni202A,lefttorightembedding,rtl,D,
U+202B,uni202B,righttoleftembedding,rtl,D,
U+202C,uni202C,popdirectionalformatting,rtl,D,
U+202D,uni202D,lefttorightoverride,rtl,D,
U+202E,uni202E,righttoleftoverride,rtl,D,
U+202F,uni202F,narrownbspace,basic,C,
U+2030,perthousand,perthousand,basic,A,
U+2039,guilsinglleft,guilsinglleft,basic,A,
U+203A,guilsinglright,guilsinglright,basic,A,
U+2044,fraction,fraction,basic,B,
U+2060,uni2060,wordjoiner,basic,D,
U+2066,uni2066,lefttorightisolate,rtl,D,
U+2067,uni2067,righttoleftisolate,rtl,D,
U+2068,uni2068,firststrongisolate,rtl,D,
U+2069,uni2069,popdirectionalisolate,rtl,D,
U+206C,uni206C,inhibitformshaping-ar,rtl,D,
U+206D,uni206D,activateformshaping-ar,rtl,D,
U+2074,uni2074,foursuperior,basic,E,
U+20AC,Euro,euro,basic,A,
U+2122,trademark,trademark,basic,A,
U+2126,Omega,Ohm,basic,B,
U+2202,partialdiff,partialdiff,basic,B,
U+2206,Delta,Delta,basic,B,
U+220F,product,product,basic,B,
U+2211,summation,summation,basic,B,
U+2212,minus,minus,basic,E,
U+2215,uni2215,divisionslash,basic,E,
U+2219,uni2219,bulletoperator,basic,C,Some applications use this instead of 00B7
U+221A,radical,radical,basic,B,
U+221E,infinity,infinity,basic,B,
U+222B,integral,integral,basic,B,
U+2248,approxequal,approxequal,basic,B,
U+2260,notequal,notequal,basic,B,
U+2264,lessequal,lessequal,basic,B,
U+2265,greaterequal,greaterequal,basic,B,
U+2423,uni2423,blank,basic,F,Advance width should probably be the same as a space.
U+25CA,lozenge,lozenge,basic,B,
U+25CC,uni25CC,dottedCircle,basic,J,"If your OpenType font supports combining diacritics, be sure to include U+25CC DOTTED CIRCLE in your font, and optionally include this in your positioning rules for all your combining marks. This is because Uniscribe will insert U+25CC between ""illegal"" diacritic sequences (such as two U+064E characters in a row) to make the mistake more visible. (https://docs.microsoft.com/en-us/typography/script-development/arabic#handling-invalid-combining-marks)"
U+F130,uniF130,FontBslnSideBrngMrkrLft,sil,K,
U+F131,uniF131,FontBslnSideBrngMrkrRt,sil,K,
U+FB01,uniFB01,fi,basic,B,
U+FB02,uniFB02,fl,basic,B,
U+FE00,uniFE00,VS1,basic,H,Add this to the cmap and point them to null glyphs
U+FE01,uniFE01,VS2,basic,H,Add this to the cmap and point them to null glyphs
U+FE02,uniFE02,VS3,basic,H,Add this to the cmap and point them to null glyphs
U+FE03,uniFE03,VS4,basic,H,Add this to the cmap and point them to null glyphs
U+FE04,uniFE04,VS5,basic,H,Add this to the cmap and point them to null glyphs
U+FE05,uniFE05,VS6,basic,H,Add this to the cmap and point them to null glyphs
U+FE06,uniFE06,VS7,basic,H,Add this to the cmap and point them to null glyphs
U+FE07,uniFE07,VS8,basic,H,Add this to the cmap and point them to null glyphs
U+FE08,uniFE08,VS9,basic,H,Add this to the cmap and point them to null glyphs
U+FE09,uniFE09,VS10,basic,H,Add this to the cmap and point them to null glyphs
U+FE0A,uniFE0A,VS11,basic,H,Add this to the cmap and point them to null glyphs
U+FE0B,uniFE0B,VS12,basic,H,Add this to the cmap and point them to null glyphs
U+FE0C,uniFE0C,VS13,basic,H,Add this to the cmap and point them to null glyphs
U+FE0D,uniFE0D,VS14,basic,H,Add this to the cmap and point them to null glyphs
U+FE0E,uniFE0E,VS15,basic,H,Add this to the cmap and point them to null glyphs
U+FE0F,uniFE0F,VS16,basic,H,Add this to the cmap and point them to null glyphs
U+FEFF,uniFEFF,zeroWidthNoBreakSpace,basic,I,Making this visible might be helpful
U+FFFC,uniFFFC,objectReplacementCharacter,basic,G,It is easier for someone looking at the converted text to figure out what's going on if these have a visual representation.
U+FFFD,uniFFFD,replacementCharacter,basic,G,It is easier for someone looking at the converted text to figure out what's going on if these have a visual representation.
,,,,,
32
src/silfont/data/required_chars.md
Normal file
@ -0,0 +1,32 @@
# required_chars - recommended characters for Non-Roman fonts

For optimal compatibility with a variety of operating systems, all Non-Roman fonts should include
a set of glyphs for basic Roman characters and punctuation. Ideally this should include all the
following characters, although some depend on other considerations (see the notes). The basis
for this list is a union of the Windows Codepage 1252 and MacRoman character sets plus additional
useful characters.

The csv includes the following headers:

* USV - Unicode Scalar Value
* ps_name - postscript name of glyph that will end up in production
* glyph_name - glyphsApp name that will be used in UFO
* sil_set - set to include in a font:
    * basic - should be included in any Non-Roman font
    * rtl - should be included in any right-to-left script font
    * sil - should be included in any SIL font
* rationale - worded to complete the phrase: "This character is needed ..."
    * A - in Codepage 1252
    * B - in MacRoman
    * C - for publishing
    * D - for Non-Roman fonts and publishing
    * E - by Google Fonts
    * F - by TeX for visible space
    * G - for encoding conversion utilities
    * H - in case Variation Sequences are defined in future
    * I - to detect byte order
    * J - to render combining marks in isolation
    * K - to view sidebearings for every glyph using these characters
* additional_notes - how the character might be used

The list was previously maintained here: https://scriptsource.org/entry/gg5wm9hhd3
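Since the columns above make required_chars.csv easy to query mechanically, the subset of characters needed for a given kind of font can be pulled out with Python's csv module. The sketch below is illustrative only: the function name `required_usvs` and the inline sample rows are assumptions, and in practice you would open src/silfont/data/required_chars.csv instead of the embedded sample:

```python
import csv
import io

# A few rows in the required_chars.csv format (copied from the file above),
# standing in for the real file so the example is self-contained.
SAMPLE = """USV,ps_name,glyph_name,sil_set,rationale,additional_notes
U+0041,A,A,basic,A,
U+200F,uni200F,righttoleftmark,rtl,D,
U+F130,uniF130,FontBslnSideBrngMrkrLft,sil,K,
"""

def required_usvs(csvfile, sets=("basic",)):
    """Return the USVs whose sil_set is in the requested sets."""
    return [row["USV"] for row in csv.DictReader(csvfile)
            if row["sil_set"] in sets]

# An rtl font needs both the basic and rtl sets:
print(required_usvs(io.StringIO(SAMPLE), sets=("basic", "rtl")))  # ['U+0041', 'U+200F']
```

A SIL right-to-left font would pass `sets=("basic", "rtl", "sil")` to get all three sets.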
270
src/silfont/etutil.py
Normal file
@ -0,0 +1,270 @@
#!/usr/bin/env python3
'Classes and functions for handling XML files in pysilfont scripts'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from xml.etree import ElementTree as ET
import silfont.core

import re, os, codecs, io, collections

_elementprotect = {
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;'}
_attribprotect = dict(_elementprotect)
_attribprotect['"'] = '&quot;'  # Copy of element protect with double quote added

class ETWriter(object):
    """ General purpose ElementTree pretty printer complete with options for attribute order
        beyond simple sorting, and which elements should use cdata

        Note there is no support for namespaces. Originally there was, and if it is needed in the future look at
        commits from 10th May 2018 or earlier. The code there would need reworking!"""

    def __init__(self, etree, attributeOrder={}, takesCData=set(),
                 indentIncr="  ", indentFirst="  ", indentML=False, inlineelem=[], precision=None, floatAttribs=[], intAttribs=[]):
        self.root = etree
        self.attributeOrder = attributeOrder  # Sort order for attributes - just one list for all elements
        self.takesCData = takesCData
        self.indentIncr = indentIncr  # Incremental increase in indent
        self.indentFirst = indentFirst  # Indent for first level
        self.indentML = indentML  # Add indent to multi-line strings
        self.inlineelem = inlineelem  # For supporting in-line elements. Does not work with mix of inline and other subelements in same element
        self.precision = precision  # Precision to use outputting numeric attribute values
        self.floatAttribs = floatAttribs  # List of float/real attributes used with precision
        self.intAttribs = intAttribs

    def _protect(self, txt, base=_attribprotect):
        return re.sub(r'[' + r"".join(base.keys()) + r"]", lambda m: base[m.group(0)], txt)

    def serialize_xml(self, base=None, indent=''):
        # Create the xml and return as a string
        outstrings = []
        outstr = ""
        if base is None:
            base = self.root
            outstr += '<?xml version="1.0" encoding="UTF-8"?>\n'
            if '.pi' in base.attrib:  # Processing instructions
                for pi in base.attrib['.pi'].split(","): outstr += '<?{}?>\n'.format(pi)

            if '.doctype' in base.attrib: outstr += '<!DOCTYPE {}>\n'.format(base.attrib['.doctype'])

        tag = base.tag
        attribs = base.attrib

        if '.comments' in attribs:
            for c in attribs['.comments'].split(","): outstr += '{}<!--{}-->\n'.format(indent, c)

        i = indent if tag not in self.inlineelem else ""
        outstr += '{}<{}'.format(i, tag)

        for k in sorted(list(attribs.keys()), key=lambda x: self.attributeOrder.get(x, x)):
            if k[0] != '.':
                att = attribs[k]
                if self.precision is not None and k in self.floatAttribs:
                    if "." in att:
                        num = round(float(att), self.precision)
                        att = int(num) if num == int(num) else num
                elif k in self.intAttribs:
                    att = int(round(float(att)))
                else:
                    att = self._protect(att)
                outstr += ' {}="{}"'.format(k, att)

        if len(base) or (base.text and base.text.strip()):
            outstr += '>'
            if base.text and base.text.strip():
                if tag not in self.takesCData:
                    t = base.text
                    if self.indentML: t = t.replace('\n', '\n' + indent)
                    t = self._protect(t, base=_elementprotect)
                else:
                    t = "<![CDATA[\n\t" + indent + base.text.replace('\n', '\n\t' + indent) + "\n" + indent + "]]>"
                outstr += t
            if len(base):
                if base[0].tag not in self.inlineelem: outstr += '\n'
                if base == self.root:
                    incr = self.indentFirst
                else:
                    incr = self.indentIncr
                outstrings.append(outstr); outstr = ""
                for b in base: outstrings.append(self.serialize_xml(base=b, indent=indent + incr))
                if base[-1].tag not in self.inlineelem: outstr += indent
            outstr += '</{}>'.format(tag)
        else:
            outstr += '/>'
        if base.tail and base.tail.strip():
            outstr += self._protect(base.tail, base=_elementprotect)
        if tag not in self.inlineelem: outstr += "\n"

        if '.commentsafter' in base.attrib:
            for c in base.attrib['.commentsafter'].split(","): outstr += '{}<!--{}-->\n'.format(indent, c)

        outstrings.append(outstr)
        return "".join(outstrings)

class _container(object):
    # Parent class for other objects
    def __init__(self):
        self._contents = {}
    # Define methods so it acts like an immutable container
    # (changes should be made via object functions etc)
    def __len__(self):
        return len(self._contents)
    def __getitem__(self, key):
        return self._contents[key]
    def __iter__(self):
        return iter(self._contents)
    def keys(self):
        return self._contents.keys()

class xmlitem(_container):
    """ The xml data item for an xml file"""

    def __init__(self, dirn=None, filen=None, parse=True, logger=None):
        self.logger = logger if logger else silfont.core.loggerobj()
        self._contents = {}
        self.dirn = dirn
        self.filen = filen
        self.inxmlstr = ""
        self.outxmlstr = ""
        self.etree = None
        self.type = None
        if filen and dirn:
            fulln = os.path.join(dirn, filen)
            self.inxmlstr = io.open(fulln, "rt", encoding="utf-8").read()
            if parse:
                try:
                    self.etree = ET.fromstring(self.inxmlstr)
                except:
                    try:
                        self.etree = ET.fromstring(self.inxmlstr.encode("utf-8"))
                    except Exception as e:
                        self.logger.log("Failed to parse xml for " + fulln, "E")
                        self.logger.log(str(e), "S")

    def write_to_file(self, dirn, filen):
        outfile = io.open(os.path.join(dirn, filen), 'w', encoding="utf-8")
        outfile.write(self.outxmlstr)

class ETelement(_container):
|
||||
# Class for an etree element. Mainly used as a parent class
|
||||
# For each tag in the element, ETelement[tag] returns a list of sub-elements with that tag
|
||||
# process_subelements can set attributes for each tag based on a supplied spec
|
||||
def __init__(self,element) :
|
||||
self.element = element
|
||||
self._contents = {}
|
||||
self.reindex()
|
||||
|
||||
def reindex(self) :
|
||||
self._contents = collections.defaultdict(list)
|
||||
for e in self.element :
|
||||
self._contents[e.tag].append(e)
|
||||
|
||||
def remove(self,subelement) :
|
||||
self._contents[subelement.tag].remove(subelement)
|
||||
self.element.remove(subelement)
|
||||
|
||||
def append(self,subelement) :
|
||||
self._contents[subelement.tag].append(subelement)
|
||||
self.element.append(subelement)
|
||||
|
||||
def insert(self,index,subelement) :
|
||||
self._contents[subelement.tag].insert(index,subelement)
|
||||
self.element.insert(index,subelement)
|
||||
|
||||
def replace(self,index,subelement) :
|
||||
self._contents[subelement.tag][index] = subelement
|
||||
self.element[index] = subelement
|
||||
|
||||
def process_attributes(self, attrspec, others = False) :
|
||||
# Process attributes based on list of attributes in the format:
|
||||
# (element attr name, object attr name, required)
|
||||
# If attr does not exist and is not required, set to None
|
||||
# If others is True, attributes not in the list are allowed
|
||||
# Attributes should be listed in the order they should be output if writing xml out
|
||||
|
||||
if not hasattr(self,"parseerrors") or self.parseerrors is None: self.parseerrors=[]
|
||||
|
||||
speclist = {}
|
||||
for (i,spec) in enumerate(attrspec) : speclist[spec[0]] = attrspec[i]
|
||||
|
||||
for eaname in speclist :
|
||||
(eaname,oaname,req) = speclist[eaname]
|
||||
setattr(self, oaname, getattrib(self.element,eaname))
|
            if req and getattr(self, oaname) is None : self.parseerrors.append("Required attribute " + eaname + " missing")

        # check for any other attributes
        for att in self.element.attrib :
            if att not in speclist :
                if others:
                    setattr(self, att, getattrib(self.element,att))
                else :
                    self.parseerrors.append("Invalid attribute " + att)

    def process_subelements(self,subspec, offspec = False) :
        # Process all subelements based on spec of expected elements
        # subspec is a list of elements, with each list in the format:
        #    (element name, attribute name, class name, required, multiple values allowed)
        # If cl is set, attribute is set to an object made with that class; otherwise just text of the element

        if not hasattr(self,"parseerrors") or self.parseerrors is None : self.parseerrors=[]

        def make_obj(self,cl,element) : # Create object from element and cascade parse errors down
            if cl is None : return element.text
            if cl is ETelement :
                obj = cl(element) # ETelement does not require parent object, ie self
            else :
                obj = cl(self,element)
            if hasattr(obj,"parseerrors") and obj.parseerrors != [] :
                if hasattr(obj,"name") and obj.name is not None : # Try to find a name for error reporting
                    name = obj.name
                elif hasattr(obj,"label") and obj.label is not None :
                    name = obj.label
                else :
                    name = ""

                self.parseerrors.append("Errors parsing " + element.tag + " element: " + name)
                for error in obj.parseerrors :
                    self.parseerrors.append("  " + error)
            return obj

        speclist = {}
        for (i,spec) in enumerate(subspec) : speclist[spec[0]] = subspec[i]

        for ename in speclist :
            (ename,aname,cl,req,multi) = speclist[ename]
            initval = [] if multi else None
            setattr(self,aname,initval)

        for ename in self : # Process all elements
            elements = self[ename]
            if ename in speclist :
                (ename,aname,cl,req,multi) = speclist[ename]
                if multi :
                    for elem in elements : getattr(self,aname).append(make_obj(self,cl,elem))
                else :
                    setattr(self,aname,make_obj(self,cl,elements[0]))
                    if len(elements) > 1 : self.parseerrors.append("Multiple " + ename + " elements not allowed")
            else:
                if offspec: # Elements not in spec are allowed, so create a list of sub-elements
                    setattr(self,ename,[])
                    for elem in elements : getattr(self,ename).append(ETelement(elem))
                else :
                    self.parseerrors.append("Invalid element: " + ename)

        for ename in speclist : # Check values exist for required elements etc
            (ename,aname,cl,req,multi) = speclist[ename]

            val = getattr(self,aname)
            if req :
                if multi and val == [] : self.parseerrors.append("No " + ename + " elements")
                if not multi and val is None : self.parseerrors.append("No " + ename + " element")

def makeAttribOrder(attriblist) : # Turn a list of attrib names into an attributeOrder dict for ETWriter
    return dict(map(lambda x:(x[1], x[0]), enumerate(attriblist)))

def getattrib(element,attrib) : return element.attrib[attrib] if attrib in element.attrib else None
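As a quick illustration of the two helpers above, here is a minimal sketch (the names `make_attrib_order` and `example_subspec` are illustrative only, not part of pysilfont) of what `makeAttribOrder` produces and of the tuple layout `process_subelements` expects in its `subspec` argument:

```python
# Copy of makeAttribOrder's behaviour: map each attribute name to its
# position, giving an attributeOrder dict usable by ETWriter.
def make_attrib_order(attriblist):
    return dict(map(lambda x: (x[1], x[0]), enumerate(attriblist)))

# A subspec for process_subelements() is a list of tuples in the form
# (element name, attribute name, class or None, required, multiple allowed).
# This particular spec is hypothetical.
example_subspec = [
    ("name",    "names",   None, True,  True),   # one or more <name> elements, text only
    ("comment", "comment", None, False, False),  # at most one <comment> element
]
```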
0	src/silfont/fbtests/__init__.py	Normal file
231	src/silfont/fbtests/silnotcjk.py	Normal file

@@ -0,0 +1,231 @@
#!/usr/bin/env python3
'''These are copies of checks that have the "not is_cjk" condition, but these versions have that condition removed.
The is_cjk condition was being matched by multiple fonts that are not cjk fonts - but do have some cjk punctuation characters.
These checks are based on examples from Font Bakery, copyright 2017 The Font Bakery Authors, licensed under the Apache 2.0 license'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2022 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from fontbakery.status import PASS, FAIL, WARN, ERROR, INFO, SKIP
from fontbakery.callable import condition, check, disable
from fontbakery.message import Message
from fontbakery.profiles.shared_conditions import typo_metrics_enabled
import os
from fontbakery.constants import NameID, PlatformID, WindowsEncodingID

@check(
    id = 'org.sil/check/family/win_ascent_and_descent',
    conditions = ['vmetrics'],
    rationale = """
        Based on com.google.fonts/check/family/win_ascent_and_descent but with the 'not is_cjk' condition removed
    """
)
def org_sil_check_family_win_ascent_and_descent(ttFont, vmetrics):
    """Checking OS/2 usWinAscent & usWinDescent."""

    if "OS/2" not in ttFont:
        yield FAIL,\
              Message("lacks-OS/2",
                      "Font file lacks OS/2 table")
        return

    failed = False
    os2_table = ttFont['OS/2']
    win_ascent = os2_table.usWinAscent
    win_descent = os2_table.usWinDescent
    y_max = vmetrics['ymax']
    y_min = vmetrics['ymin']

    # OS/2 usWinAscent:
    if win_ascent < y_max:
        failed = True
        yield FAIL,\
              Message("ascent",
                      f"OS/2.usWinAscent value should be"
                      f" equal or greater than {y_max},"
                      f" but got {win_ascent} instead")
    if win_ascent > y_max * 2:
        failed = True
        yield FAIL,\
              Message("ascent",
                      f"OS/2.usWinAscent value"
                      f" {win_ascent} is too large."
                      f" It should be less than double the yMax."
                      f" Current yMax value is {y_max}")
    # OS/2 usWinDescent:
    if win_descent < abs(y_min):
        failed = True
        yield FAIL,\
              Message("descent",
                      f"OS/2.usWinDescent value should be equal or"
                      f" greater than {abs(y_min)}, but got"
                      f" {win_descent} instead.")

    if win_descent > abs(y_min) * 2:
        failed = True
        yield FAIL,\
              Message("descent",
                      f"OS/2.usWinDescent value"
                      f" {win_descent} is too large."
                      f" It should be less than double the yMin."
                      f" Current absolute yMin value is {abs(y_min)}")
    if not failed:
        yield PASS, "OS/2 usWinAscent & usWinDescent values look good!"


@check(
    id = 'org.sil/check/os2_metrics_match_hhea',
    rationale = """
        Based on com.google.fonts/check/os2_metrics_match_hhea but with the 'not is_cjk' condition removed
    """
)
def org_sil_check_os2_metrics_match_hhea(ttFont):
    """Checking OS/2 Metrics match hhea Metrics."""

    filename = os.path.basename(ttFont.reader.file.name)

    # Check both OS/2 and hhea are present.
    missing_tables = False

    required = ["OS/2", "hhea"]
    for key in required:
        if key not in ttFont:
            missing_tables = True
            yield FAIL,\
                  Message(f'lacks-{key}',
                          f"{filename} lacks a '{key}' table.")

    if missing_tables:
        return

    # OS/2 sTypoAscender and sTypoDescender match hhea ascent and descent
    if ttFont["OS/2"].sTypoAscender != ttFont["hhea"].ascent:
        yield FAIL,\
              Message("ascender",
                      f"OS/2 sTypoAscender ({ttFont['OS/2'].sTypoAscender})"
                      f" and hhea ascent ({ttFont['hhea'].ascent})"
                      f" must be equal.")
    elif ttFont["OS/2"].sTypoDescender != ttFont["hhea"].descent:
        yield FAIL,\
              Message("descender",
                      f"OS/2 sTypoDescender ({ttFont['OS/2'].sTypoDescender})"
                      f" and hhea descent ({ttFont['hhea'].descent})"
                      f" must be equal.")
    elif ttFont["OS/2"].sTypoLineGap != ttFont["hhea"].lineGap:
        yield FAIL,\
              Message("lineGap",
                      f"OS/2 sTypoLineGap ({ttFont['OS/2'].sTypoLineGap})"
                      f" and hhea lineGap ({ttFont['hhea'].lineGap})"
                      f" must be equal.")
    else:
        yield PASS, ("OS/2.sTypoAscender/Descender values"
                     " match hhea.ascent/descent.")

@check(
    id = "org.sil/check/os2/use_typo_metrics",
    rationale = """
        Based on com.google.fonts/check/os2/use_typo_metrics but with the 'not is_cjk' condition removed
    """
)
def org_sil_check_os2_fsselectionbit7(ttFonts):
    """OS/2.fsSelection bit 7 (USE_TYPO_METRICS) is set in all fonts."""

    bad_fonts = []
    for ttFont in ttFonts:
        if not ttFont["OS/2"].fsSelection & (1 << 7):
            bad_fonts.append(ttFont.reader.file.name)

    if bad_fonts:
        yield FAIL,\
              Message('missing-os2-fsselection-bit7',
                      f"OS/2.fsSelection bit 7 (USE_TYPO_METRICS) was"
                      f" NOT set in the following fonts: {bad_fonts}.")
    else:
        yield PASS, "OK"


'''@check(
    id = 'org.sil/check/vertical_metrics',
    # conditions = ['not remote_styles'],
    rationale = """
        Based on com.google.fonts/check/vertical_metrics but with the 'not is_cjk' condition removed
    """
)
def org_sil_check_vertical_metrics(ttFont):
    """Check font follows the Google Fonts vertical metric schema"""
    filename = os.path.basename(ttFont.reader.file.name)

    # Check necessary tables are present.
    missing_tables = False
    required = ["OS/2", "hhea", "head"]
    for key in required:
        if key not in ttFont:
            missing_tables = True
            yield FAIL,\
                  Message(f'lacks-{key}',
                          f"{filename} lacks a '{key}' table.")

    if missing_tables:
        return

    font_upm = ttFont['head'].unitsPerEm
    font_metrics = {
        'OS/2.sTypoAscender': ttFont['OS/2'].sTypoAscender,
        'OS/2.sTypoDescender': ttFont['OS/2'].sTypoDescender,
        'OS/2.sTypoLineGap': ttFont['OS/2'].sTypoLineGap,
        'hhea.ascent': ttFont['hhea'].ascent,
        'hhea.descent': ttFont['hhea'].descent,
        'hhea.lineGap': ttFont['hhea'].lineGap,
        'OS/2.usWinAscent': ttFont['OS/2'].usWinAscent,
        'OS/2.usWinDescent': ttFont['OS/2'].usWinDescent
    }
    expected_metrics = {
        'OS/2.sTypoLineGap': 0,
        'hhea.lineGap': 0,
    }

    failed = False
    warn = False

    # Check typo metrics and hhea lineGap match our expected values
    for k in expected_metrics:
        if font_metrics[k] != expected_metrics[k]:
            failed = True
            yield FAIL,\
                  Message(f'bad-{k}',
                          f'{k} is "{font_metrics[k]}" it should be {expected_metrics[k]}')

    hhea_sum = (font_metrics['hhea.ascent'] +
                abs(font_metrics['hhea.descent']) +
                font_metrics['hhea.lineGap']) / font_upm

    # Check the sum of the hhea metrics is not below 1.2
    # (120% of upm or 1200 units for 1000 upm font)
    if hhea_sum < 1.2:
        failed = True
        yield FAIL,\
              Message('bad-hhea-range',
                      'The sum of hhea.ascender+abs(hhea.descender)+hhea.lineGap '
                      f'is {int(hhea_sum*font_upm)} when it should be at least {int(font_upm*1.2)}')

    # Check the sum of the hhea metrics is below 2.0
    elif hhea_sum > 2.0:
        failed = True
        yield FAIL,\
              Message('bad-hhea-range',
                      'The sum of hhea.ascender+abs(hhea.descender)+hhea.lineGap '
                      f'is {int(hhea_sum*font_upm)} when it should be at most {int(font_upm*2.0)}')

    # Warn if the sum of the hhea metrics is above 1.5x of the font's upm
    elif hhea_sum > 1.5:
        warn = True
        yield WARN,\
              Message('bad-hhea-range',
                      "We recommend the absolute sum of the hhea metrics should be"
                      f" between 1.2-1.5x of the font's upm. This font has {hhea_sum}x ({int(hhea_sum*font_upm)})")

    if not failed and not warn:
        yield PASS, 'Vertical metrics are good'
'''
250	src/silfont/fbtests/silttfchecks.py	Normal file

@@ -0,0 +1,250 @@
#!/usr/bin/env python3
'''Checks to be imported by ttfchecks.py
Some checks based on examples from Font Bakery, copyright 2017 The Font Bakery Authors, licensed under the Apache 2.0 license'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2022 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from fontbakery.status import PASS, FAIL, WARN, ERROR, INFO, SKIP
from fontbakery.callable import condition, check, disable
from fontbakery.message import Message
from fontbakery.constants import NameID, PlatformID, WindowsEncodingID

@check(
    id = 'org.sil/check/name/version_format',
    rationale = """
        Based on com.google.fonts/check/name/version_format but:
        - Checks for two valid formats:
          - Production: exactly 3 digits after decimal point
        - Allows major version to be 0
        - Allows extra info after numbers, eg for beta or dev versions
    """
)
def org_sil_version_format(ttFont):
    "Version format is correct in 'name' table?"

    from fontbakery.utils import get_name_entry_strings
    import re

    failed = False
    version_entries = get_name_entry_strings(ttFont, NameID.VERSION_STRING)
    if len(version_entries) == 0:
        failed = True
        yield FAIL,\
              Message("no-version-string",
                      f"Font lacks a NameID.VERSION_STRING"
                      f" (nameID={NameID.VERSION_STRING}) entry")

    for ventry in version_entries:
        if not re.match(r'Version [0-9]+\.\d{3}( .+)*$', ventry):
            failed = True
            yield FAIL,\
                  Message("bad-version-strings",
                          f'The NameID.VERSION_STRING'
                          f' (nameID={NameID.VERSION_STRING}) value must'
                          f' follow the pattern "Version X.nnn devstring" with X.nnn'
                          f' greater than or equal to 0.000.'
                          f' Current version string is: "{ventry}"')
    if not failed:
        yield PASS, "Version format in NAME table entries is correct."
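To make the accepted version-string shape concrete, here is the pattern used by `org_sil_version_format` pulled out on its own (the names `VERSION_RE` and `version_ok` are illustrative, not part of the check):

```python
import re

# Same pattern as in org_sil_version_format: "Version" followed by a
# major number (0 allowed), a dot, exactly three digits, then optional
# extra info such as a beta or dev tag.
VERSION_RE = r'Version [0-9]+\.\d{3}( .+)*$'

def version_ok(s):
    return bool(re.match(VERSION_RE, s))
```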
@check(
    id = 'org.sil/check/whitespace_widths'
)
def org_sil_whitespace_widths(ttFont):
    """Check the widths of space characters in the font against best practice"""
    from fontbakery.utils import get_glyph_name

    allok = True
    space_data = {
        0x0020: ['Space'],
        0x00A0: ['No-break space'],
        0x2008: ['Punctuation space'],
        0x2003: ['Em space'],
        0x2002: ['En space'],
        0x2000: ['En quad'],
        0x2001: ['Em quad'],
        0x2004: ['Three-per-em space'],
        0x2005: ['Four-per-em space'],
        0x2006: ['Six-per-em space'],
        0x2009: ['Thin space'],
        0x200A: ['Hair space'],
        0x202F: ['Narrow no-break space'],
        0x002E: ['Full stop'],  # Non-space character where the width is needed for comparison
    }
    for sp in space_data:
        spname = get_glyph_name(ttFont, sp)
        if spname is None:
            spwidth = None
        else:
            spwidth = ttFont['hmtx'][spname][0]
        space_data[sp].append(spname)
        space_data[sp].append(spwidth)

    # Other width info needed from the font
    upm = ttFont['head'].unitsPerEm
    fullstopw = space_data[0x002E][2]

    # Widths used for comparisons
    spw = space_data[0x0020][2]
    if spw is None:
        allok = False
        yield WARN, "No space in the font so No-break space (if present) can't be checked"
    emw = space_data[0x2003][2]
    if emw is None:
        allok = False
        yield WARN, f'No em space in the font. Will be assumed to be units per em ({upm}) for other checking'
        emw = upm
    enw = space_data[0x2002][2]
    if enw is None:
        allok = False
        yield WARN, f'No en space in the font. Will be assumed to be 1/2 em space width ({emw/2}) for checking en quad (if present)'
        enw = emw/2

    # Now check all the specific space widths. Only check if the space exists in the font
    def checkspace(spacechar, minwidth, maxwidth=None):
        sdata = space_data[spacechar]
        if sdata[1]:  # Name is set to None if not in font
            # Allow for width(s) not being integer (eg em/6) so test against rounding up or down
            minw = int(minwidth)
            if maxwidth:
                maxw = int(maxwidth)
                if maxwidth > maxw: maxw += 1  # Had been rounded down, so round up
            else:
                maxw = minw if minw == minwidth else minw + 1  # Had been rounded down, so allow rounded up as well
            charw = sdata[2]
            if not (minw <= charw <= maxw):
                return (f'Width of {sdata[0]} ({spacechar:#04x}) is {str(charw)}: ', minw, maxw)
        return (None, 0, 0)

    # No-break space
    (message, minw, maxw) = checkspace(0x00A0, spw)
    if message: allok = False; yield FAIL, message + f"Should match width of space ({spw})"
    # Punctuation space
    (message, minw, maxw) = checkspace(0x2008, fullstopw)
    if message: allok = False; yield FAIL, message + f"Should match width of full stop ({fullstopw})"
    # Em space
    (message, minw, maxw) = checkspace(0x2003, upm)
    if message: allok = False; yield WARN, message + f"Should match units per em ({upm})"
    # En space
    (message, minw, maxw) = checkspace(0x2002, emw/2)
    if message:
        allok = False
        widths = f'{minw}' if minw == maxw else f'{minw} or {maxw}'
        yield WARN, message + f"Should be half the width of em ({widths})"
    # En quad
    (message, minw, maxw) = checkspace(0x2000, enw)
    if message: allok = False; yield WARN, message + f"Should be the same width as en ({enw})"
    # Em quad
    (message, minw, maxw) = checkspace(0x2001, emw)
    if message: allok = False; yield WARN, message + f"Should be the same width as em ({emw})"
    # Three-per-em space
    (message, minw, maxw) = checkspace(0x2004, emw/3)
    if message:
        allok = False
        widths = f'{minw}' if minw == maxw else f'{minw} or {maxw}'
        yield WARN, message + f"Should be 1/3 the width of em ({widths})"
    # Four-per-em space
    (message, minw, maxw) = checkspace(0x2005, emw/4)
    if message:
        allok = False
        widths = f'{minw}' if minw == maxw else f'{minw} or {maxw}'
        yield WARN, message + f"Should be 1/4 the width of em ({widths})"
    # Six-per-em space
    (message, minw, maxw) = checkspace(0x2006, emw/6)
    if message:
        allok = False
        widths = f'{minw}' if minw == maxw else f'{minw} or {maxw}'
        yield WARN, message + f"Should be 1/6 the width of em ({widths})"
    # Thin space
    (message, minw, maxw) = checkspace(0x2009, emw/6, emw/5)
    if message:
        allok = False
        yield WARN, message + f"Should be between 1/6 and 1/5 the width of em ({minw} and {maxw})"
    # Hair space
    (message, minw, maxw) = checkspace(0x200A, emw/16, emw/10)
    if message:
        allok = False
        yield WARN, message + f"Should be between 1/16 and 1/10 the width of em ({minw} and {maxw})"
    # Narrow no-break space
    (message, minw, maxw) = checkspace(0x202F, emw/6, emw/5)
    if message:
        allok = False
        yield WARN, message + f"Should be between 1/6 and 1/5 the width of em ({minw} and {maxw})"

    if allok:
        yield PASS, "Space widths all match expected values"
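The rounding tolerance that `checkspace` applies to fractional target widths (eg em/6) can be shown in isolation. This is a sketch mirroring that logic; `width_bounds` is an illustrative name, not part of the check:

```python
# Target widths derived from fractions of an em may not be integers, so
# both the rounded-down and rounded-up values are accepted as in range.
def width_bounds(minwidth, maxwidth=None):
    minw = int(minwidth)
    if maxwidth is not None:
        maxw = int(maxwidth)
        if maxwidth > maxw:
            maxw += 1  # maxwidth had been rounded down, so round up
    else:
        # Exact integer target: no tolerance; fractional: allow round-up too
        maxw = minw if minw == minwidth else minw + 1
    return minw, maxw
```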
@check(
    id = 'org.sil/check/number_widths'
)
def org_sil_number_widths(ttFont, config):
    """Check widths of latin digits 0-9 are equal and match that of figure space"""
    from fontbakery.utils import get_glyph_name

    num_data = {
        0x0030: ['zero'],
        0x0031: ['one'],
        0x0032: ['two'],
        0x0033: ['three'],
        0x0034: ['four'],
        0x0035: ['five'],
        0x0036: ['six'],
        0x0037: ['seven'],
        0x0038: ['eight'],
        0x0039: ['nine'],
        0x2007: ['figurespace']  # Figure space should be the same as numerals
    }

    fontnames = []
    for x in (ttFont['name'].names[1].string, ttFont['name'].names[2].string):
        txt = ""
        for i in range(1, len(x), 2): txt += x.decode()[i]
        fontnames.append(txt)

    for num in num_data:
        name = get_glyph_name(ttFont, num)
        if name is None:
            width = -1  # So different from zero!
        else:
            width = ttFont['hmtx'][name][0]
        num_data[num].append(name)
        num_data[num].append(width)

    zerowidth = num_data[0x0030][2]
    if zerowidth == -1:
        yield FAIL, "No zero in font - remainder of check not run"
        return

    # Check non-zero digits are present and have same width as zero
    digitsdiff = ""
    digitsmissing = ""
    for i in range(0x0031, 0x003A):
        ndata = num_data[i]
        width = ndata[2]
        if width != zerowidth:
            if width == -1:
                digitsmissing += ndata[0] + " "  # Use the label; the glyph name is None when missing
            else:
                digitsdiff += ndata[1] + " "

    # Check figure space
    figuremess = ""
    ndata = num_data[0x2007]
    width = ndata[2]
    if width != zerowidth:
        if width == -1:
            figuremess = "No figure space in font"
        else:
            figuremess = f'The width of figure space ({ndata[1]}) does not match the width of zero'
    if digitsmissing or digitsdiff or figuremess:
        if digitsmissing: yield FAIL, f"Digits missing: {digitsmissing}"
        if digitsdiff: yield WARN, f"Digits with different width from zero: {digitsdiff}"
        if figuremess: yield WARN, figuremess
    else:
        yield PASS, "All number widths are OK"
329	src/silfont/fbtests/ttfchecks.py	Normal file

@@ -0,0 +1,329 @@
#!/usr/bin/env python3
'Support for use of Fontbakery ttf checks'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2020 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from fontbakery.section import Section
from fontbakery.status import PASS, FAIL, WARN, ERROR, INFO, SKIP
from fontbakery.fonts_profile import profile_factory
from fontbakery.profiles.googlefonts import METADATA_CHECKS, REPO_CHECKS, DESCRIPTION_CHECKS
from fontbakery.profiles.ufo_sources import UFO_PROFILE_CHECKS
from silfont.fbtests.silttfchecks import *
from silfont.fbtests.silnotcjk import *

from collections import OrderedDict

# Set imports of standard ttf tests

profile_imports = ("fontbakery.profiles.universal",
                   "fontbakery.profiles.googlefonts",
                   "fontbakery.profiles.adobefonts",
                   "fontbakery.profiles.notofonts",
                   "fontbakery.profiles.fontval")

def make_base_profile():
    profile = profile_factory(default_section=Section("SIL Fonts"))
    profile.auto_register(globals())

    # Exclude groups of checks that check files other than ttfs
    for checkid in DESCRIPTION_CHECKS + METADATA_CHECKS + REPO_CHECKS + UFO_PROFILE_CHECKS:
        if checkid in profile._check_registry: profile.remove_check(checkid)
    return profile

def make_profile(check_list, variable_font=False):
    profile = make_base_profile()

    # Exclude all the checks we don't want to run
    for checkid in check_list:
        if checkid in profile._check_registry:
            check_item = check_list[checkid]
            exclude = check_item["exclude"] if "exclude" in check_item else False
            if exclude: profile.remove_check(checkid)

    # Exclude further sets of checks to reduce the number of skips and so have less clutter in html results
    for checkid in sorted(set(profile._check_registry.keys())):
        section = profile._check_registry[checkid]
        check = section.get_check(checkid)
        conditions = getattr(check, "conditions")
        exclude = False
        if variable_font and "not is_variable_font" in conditions: exclude = True
        if not variable_font and "is_variable_font" in conditions: exclude = True
        if "noto" in checkid.lower(): exclude = True  # These will be specific to Noto fonts
        if ":adobefonts" in checkid.lower(): exclude = True  # Copy of standard test with overridden results so no new info

        if exclude: profile.remove_check(checkid)
    # Remove further checks that are only relevant for variable fonts but don't use the is_variable_font condition
    if not variable_font:
        for checkid in (
                "com.adobe.fonts/check/stat_has_axis_value_tables",
                "com.google.fonts/check/STAT_strings",
                "com.google.fonts/check/STAT/axis_order"):
            if checkid in profile._check_registry.keys(): profile.remove_check(checkid)
    return profile
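The condition-based filtering inside `make_profile` can be isolated as a small predicate. This is a sketch; `exclude_check` is an illustrative name, not a pysilfont function:

```python
# A check is dropped when its conditions conflict with the variable_font
# setting, or when it is Noto-specific or an adobefonts duplicate.
def exclude_check(checkid, conditions, variable_font):
    if variable_font and "not is_variable_font" in conditions:
        return True
    if not variable_font and "is_variable_font" in conditions:
        return True
    if "noto" in checkid.lower():
        return True  # Specific to Noto fonts
    if ":adobefonts" in checkid.lower():
        return True  # Copy of a standard test with overridden results
    return False
```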
def all_checks_dict():  # An ordered dict of all checks, designed for exporting the data
    profile = make_base_profile()
    check_dict = OrderedDict()

    for checkid in sorted(set(profile._check_registry.keys()), key=str.casefold):
        if "noto" in checkid.lower(): continue  # We exclude these in make_profile()
        if ":adobefonts" in checkid.lower(): continue  # We exclude these in make_profile()

        section = profile._check_registry[checkid]
        check = section.get_check(checkid)

        conditions = getattr(check, "conditions")
        conditionstxt = ""
        for condition in conditions:
            conditionstxt += condition + "\n"
        conditionstxt = conditionstxt.strip()

        rationale = getattr(check, "rationale")
        rationale = "" if rationale is None else rationale.strip().replace("\n ", "\n")  # Remove extraneous whitespace

        psfaction = psfcheck_list[checkid] if checkid in psfcheck_list else "Not in psfcheck_list"

        item = {"psfaction": psfaction,
                "section": section.name,
                "description": getattr(check, "description"),
                "rationale": rationale,
                "conditions": conditionstxt
                }
        check_dict[checkid] = item

    for checkid in psfcheck_list:  # Look for checks no longer in Font Bakery
        if checkid not in check_dict:
            check_dict[checkid] = {"psfaction": psfcheck_list[checkid],
                                   "section": "Missing",
                                   "description": "Check not found",
                                   "rationale": "",
                                   "conditions": ""
                                   }

    return check_dict

psfcheck_list = {}
psfcheck_list['com.adobe.fonts/check/cff_call_depth'] = {'exclude': True}
psfcheck_list['com.adobe.fonts/check/cff_deprecated_operators'] = {'exclude': True}
psfcheck_list['com.adobe.fonts/check/cff2_call_depth'] = {'exclude': True}
psfcheck_list['com.adobe.fonts/check/family/consistent_family_name'] = {}
psfcheck_list['com.adobe.fonts/check/family/bold_italic_unique_for_nameid1'] = {}
psfcheck_list['com.adobe.fonts/check/family/consistent_upm'] = {}
psfcheck_list['com.adobe.fonts/check/family/max_4_fonts_per_family_name'] = {}
psfcheck_list['com.adobe.fonts/check/find_empty_letters'] = {}
psfcheck_list['com.adobe.fonts/check/freetype_rasterizer'] = {'exclude': True}
#psfcheck_list['com.adobe.fonts/check/freetype_rasterizer:googlefonts'] = {'exclude': True}
psfcheck_list['com.adobe.fonts/check/fsselection_matches_macstyle'] = {}
psfcheck_list['com.adobe.fonts/check/name/empty_records'] = {}
psfcheck_list['com.adobe.fonts/check/name/postscript_name_consistency'] = {}
psfcheck_list['com.adobe.fonts/check/nameid_1_win_english'] = {}
psfcheck_list['com.adobe.fonts/check/name/postscript_vs_cff'] = {'exclude': True}
psfcheck_list['com.adobe.fonts/check/sfnt_version'] = {}
psfcheck_list['com.adobe.fonts/check/stat_has_axis_value_tables'] = {}
psfcheck_list['com.adobe.fonts/check/STAT_strings'] = {'exclude': True}
psfcheck_list['com.adobe.fonts/check/unsupported_tables'] = {'exclude': True}
psfcheck_list['com.adobe.fonts/check/varfont/distinct_instance_records'] = {}
psfcheck_list['com.adobe.fonts/check/varfont/foundry_defined_tag_name'] = {}
psfcheck_list['com.adobe.fonts/check/varfont/same_size_instance_records'] = {}
psfcheck_list['com.adobe.fonts/check/varfont/valid_axis_nameid'] = {}
psfcheck_list['com.adobe.fonts/check/varfont/valid_default_instance_nameids'] = {}
psfcheck_list['com.adobe.fonts/check/varfont/valid_postscript_nameid'] = {}
psfcheck_list['com.adobe.fonts/check/varfont/valid_subfamily_nameid'] = {}
# psfcheck_list['com.fontwerk/check/inconsistencies_between_fvar_stat'] = {}  # No longer in Font Bakery
# psfcheck_list['com.fontwerk/check/weight_class_fvar'] = {}  # No longer in Font Bakery
psfcheck_list['com.google.fonts/check/aat'] = {}
# psfcheck_list['com.google.fonts/check/all_glyphs_have_codepoints'] = {'exclude': True}  # No longer in Font Bakery
psfcheck_list['com.google.fonts/check/canonical_filename'] = {}
psfcheck_list['com.google.fonts/check/caret_slope'] = {}
psfcheck_list['com.google.fonts/check/cjk_chws_feature'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/cjk_not_enough_glyphs'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/cjk_vertical_metrics'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/cjk_vertical_metrics_regressions'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/cmap/alien_codepoints'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/cmap/format_12'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/cmap/unexpected_subtables'] = {}
psfcheck_list['com.google.fonts/check/color_cpal_brightness'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/colorfont_tables'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/code_pages'] = {}
psfcheck_list['com.google.fonts/check/contour_count'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/dotted_circle'] = {}
psfcheck_list['com.google.fonts/check/dsig'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/empty_glyph_on_gid1_for_colrv0'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/epar'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/family/control_chars'] = {}
psfcheck_list['com.google.fonts/check/family/equal_font_versions'] = {}
psfcheck_list['com.google.fonts/check/family/equal_unicode_encodings'] = {}
psfcheck_list['com.google.fonts/check/family/has_license'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/family/italics_have_roman_counterparts'] = {}
psfcheck_list['com.google.fonts/check/family/panose_familytype'] = {}
psfcheck_list['com.google.fonts/check/family/panose_proportion'] = {}
psfcheck_list['com.google.fonts/check/family/single_directory'] = {}
psfcheck_list['com.google.fonts/check/family/tnum_horizontal_metrics'] = {}
psfcheck_list['com.google.fonts/check/family/underline_thickness'] = {}
psfcheck_list['com.google.fonts/check/family/vertical_metrics'] = {}
psfcheck_list['com.google.fonts/check/family/win_ascent_and_descent'] = {'exclude': True}
# {'change_status': {'FAIL': 'WARN', 'reason': 'Under review'}}
psfcheck_list['com.google.fonts/check/family_naming_recommendations'] = {}
psfcheck_list['com.google.fonts/check/file_size'] = {}
psfcheck_list['com.google.fonts/check/font_copyright'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/font_names'] = {}
psfcheck_list['com.google.fonts/check/font_version'] = {}
psfcheck_list['com.google.fonts/check/fontbakery_version'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/fontdata_namecheck'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/fontv'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/fontvalidator'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/fsselection'] = {}
psfcheck_list['com.google.fonts/check/fstype'] = {}
psfcheck_list['com.google.fonts/check/fvar_instances'] = {}
psfcheck_list['com.google.fonts/check/fvar_name_entries'] = {}
psfcheck_list['com.google.fonts/check/gasp'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/gdef_mark_chars'] = {}
psfcheck_list['com.google.fonts/check/gdef_non_mark_chars'] = {}
psfcheck_list['com.google.fonts/check/gdef_spacing_marks'] = {}
psfcheck_list['com.google.fonts/check/gf_axisregistry/fvar_axis_defaults'] = {}
psfcheck_list['com.google.fonts/check/glyf_nested_components'] = {}
psfcheck_list['com.google.fonts/check/glyf_non_transformed_duplicate_components'] = {}
psfcheck_list['com.google.fonts/check/glyf_unused_data'] = {}
psfcheck_list['com.google.fonts/check/glyph_coverage'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/gpos7'] = {}
psfcheck_list['com.google.fonts/check/gpos_kerning_info'] = {}
psfcheck_list['com.google.fonts/check/has_ttfautohint_params'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/hinting_impact'] = {}
psfcheck_list['com.google.fonts/check/hmtx/comma_period'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/hmtx/encoded_latin_digits'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/hmtx/whitespace_advances'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/integer_ppem_if_hinted'] = {}
psfcheck_list['com.google.fonts/check/interpolation_issues'] = {}
psfcheck_list['com.google.fonts/check/italic_angle'] = {}
psfcheck_list['com.google.fonts/check/italic_angle:googlefonts'] = {}
psfcheck_list['com.google.fonts/check/italic_axis_in_stat'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/italic_axis_in_stat_is_boolean'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/italic_axis_in_stat_is_boolean:googlefonts'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/italic_axis_last'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/italic_axis_last:googlefonts'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/kern_table'] = {}
psfcheck_list['com.google.fonts/check/kerning_for_non_ligated_sequences'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/layout_valid_feature_tags'] = {}
psfcheck_list['com.google.fonts/check/layout_valid_language_tags'] = \
    {'change_status': {'FAIL': 'WARN', 'reason': 'The "invalid" ones are used by Harfbuzz'}}
psfcheck_list['com.google.fonts/check/layout_valid_script_tags'] = {}
psfcheck_list['com.google.fonts/check/ligature_carets'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/linegaps'] = {}
psfcheck_list['com.google.fonts/check/loca/maxp_num_glyphs'] = {}
psfcheck_list['com.google.fonts/check/mac_style'] = {}
psfcheck_list['com.google.fonts/check/mandatory_avar_table'] = {}
psfcheck_list['com.google.fonts/check/mandatory_glyphs'] = {}
psfcheck_list['com.google.fonts/check/math_signs_width'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/maxadvancewidth'] = {}
psfcheck_list['com.google.fonts/check/meta/script_lang_tags'] = {'exclude': True}
psfcheck_list['com.google.fonts/check/missing_small_caps_glyphs'] = {}
psfcheck_list['com.google.fonts/check/monospace'] = {}
|
||||
psfcheck_list['com.google.fonts/check/name/ascii_only_entries'] = {}
|
||||
psfcheck_list['com.google.fonts/check/name/copyright_length'] = {}
|
||||
psfcheck_list['com.google.fonts/check/name/description_max_length'] = {}
|
||||
psfcheck_list['com.google.fonts/check/name/family_and_style_max_length'] = {}
|
||||
psfcheck_list['com.google.fonts/check/name/family_name_compliance'] = {}
|
||||
# psfcheck_list['com.google.fonts/check/name/familyname'] = {} # No longer in Font Bakery
|
||||
psfcheck_list['com.google.fonts/check/name/familyname_first_char'] = {}
|
||||
# psfcheck_list['com.google.fonts/check/name/fullfontname'] = {} # No longer in Font Bakery
|
||||
psfcheck_list['com.google.fonts/check/name/italic_names'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/name/license'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/name/license_url'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/name/line_breaks'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/name/mandatory_entries'] = {}
|
||||
psfcheck_list['com.google.fonts/check/name/match_familyname_fullfont'] = {}
|
||||
psfcheck_list['com.google.fonts/check/name/no_copyright_on_description'] = {}
|
||||
# psfcheck_list['com.google.fonts/check/name/postscriptname'] = {} # No longer in Font Bakery
|
||||
psfcheck_list['com.google.fonts/check/name/rfn'] = {'exclude': True}
|
||||
# psfcheck_list['com.google.fonts/check/name/subfamilyname'] = {} # No longer in Font Bakery
|
||||
psfcheck_list['com.google.fonts/check/name/trailing_spaces'] = {'exclude': True}
|
||||
# psfcheck_list['com.google.fonts/check/name/typographicfamilyname'] = {} # No longer in Font Bakery
|
||||
# psfcheck_list['com.google.fonts/check/name/typographicsubfamilyname'] = {} # No longer in Font Bakery
|
||||
psfcheck_list['com.google.fonts/check/name/unwanted_chars'] = {}
|
||||
psfcheck_list['com.google.fonts/check/name/version_format'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/no_debugging_tables'] = {}
|
||||
psfcheck_list['com.google.fonts/check/old_ttfautohint'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/os2/use_typo_metrics'] = {'exclude': True}
|
||||
# psfcheck_list['com.google.fonts/check/os2/use_typo_metrics'] = \ (Left a copy commented out as an
|
||||
# {'change_status': {'FAIL': 'WARN', 'reason': 'Under review'}} example of an override!)
|
||||
psfcheck_list['com.google.fonts/check/os2_metrics_match_hhea'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/ots'] = {}
|
||||
psfcheck_list['com.google.fonts/check/outline_alignment_miss'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/outline_colinear_vectors'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/outline_jaggy_segments'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/outline_semi_vertical'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/outline_short_segments'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/points_out_of_bounds'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/post_table_version'] = {}
|
||||
psfcheck_list['com.google.fonts/check/production_glyphs_similarity'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/render_own_name'] = {}
|
||||
psfcheck_list['com.google.fonts/check/required_tables'] = {}
|
||||
psfcheck_list['com.google.fonts/check/rupee'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/shaping/collides'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/shaping/forbidden'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/shaping/regression'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/smart_dropout'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/slant_direction'] = {}
|
||||
psfcheck_list['com.google.fonts/check/soft_dotted'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/soft_hyphen'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/STAT'] = {}
|
||||
psfcheck_list['com.google.fonts/check/STAT/axis_order'] = {}
|
||||
psfcheck_list['com.google.fonts/check/STAT/gf_axisregistry'] = {}
|
||||
psfcheck_list['com.google.fonts/check/STAT_strings'] = {}
|
||||
psfcheck_list['com.google.fonts/check/STAT_in_statics'] = {}
|
||||
psfcheck_list['com.google.fonts/check/stylisticset_description'] = {}
|
||||
psfcheck_list['com.google.fonts/check/superfamily/list'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/superfamily/vertical_metrics'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/transformed_components'] = {}
|
||||
psfcheck_list['com.google.fonts/check/ttx_roundtrip'] = {}
|
||||
psfcheck_list['com.google.fonts/check/unicode_range_bits'] = {}
|
||||
psfcheck_list['com.google.fonts/check/unique_glyphnames'] = {}
|
||||
psfcheck_list['com.google.fonts/check/unitsperem'] = {}
|
||||
psfcheck_list['com.google.fonts/check/unitsperem_strict'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/unreachable_glyphs'] = {}
|
||||
psfcheck_list['com.google.fonts/check/unwanted_tables'] = {}
|
||||
psfcheck_list['com.google.fonts/check/usweightclass'] = {}
|
||||
psfcheck_list['com.google.fonts/check/valid_glyphnames'] = {}
|
||||
psfcheck_list['com.google.fonts/check/varfont_duplicate_instance_names'] = {}
|
||||
# psfcheck_list['com.google.fonts/check/varfont_has_instances'] = {} # No longer in Font Bakery
|
||||
# psfcheck_list['com.google.fonts/check/varfont_instance_coordinates'] = {} # No longer in Font Bakery
|
||||
# psfcheck_list['com.google.fonts/check/varfont_instance_names'] = {} # No longer in Font Bakery
|
||||
# psfcheck_list['com.google.fonts/check/varfont_weight_instances'] = {} # No longer in Font Bakery
|
||||
psfcheck_list['com.google.fonts/check/varfont/bold_wght_coord'] = {}
|
||||
psfcheck_list['com.google.fonts/check/varfont/consistent_axes'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/varfont/generate_static'] = {}
|
||||
psfcheck_list['com.google.fonts/check/varfont/grade_reflow'] = {}
|
||||
psfcheck_list['com.google.fonts/check/varfont/has_HVAR'] = {}
|
||||
psfcheck_list['com.google.fonts/check/varfont/regular_ital_coord'] = {}
|
||||
psfcheck_list['com.google.fonts/check/varfont/regular_opsz_coord'] = {}
|
||||
psfcheck_list['com.google.fonts/check/varfont/regular_slnt_coord'] = {}
|
||||
psfcheck_list['com.google.fonts/check/varfont/regular_wdth_coord'] = {}
|
||||
psfcheck_list['com.google.fonts/check/varfont/regular_wght_coord'] = {}
|
||||
psfcheck_list['com.google.fonts/check/varfont/slnt_range'] = {}
|
||||
psfcheck_list['com.google.fonts/check/varfont/stat_axis_record_for_each_axis'] = {}
|
||||
psfcheck_list['com.google.fonts/check/varfont/unsupported_axes'] = {}
|
||||
psfcheck_list['com.google.fonts/check/varfont/wdth_valid_range'] = {}
|
||||
psfcheck_list['com.google.fonts/check/varfont/wght_valid_range'] = {}
|
||||
psfcheck_list['com.google.fonts/check/vendor_id'] = {}
|
||||
psfcheck_list['com.google.fonts/check/version_bump'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/vertical_metrics'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/vertical_metrics_regressions'] = {'exclude': True}
|
||||
psfcheck_list['com.google.fonts/check/vttclean'] = {}
|
||||
psfcheck_list['com.google.fonts/check/whitespace_glyphnames'] = {}
|
||||
psfcheck_list['com.google.fonts/check/whitespace_glyphs'] = {}
|
||||
psfcheck_list['com.google.fonts/check/whitespace_ink'] = {}
|
||||
psfcheck_list['com.google.fonts/check/whitespace_widths'] = {}
|
||||
psfcheck_list['com.google.fonts/check/xavgcharwidth'] = {}
|
||||
psfcheck_list['com.thetypefounders/check/vendor_id'] = {'exclude': True}
|
||||
psfcheck_list['org.sil/check/family/win_ascent_and_descent'] = {}
|
||||
psfcheck_list['org.sil/check/os2/use_typo_metrics'] = {}
|
||||
psfcheck_list['org.sil/check/os2_metrics_match_hhea'] = {}
|
||||
#psfcheck_list['org.sil/check/vertical_metrics'] = {}
|
||||
psfcheck_list['org.sil/check/number_widths'] = {}
|
||||
psfcheck_list['org.sil/check/name/version_format'] = {}
|
||||
psfcheck_list['org.sil/check/whitespace_widths'] = {}
|
||||
|
||||
profile = make_profile(check_list=psfcheck_list)
|
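The override list above follows one small pattern: `{}` keeps a check as-is, `{'exclude': True}` drops it, and `{'change_status': {...}}` remaps a check's result status. As an illustration of how such a list can be consumed, here is a minimal sketch; `make_profile_sketch` is a hypothetical stand-in written for this example, not silfont's real `make_profile`:

```python
# Sketch of consuming a psfcheck_list-style override dict.
# make_profile_sketch is hypothetical; the real make_profile lives in silfont.

psfcheck_list = {
    'com.google.fonts/check/fstype': {},                       # keep
    'com.google.fonts/check/gasp': {'exclude': True},          # drop
    'com.google.fonts/check/layout_valid_language_tags':       # remap FAIL -> WARN
        {'change_status': {'FAIL': 'WARN',
                           'reason': 'The "invalid" ones are used by Harfbuzz'}},
}

def make_profile_sketch(check_list):
    """Split the overrides into checks to run and per-check status remaps."""
    included = [k for k, v in check_list.items() if not v.get('exclude')]
    remaps = {k: v['change_status'] for k, v in check_list.items()
              if 'change_status' in v and not v.get('exclude')}
    return included, remaps

included, remaps = make_profile_sketch(psfcheck_list)
print(included)
print(remaps['com.google.fonts/check/layout_valid_language_tags']['FAIL'])  # WARN
```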
459 src/silfont/feax_ast.py Normal file
@@ -0,0 +1,459 @@
import ast as pyast
from fontTools.feaLib import ast
from fontTools.feaLib.ast import asFea
from fontTools.feaLib.error import FeatureLibError
import re, math

def asFea(g):
    if hasattr(g, 'asClassFea'):
        return g.asClassFea()
    elif hasattr(g, 'asFea'):
        return g.asFea()
    elif isinstance(g, tuple) and len(g) == 2:
        return asFea(g[0]) + "-" + asFea(g[1])  # a range
    elif g.lower() in ast.fea_keywords:
        return "\\" + g
    else:
        return g

ast.asFea = asFea
SHIFT = ast.SHIFT

def asLiteralFea(self, indent=""):
    ast.Element.mode = 'literal'
    res = self.asFea(indent=indent)
    ast.Element.mode = 'flat'
    return res

ast.Element.asLiteralFea = asLiteralFea
ast.Element.mode = 'flat'

class ast_Comment(ast.Comment):
    def __init__(self, text, location=None):
        super(ast_Comment, self).__init__(text, location=location)
        self.pretext = ""
        self.posttext = ""

    def asFea(self, indent=""):
        return self.pretext + self.text + self.posttext

class ast_MarkClass(ast.MarkClass):
    # This is better fixed upstream in parser.parse_glyphclass_ to handle MarkClasses
    def asClassFea(self, indent=""):
        return "[" + " ".join(map(asFea, self.glyphs)) + "]"

class ast_BaseClass(ast_MarkClass):
    def asFea(self, indent=""):
        return "@" + self.name + " = [" + " ".join(map(asFea, self.glyphs.keys())) + "];"

class ast_BaseClassDefinition(ast.MarkClassDefinition):
    def asFea(self, indent=""):
        # like base class asFea
        return ("# " if self.mode != 'literal' else "") + \
            "{}baseClass {} {} @{};".format(indent, self.glyphs.asFea(),
                                            self.anchor.asFea(), self.markClass.name)

class ast_MarkBasePosStatement(ast.MarkBasePosStatement):
    def asFea(self, indent=""):
        # handles members added by parse_position_base_ with feax syntax
        if isinstance(self.base, ast.MarkClassName):  # flattens pos @BASECLASS mark @MARKCLASS
            res = ""
            if self.mode == 'literal':
                res += "pos base @{} ".format(self.base.markClass.name)
                res += " ".join("mark @{}".format(m.name) for m in self.marks)
                res += ";"
            else:
                for bcd in self.base.markClass.definitions:
                    if res != "":
                        res += "\n{}".format(indent)
                    res += "pos base {} {}".format(bcd.glyphs.asFea(), bcd.anchor.asFea())
                    res += "".join(" mark @{}".format(m.name) for m in self.marks)
                    res += ";"
        else:  # like base class method
            res = "pos base {}".format(self.base.asFea())
            res += "".join(" {} mark @{}".format(a.asFea(), m.name) for a, m in self.marks)
            res += ";"
        return res

    def build(self, builder):
        # TODO: do the right thing here (write to ttf?)
        pass

class ast_MarkMarkPosStatement(ast.MarkMarkPosStatement):
    # super class __init__() for reference
    # def __init__(self, location, baseMarks, marks):
    #     Statement.__init__(self, location)
    #     self.baseMarks, self.marks = baseMarks, marks

    def asFea(self, indent=""):
        # handles members added by parse_position_base_ with feax syntax
        if isinstance(self.baseMarks, ast.MarkClassName):  # flattens pos @MARKCLASS mark @MARKCLASS
            res = ""
            if self.mode == 'literal':
                res += "pos mark @{} ".format(self.baseMarks.markClass.name)
                res += " ".join("mark @{}".format(m.name) for m in self.marks)
                res += ";"
            else:
                for mcd in self.baseMarks.markClass.definitions:
                    if res != "":
                        res += "\n{}".format(indent)
                    res += "pos mark {} {}".format(mcd.glyphs.asFea(), mcd.anchor.asFea())
                    for m in self.marks:
                        res += " mark @{}".format(m.name)
                    res += ";"
        else:  # like base class method
            res = "pos mark {}".format(self.baseMarks.asFea())
            for a, m in self.marks:
                res += " {} mark @{}".format(a.asFea() if a else "<anchor NULL>", m.name)
            res += ";"
        return res

    def build(self, builder):
        # builder.add_mark_mark_pos(self.location, self.baseMarks.glyphSet(), self.marks)
        # TODO: do the right thing
        pass

class ast_CursivePosStatement(ast.CursivePosStatement):
    # super class __init__() for reference
    # def __init__(self, location, glyphclass, entryAnchor, exitAnchor):
    #     Statement.__init__(self, location)
    #     self.glyphclass = glyphclass
    #     self.entryAnchor, self.exitAnchor = entryAnchor, exitAnchor

    def asFea(self, indent=""):
        if isinstance(self.exitAnchor, ast.MarkClass):  # pos cursive @BASE1 @BASE2
            res = ""
            if self.mode == 'literal':
                res += "pos cursive @{} @{};".format(self.glyphclass.name, self.exitAnchor.name)
            else:
                allglyphs = set(self.glyphclass.glyphSet())
                allglyphs.update(self.exitAnchor.glyphSet())
                for g in sorted(allglyphs):
                    entry = self.glyphclass.glyphs.get(g, None)
                    exit = self.exitAnchor.glyphs.get(g, None)
                    if res != "":
                        res += "\n{}".format(indent)
                    res += "pos cursive {} {} {};".format(g,
                                (entry.anchor.asFea() if entry else "<anchor NULL>"),
                                (exit.anchor.asFea() if exit else "<anchor NULL>"))
        else:
            res = super(ast_CursivePosStatement, self).asFea(indent)
        return res

    def build(self, builder):
        # TODO: do the right thing here (write to ttf?)
        pass

class ast_MarkLigPosStatement(ast.MarkLigPosStatement):
    def __init__(self, ligatures, marks, location=None):
        ast.MarkLigPosStatement.__init__(self, ligatures, marks, location)
        self.classBased = False
        for l in marks:
            if l is not None:
                for m in l:
                    if m is not None and not isinstance(m[0], ast.Anchor):
                        self.classBased = True
                        break

    def build(self, builder):
        builder.add_mark_lig_pos(self.location, self.ligatures.glyphSet(), self.marks)

    def asFea(self, indent=""):
        if not self.classBased or self.mode == "literal":
            return super(ast_MarkLigPosStatement, self).asFea(indent)

        res = []
        for g in self.ligatures.glyphSet():
            comps = []
            for l in self.marks:
                onecomp = []
                if l is not None and len(l):
                    for a, m in l:
                        if not isinstance(a, ast.Anchor):
                            if g not in a.markClass.glyphs:
                                continue
                            left = a.markClass.glyphs[g].anchor.asFea()
                        else:
                            left = a.asFea()
                        onecomp.append("{} mark @{}".format(left, m.name))
                if not len(onecomp):
                    onecomp = ["<anchor NULL>"]
                comps.append(" ".join(onecomp))
            res.append("pos ligature {} ".format(asFea(g)) + ("\n"+indent+SHIFT+"ligComponent ").join(comps))
        return (";\n"+indent).join(res) + ";"

# similar to ast.MultipleSubstStatement
# one-to-many substitution: one glyph class is on the LHS, multiple glyph classes may be on the RHS;
# equivalent to generating one statement for each glyph in the LHS class,
# matched to the corresponding glyphs in the RHS classes.
# prefix and suffix are for contextual lookups and do not need processing.
# replacement could contain multiple slots
# TODO: below only supports one RHS class?
class ast_MultipleSubstStatement(ast.Statement):
    def __init__(self, prefix, glyph, suffix, replacement, forceChain, location=None):
        ast.Statement.__init__(self, location)
        self.prefix, self.glyph, self.suffix = prefix, glyph, suffix
        self.replacement = replacement
        self.forceChain = forceChain
        lenglyphs = len(self.glyph.glyphSet())
        for i, r in enumerate(self.replacement):
            if len(r.glyphSet()) == lenglyphs:
                self.multindex = i  # first RHS slot with a glyph class
                break
        else:
            if lenglyphs > 1:
                raise FeatureLibError("No replacement class is of the same length as the matching class",
                                      location)
            else:
                self.multindex = 0

    def build(self, builder):
        prefix = [p.glyphSet() for p in self.prefix]
        suffix = [s.glyphSet() for s in self.suffix]
        glyphs = self.glyph.glyphSet()
        replacements = self.replacement[self.multindex].glyphSet()
        lenglyphs = len(glyphs)
        for i in range(max(lenglyphs, len(replacements))):
            builder.add_multiple_subst(
                self.location, prefix, glyphs[i if lenglyphs > 1 else 0], suffix,
                self.replacement[0:self.multindex] + [replacements[i]] + self.replacement[self.multindex+1:],
                self.forceChain)

    def asFea(self, indent=""):
        res = ""
        pres = (" ".join(map(asFea, self.prefix)) + " ") if len(self.prefix) else ""
        sufs = (" " + " ".join(map(asFea, self.suffix))) if len(self.suffix) else ""
        mark = "'" if len(self.prefix) or len(self.suffix) or self.forceChain else ""
        if self.mode == 'literal':
            res += "sub " + pres + self.glyph.asFea() + mark + sufs + " by "
            res += " ".join(asFea(g) for g in self.replacement) + ";"
            return res
        glyphs = self.glyph.glyphSet()
        replacements = self.replacement[self.multindex].glyphSet()
        lenglyphs = len(glyphs)
        count = max(lenglyphs, len(replacements))
        for i in range(count):
            res += ("\n" + indent if i > 0 else "") + "sub " + pres
            res += asFea(glyphs[i if lenglyphs > 1 else 0]) + mark + sufs
            res += " by "
            res += " ".join(asFea(g) for g in self.replacement[0:self.multindex] + [replacements[i]] + self.replacement[self.multindex+1:])
            res += ";"
        return res


# similar to ast.LigatureSubstStatement
# many-to-one substitution: one glyph class is on the RHS, multiple glyph classes may be on the LHS;
# equivalent to generating one statement for each glyph in the RHS class,
# matched to the corresponding glyphs in the LHS classes.
# it's unclear which LHS class should correspond to the RHS class.
# prefix and suffix are for contextual lookups and do not need processing.
# replacement could contain multiple slots
# TODO: below only supports one LHS class?
class ast_LigatureSubstStatement(ast.Statement):
    def __init__(self, prefix, glyphs, suffix, replacement,
                 forceChain, location=None):
        ast.Statement.__init__(self, location)
        self.prefix, self.glyphs, self.suffix = (prefix, glyphs, suffix)
        self.replacement, self.forceChain = replacement, forceChain
        lenreplace = len(self.replacement.glyphSet())
        for i, g in enumerate(self.glyphs):
            if len(g.glyphSet()) == lenreplace:
                self.multindex = i  # first LHS slot with a glyph class
                break
        else:
            if lenreplace > 1:
                raise FeatureLibError("No class matches replacement class length", location)
            else:
                self.multindex = 0

    def build(self, builder):
        prefix = [p.glyphSet() for p in self.prefix]
        suffix = [s.glyphSet() for s in self.suffix]
        replacements = self.replacement.glyphSet()
        lenreplace = len(replacements)
        glyphs = self.glyphs[self.multindex].glyphSet()
        for i in range(max(len(glyphs), len(replacements))):
            builder.add_ligature_subst(
                self.location, prefix,
                self.glyphs[:self.multindex] + [glyphs[i]] + self.glyphs[self.multindex+1:],
                suffix, replacements[i if lenreplace > 1 else 0], self.forceChain)

    def asFea(self, indent=""):
        res = ""
        pres = (" ".join(map(asFea, self.prefix)) + " ") if len(self.prefix) else ""
        sufs = (" " + " ".join(map(asFea, self.suffix))) if len(self.suffix) else ""
        mark = "'" if len(self.prefix) or len(self.suffix) or self.forceChain else ""
        if self.mode == 'literal':
            res += "sub " + pres + " ".join(asFea(g)+mark for g in self.glyphs) + sufs + " by "
            res += self.replacement.asFea() + ";"
            return res
        glyphs = self.glyphs[self.multindex].glyphSet()
        replacements = self.replacement.glyphSet()
        lenreplace = len(replacements)
        count = max(len(glyphs), len(replacements))
        for i in range(count):
            res += ("\n" + indent if i > 0 else "") + "sub " + pres
            res += " ".join(asFea(g)+mark for g in self.glyphs[:self.multindex] + [glyphs[i]] + self.glyphs[self.multindex+1:])
            res += sufs + " by "
            res += asFea(replacements[i if lenreplace > 1 else 0])
            res += ";"
        return res

class ast_AlternateSubstStatement(ast.Statement):
    def __init__(self, prefix, glyphs, suffix, replacements, location=None):
        ast.Statement.__init__(self, location)
        self.prefix, self.glyphs, self.suffix = (prefix, glyphs, suffix)
        self.replacements = replacements

    def build(self, builder):
        prefix = [p.glyphSet() for p in self.prefix]
        suffix = [s.glyphSet() for s in self.suffix]
        l = len(self.glyphs.glyphSet())
        for i, glyph in enumerate(self.glyphs.glyphSet()):
            replacement = self.replacements.glyphSet()[i::l]
            builder.add_alternate_subst(self.location, prefix, glyph, suffix,
                                        replacement)

    def asFea(self, indent=""):
        res = ""
        l = len(self.glyphs.glyphSet())
        for i, glyph in enumerate(self.glyphs.glyphSet()):
            if i > 0:
                res += "\n" + indent
            res += "sub "
            if len(self.prefix) or len(self.suffix):
                if len(self.prefix):
                    res += " ".join(map(asFea, self.prefix)) + " "
                res += asFea(glyph) + "'"  # even though we really only use 1
                if len(self.suffix):
                    res += " " + " ".join(map(asFea, self.suffix))
            else:
                res += asFea(glyph)
            res += " from "
            replacements = ast.GlyphClass(glyphs=self.replacements.glyphSet()[i::l], location=self.location)
            res += asFea(replacements)
            res += ";"
        return res

class ast_IfBlock(ast.Block):
    def __init__(self, testfn, name, cond, location=None):
        ast.Block.__init__(self, location=location)
        self.testfn = testfn
        self.name = name
        self.cond = cond

    def asFea(self, indent=""):
        if self.mode == 'literal':
            res = "{}if{}({}) {{".format(indent, self.name, self.cond)
            res += ast.Block.asFea(self, indent=indent)
            res += indent + "}\n"
            return res
        elif self.testfn():
            return ast.Block.asFea(self, indent=indent)
        else:
            return ""


class ast_DoSubStatement(ast.Statement):
    def __init__(self, varnames, location=None):
        ast.Statement.__init__(self, location=location)
        self.names = varnames

    def items(self, variables):
        yield ((None, None),)

class ast_DoForSubStatement(ast_DoSubStatement):
    def __init__(self, varname, glyphs, location=None):
        ast_DoSubStatement.__init__(self, [varname], location=location)
        self.glyphs = glyphs.glyphSet()

    def items(self, variables):
        for g in self.glyphs:
            yield ((self.names[0], g),)

def safeeval(exp):
    # no dunders in attribute names
    for n in pyast.walk(pyast.parse(exp)):
        v = getattr(n, 'id', "")
        # if v in ('_getiter_', '__next__'):
        #     continue
        if "__" in v:
            return False
    return True

class ast_DoLetSubStatement(ast_DoSubStatement):
    def __init__(self, varnames, expression, parser, location=None):
        ast_DoSubStatement.__init__(self, varnames, location=location)
        self.parser = parser
        if not safeeval(expression):
            expression = '"Unsafe Expression"'
        self.expr = expression

    def items(self, variables):
        gbls = dict(self.parser.fns, **variables)
        try:
            v = eval(self.expr, gbls)
        except Exception as e:
            raise FeatureLibError(str(e) + " in " + self.expr, self.location)
        if self.names is None:  # in an if
            yield ((None, v),)
        elif len(self.names) == 1:
            yield ((self.names[0], v),)
        else:
            yield (zip(self.names, list(v) + [None] * (len(self.names) - len(v))))

class ast_DoForLetSubStatement(ast_DoLetSubStatement):
    def items(self, variables):
        gbls = dict(self.parser.fns, **variables)
        try:
            v = eval(self.expr, gbls)
        except Exception as e:
            raise FeatureLibError(str(e) + " in " + self.expr, self.location)
        if len(self.names) == 1:
            for e in v:
                yield ((self.names[0], e),)
        else:
            for e in v:
                yield (zip(self.names, list(e) + [None] * (len(self.names) - len(e))))

class ast_DoIfSubStatement(ast_DoLetSubStatement):
    def __init__(self, expression, parser, block, location=None):
        ast_DoLetSubStatement.__init__(self, None, expression, parser, location=location)
        self.block = block

    def items(self, variables):
        (_, v) = list(ast_DoLetSubStatement.items(self, variables))[0][0]
        yield (None, (v if v else None),)

class ast_KernPairsStatement(ast.Statement):
    def __init__(self, kerninfo, location=None):
        super(ast_KernPairsStatement, self).__init__(location)
        self.kerninfo = kerninfo

    def asFea(self, indent=""):
        # return ("\n"+indent).join("pos {} {} {};".format(k1, round(v), k2) \
        #     for k1, x in self.kerninfo.items() for k2, v in x.items())
        coverage = set()
        rules = dict()

        # first sort into lists by type of rule
        for k1, x in self.kerninfo.items():
            for k2, v in x.items():
                # Determine pair kern type, where:
                # 'gg' = glyph-glyph, 'gc' = glyph-class, 'cg' = class-glyph, 'cc' = class-class
                ruleType = 'gc'[k1[0]=='@'] + 'gc'[k2[0]=='@']
                rules.setdefault(ruleType, list()).append([k1, round(v), k2])
                # for glyph-glyph rules, make list of first glyphs:
                if ruleType == 'gg':
                    coverage.add(k1)

        # Now assemble lines in order and convert gc rules to gg where possible:
        res = []
        for ruleType in filter(lambda x: x in rules, ('gg', 'gc', 'cg', 'cc')):
            if ruleType != 'gc':
                res.extend(['pos {} {} {};'.format(k1, v, k2) for k1, v, k2 in rules[ruleType]])
            else:
                res.extend(['enum pos {} {} {};'.format(k1, v, k2) for k1, v, k2 in rules[ruleType] if k1 not in coverage])
                res.extend(['pos {} {} {};'.format(k1, v, k2) for k1, v, k2 in rules[ruleType] if k1 in coverage])

        return ("\n"+indent).join(res)
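The `safeeval` guard used by `ast_DoLetSubStatement` can be exercised on its own. This sketch repeats its logic outside the class (the function name `safeeval_sketch` is ours, not part of pysilfont):

```python
import ast as pyast

def safeeval_sketch(exp):
    # Same idea as safeeval in feax_ast.py: walk the parsed expression and
    # reject it if any name contains a double underscore, before eval() sees it.
    for n in pyast.walk(pyast.parse(exp)):
        if "__" in getattr(n, 'id', ""):
            return False
    return True

print(safeeval_sketch("max(widths) - min(widths)"))      # True: plain names only
print(safeeval_sketch("__import__('os').system('ls')"))  # False: dunder name
```

Note that the guard only inspects `Name` nodes (`n.id`), so it is a lightweight filter rather than a full sandbox.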
105 src/silfont/feax_lexer.py Normal file
@@ -0,0 +1,105 @@
from fontTools.feaLib.lexer import IncludingLexer, Lexer
from fontTools.feaLib.error import FeatureLibError
import re, io

VARIABLE = "VARIABLE"

class feax_Lexer(Lexer):

    def __init__(self, *a):
        Lexer.__init__(self, *a)
        self.tokens_ = None
        self.stack_ = []
        self.empty_ = False

    def next_(self, recurse=False):
        while (not self.empty_):
            if self.tokens_ is not None:
                res = self.tokens_.pop(0)
                if not len(self.tokens_):
                    self.popstack()
                if res[0] != VARIABLE:
                    return (res[0], res[1], self.location_())
                return self.parse_variable(res[1])

            try:
                res = Lexer.next_(self)
                return res
            except IndexError as e:
                self.popstack()
                continue
            except StopIteration as e:
                self.popstack()
                continue
            except FeatureLibError as e:
                if u"Unexpected character" not in str(e):
                    raise e

            # only executes if exception occurred
            location = self.location_()
            text = self.text_
            start = self.pos_
            cur_char = text[start]
            if cur_char == '$':
                self.pos_ += 1
                self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_)
                varname = text[start+1:self.pos_]
                if len(varname) < 1 or len(varname) > 63:
                    raise FeatureLibError("Bad variable name length for: %s" % varname, location)
                res = (VARIABLE, varname, location)
            else:
                raise FeatureLibError("Unexpected character: %r" % cur_char, location)
            return res
        raise StopIteration

    def __repr__(self):
        if self.tokens_ is not None:
            return str(self.tokens_)
        else:
            return str((self.text_[self.pos_:self.pos_+20], self.pos_, self.text_length_))

    def popstack(self):
        if len(self.stack_) == 0:
            self.empty_ = True
            return
        t = self.stack_.pop()
        if t[0] == 'tokens':
            self.tokens_ = t[1]
        else:
            self.text_, self.pos_, self.text_length_ = t[1]
            self.tokens_ = None

    def pushstack(self, v):
        if self.tokens_ is None:
            self.stack_.append(('text', (self.text_, self.pos_, self.text_length_)))
        else:
            self.stack_.append(('tokens', self.tokens_))
        self.stack_.append(v)
        self.popstack()

    def pushback(self, token_type, token):
        if self.tokens_ is not None:
            self.tokens_.append((token_type, token))
        else:
            self.pushstack(('tokens', [(token_type, token)]))

    def parse_variable(self, vname):
        t = str(self.scope.get(vname, ''))
        if t != '':
            self.pushstack(['text', (t + " ", 0, len(t)+1)])
        return self.next_()

class feax_IncludingLexer(IncludingLexer):

    @staticmethod
    def make_lexer_(file_or_path):
        if hasattr(file_or_path, "read"):
            fileobj, closing = file_or_path, False
        else:
            filename, closing = file_or_path, True
            fileobj = io.open(filename, mode="r", encoding="utf-8")
        data = fileobj.read()
        filename = getattr(fileobj, "name", None)
        if closing:
            fileobj.close()
        return feax_Lexer(data, filename)
736  src/silfont/feax_parser.py  Normal file
@@ -0,0 +1,736 @@
from fontTools.feaLib import ast
from fontTools.feaLib.parser import Parser
from fontTools.feaLib.lexer import IncludingLexer, Lexer
import silfont.feax_lexer as feax_lexer
from fontTools.feaLib.error import FeatureLibError
import silfont.feax_ast as astx
import io, re, math, os
import logging

class feaplus_ast(object) :
    MarkBasePosStatement = astx.ast_MarkBasePosStatement
    MarkMarkPosStatement = astx.ast_MarkMarkPosStatement
    MarkLigPosStatement = astx.ast_MarkLigPosStatement
    CursivePosStatement = astx.ast_CursivePosStatement
    BaseClass = astx.ast_BaseClass
    MarkClass = astx.ast_MarkClass
    BaseClassDefinition = astx.ast_BaseClassDefinition
    MultipleSubstStatement = astx.ast_MultipleSubstStatement
    LigatureSubstStatement = astx.ast_LigatureSubstStatement
    IfBlock = astx.ast_IfBlock
    DoForSubStatement = astx.ast_DoForSubStatement
    DoForLetSubStatement = astx.ast_DoForLetSubStatement
    DoLetSubStatement = astx.ast_DoLetSubStatement
    DoIfSubStatement = astx.ast_DoIfSubStatement
    AlternateSubstStatement = astx.ast_AlternateSubstStatement
    Comment = astx.ast_Comment
    KernPairsStatement = astx.ast_KernPairsStatement

    def __getattr__(self, name):
        return getattr(ast, name) # retrieve undefined attrs from the imported fontTools.feaLib ast module

class feaplus_parser(Parser) :
    extensions = {
        'baseClass': lambda s: s.parseBaseClass(),
        'ifclass': lambda s: s.parseIfClass(),
        'ifinfo': lambda s: s.parseIfInfo(),
        'do': lambda s: s.parseDoStatement_(),
        'def': lambda s: s.parseDefStatement_(),
        'kernpairs': lambda s: s.parseKernPairsStatement_()
    }
    ast = feaplus_ast()

    def __init__(self, filename, glyphmap, fontinfo, kerninfo, defines) :
        if filename is None :
            empty_file = io.StringIO("")
            super(feaplus_parser, self).__init__(empty_file, glyphmap)
        else :
            super(feaplus_parser, self).__init__(filename, glyphmap)
        self.fontinfo = fontinfo
        self.kerninfo = kerninfo
        self.glyphs = glyphmap
        self.defines = defines
        self.fns = {
            '__builtins__': None,
            're' : re,
            'math' : math,
            'APx': lambda g, a, d=0: int(self.glyphs[g].anchors.get(a, [d])[0]),
            'APy': lambda g, a, d=0: int(self.glyphs[g].anchors.get(a, [0,d])[1]),
            'ADVx': lambda g: int(self.glyphs[g].advance),
            'MINx': lambda g: int(self.glyphs[g].bbox[0]),
            'MINy': lambda g: int(self.glyphs[g].bbox[1]),
            'MAXx': lambda g: int(self.glyphs[g].bbox[2]),
            'MAXy': lambda g: int(self.glyphs[g].bbox[3]),
            'feaclass': lambda c: self.resolve_glyphclass(c).glyphSet(),
            'allglyphs': lambda : self.glyphs.keys(),
            'lf': lambda : "\n",
            'info': lambda s: self.fontinfo.get(s, ""),
            'fileexists': lambda s: os.path.exists(s),
            'kerninfo': lambda s: [(k1, k2, v) for k1, x in self.kerninfo.items() for k2, v in x.items()],
            'opt': lambda s: self.defines.get(s, "")
        }
        # Document which builtins we really need. Of course still insecure.
        for x in ('True', 'False', 'None', 'int', 'float', 'str', 'abs', 'all', 'any', 'bool',
                  'dict', 'enumerate', 'filter', 'hasattr', 'hex', 'isinstance', 'len', 'list', 'map', 'print',
                  'max', 'min', 'ord', 'range', 'set', 'sorted', 'sum', 'tuple', 'type', 'zip'):
            self.fns[x] = __builtins__[x]

    def parse(self, filename=None) :
        if filename is not None :
            self.lexer_ = feax_lexer.feax_IncludingLexer(filename)
            self.advance_lexer_(comments=True)
        return super(feaplus_parser, self).parse()

    def back_lexer_(self):
        self.lexer_.lexers_[-1].pushback(self.next_token_type_, self.next_token_)
        self.next_token_type_ = self.cur_token_type_
        self.next_token_ = self.cur_token_
        self.next_token_location_ = self.cur_token_location_

    # methods to limit layer violations
    def define_glyphclass(self, ap_nm, gc) :
        self.glyphclasses_.define(ap_nm, gc)

    def resolve_glyphclass(self, ap_nm):
        try:
            return self.glyphclasses_.resolve(ap_nm)
        except KeyError:
            raise FeatureLibError("Glyphclass '{}' missing".format(ap_nm), self.lexer_.location_())
        return None

    def add_statement(self, val) :
        self.doc_.statements.append(val)

    def set_baseclass(self, ap_nm) :
        gc = self.ast.BaseClass(ap_nm)
        if not hasattr(self.doc_, 'baseClasses') :
            self.doc_.baseClasses = {}
        self.doc_.baseClasses[ap_nm] = gc
        self.define_glyphclass(ap_nm, gc)
        return gc

    def set_markclass(self, ap_nm) :
        gc = self.ast.MarkClass(ap_nm)
        if not hasattr(self.doc_, 'markClasses') :
            self.doc_.markClasses = {}
        self.doc_.markClasses[ap_nm] = gc
        self.define_glyphclass(ap_nm, gc)
        return gc

    # like base class parse_position_base_ & overrides it
    def parse_position_base_(self, enumerated, vertical):
        location = self.cur_token_location_
        self.expect_keyword_("base")
        if enumerated:
            raise FeatureLibError(
                '"enumerate" is not allowed with '
                'mark-to-base attachment positioning',
                location)
        base = self.parse_glyphclass_(accept_glyphname=True)
        if self.next_token_ == "<": # handle pos base [glyphs] <anchor> mark @MARKCLASS
            marks = self.parse_anchor_marks_()
        else: # handle pos base @BASECLASS mark @MARKCLASS; like base class parse_anchor_marks_
            marks = []
            while self.next_token_ == "mark": #TODO: is more than one 'mark' meaningful?
                self.expect_keyword_("mark")
                m = self.expect_markClass_reference_()
                marks.append(m)
        self.expect_symbol_(";")
        return self.ast.MarkBasePosStatement(base, marks, location=location)

    # like base class parse_position_mark_ & overrides it
    def parse_position_mark_(self, enumerated, vertical):
        location = self.cur_token_location_
        self.expect_keyword_("mark")
        if enumerated:
            raise FeatureLibError(
                '"enumerate" is not allowed with '
                'mark-to-mark attachment positioning',
                location)
        baseMarks = self.parse_glyphclass_(accept_glyphname=True)
        if self.next_token_ == "<": # handle pos mark [glyphs] <anchor> mark @MARKCLASS
            marks = self.parse_anchor_marks_()
        else: # handle pos mark @MARKCLASS mark @MARKCLASS; like base class parse_anchor_marks_
            marks = []
            while self.next_token_ == "mark": #TODO: is more than one 'mark' meaningful?
                self.expect_keyword_("mark")
                m = self.expect_markClass_reference_()
                marks.append(m)
        self.expect_symbol_(";")
        return self.ast.MarkMarkPosStatement(baseMarks, marks, location=location)

    def parse_position_cursive_(self, enumerated, vertical):
        location = self.cur_token_location_
        self.expect_keyword_("cursive")
        if enumerated:
            raise FeatureLibError(
                '"enumerate" is not allowed with '
                'cursive attachment positioning',
                location)
        glyphclass = self.parse_glyphclass_(accept_glyphname=True)
        if self.next_token_ == "<": # handle pos cursive @glyphClass <anchor entry> <anchor exit>
            entryAnchor = self.parse_anchor_()
            exitAnchor = self.parse_anchor_()
            self.expect_symbol_(";")
            return self.ast.CursivePosStatement(
                glyphclass, entryAnchor, exitAnchor, location=location)
        else: # handle pos cursive @baseClass @baseClass;
            mc = self.expect_markClass_reference_()
            return self.ast.CursivePosStatement(glyphclass.markClass, None, mc, location=location)

    def parse_position_ligature_(self, enumerated, vertical):
        location = self.cur_token_location_
        self.expect_keyword_("ligature")
        if enumerated:
            raise FeatureLibError(
                '"enumerate" is not allowed with '
                'mark-to-ligature attachment positioning',
                location)
        ligatures = self.parse_glyphclass_(accept_glyphname=True)
        marks = [self._parse_anchorclass_marks_()]
        while self.next_token_ == "ligComponent":
            self.expect_keyword_("ligComponent")
            marks.append(self._parse_anchorclass_marks_())
        self.expect_symbol_(";")
        return self.ast.MarkLigPosStatement(ligatures, marks, location=location)

    def _parse_anchorclass_marks_(self):
        """Parses a sequence of [<anchor> | @BASECLASS mark @MARKCLASS]*."""
        anchorMarks = [] # [(self.ast.Anchor, markClassName)*]
        while True:
            if self.next_token_ == "<":
                anchor = self.parse_anchor_()
            else:
                anchor = self.parse_glyphclass_(accept_glyphname=False)
            if anchor is not None:
                self.expect_keyword_("mark")
                markClass = self.expect_markClass_reference_()
                anchorMarks.append((anchor, markClass))
            if self.next_token_ == "ligComponent" or self.next_token_ == ";":
                break
        return anchorMarks

    # like base class parseMarkClass
    # but uses BaseClass and BaseClassDefinition which subclass Mark counterparts
    def parseBaseClass(self):
        if not hasattr(self.doc_, 'baseClasses'):
            self.doc_.baseClasses = {}
        location = self.cur_token_location_
        glyphs = self.parse_glyphclass_(accept_glyphname=True)
        anchor = self.parse_anchor_()
        name = self.expect_class_name_()
        self.expect_symbol_(";")
        baseClass = self.doc_.baseClasses.get(name)
        if baseClass is None:
            baseClass = self.ast.BaseClass(name)
            self.doc_.baseClasses[name] = baseClass
            self.glyphclasses_.define(name, baseClass)
        bcdef = self.ast.BaseClassDefinition(baseClass, anchor, glyphs, location=location)
        baseClass.addDefinition(bcdef)
        return bcdef

    # similar to and overrides parser.parse_substitute_
    def parse_substitute_(self):
        assert self.cur_token_ in {"substitute", "sub", "reversesub", "rsub"}
        location = self.cur_token_location_
        reverse = self.cur_token_ in {"reversesub", "rsub"}
        old_prefix, old, lookups, values, old_suffix, hasMarks = \
            self.parse_glyph_pattern_(vertical=False)
        if any(values):
            raise FeatureLibError(
                "Substitution statements cannot contain values", location)
        new = []
        if self.next_token_ == "by":
            keyword = self.expect_keyword_("by")
            while self.next_token_ != ";":
                gc = self.parse_glyphclass_(accept_glyphname=True)
                new.append(gc)
        elif self.next_token_ == "from":
            keyword = self.expect_keyword_("from")
            new = [self.parse_glyphclass_(accept_glyphname=False)]
        else:
            keyword = None
        self.expect_symbol_(";")
        if len(new) == 0 and not any(lookups):
            raise FeatureLibError(
                'Expected "by", "from" or explicit lookup references',
                self.cur_token_location_)

        # GSUB lookup type 3: Alternate substitution.
        # Format: "substitute a from [a.1 a.2 a.3];"
        if keyword == "from":
            if reverse:
                raise FeatureLibError(
                    'Reverse chaining substitutions do not support "from"',
                    location)
            # allow classes on lhs
            if len(old) != 1:
                raise FeatureLibError(
                    'Expected single glyph or glyph class before "from"',
                    location)
            if len(new) != 1:
                raise FeatureLibError(
                    'Expected a single glyphclass after "from"',
                    location)
            if len(old[0].glyphSet()) == 0 or len(new[0].glyphSet()) % len(old[0].glyphSet()) != 0:
                raise FeatureLibError(
                    'The glyphclass after "from" must be a multiple of the length of the glyphclass before it',
                    location)
            return self.ast.AlternateSubstStatement(
                old_prefix, old[0], old_suffix, new[0], location=location)

        num_lookups = len([l for l in lookups if l is not None])

        # GSUB lookup type 1: Single substitution.
        # Format A: "substitute a by a.sc;"
        # Format B: "substitute [one.fitted one.oldstyle] by one;"
        # Format C: "substitute [a-d] by [A.sc-D.sc];"
        if (not reverse and len(old) == 1 and len(new) == 1 and
                num_lookups == 0):
            glyphs = list(old[0].glyphSet())
            replacements = list(new[0].glyphSet())
            if len(replacements) == 1:
                replacements = replacements * len(glyphs)
            if len(glyphs) != len(replacements):
                raise FeatureLibError(
                    'Expected a glyph class with %d elements after "by", '
                    'but found a glyph class with %d elements' %
                    (len(glyphs), len(replacements)), location)
            return self.ast.SingleSubstStatement(
                old, new,
                old_prefix, old_suffix,
                forceChain=hasMarks, location=location
            )

        # GSUB lookup type 2: Multiple substitution.
        # Format: "substitute f_f_i by f f i;"
        if (not reverse and
                len(old) == 1 and len(new) > 1 and num_lookups == 0):
            return self.ast.MultipleSubstStatement(old_prefix, old[0], old_suffix, new,
                                                   hasMarks, location=location)

        # GSUB lookup type 4: Ligature substitution.
        # Format: "substitute f f i by f_f_i;"
        if (not reverse and
                len(old) > 1 and len(new) == 1 and num_lookups == 0):
            return self.ast.LigatureSubstStatement(old_prefix, old, old_suffix, new[0],
                                                   forceChain=hasMarks, location=location)

        # GSUB lookup type 8: Reverse chaining substitution.
        if reverse:
            if len(old) != 1:
                raise FeatureLibError(
                    "In reverse chaining single substitutions, "
                    "only a single glyph or glyph class can be replaced",
                    location)
            if len(new) != 1:
                raise FeatureLibError(
                    'In reverse chaining single substitutions, '
                    'the replacement (after "by") must be a single glyph '
                    'or glyph class', location)
            if num_lookups != 0:
                raise FeatureLibError(
                    "Reverse chaining substitutions cannot call named lookups",
                    location)
            glyphs = sorted(list(old[0].glyphSet()))
            replacements = sorted(list(new[0].glyphSet()))
            if len(replacements) == 1:
                replacements = replacements * len(glyphs)
            if len(glyphs) != len(replacements):
                raise FeatureLibError(
                    'Expected a glyph class with %d elements after "by", '
                    'but found a glyph class with %d elements' %
                    (len(glyphs), len(replacements)), location)
            return self.ast.ReverseChainSingleSubstStatement(
                old_prefix, old_suffix, old, new, location=location)

        # GSUB lookup type 6: Chaining contextual substitution.
        assert len(new) == 0, new
        rule = self.ast.ChainContextSubstStatement(
            old_prefix, old, old_suffix, lookups, location=location)
        return rule

    def parse_glyphclass_(self, accept_glyphname):
        if (accept_glyphname and
                self.next_token_type_ in (Lexer.NAME, Lexer.CID)):
            glyph = self.expect_glyph_()
            return self.ast.GlyphName(glyph, location=self.cur_token_location_)
        if self.next_token_type_ is Lexer.GLYPHCLASS:
            self.advance_lexer_()
            gc = self.glyphclasses_.resolve(self.cur_token_)
            if gc is None:
                raise FeatureLibError(
                    "Unknown glyph class @%s" % self.cur_token_,
                    self.cur_token_location_)
            if isinstance(gc, self.ast.MarkClass):
                return self.ast.MarkClassName(gc, location=self.cur_token_location_)
            else:
                return self.ast.GlyphClassName(gc, location=self.cur_token_location_)

        self.expect_symbol_("[")
        location = self.cur_token_location_
        glyphs = self.ast.GlyphClass(location=location)
        while self.next_token_ != "]":
            if self.next_token_type_ is Lexer.NAME:
                glyph = self.expect_glyph_()
                location = self.cur_token_location_
                if '-' in glyph and glyph not in self.glyphNames_:
                    start, limit = self.split_glyph_range_(glyph, location)
                    glyphs.add_range(
                        start, limit,
                        self.make_glyph_range_(location, start, limit))
                elif self.next_token_ == "-":
                    start = glyph
                    self.expect_symbol_("-")
                    limit = self.expect_glyph_()
                    glyphs.add_range(
                        start, limit,
                        self.make_glyph_range_(location, start, limit))
                else:
                    glyphs.append(glyph)
            elif self.next_token_type_ is Lexer.CID:
                glyph = self.expect_glyph_()
                if self.next_token_ == "-":
                    range_location = self.cur_token_location_
                    range_start = self.cur_token_
                    self.expect_symbol_("-")
                    range_end = self.expect_cid_()
                    glyphs.add_cid_range(range_start, range_end,
                                         self.make_cid_range_(range_location,
                                                              range_start, range_end))
                else:
                    glyphs.append("cid%05d" % self.cur_token_)
            elif self.next_token_type_ is Lexer.GLYPHCLASS:
                self.advance_lexer_()
                gc = self.glyphclasses_.resolve(self.cur_token_)
                if gc is None:
                    raise FeatureLibError(
                        "Unknown glyph class @%s" % self.cur_token_,
                        self.cur_token_location_)
                # bug fix: don't output the class definition, just the name
                if isinstance(gc, self.ast.MarkClass):
                    gcn = self.ast.MarkClassName(gc, location=self.cur_token_location_)
                else:
                    gcn = self.ast.GlyphClassName(gc, location=self.cur_token_location_)
                glyphs.add_class(gcn)
            else:
                raise FeatureLibError(
                    "Expected glyph name, glyph range, "
                    "or glyph class reference. Found %s" % self.next_token_,
                    self.next_token_location_)
        self.expect_symbol_("]")
        return glyphs

    def parseIfClass(self):
        location = self.cur_token_location_
        self.expect_symbol_("(")
        if self.next_token_type_ is Lexer.GLYPHCLASS:
            self.advance_lexer_()
            def ifClassTest():
                gc = self.glyphclasses_.resolve(self.cur_token_)
                return gc is not None and len(gc.glyphSet())
            block = self.ast.IfBlock(ifClassTest, 'ifclass', '@'+self.cur_token_, location=location)
            self.expect_symbol_(")")
            import inspect # oh this is so ugly!
            calledby = inspect.stack()[2][3] # called through a lambda since extension
            if calledby == 'parse_block_':
                self.parse_subblock_(block, False)
            else:
                self.parse_statements_block_(block)
            return block
        else:
            raise FeatureLibError("Syntax error: missing glyph class", location)

    def parseIfInfo(self):
        location = self.cur_token_location_
        self.expect_symbol_("(")
        name = self.expect_name_()
        self.expect_symbol_(",")
        reg = self.expect_string_()
        self.expect_symbol_(")")
        def ifInfoTest():
            s = self.fontinfo.get(name, "")
            return re.search(reg, s)
        block = self.ast.IfBlock(ifInfoTest, 'ifinfo', '{}, "{}"'.format(name, reg), location=location)
        import inspect # oh this is so ugly! Instead the caller should pass in context
        calledby = inspect.stack()[2][3] # called through a lambda since extension
        if calledby == 'parse_block_':
            self.parse_subblock_(block, False)
        else:
            self.parse_statements_block_(block)
        return block

    def parseKernPairsStatement_(self):
        location = self.cur_token_location_
        res = self.ast.KernPairsStatement(self.kerninfo, location)
        return res

    def parse_statements_block_(self, block):
        self.expect_symbol_("{")
        statements = block.statements
        while self.next_token_ != "}" or self.cur_comments_:
            self.advance_lexer_(comments=True)
            if self.cur_token_type_ is Lexer.COMMENT:
                statements.append(
                    self.ast.Comment(self.cur_token_,
                                     location=self.cur_token_location_))
            elif self.is_cur_keyword_("include"):
                statements.append(self.parse_include_())
            elif self.cur_token_type_ is Lexer.GLYPHCLASS:
                statements.append(self.parse_glyphclass_definition_())
            elif self.is_cur_keyword_(("anon", "anonymous")):
                statements.append(self.parse_anonymous_())
            elif self.is_cur_keyword_("anchorDef"):
                statements.append(self.parse_anchordef_())
            elif self.is_cur_keyword_("languagesystem"):
                statements.append(self.parse_languagesystem_())
            elif self.is_cur_keyword_("lookup"):
                statements.append(self.parse_lookup_(vertical=False))
            elif self.is_cur_keyword_("markClass"):
                statements.append(self.parse_markClass_())
            elif self.is_cur_keyword_("feature"):
                statements.append(self.parse_feature_block_())
            elif self.is_cur_keyword_("table"):
                statements.append(self.parse_table_())
            elif self.is_cur_keyword_("valueRecordDef"):
                statements.append(
                    self.parse_valuerecord_definition_(vertical=False))
            elif self.cur_token_type_ is Lexer.NAME and self.cur_token_ in self.extensions:
                statements.append(self.extensions[self.cur_token_](self))
            elif self.cur_token_type_ is Lexer.SYMBOL and self.cur_token_ == ";":
                continue
            else:
                raise FeatureLibError(
                    "Expected feature, languagesystem, lookup, markClass, "
                    "table, or glyph class definition, got {} \"{}\"".format(self.cur_token_type_, self.cur_token_),
                    self.cur_token_location_)

        self.expect_symbol_("}")
        # self.expect_symbol_(";") # can't have }; since tokens are space separated

    def parse_subblock_(self, block, vertical, stylisticset=False,
                        size_feature=None, cv_feature=None):
        self.expect_symbol_("{")
        for symtab in self.symbol_tables_:
            symtab.enter_scope()

        statements = block.statements
        while self.next_token_ != "}" or self.cur_comments_:
            self.advance_lexer_(comments=True)
            if self.cur_token_type_ is Lexer.COMMENT:
                statements.append(self.ast.Comment(
                    self.cur_token_, location=self.cur_token_location_))
            elif self.cur_token_type_ is Lexer.GLYPHCLASS:
                statements.append(self.parse_glyphclass_definition_())
            elif self.is_cur_keyword_("anchorDef"):
                statements.append(self.parse_anchordef_())
            elif self.is_cur_keyword_({"enum", "enumerate"}):
                statements.append(self.parse_enumerate_(vertical=vertical))
            elif self.is_cur_keyword_("feature"):
                statements.append(self.parse_feature_reference_())
            elif self.is_cur_keyword_("ignore"):
                statements.append(self.parse_ignore_())
            elif self.is_cur_keyword_("language"):
                statements.append(self.parse_language_())
            elif self.is_cur_keyword_("lookup"):
                statements.append(self.parse_lookup_(vertical))
            elif self.is_cur_keyword_("lookupflag"):
                statements.append(self.parse_lookupflag_())
            elif self.is_cur_keyword_("markClass"):
                statements.append(self.parse_markClass_())
            elif self.is_cur_keyword_({"pos", "position"}):
                statements.append(
                    self.parse_position_(enumerated=False, vertical=vertical))
            elif self.is_cur_keyword_("script"):
                statements.append(self.parse_script_())
            elif (self.is_cur_keyword_({"sub", "substitute",
                                        "rsub", "reversesub"})):
                statements.append(self.parse_substitute_())
            elif self.is_cur_keyword_("subtable"):
                statements.append(self.parse_subtable_())
            elif self.is_cur_keyword_("valueRecordDef"):
                statements.append(self.parse_valuerecord_definition_(vertical))
            elif stylisticset and self.is_cur_keyword_("featureNames"):
                statements.append(self.parse_featureNames_(stylisticset))
            elif cv_feature and self.is_cur_keyword_("cvParameters"):
                statements.append(self.parse_cvParameters_(cv_feature))
            elif size_feature and self.is_cur_keyword_("parameters"):
                statements.append(self.parse_size_parameters_())
            elif size_feature and self.is_cur_keyword_("sizemenuname"):
                statements.append(self.parse_size_menuname_())
            elif self.cur_token_type_ is Lexer.NAME and self.cur_token_ in self.extensions:
                statements.append(self.extensions[self.cur_token_](self))
            elif self.cur_token_ == ";":
                continue
            else:
                raise FeatureLibError(
                    "Expected glyph class definition or statement: got {} {}".format(self.cur_token_type_, self.cur_token_),
                    self.cur_token_location_)

        self.expect_symbol_("}")
        for symtab in self.symbol_tables_:
            symtab.exit_scope()

    def collect_block_(self):
        self.expect_symbol_("{")
        tokens = [(self.cur_token_type_, self.cur_token_)]
        count = 1
        while count > 0:
            self.advance_lexer_()
            if self.cur_token_ == "{":
                count += 1
            elif self.cur_token_ == "}":
                count -= 1
            tokens.append((self.cur_token_type_, self.cur_token_))
        return tokens

    def parseDoStatement_(self):
        location = self.cur_token_location_
        substatements = []
        ifs = []
        while True:
            self.advance_lexer_()
            if self.is_cur_keyword_("forlet"):
                substatements.append(self.parseDoForLet_())
            elif self.is_cur_keyword_("forgroup") or self.is_cur_keyword_("for"):
                substatements.append(self.parseDoFor_())
            elif self.is_cur_keyword_("let"):
                substatements.append(self.parseDoLet_())
            elif self.is_cur_keyword_("if"):
                ifs.append(self.parseDoIf_())
            elif self.cur_token_ == '{':
                self.back_lexer_()
                ifs.append(self.parseEmptyIf_())
                break
            elif self.cur_token_type_ == Lexer.COMMENT:
                continue
            else:
                self.back_lexer_()
                break
        res = self.ast.Block()
        lex = self.lexer_.lexers_[-1]
        for s in self.DoIterateValues_(substatements):
            for i in ifs:
                (_, v) = next(i.items(s))
                if v:
                    lex.scope = s
                    lex.pushstack(('tokens', i.block[:]))
                    self.advance_lexer_()
                    self.advance_lexer_()
                    try:
                        import inspect # oh this is so ugly!
                        calledby = inspect.stack()[2][3] # called through a lambda since extension
                        if calledby == 'parse_block_':
                            self.parse_subblock_(res, False)
                        else:
                            self.parse_statements_block_(res)
                    except Exception as e:
                        logging.warning("In do context: " + str(s) + " lexer: " + repr(lex) + " at: " + str((self.cur_token_, self.next_token_)))
                        raise
        return res

    def DoIterateValues_(self, substatements):
        def updated(d, *a, **kw):
            d.update(*a, **kw)
            return d
        results = [{}]
        for s in substatements:
            newresults = []
            for x in results:
                for r in s.items(x):
                    c = x.copy()
                    c.update(r)
                    newresults.append(c)
            results = newresults
        for r in results:
            yield r

    def parseDoFor_(self):
        location = self.cur_token_location_
        self.advance_lexer_()
        if self.cur_token_type_ is Lexer.NAME:
            name = self.cur_token_
        else:
            raise FeatureLibError("Bad name in do for statement", location)
        self.expect_symbol_("=")
        glyphs = self.parse_glyphclass_(True)
        self.expect_symbol_(";")
        res = self.ast.DoForSubStatement(name, glyphs, location=location)
        return res

    def parseLetish_(self, callback):
        location = self.cur_token_location_
        self.advance_lexer_()
        names = []
        while self.cur_token_type_ == Lexer.NAME:
            names.append(self.cur_token_)
            if self.next_token_type_ is Lexer.SYMBOL:
                if self.next_token_ == ",":
                    self.advance_lexer_()
                elif self.next_token_ == "=":
                    break
            self.advance_lexer_()
        else:
            raise FeatureLibError("Expected '=', found '%s'" % self.cur_token_,
                                  self.cur_token_location_)
        lex = self.lexer_.lexers_[-1]
        lex.scan_over_(Lexer.CHAR_WHITESPACE_)
        start = lex.pos_
        lex.scan_until_(";")
        expr = lex.text_[start:lex.pos_]
        self.advance_lexer_()
        self.expect_symbol_(";")
        return callback(names, expr, self, location=location)

    def parseDoLet_(self):
        return self.parseLetish_(self.ast.DoLetSubStatement)

    def parseDoForLet_(self):
        return self.parseLetish_(self.ast.DoForLetSubStatement)

    def parseDoIf_(self):
        location = self.cur_token_location_
        lex = self.lexer_.lexers_[-1]
        start = lex.pos_
        lex.scan_until_(";")
        expr = self.next_token_ + " " + lex.text_[start:lex.pos_]
        self.advance_lexer_()
        self.expect_symbol_(";")
        block = self.collect_block_()
        keep = (self.next_token_type_, self.next_token_)
        block = [keep] + block + [keep]
        return self.ast.DoIfSubStatement(expr, self, block, location=location)

    def parseEmptyIf_(self):
        location = self.cur_token_location_
        lex = self.lexer_.lexers_[-1]
        start = lex.pos_
        expr = "True"
        block = self.collect_block_()
        keep = (self.next_token_type_, self.next_token_)
        block = [keep] + block + [keep]
        return self.ast.DoIfSubStatement(expr, self, block, location=location)

    def parseDefStatement_(self):
        lex = self.lexer_.lexers_[-1]
        start = lex.pos_
        lex.scan_until_("{")
        fname = self.next_token_
        fsig = fname + lex.text_[start:lex.pos_].strip()
        tag = re.escape(fname)
        _, content, location = lex.scan_anonymous_block(tag)
        self.advance_lexer_()
        start = lex.pos_
        lex.scan_until_(";")
        endtag = lex.text_[start:lex.pos_].strip()
        assert fname == endtag
        self.advance_lexer_()
        self.advance_lexer_()
        funcstr = "def " + fsig + ":\n" + content
        if astx.safeeval(funcstr):
            exec(funcstr, self.fns)
        return self.ast.Comment("# def " + fname)
433  src/silfont/ftml.py  Normal file
@@ -0,0 +1,433 @@
#!/usr/bin/env python3
|
||||
'Classes and functions for use handling FTML objects in pysilfont scripts'
|
||||
__url__ = 'https://github.com/silnrsi/pysilfont'
|
||||
__copyright__ = 'Copyright (c) 2016 SIL International (https://www.sil.org)'
|
||||
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
|
||||
__author__ = 'David Raymond'
|
||||
|
||||
from xml.etree import ElementTree as ET
|
||||
from fontTools import ttLib
|
||||
import re
|
||||
from xml.sax.saxutils import quoteattr
|
||||
import silfont.core
|
||||
import silfont.etutil as ETU
|
||||
|
||||
# Regular expression for parsing font name
|
||||
fontspec = re.compile(r"""^ # beginning of string
|
||||
(?P<rest>[A-Za-z ]+?) # Font Family Name
|
||||
\s*(?P<bold>Bold)? # Bold
|
||||
\s*(?P<italic>Italic)? # Italic
|
||||
\s*(?P<regular>Regular)? # Regular
|
||||
$""", re.VERBOSE) # end of string
|
||||
|
||||
class Fxml(ETU.ETelement) :
    def __init__(self, file = None, xmlstring = None, testgrouplabel = None, logger = None, params = None) :
        self.logger = logger if logger is not None else silfont.core.loggerobj()
        self.params = params if params is not None else silfont.core.parameters()
        self.parseerrors = None
        if not exactlyoneof(file, xmlstring, testgrouplabel) : self.logger.log("Must supply exactly one of file, xmlstring and testgrouplabel", "X")

        if testgrouplabel : # Create minimal valid ftml
            xmlstring = '<ftml version="1.0"><head></head><testgroup label=' + quoteattr(testgrouplabel) + '></testgroup></ftml>'

        if file and not hasattr(file, 'read') : self.logger.log("'file' is not a file object", "X") # ET.parse would also work on a file name, but other code assumes a file object

        try :
            if file :
                self.element = ET.parse(file).getroot()
            else :
                self.element = ET.fromstring(xmlstring)
        except Exception as e :
            self.logger.log("Error parsing FTML input: " + str(e), "S")

        super(Fxml, self).__init__(self.element)

        self.version = getattrib(self.element, "version")
        if self.version != "1.0" : self.logger.log("ftml items must have a version of 1.0", "S")

        self.process_subelements((
            ("head",      "head",       Fhead,      True, False),
            ("testgroup", "testgroups", Ftestgroup, True, True)),
            offspec = False)

        self.stylesheet = {}
        if file : # If reading from file, look to see if a stylesheet is present in xml processing instructions
            file.seek(0) # Have to re-read file since ElementTree does not support processing instructions
            for line in file :
                if line[0:2] == "<?" :
                    line = line.strip()[:-2] # Strip white space and remove trailing ?>
                    parts = line.split(" ")
                    if parts[0] == "<?xml-stylesheet" :
                        for part in parts[1:] :
                            (name, value) = part.split("=")
                            self.stylesheet[name] = value[1:-1] # Strip quotes
                        break
                else :
                    break

        self.filename = file if file else None

        if self.parseerrors :
            self.logger.log("Errors parsing ftml element:", "E")
            for error in self.parseerrors : self.logger.log("  " + error, "E")
            self.logger.log("Invalid FTML", "S")

    def save(self, file) :
        self.outxmlstr = ""
        element = self.create_element()
        etw = ETU.ETWriter(element, inlineelem = ["em"])
        self.outxmlstr = etw.serialize_xml()
        file.write(self.outxmlstr)

    def create_element(self) : # Create a new ElementTree element based on current object contents
        element = ET.Element('ftml', version = str(self.version))
        if self.stylesheet : # Create dummy .pi attribute for style sheet processing instruction
            pi = "xml-stylesheet"
            for attrib in sorted(self.stylesheet) : pi = pi + ' ' + attrib + '="' + self.stylesheet[attrib] + '"' ## Spec is not clear about what order attributes should be in
            element.attrib['.pi'] = pi
        element.append(self.head.create_element())
        for testgroup in self.testgroups : element.append(testgroup.create_element())
        return element

class Fhead(ETU.ETelement) :
    def __init__(self, parent, element) :
        self.parent = parent
        self.logger = parent.logger
        super(Fhead, self).__init__(element)

        self.process_subelements((
            ("comment",   "comment",   None,          False, False),
            ("fontscale", "fontscale", None,          False, False),
            ("fontsrc",   "fontsrc",   Ffontsrc,      False, True),
            ("styles",    "styles",    ETU.ETelement, False, False), # Initially just basic elements; Fstyles created below
            ("title",     "title",     None,          False, False),
            ("widths",    "widths",    _Fwidth,       False, False)),
            offspec = True)

        if self.fontscale is not None : self.fontscale = int(self.fontscale)
        if self.styles is not None :
            styles = {}
            for styleelem in self.styles["style"] :
                style = Fstyle(self, element = styleelem)
                styles[style.name] = style
                if style.parseerrors :
                    name = "" if style.name is None else style.name
                    self.parseerrors.append("Errors parsing style element: " + name)
                    for error in style.parseerrors : self.parseerrors.append("  " + error)
            self.styles = styles
        if self.widths is not None : self.widths = self.widths.widthsdict # Convert _Fwidth object into dict

        self.elements = dict(self._contents) # Dictionary of all elements, particularly for handling non-standard elements

    def findstyle(self, name = None, feats = None, lang = None) :
        if self.styles is not None :
            for s in self.styles :
                style = self.styles[s]
                if style.feats == feats and style.lang == lang :
                    if name is None or name == style.name : return style # If name is supplied it must match
        return None

    def addstyle(self, name, feats = None, lang = None) : # Return style if it exists, otherwise create a new style with that name
        s = self.findstyle(name, feats, lang)
        if s is None :
            if self.styles is None :
                self.styles = {}
            if name in self.styles : self.logger.log("Adding duplicate style name " + name, "X")
            s = Fstyle(self, name = name, feats = feats, lang = lang)
            self.styles[name] = s
        return s

    def create_element(self) :
        element = ET.Element('head')
        # Add in-spec sub-elements in alphabetic order
        if self.comment : x = ET.SubElement(element, 'comment') ; x.text = self.comment
        if self.fontscale : x = ET.SubElement(element, 'fontscale') ; x.text = str(self.fontscale)
        if isinstance(self.fontsrc, list) :
            # Allow multiple fontsrc
            for fontsrc in self.fontsrc :
                element.append(fontsrc.create_element())
        elif self.fontsrc is not None :
            element.append(self.fontsrc.create_element())
        if self.styles :
            x = ET.SubElement(element, 'styles')
            for style in sorted(self.styles) : x.append(self.styles[style].create_element())
        if self.title : y = ET.SubElement(element, 'title') ; y.text = self.title
        if self.widths is not None :
            x = ET.SubElement(element, 'widths')
            for width in sorted(self.widths) :
                if self.widths[width] is not None : x.set(width, self.widths[width])

        # Add any non-spec elements
        for el in sorted(self.elements) :
            if el not in ("comment", "fontscale", "fontsrc", "styles", "title", "widths") :
                for elem in self.elements[el] : element.append(elem)

        return element

class Ffontsrc(ETU.ETelement) :
    # This library only supports a single font in the fontsrc, as recommended by the FTML spec
    # Currently it only supports simple url() and local() values

    def __init__(self, parent, element = None, text = None, label = None) :
        self.parent = parent
        self.logger = parent.logger
        self.parseerrors = []

        if not exactlyoneof(element, text) : self.logger.log("Must supply exactly one of element and text", "X")

        try :
            (txt, url, local) = parsefontsrc(text, allowplain = True) if text else parsefontsrc(element.text)
        except ValueError as e :
            txt = text if text else element.text
            self.parseerrors.append(str(e) + ": " + txt)
        else :
            if text : element = ET.Element("fontsrc") ; element.text = txt
            if label : element.set('label', label)
            super(Ffontsrc, self).__init__(element)
            self.process_attributes((
                ("label", "label", False),),
                others = False)
            self.text = txt
            self.url = url
            self.local = local
            if self.local : # Parse font name to find if bold, italic etc
                results = re.match(fontspec, self.local) ## Does not cope with "-", e.g. Gentium-Bold. Should it?
                self.fontfamily = results.group('rest')
                self.bold = results.group('bold') is not None
                self.italic = results.group('italic') is not None
            else :
                self.fontfamily = None # If details are needed, call addfontinfo()

    def addfontinfo(self) : # Set fontfamily, bold and italic by looking inside the font
        (ff, bold, italic) = getfontinfo(self.url)
        self.fontfamily = ff
        self.bold = bold
        self.italic = italic

    def create_element(self) :
        element = ET.Element("fontsrc")
        element.text = self.text
        if self.label : element.set("label", self.label)
        return element

class Fstyle(ETU.ETelement) :
    def __init__(self, parent, element = None, name = None, feats = None, lang = None) :
        self.parent = parent
        self.logger = parent.logger
        if element is not None :
            if name or feats or lang : parent.logger.log("Can't supply element and other parameters", "X")
        else :
            if name is None : self.logger.log("Must supply element or name to Fstyle", "X")
            element = self.element = ET.Element("style", name = name)
            if feats is not None :
                if type(feats) is dict : feats = self.dict_to_string(feats)
                element.set('feats', feats)
            if lang is not None : element.set('lang', lang)
        super(Fstyle, self).__init__(element)

        self.process_attributes((
            ("feats", "feats", False),
            ("lang",  "lang",  False),
            ("name",  "name",  True)),
            others = False)

        if type(self.feats) is str : self.feats = self.string_to_dict(self.feats)

    def string_to_dict(self, string) : # Split string on ',', then add to dict, splitting on " " and removing quotes
        dict = {}
        for f in string.split(',') :
            f = f.strip()
            m = re.match(r'''(?P<quote>['"])(\w{4})(?P=quote)\s+(\d+|on|off)$''', f)
            if m :
                dict[m.group(2)] = m.group(3)
            else :
                self.logger.log(f'Invalid feature syntax "{f}"', 'E')
        return dict

    def dict_to_string(self, dict) :
        str = ""
        for name in sorted(dict) :
            if dict[name] is not None : str += "'" + name + "' " + dict[name] + ", "
        str = str[0:-2] # Remove final ", "
        return str

    def create_element(self) :
        element = ET.Element("style", name = self.name)
        if self.feats : element.set("feats", self.dict_to_string(self.feats))
        if self.lang : element.set("lang", self.lang)
        return element

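Fstyle round-trips its `feats` attribute between the FTML string form (e.g. `'cv24' 3, 'smcp' on`) and a dict. A minimal standalone sketch of that conversion, with hypothetical helper names (the class methods above are the real implementation):

```python
import re

def feats_string_to_dict(string):
    # Parse "'cv24' 3, 'smcp' on" into {'cv24': '3', 'smcp': 'on'};
    # entries that don't match the quoted-tag + value pattern are skipped
    feats = {}
    for f in string.split(','):
        m = re.match(r'''(?P<quote>['"])(\w{4})(?P=quote)\s+(\d+|on|off)$''', f.strip())
        if m:
            feats[m.group(2)] = m.group(3)
    return feats

def feats_dict_to_string(feats):
    # Inverse: emit tags in sorted order, skipping None values
    return ", ".join("'%s' %s" % (tag, feats[tag])
                     for tag in sorted(feats) if feats[tag] is not None)

d = feats_string_to_dict("'cv24' 3, 'smcp' on")
s = feats_dict_to_string(d)
```

Sorting on output means the string form is canonical, which is why the library can compare feats dicts when looking up existing styles.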
class _Fwidth(ETU.ETelement) : # Only used temporarily whilst parsing xml
    def __init__(self, parent, element) :
        super(_Fwidth, self).__init__(element)
        self.parent = parent
        self.logger = parent.logger

        self.process_attributes((
            ("comment",   "comment",   False),
            ("label",     "label",     False),
            ("string",    "string",    False),
            ("stylename", "stylename", False),
            ("table",     "table",     False)),
            others = False)
        self.widthsdict = {
            "comment": self.comment,
            "label": self.label,
            "string": self.string,
            "stylename": self.stylename,
            "table": self.table}

class Ftestgroup(ETU.ETelement) :
    def __init__(self, parent, element = None, label = None) :
        self.parent = parent
        self.logger = parent.logger
        if not exactlyoneof(element, label) : self.logger.log("Must supply exactly one of element and label", "X")

        if label : element = ET.Element("testgroup", label = label)

        super(Ftestgroup, self).__init__(element)

        self.subgroup = True if type(parent) is Ftestgroup else False
        self.process_attributes((
            ("background", "background", False),
            ("label",      "label",      True)),
            others = False)
        self.process_subelements((
            ("comment",   "comment",    None,       False, False),
            ("test",      "tests",      Ftest,      False, True),
            ("testgroup", "testgroups", Ftestgroup, False, True)),
            offspec = False)
        if self.subgroup and self.testgroups != [] : parent.parseerrors.append("Only one level of testgroup nesting permitted")

        # Merge any sub-testgroups into tests
        if self.testgroups != [] :
            tests = []
            tg = list(self.testgroups) # Want to preserve the original list
            for elem in self.element :
                if elem.tag == "test" :
                    tests.append(self.tests.pop(0))
                elif elem.tag == "testgroup" :
                    tests.append(tg.pop(0))
            self.tests = tests

    def create_element(self) :
        element = ET.Element("testgroup")
        if self.background : element.set("background", self.background)
        element.set("label", self.label)
        if self.comment : x = ET.SubElement(element, 'comment') ; x.text = self.comment
        for test in self.tests : element.append(test.create_element())
        return element

class Ftest(ETU.ETelement) :
    def __init__(self, parent, element = None, label = None, string = None) :
        self.parent = parent
        self.logger = parent.logger
        if not exactlyoneof(element, (label, string)) : self.logger.log("Must supply exactly one of element and label/string", "X")

        if label :
            element = ET.Element("test", label = label)
            x = ET.SubElement(element, "string") ; x.text = string

        super(Ftest, self).__init__(element)

        self.process_attributes((
            ("background", "background", False),
            ("label",      "label",      True),
            ("rtl",        "rtl",        False),
            ("stylename",  "stylename",  False)),
            others = False)

        self.process_subelements((
            ("comment", "comment", None,     False, False),
            ("string",  "string",  _Fstring, True,  False)),
            offspec = False)

        self.string = self.string.string # self.string is initially a temporary _Fstring element

    def str(self, noems = False) : # Return formatted version of string
        string = self.string
        if noems :
            string = string.replace("<em>", "")
            string = string.replace("</em>", "")
        return string ## Other formatting options to be added as needed, cf ftml2odt

    def create_element(self) :
        element = ET.Element("test")
        if self.background : element.set("background", self.background)
        element.set("label", self.label)
        if self.rtl : element.set("rtl", self.rtl)
        if self.stylename : element.set("stylename", self.stylename)
        if self.comment : x = ET.SubElement(element, "comment") ; x.text = self.comment
        x = ET.SubElement(element, "string") ; x.text = self.string

        return element

class _Fstring(ETU.ETelement) : # Only used temporarily whilst parsing xml
    def __init__(self, parent, element = None) :
        self.parent = parent
        self.logger = parent.logger
        super(_Fstring, self).__init__(element)
        self.process_subelements((("em", "em", ETU.ETelement, False, True),), offspec = False)
        # Need to build text of string to include <em> subelements
        self.string = element.text if element.text else ""
        for em in self.em :
            self.string += "<em>{}</em>{}".format(em.element.text, em.element.tail)

def getattrib(element, attrib) :
    return element.attrib[attrib] if attrib in element.attrib else None

def exactlyoneof(*args) : # Check one and only one of args is not None

    last = args[-1]            # Check if last argument is a tuple - in which case
    if type(last) is tuple :   # either all or none of the tuple's items must be None
        for test in last[1:] :
            if (test is None) != (last[0] is None) : return False
        args = list(args)      # Convert to list so the last value can be changed
        args[-1] = last[0]     # Now valid to test on any item in the tuple

    one = False
    for test in args :
        if test is not None :
            if one : return False # Already found one that is not None
            one = True
    return one
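The trailing-tuple handling above treats a tuple argument as a single slot whose members must be either all None or all set (this is how `Ftest` validates its paired `label`/`string` parameters). A standalone re-implementation for illustration, not the function the library uses:

```python
def exactly_one_of(*args):
    # Exactly one argument may be non-None; a trailing tuple counts as one
    # slot, and its members must all agree (all None or all non-None).
    last = args[-1]
    if isinstance(last, tuple):
        if any((t is None) != (last[0] is None) for t in last[1:]):
            return False                      # mixed tuple is always invalid
        args = args[:-1] + (last[0],)         # represent the tuple by its first item
    return sum(a is not None for a in args) == 1

r1 = exactly_one_of("file", None, None)       # one slot set
r2 = exactly_one_of(None, None, None)         # nothing set
r3 = exactly_one_of(None, ("label", "str"))   # tuple fully set
r4 = exactly_one_of(None, ("label", None))    # tuple partially set
```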

def parsefontsrc(text, allowplain = False) : # Check fontsrc text is valid and return normalised text, url and local values
    ''' - if multiple (fallback) fonts are specified, just process the first one
        - just handles simple url() or local() formats
        - if allowplain is set, allows text without url() or local() and decides which based on "." in text '''
    text = text.split(",")[0] # If multiple (fallback) fonts are specified, just process the first one
    #if allowplain and not re.match(r"^(url|local)[(][^)]+[)]",text) : # Allow for text without url() or local() form
    if allowplain and "(" not in text : # Allow for text without url() or local() form
        if "." in text :
            type = "url"
        else :
            type = "local"
    else :
        type = text.split("(")[0]
        if type in ("url", "local") :
            text = text.split("(")[1][:-1].strip()
        else :
            raise ValueError("Invalid fontsrc string")
    if type == "url" :
        return ("url(" + text + ")", text, None)
    else :
        return ("local(" + text + ")", None, text)
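The normalisation performed here can be seen in a trimmed standalone sketch (illustrative only; the library function above is the real one):

```python
def parse_fontsrc(text, allowplain=False):
    # Returns (normalised_text, url, local); only the first fallback font is used
    text = text.split(",")[0]
    if allowplain and "(" not in text:
        # Bare text: a "." suggests a file path (url), otherwise a font name (local)
        kind = "url" if "." in text else "local"
    else:
        kind = text.split("(")[0]
        if kind not in ("url", "local"):
            raise ValueError("Invalid fontsrc string")
        text = text.split("(")[1][:-1].strip()
    if kind == "url":
        return ("url(" + text + ")", text, None)
    return ("local(" + text + ")", None, text)

u = parse_fontsrc("url(../results/Font.ttf)")
l = parse_fontsrc("Gentium Plus", allowplain=True)
```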

def getfontinfo(filename) : # Peek inside the font for the name, weight, style
    f = ttLib.TTFont(filename)
    # Take name from name table: NameID 1, platform ID 3, encoding ID 1 (possible fallback: platform ID 1, encoding ID 0)
    n = f['name'] # name table from font
    fontname = n.getName(1, 3, 1).toUnicode() # NameID 1 = Font Family name
    # Take bold and italic info from OS/2 table, fsSelection bits 0 and 5
    o = f['OS/2'] # OS/2 table
    italic = (o.fsSelection & 1) > 0
    bold = (o.fsSelection & 32) > 0
    return (fontname, bold, italic)

750
src/silfont/ftml_builder.py
Normal file

@@ -0,0 +1,750 @@
#!/usr/bin/env python3
"""classes and functions for building ftml tests from glyph_data.csv and UFO"""
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2018 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Bob Hallissy'

from silfont.ftml import Fxml, Ftestgroup, Ftest, Ffontsrc
from palaso.unicode.ucd import get_ucd
from itertools import product
import re
import collections.abc

# This module comprises two related functionalities:
# 1. The FTML object which acts as a staging object for ftml test data. The methods of this class
#    permit a gradual build-up of an ftml file, e.g.,
#
#        startTestGroup(...)
#        setFeatures(...)
#        addToTest(...)
#        addToTest(...)
#        clearFeatures(...)
#        setLang(...)
#        addToTest(...)
#        closeTestGroup(...)
#        ...
#        writeFile(...)
#
#    The module is clever enough, for example, to automatically close a test when changing features, languages or direction.
#
# 2. The FTMLBuilder object which reads and processes glyph_data.csv and provides assistance in iterating over
#    the characters, features, and languages that should be supported by the font, e.g.:
#
#        ftml.startTestGroup('Encoded characters')
#        for uid in sorted(builder.uids()):
#            if uid < 32: continue
#            c = builder.char(uid)
#            for featlist in builder.permuteFeatures(uids=[uid]):
#                ftml.setFeatures(featlist)
#                builder.render([uid], ftml)
#            ftml.clearFeatures()
#            for langID in sorted(c.langs):
#                ftml.setLang(langID)
#                builder.render([uid], ftml)
#            ftml.clearLang()
#
# See examples/psfgenftml.py for ideas

class FTML(object):
    """a staging class for collecting ftml content and finally writing the xml"""

    # Assumes no nesting of test groups

    def __init__(self, title, logger, comment = None, fontsrc = None, fontlabel = None, fontscale = None,
                 widths = None, rendercheck = True, xslfn = None, defaultrtl = False):
        self.logger = logger
        # Initialize an Fxml object
        fxml = Fxml(testgrouplabel = "dummy")
        fxml.stylesheet = {'type': 'text/xsl', 'href': xslfn if xslfn is not None else 'ftml.xsl'}
        fxml.head.title = title
        fxml.head.comment = comment
        if isinstance(fontsrc, (tuple, list)):
            # Allow multiple fontsrc
            fxml.head.fontsrc = [Ffontsrc(fxml.head, text = fontsrc,
                                          label = fontlabel[i] if fontlabel is not None and i < len(fontlabel) else None)
                                 for i, fontsrc in enumerate(fontsrc)]
        elif fontsrc:
            fxml.head.fontsrc = Ffontsrc(fxml.head, text = fontsrc, label = fontlabel)

        if fontscale: fxml.head.fontscale = int(fontscale)
        if widths: fxml.head.widths = widths
        fxml.testgroups.pop() # Remove dummy test group
        # Save object
        self._fxml = fxml
        # Initialize state
        self._curTest = None
        self.closeTestGroup()
        self.defaultRTL = defaultrtl
        # Add first testgroup if requested
        if rendercheck:
            self.startTestGroup("Rendering Check", background = "#F0F0F0")
            self.addToTest(None, "RenderingUnknown", "check", rtl = False)
            self.closeTest()
            self.closeTestGroup()

    _colorMap = {
        'aqua': '#00ffff',
        'black': '#000000',
        'blue': '#0000ff',
        'fuchsia': '#ff00ff',
        'green': '#008000',
        'grey': '#808080',
        'lime': '#00ff00',
        'maroon': '#800000',
        'navy': '#000080',
        'olive': '#808000',
        'purple': '#800080',
        'red': '#ff0000',
        'silver': '#c0c0c0',
        'teal': '#008080',
        'white': '#ffffff',
        'yellow': '#ffff00',
        'orange': '#ffa500'
    }

    # Note: an instance method (upstream declares this as a @staticmethod, but it calls self.logger)
    def _getColor(self, color):
        if color is None or len(color) == 0:
            return None
        color = color.lower()
        if color in FTML._colorMap:
            return FTML._colorMap[color]
        if re.match(r'#[0-9a-f]{6}$', color):
            return color
        self.logger.log(f'Color "{color}" not understood; ignored', 'W')
        return None

    def closeTest(self, comment = None):
        if self._curTest:
            if comment is not None:
                self._curTest.comment = comment
            if self._curColor:
                self._curTest.background = self._curColor
        self._curTest = None
        self._lastUID = None
        self._lastRTL = None

    def addToTest(self, uid, s = "", label = None, comment = None, rtl = None):
        if rtl is None: rtl = self.defaultRTL
        if (self._lastUID and uid and uid not in range(self._lastUID, self._lastUID + 2)) \
                or (self._lastRTL is not None and rtl != self._lastRTL):
            self.closeTest()
        self._lastUID = uid
        self._lastRTL = rtl
        if self._curTestGroup is None:
            # Create a new Ftestgroup
            self.startTestGroup("Group")
        if self._curTest is None:
            # Create a new Ftest
            if label is None:
                label = "U+{0:04X}".format(uid) if uid is not None else "test"
            test = Ftest(self._curTestGroup, label = label, string = '')
            if comment:
                test.comment = comment
            if rtl: test.rtl = "True"
            # Construct stylename and add style if needed:
            x = ['{}_{}'.format(t, v) for t, v in self._curFeatures.items()] if self._curFeatures else []
            if self._curLang:
                x.insert(0, self._curLang)
            if len(x):
                test.stylename = '_'.join(x)
                self._fxml.head.addstyle(test.stylename, feats = self._curFeatures, lang = self._curLang)
            # Append to current test group
            self._curTestGroup.tests.append(test)
            self._curTest = test
        if len(self._curTest.string): self._curTest.string += ' '
        # Special hack until we get to python3 with full unicode support
        self._curTest.string += ''.join([c if ord(c) < 128 else '\\u{0:06X}'.format(ord(c)) for c in s])
        # self._curTest.string += s

    def setFeatures(self, features):
        # features can be None or a list; list elements can be:
        #     None
        #     a feature setting in the form [tag, value]
        if features is None:
            return self.clearFeatures()
        features = [x for x in features if x]
        if len(features) == 0:
            return self.clearFeatures()
        features = dict(features) # Convert to a dictionary -- this is what we'll keep.
        if features != self._curFeatures:
            self.closeTest()
        self._curFeatures = features

    def clearFeatures(self):
        if self._curFeatures is not None:
            self.closeTest()
        self._curFeatures = None

    def setLang(self, langID):
        if langID != self._curLang:
            self.closeTest()
        self._curLang = langID

    def clearLang(self):
        if self._curLang:
            self.closeTest()
        self._curLang = None

    def setBackground(self, color):
        color = self._getColor(color)
        if color != self._curColor:
            self.closeTest()
        self._curColor = color

    def clearBackground(self):
        if self._curColor is not None:
            self.closeTest()
        self._curColor = None

    def closeTestGroup(self):
        self.closeTest()
        self._curTestGroup = None
        self._curFeatures = None
        self._curLang = None
        self._curColor = None

    def startTestGroup(self, label, background = None):
        if self._curTestGroup is not None:
            if label == self._curTestGroup.label:
                return
            self.closeTestGroup()
        # Add new test group
        self._curTestGroup = Ftestgroup(self._fxml, label = label)
        background = self._getColor(background)
        if background is not None:
            self._curTestGroup.background = background

        # Append to root test groups
        self._fxml.testgroups.append(self._curTestGroup)

    def writeFile(self, output):
        self.closeTestGroup()
        self._fxml.save(output)

class Feature(object):
    """abstraction of a feature"""

    def __init__(self, tag):
        self.tag = tag
        self.default = 0
        self.maxval = 1
        self._tvlist = None

    def __getattr__(self, name):
        if name == "tvlist":
            # tvlist is a list of all possible tag,value pairs (except the default but including None) for this feature.
            # This attribute shouldn't be needed until all the possible feature values are known,
            # therefore we generate it the first time we need it and save it.
            if self._tvlist is None:
                self._tvlist = [None]
                for v in range(0, self.maxval + 1):
                    if v != self.default:
                        self._tvlist.append([self.tag, str(v)])
            return self._tvlist
        raise AttributeError(name)

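For a feature with default 0 and maxval N, `tvlist` enumerates None (feature unset) plus every non-default [tag, value] pair. A standalone sketch of that enumeration as a plain function (not the lazy class attribute above):

```python
def feature_tvlist(tag, default=0, maxval=1):
    # None represents "feature unset"; each other entry is a [tag, value] pair
    # for one non-default setting of the feature
    tvlist = [None]
    for v in range(0, maxval + 1):
        if v != default:
            tvlist.append([tag, str(v)])
    return tvlist

tv = feature_tvlist('cv24', default=0, maxval=3)
```

Scripts typically feed lists like this through `itertools.product` to permute several features at once, which is why None is included as an explicit member.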
class FChar(object):
    """abstraction of an encoded glyph in the font"""

    def __init__(self, uids, basename, logger):
        self.logger = logger
        # uids can be a singleton integer or, for multiple-encoded glyphs, some kind of sequence of integers
        if isinstance(uids, collections.abc.Sequence):
            uids1 = uids
        else:
            uids1 = (uids,)
        # test each uid to make sure valid; remove if not.
        uids2 = []
        self.general = "unknown"
        for uid in uids1:
            try:
                gc = get_ucd(uid, 'gc')
                if self.general == "unknown":
                    self.general = gc
                uids2.append(uid)
            except (TypeError, IndexError):
                self.logger.log(f'Invalid USV "{uid}" -- ignored.', 'E')
                continue
            except KeyError:
                self.logger.log('USV %04X not defined; no properties known' % uid, 'W')
        # make sure there's at least one left
        assert len(uids2) > 0, f'No valid USVs found in {repr(uids)}'
        self._uids = tuple(uids2)
        self.basename = basename
        self.feats = set()   # feat tags that affect this char
        self.langs = set()   # lang tags that affect this char
        self.aps = set()
        self.altnames = {}   # alternate glyph names;
        # the above is a dict keyed by either:
        #     lang tag, e.g., 'ur', or
        #     feat tag and value, e.g., 'cv24=3'
        # and returns the glyph name for that alternate.
        # Additional info from UFO:
        self.takesMarks = self.isMark = self.isBase = self.notInUFO = False

    # Most callers don't need to support or care about multiple-encoded glyphs, so we
    # support the old .uid attribute by returning the first (I guess we consider it primary) uid.
    def __getattr__(self, name):
        if name == 'uids':
            return self._uids
        elif name == 'uid':
            return self._uids[0]
        else:
            raise AttributeError

    # the static method FTMLBuilder.checkGlyph is likely preferred
    # but leave this instance method for backwards compatibility
    def checkGlyph(self, gname, font, apRE):
        # glean info from UFO if glyph is present
        if gname in font.deflayer:
            self.notInUFO = False
            for a in font.deflayer[gname]['anchor']:
                name = a.element.get('name')
                if apRE.match(name) is None:
                    continue
                self.aps.add(name)
                if name.startswith("_"):
                    self.isMark = True
                else:
                    self.takesMarks = True
            self.isBase = self.takesMarks and not self.isMark
        else:
            self.notInUFO = True

class FSpecial(object):
    """abstraction of a ligature or other interesting sequence"""

    # Similar to FChar but takes a uid list rather than a single uid
    def __init__(self, uids, basename, logger):
        self.logger = logger
        self.uids = uids
        self.basename = basename
        # a couple of properties based on the first uid:
        try:
            self.general = get_ucd(uids[0], 'gc')
        except KeyError:
            self.logger.log('USV %04X not defined; no properties known' % uids[0], 'W')
        self.feats = set()   # feat tags that affect this char
        self.aps = set()
        self.langs = set()   # lang tags that affect this char
        self.altnames = {}   # alternate glyph names
        self.takesMarks = self.isMark = self.isBase = self.notInUFO = False

class FTMLBuilder(object):
    """glyph_data and UFO processing for building FTML"""

    def __init__(self, logger, incsv = None, fontcode = None, font = None, langs = None, rtlenable = False, ap = None):
        self.logger = logger
        self.rtlEnable = rtlenable

        # Default diacritic base:
        self.diacBase = 0x25CC

        # Default joinBefore and joinAfter sequence
        self.joinBefore = '\u200D' # put before a sequence to force joining shape; def = zwj
        self.joinAfter = '\u200D'  # put after a sequence to force joining shape; def = zwj

        # Dict mapping tag to Feature
        self.features = {}

        # Set of all languages seen
        if langs is not None:
            # Use a list so we keep the order (assuming caller wouldn't give us dups)
            self.allLangs = list(re.split(r'\s*[\s,]\s*', langs)) # Allow comma- or space-separated tags
            self._langsComplete = True # We have all the lang tags desired
        else:
            # use a set because the langtags are going to dribble in and be repeated.
            self.allLangs = set()
            self._langsComplete = False # Add lang_tags from glyph_data

        # Be able to find chars and specials:
        self._charFromUID = {}
        self._charFromBasename = {}
        self._specialFromUIDs = {}
        self._specialFromBasename = {}

        # list of USVs that are in the CSV but whose glyphs are not in the UFO
        self.uidsMissingFromUFO = set()

        # DummyUSV (see charAuto())
        self.curDummyUSV = 0x100000 # Supplemental Private Use Area-B

        # Compile --ap parameter
        if ap is None:
            ap = "."
        try:
            self.apRE = re.compile(ap)
        except re.error as e:
            logger.log("--ap parameter '{}' doesn't compile as regular expression: {}".format(ap, e), "S")

        if incsv is not None:
            self.readGlyphData(incsv, fontcode, font)

    def addChar(self, uids, basename):
        # Add an FChar
        # assume parameters are OK:
        c = FChar(uids, basename, self.logger)
        # fatal error if the basename or any of the uids have already been seen
        fatal = False
        for uid in c.uids:
            if uid in self._charFromUID:
                self.logger.log('Attempt to add duplicate USV %04X' % uid, 'E')
                fatal = True
            self._charFromUID[uid] = c
        if basename in self._charFromBasename:
            self.logger.log('Attempt to add duplicate basename %s' % basename, 'E')
            fatal = True
        self._charFromBasename[basename] = c
        if fatal:
            self.logger.log('Cannot continue due to previous errors', 'S')
        return c

    def uids(self):
        """ returns list of uids in glyph_data """
        return self._charFromUID.keys()

    def char(self, x):
        """ finds an FChar based on either basename or uid;
            generates KeyError if not found."""
        return self._charFromBasename[x] if isinstance(x, str) else self._charFromUID[x]

    def charAuto(self, x):
        """ Like char() but will issue a warning and add a dummy """
        try:
            return self._charFromBasename[x] if isinstance(x, str) else self._charFromUID[x]
        except KeyError:
            # Issue error message and create dummy Char object for this character
            if isinstance(x, str):
                self.logger.log(f'Glyph "{x}" isn\'t in glyph_data.csv - adding dummy', 'E')
                while self.curDummyUSV in self._charFromUID:
                    self.curDummyUSV += 1
|
||||
c = self.addChar(self.curDummyUSV, x)
|
||||
else:
|
||||
self.logger.log(f'Char U+{x:04x} isn\'t in glyph_data.csv - adding dummy', 'E')
|
||||
c = self.addChar(x, f'U+{x:04x}')
|
||||
return c
|
||||
|
||||
def addSpecial(self, uids, basename):
|
||||
# Add an FSpecial:
|
||||
# fatal error if basename has already been seen:
|
||||
if basename in self._specialFromBasename:
|
||||
self.logger.log('Attempt to add duplicate basename %s' % basename, 'S')
|
||||
c = FSpecial(uids, basename, self.logger)
|
||||
# remember it:
|
||||
self._specialFromUIDs[tuple(uids)] = c
|
||||
self._specialFromBasename[basename] = c
|
||||
return c
|
||||
|
||||
def specials(self):
|
||||
"""returns a list of the basenames of specials"""
|
||||
return self._specialFromBasename.keys()
|
||||
|
||||
def special(self, x):
|
||||
""" finds an FSpecial based either basename or uid sequence;
|
||||
generates KeyError if not found."""
|
||||
return self._specialFromBasename[x] if isinstance(x, str) else self._specialFromUIDs[tuple(x)]
|
||||
|
||||
def _csvWarning(self, msg, exception = None):
|
||||
m = "glyph_data line {1}: {0}".format(msg, self.incsv.line_num)
|
||||
if exception is not None:
|
||||
m += '; ' + str(exception)
|
||||
self.logger.log(m, 'W')
|
||||
|
||||
def readGlyphData(self, incsv, fontcode = None, font = None):
|
||||
# Remember csv file for other methods:
|
||||
self.incsv = incsv
|
||||
|
||||
# Validate fontcode, if provided
|
||||
if fontcode is not None:
|
||||
whichfont = fontcode.strip().lower()
|
||||
if len(whichfont) != 1:
|
||||
self.logger.log('fontcode must be a single letter', 'S')
|
||||
else:
|
||||
whichfont = None
|
||||
|
||||
# Get headings from csvfile:
|
||||
fl = incsv.firstline
|
||||
if fl is None: self.logger.log("Empty input file", "S")
|
||||
# required columns:
|
||||
try:
|
||||
nameCol = fl.index('glyph_name');
|
||||
usvCol = fl.index('USV')
|
||||
except ValueError as e:
|
||||
self.logger.log('Missing csv input field: ' + str(e), 'S')
|
||||
except Exception as e:
|
||||
self.logger.log('Error reading csv input field: ' + str(e), 'S')
|
||||
# optional columns:
|
||||
# If -f specified, make sure we have the fonts column
|
||||
if whichfont is not None:
|
||||
if 'Fonts' not in fl: self.logger.log('-f requires "Fonts" column in glyph_data', 'S')
|
||||
fontsCol = fl.index('Fonts')
|
||||
# Allow for projects that use only production glyph names (ps_name same as glyph_name)
|
||||
psCol = fl.index('ps_name') if 'ps_name' in fl else nameCol
|
||||
# Allow for projects that have no feature and/or lang-specific behaviors
|
||||
featCol = fl.index('Feat') if 'Feat' in fl else None
|
||||
bcp47Col = fl.index('bcp47tags') if 'bcp47tags' in fl else None
|
||||
|
||||
next(incsv.reader, None) # Skip first line with headers
|
||||
|
||||
# RE that matches names of glyphs we don't care about
|
||||
namesToSkipRE = re.compile('^(?:[._].*|null|cr|nonmarkingreturn|tab|glyph_name)$',re.IGNORECASE)
|
||||
|
||||
# RE that matches things like 'cv23' or 'cv23=4' or 'cv23=2,3'
|
||||
featRE = re.compile('^(\w{2,4})(?:=([\d,]+))?$')
|
||||
|
||||
# RE that matches USV sequences for ligatures
|
||||
ligatureRE = re.compile('^[0-9A-Fa-f]{4,6}(?:_[0-9A-Fa-f]{4,6})+$')
|
||||
|
||||
# RE that matches space-separated USV sequences
|
||||
USVsRE = re.compile('^[0-9A-Fa-f]{4,6}(?:\s+[0-9A-Fa-f]{4,6})*$')
|
||||
|
||||
# keep track of glyph names we've seen to detect duplicates
|
||||
namesSeen = set()
|
||||
psnamesSeen = set()
|
||||
|
||||
# OK, process all records in glyph_data
|
||||
for line in incsv:
|
||||
gname = line[nameCol].strip()
|
||||
|
||||
# things to ignore:
|
||||
if namesToSkipRE.match(gname):
|
||||
continue
|
||||
if whichfont is not None and line[fontsCol] != '*' and line[fontsCol].lower().find(whichfont) < 0:
|
||||
continue
|
||||
if len(gname) == 0:
|
||||
self._csvWarning('empty glyph name in glyph_data; ignored')
|
||||
continue
|
||||
if gname.startswith('#'):
|
||||
continue
|
||||
if gname in namesSeen:
|
||||
self._csvWarning('glyph name %s previously seen in glyph_data; ignored' % gname)
|
||||
continue
|
||||
|
||||
psname = line[psCol].strip() or gname # If psname absent, working name will be production name
|
||||
if psname in psnamesSeen:
|
||||
self._csvWarning('psname %s previously seen; ignored' % psname)
|
||||
continue
|
||||
namesSeen.add(gname)
|
||||
psnamesSeen.add(psname)
|
||||
|
||||
# compute basename-- the glyph name without extensions:
|
||||
basename = gname.split('.',1)[0]
|
||||
|
||||
# Process USV(s)
|
||||
# could be empty string, a single USV, space-separated list of USVs for multiple encoding,
|
||||
# or underscore-connected USVs indicating ligatures.
|
||||
|
||||
usvs = line[usvCol].strip()
|
||||
if len(usvs) == 0:
|
||||
# Empty USV field, unencoded glyph
|
||||
usvs = ()
|
||||
elif USVsRE.match(usvs):
|
||||
# space-separated hex values:
|
||||
usvs = usvs.split()
|
||||
isLigature = False
|
||||
elif ligatureRE.match(usvs):
|
||||
# '_' separated hex values (ligatures)
|
||||
usvs = usvs.split('_')
|
||||
isLigature = True
|
||||
else:
|
||||
self._csvWarning(f"invalid USV field '{usvs}'; ignored")
|
||||
usvs = ()
|
||||
uids = [int(x, 16) for x in usvs]
|
||||
|
||||
if len(uids) == 0:
|
||||
# Handle unencoded glyphs
|
||||
uids = None # Prevents using this record to set default feature values
|
||||
if basename in self._charFromBasename:
|
||||
c = self._charFromBasename[basename]
|
||||
# Check for additional AP info
|
||||
c.checkGlyph(gname, font, self.apRE)
|
||||
elif basename in self._specialFromBasename:
|
||||
c = self._specialFromBasename[basename]
|
||||
else:
|
||||
self._csvWarning('unencoded variant %s found before encoded glyph' % gname)
|
||||
c = None
|
||||
elif isLigature:
|
||||
# Handle ligatures
|
||||
c = self.addSpecial(uids, basename)
|
||||
uids = None # Prevents using this record to set default feature values (TODO: Research this)
|
||||
else:
|
||||
# Handle simple encoded glyphs (could be multiple uids!)
|
||||
# Create character object
|
||||
c = self.addChar(uids, basename)
|
||||
if font is not None:
|
||||
# Examine APs to determine if this character takes marks:
|
||||
c.checkGlyph(gname, font, self.apRE)
|
||||
if c.notInUFO:
|
||||
self.uidsMissingFromUFO.update(uids)
|
||||
|
||||
if featCol is not None:
|
||||
feats = line[featCol].strip()
|
||||
if len(feats) > 0 and not(feats.startswith('#')):
|
||||
feats = feats.split(';')
|
||||
for feat in feats:
|
||||
m = featRE.match(feat)
|
||||
if m is None:
|
||||
self._csvWarning('incorrectly formed feature specification "%s"; ignored' % feat)
|
||||
else:
|
||||
# find/create structure for this feature:
|
||||
tag = m.group(1)
|
||||
try:
|
||||
feature = self.features[tag]
|
||||
except KeyError:
|
||||
feature = Feature(tag)
|
||||
self.features[tag] = feature
|
||||
# if values supplied, collect default and maximum values for this feature:
|
||||
if m.group(2) is not None:
|
||||
vals = [int(i) for i in m.group(2).split(',')]
|
||||
if len(vals) > 0:
|
||||
if uids is not None:
|
||||
feature.default = vals[0]
|
||||
elif len(feats) == 1: # TODO: This seems like wrong test.
|
||||
for v in vals:
|
||||
# remember the glyph name for this feature/value combination:
|
||||
feat = '{}={}'.format(tag,v)
|
||||
if c is not None and feat not in c.altnames:
|
||||
c.altnames[feat] = gname
|
||||
vals.append(feature.maxval)
|
||||
feature.maxval = max(vals)
|
||||
if c is not None:
|
||||
# Record that this feature affects this character:
|
||||
c.feats.add(tag)
|
||||
else:
|
||||
self._csvWarning('untestable feature "%s" : no known USV' % tag)
|
||||
|
||||
if bcp47Col is not None:
|
||||
bcp47 = line[bcp47Col].strip()
|
||||
if len(bcp47) > 0 and not(bcp47.startswith('#')):
|
||||
if c is not None:
|
||||
for tag in re.split(r'\s*[\s,]\s*', bcp47): # Allow comma- or space-separated tags
|
||||
c.langs.add(tag) # lang-tags mentioned for this character
|
||||
if not self._langsComplete:
|
||||
self.allLangs.add(tag) # keep track of all possible lang-tags
|
||||
else:
|
||||
self._csvWarning('untestable langs: no known USV')
|
||||
|
||||
# We're finally done, but if allLangs is a set, let's order it (for lack of anything better) and make a list:
|
||||
if not self._langsComplete:
|
||||
self.allLangs = list(sorted(self.allLangs))
|
||||
|
||||
def permuteFeatures(self, uids = None, feats = None):
|
||||
""" returns an iterator that provides all combinations of feature/value pairs, for a list of uids and/or a specific list of feature tags"""
|
||||
feats = set(feats) if feats is not None else set()
|
||||
if uids is not None:
|
||||
for uid in uids:
|
||||
if uid in self._charFromUID:
|
||||
feats.update(self._charFromUID[uid].feats)
|
||||
l = [self.features[tag].tvlist for tag in sorted(feats)]
|
||||
return product(*l)
|
||||
|
||||
@staticmethod
|
||||
def checkGlyph(obj, gname, font, apRE):
|
||||
# glean info from UFO if glyph is present
|
||||
if gname in font.deflayer:
|
||||
obj.notInUFO = False
|
||||
for a in font.deflayer[gname]['anchor']:
|
||||
name = a.element.get('name')
|
||||
if apRE.match(name) is None:
|
||||
continue
|
||||
obj.aps.add(name)
|
||||
if name.startswith("_"):
|
||||
obj.isMark = True
|
||||
else:
|
||||
obj.takesMarks = True
|
||||
obj.isBase = obj.takesMarks and not obj.isMark
|
||||
else:
|
||||
obj.notInUFO = True
|
||||
|
||||
@staticmethod
|
||||
def matchMarkBase(c_mark, c_base):
|
||||
""" test whether an _AP on c_mark matches an AP on c_base """
|
||||
for apM in c_mark.aps:
|
||||
if apM.startswith("_"):
|
||||
ap = apM[1:]
|
||||
for apB in c_base.aps:
|
||||
if apB == ap:
|
||||
return True
|
||||
return False
|
||||
|
||||
def render(self, uids, ftml, keyUID = 0, addBreaks = True, rtl = None, dualJoinMode = 3, label = None, comment = None):
|
||||
""" general purpose (but not required) function to generate ftml for a character sequence """
|
||||
if len(uids) == 0:
|
||||
return
|
||||
# Make a copy so we don't affect caller
|
||||
uids = list(uids)
|
||||
# Remember first uid and original length for later
|
||||
startUID = uids[0]
|
||||
uidLen = len(uids)
|
||||
# if keyUID wasn't supplied, use startUID
|
||||
if keyUID == 0: keyUID = startUID
|
||||
if label is None:
|
||||
# Construct label from uids:
|
||||
label = '\n'.join(['U+{0:04X}'.format(u) for u in uids])
|
||||
if comment is None:
|
||||
# Construct comment from glyph names:
|
||||
comment = ' '.join([self._charFromUID[u].basename for u in uids])
|
||||
# see if uid list includes a mirrored char
|
||||
hasMirrored = bool(len([x for x in uids if get_ucd(x,'Bidi_M')]))
|
||||
# Analyze first and last joining char
|
||||
joiningChars = [x for x in uids if get_ucd(x, 'jt') != 'T']
|
||||
if len(joiningChars):
|
||||
# If first or last non-TRANSPARENT char is a joining char, then we need to emit examples with zwj
|
||||
# Assumes any non-TRANSPARENT char that is bc != L must be a rtl character of some sort
|
||||
uid = joiningChars[0]
|
||||
zwjBefore = (get_ucd(uid,'jt') == 'D'
|
||||
or (get_ucd(uid,'bc') == 'L' and get_ucd(uid,'jt') == 'L')
|
||||
or (get_ucd(uid,'bc') != 'L' and get_ucd(uid,'jt') == 'R'))
|
||||
uid = joiningChars[-1]
|
||||
zwjAfter = (get_ucd(uid,'jt') == 'D'
|
||||
or (get_ucd(uid,'bc') == 'L' and get_ucd(uid,'jt') == 'R')
|
||||
or (get_ucd(uid,'bc') != 'L' and get_ucd(uid,'jt') == 'L'))
|
||||
else:
|
||||
zwjBefore = zwjAfter = False
|
||||
if get_ucd(startUID,'gc') == 'Mn':
|
||||
# First char is a NSM... prefix a suitable base
|
||||
uids.insert(0, self.diacBase)
|
||||
zwjBefore = False # No longer any need to put zwj before
|
||||
elif get_ucd(startUID, 'WSpace'):
|
||||
# First char is whitespace -- prefix with baseline brackets:
|
||||
uids.insert(0, 0xF130)
|
||||
lastNonMark = [x for x in uids if get_ucd(x,'gc') != 'Mn'][-1]
|
||||
if get_ucd(lastNonMark, 'WSpace'):
|
||||
# Last non-mark is whitespace -- append baseline brackets:
|
||||
uids.append(0xF131)
|
||||
s = ''.join([chr(uid) for uid in uids])
|
||||
if zwjBefore or zwjAfter:
|
||||
# Show contextual forms:
|
||||
# Start with isolate
|
||||
t = u'{0} '.format(s)
|
||||
if zwjBefore and zwjAfter:
|
||||
# For sequences that show dual-joining behavior, what we show depends on dualJoinMode:
|
||||
if dualJoinMode & 1:
|
||||
# show initial, medial, final separated by space:
|
||||
t += u'{0}{2} {1}{0}{2} {1}{0} '.format(s, self.joinBefore, self.joinAfter)
|
||||
if dualJoinMode & 2:
|
||||
# show 3 joined forms in sequence:
|
||||
t += u'{0}{0}{0} '.format(s)
|
||||
elif zwjAfter:
|
||||
t += u'{0}{1} '.format(s, self.joinAfter)
|
||||
elif zwjBefore:
|
||||
t += u'{1}{0} '.format(s, self.joinBefore)
|
||||
if addBreaks: ftml.closeTest()
|
||||
ftml.addToTest(keyUID, t, label = label, comment = comment, rtl = rtl)
|
||||
if addBreaks: ftml.closeTest()
|
||||
elif hasMirrored and self.rtlEnable:
|
||||
# Contains mirrored and rtl enabled:
|
||||
if addBreaks: ftml.closeTest()
|
||||
ftml.addToTest(keyUID, u'{0} LTR: \u202A{0}\u202C RTL: \u202B{0}\u202C'.format(s), label = label, comment = comment, rtl = rtl)
|
||||
if addBreaks: ftml.closeTest()
|
||||
# elif is LRE, RLE, PDF
|
||||
# elif is LRI, RLI, FSI, PDI
|
||||
elif uidLen > 1:
|
||||
ftml.addToTest(keyUID, s , label = label, comment = comment, rtl = rtl)
|
||||
else:
|
||||
ftml.addToTest(keyUID, s , comment = comment, rtl = rtl)
|
||||
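The `permuteFeatures` method above boils down to `itertools.product` over per-feature `(tag, value)` lists (each feature's `tvlist`). A minimal standalone sketch of that idea — the feature data here is hypothetical, not taken from pysilfont:

```python
from itertools import product

# Each feature contributes a list of (tag, value) pairs; None means "feature at default".
# These tvlist values are hypothetical stand-ins for Feature.tvlist in the code above.
tvlists = [
    [None, ('cv23', 1), ('cv23', 2)],
    [None, ('smcp', 1)],
]

# product(*tvlists) yields every combination of one entry per feature
combos = list(product(*tvlists))
print(len(combos))       # 3 x 2 = 6 combinations
print(combos[0])         # all-default: (None, None)
print(combos[-1])        # (('cv23', 2), ('smcp', 1))
```

This is why FTML tests generated per character cover every relevant feature setting: the iterator walks the full cross-product of the feature values that affect that character.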
290 src/silfont/gfr.py Normal file
@@ -0,0 +1,290 @@
#!/usr/bin/env python3
__doc__ = '''General classes and functions for use with SIL's github fonts repository, github.com/silnrsi/fonts'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2022 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

import os, json, io
import urllib.request as urllib2
from zipfile import ZipFile
from silfont.util import prettyjson
from silfont.core import splitfn, loggerobj
from collections import OrderedDict
from fontTools.ttLib import TTFont

familyfields = OrderedDict([
    ("familyid",      {"opt": True,  "manifest": False}),  # Required for families.json but not for base files; handled in code
    ("fallback",      {"opt": True,  "manifest": False}),
    ("family",        {"opt": False, "manifest": True}),
    ("altfamily",     {"opt": True,  "manifest": False}),
    ("siteurl",       {"opt": True,  "manifest": False}),
    ("packageurl",    {"opt": True,  "manifest": False}),
    ("ziproot",       {"opt": True,  "manifest": False}),
    ("files",         {"opt": True,  "manifest": True}),
    ("defaults",      {"opt": True,  "manifest": True}),
    ("version",       {"opt": True,  "manifest": True}),
    ("status",        {"opt": True,  "manifest": False}),
    ("license",       {"opt": True,  "manifest": False}),
    ("distributable", {"opt": False, "manifest": False}),
    ("source",        {"opt": True,  "manifest": False}),
    ("googlefonts",   {"opt": True,  "manifest": False}),
    ("features",      {"opt": True,  "manifest": False})
])

filefields = OrderedDict([
    ("altfamily",   {"opt": True,  "manifest": True, "mopt": True}),
    ("url",         {"opt": True,  "manifest": False}),
    ("flourl",      {"opt": True,  "manifest": False}),
    ("packagepath", {"opt": True,  "manifest": True}),
    ("zippath",     {"opt": True,  "manifest": False}),
    ("axes",        {"opt": False, "manifest": True})
])

defaultsfields = OrderedDict([
    ("ttf",   {"opt": True, "manifest": True}),
    ("woff",  {"opt": True, "manifest": True, "mopt": True}),
    ("woff2", {"opt": True, "manifest": True, "mopt": True})
])

class _familydata(object):
    """Family data key for use with families.json, font manifests and base files"""
    def __init__(self, id=None, data=None, filename=None, type="f", logger=None):
        # Initial input can be a dictionary (data), in which case id needs to be set,
        # or it can be read from a file (containing just one family record), in which case id is taken from the file.
        # Type can be f, b or m for families, base or manifest.
        # With f, this would be for just a single entry from a families.json file.
        self.id = id
        self.data = data if data else {}
        self.filename = filename
        self.type = type
        self.logger = logger if logger else loggerobj()

    def fieldscheck(self, data, validfields, reqfields, logprefix, valid, logs):
        for key in data:  # Check all keys have valid names
            if key not in validfields:
                logs.append((f'{logprefix}: Invalid field "{key}"', 'W'))
                valid = False
                continue
        # Are required fields present?
        for key in reqfields:
            if key not in data:
                logs.append((f'{logprefix}: Required field "{key}" missing', 'W'))
                valid = False
                continue
        return (valid, logs)

    def validate(self):
        global familyfields, filefields, defaultsfields
        logs = []
        valid = True
        if self.type == "m":
            validfields = reqfields = [key for key in familyfields if familyfields[key]["manifest"]]
        else:
            validfields = list(familyfields)
            reqfields = [key for key in familyfields if not familyfields[key]["opt"]]
            if self.type == "f":
                reqfields = reqfields + ["familyid"]
            else:  # Must be b
                validfields = validfields + ["hosturl", "filesroot"]

        (valid, logs) = self.fieldscheck(self.data, validfields, reqfields, "Main", valid, logs)
        # Now check sub-fields
        if "files" in self.data:
            fdata = self.data["files"]
            if self.type == "m":
                validfields = [key for key in filefields if filefields[key]["manifest"]]
                reqfields = [key for key in filefields if filefields[key]["manifest"] and not ("mopt" in filefields[key] and filefields[key]["mopt"])]
            else:
                validfields = list(filefields)
                reqfields = [key for key in filefields if not filefields[key]["opt"]]
            # Now need to check values for each record in files
            for filen in fdata:
                frecord = fdata[filen]
                (valid, logs) = self.fieldscheck(frecord, validfields, reqfields, "Files: " + filen, valid, logs)
                if "axes" in frecord:  # (Will already have been reported above if axes is missing!)
                    adata = frecord["axes"]
                    avalidfields = [key for key in adata if len(key) == 4]
                    areqfields = ["wght", "ital"] if self.type == "m" else []
                    (valid, logs) = self.fieldscheck(adata, avalidfields, areqfields, "Files, axes: " + filen, valid, logs)
        if "defaults" in self.data:
            ddata = self.data["defaults"]
            if self.type == "m":
                validfields = [key for key in defaultsfields if defaultsfields[key]["manifest"]]
                reqfields = [key for key in defaultsfields if defaultsfields[key]["manifest"] and not ("mopt" in defaultsfields[key] and defaultsfields[key]["mopt"])]
            else:
                validfields = list(defaultsfields)
                reqfields = [key for key in defaultsfields if not defaultsfields[key]["opt"]]
            (valid, logs) = self.fieldscheck(ddata, validfields, reqfields, "Defaults:", valid, logs)
        return (valid, logs)

    def read(self, filename=None):  # Read data from file (not for families.json)
        if filename: self.filename = filename
        with open(self.filename) as infile:
            try:
                filedata = json.load(infile)
            except Exception as e:
                self.logger.log(f'Error opening {infile}: {e}', 'S')
            if len(filedata) != 1:
                self.logger.log(f'Files must contain just one record; {self.filename} has {len(filedata)}')
            self.id = list(filedata.keys())[0]
            self.data = filedata[self.id]

    def write(self, filename=None):  # Write data to a file (not for families.json)
        if filename is None: filename = self.filename
        self.logger.log(f'Writing to {filename}', 'P')
        filedata = {self.id: self.data}
        with open(filename, "w", encoding="utf-8") as outf:
            outf.write(prettyjson(filedata, oneliners=["files"]))

class gfr_manifest(_familydata):
    #
    def __init__(self, id=None, data=None, filename=None, logger=None):
        super(gfr_manifest, self).__init__(id=id, data=data, filename=filename, type="m", logger=logger)

    def validate(self, version=None, filename=None, checkfiles=True):
        # Validate the manifest.
        # If version is supplied, check that it matches the version in the manifest.
        # If self.filename is not already set, the filename of the manifest must be supplied.
        (valid, logs) = super(gfr_manifest, self).validate()  # Field name validation based on _familydata validation

        if filename is None: filename = self.filename
        data = self.data

        if "files" in data and checkfiles:
            files = data["files"]
            mfilelist = {x: files[x]["packagepath"] for x in files}

            # Check files that are on disk match the manifest files
            (path, base, ext) = splitfn(filename)
            fontexts = ['.ttf', '.woff', '.woff2']
            dfilelist = {}
            for dirname, subdirs, filenames in os.walk(path):
                for filen in filenames:
                    (base, ext) = os.path.splitext(filen)
                    if ext in fontexts:
                        dfilelist[filen] = (os.path.relpath(os.path.join(dirname, filen), start=path).replace('\\', '/'))

            if mfilelist == dfilelist:
                logs.append(('Files OK', 'I'))
            else:
                valid = False
                logs.append(('Files on disk and in manifest do not match.', 'W'))
                logs.append(('Files on disk:', 'I'))
                for filen in sorted(dfilelist):
                    logs.append((f'  {dfilelist[filen]}', 'I'))
                logs.append(('Files in manifest:', 'I'))
                for filen in sorted(mfilelist):
                    logs.append((f'  {mfilelist[filen]}', 'I'))

            if "defaults" in data:
                defaults = data["defaults"]
                # Check defaults exist
                allthere = True
                for default in defaults:
                    if defaults[default] not in mfilelist: allthere = False

                if allthere:
                    logs.append(('Defaults OK', 'I'))
                else:
                    valid = False
                    logs.append(('At least one default missing', 'W'))

        if version:
            if "version" in data:
                mversion = data["version"]
                if version == mversion:
                    logs.append(('Versions OK', 'I'))
                else:
                    valid = False
                    logs.append((f'Version mismatch: {version} supplied and {mversion} in manifest', 'W'))

        return (valid, logs)

class gfr_base(_familydata):
    #
    def __init__(self, id=None, data=None, filename=None, logger=None):
        super(gfr_base, self).__init__(id=id, data=data, filename=filename, type="b", logger=logger)

class gfr_family(object):  # For families.json
    #
    def __init__(self, data=None, filename=None, logger=None):
        self.filename = filename
        self.logger = logger if logger else loggerobj()
        self.familyrecords = {}
        if data is not None: self.familyrecords = data

    def validate(self, familyid=None):
        allvalid = True
        alllogs = []
        if familyid:
            record = self.familyrecords[familyid]
            (allvalid, alllogs) = record.validate()
        else:
            for familyid in self.familyrecords:
                record = self.familyrecords[familyid]
                (valid, logs) = record.validate()
                if not valid:
                    allvalid = False
                    alllogs.append(logs)
        return allvalid, alllogs

    def write(self, filename=None):  # Write data to a file
        if filename is None: filename = self.filename
        self.logger.log(f'Writing to {filename}', 'P')
        with open(filename, "w", encoding="utf-8") as outf:
            outf.write(prettyjson(self.familyrecords, oneliners=["files"]))

def setpaths(logger):  # Check that the script is being run from the root of the repository and set standard paths
    repopath = os.path.abspath(os.path.curdir)
    # Do cursory checks that this is the root of the fonts repo
    if repopath[-5:] != "fonts" or not os.path.isdir(os.path.join(repopath, "fonts/sil")):
        logger.log("GFR scripts must be run from the root of the fonts repo", "S")
    # Set up standard paths for scripts to use
    silpath = os.path.join(repopath, "fonts/sil")
    otherpath = os.path.join(repopath, "fonts/other")
    basespath = os.path.join(repopath, "basefiles")
    if not os.path.isdir(basespath): os.makedirs(basespath)
    return repopath, silpath, otherpath, basespath

def getttfdata(ttf, logger):  # Extract data from a ttf

    try:
        font = TTFont(ttf)
    except Exception as e:
        logger.log(f'Error opening {ttf}: {e}', 'S')

    name = font['name']
    os2 = font['OS/2']
    post = font['post']

    values = {}

    name16 = name.getName(nameID=16, platformID=3, platEncID=1, langID=0x409)

    values["family"] = str(name16) if name16 else str(name.getName(nameID=1, platformID=3, platEncID=1, langID=0x409))
    values["subfamily"] = str(name.getName(nameID=2, platformID=3, platEncID=1, langID=0x409))
    values["version"] = str(name.getName(nameID=5, platformID=3, platEncID=1, langID=0x409))[8:]  # Remove "Version " from the front
    values["wght"] = os2.usWeightClass
    values["ital"] = 0 if getattr(post, "italicAngle") == 0 else 1

    return values

def getziproot(url, ttfpath):
    req = urllib2.Request(url=url, headers={'User-Agent': 'Mozilla/4.0 (compatible; httpget)'})
    try:
        reqdat = urllib2.urlopen(req)
    except Exception as e:
        return (None, f'{url} not valid: {str(e)}')
    zipdat = reqdat.read()
    zipinfile = io.BytesIO(initial_bytes=zipdat)
    try:
        zipf = ZipFile(zipinfile)
    except Exception as e:
        return (None, f'{url} is not a valid zip file')
    for zf in zipf.namelist():
        if zf.endswith(ttfpath):  # Found a font; assume we want it
            ziproot = zf[:-len(ttfpath) - 1]  # Strip trailing /
            return (ziproot, "")
    else:
        return (None, f"Can't find {ttfpath} in {url}")
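The `fieldscheck` pattern in `_familydata` above (flag unknown keys, then flag missing required keys, accumulating warnings rather than raising) can be exercised standalone. The field names below are hypothetical examples, not the real schemas:

```python
def fieldscheck(data, validfields, reqfields, logprefix):
    """Return (valid, logs): warn on unknown keys and on missing required keys."""
    valid, logs = True, []
    for key in data:                       # unknown-key pass
        if key not in validfields:
            logs.append(f'{logprefix}: Invalid field "{key}"')
            valid = False
    for key in reqfields:                  # required-key pass
        if key not in data:
            logs.append(f'{logprefix}: Required field "{key}" missing')
            valid = False
    return valid, logs

valid, logs = fieldscheck({"family": "Test", "bogus": 1},
                          validfields={"family", "version"},
                          reqfields=["family", "version"],
                          logprefix="Main")
print(valid)   # → False
print(logs)    # → ['Main: Invalid field "bogus"', 'Main: Required field "version" missing']
```

Accumulating `(valid, logs)` this way lets `validate()` report every schema problem in one pass instead of stopping at the first.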
71 src/silfont/harfbuzz.py Normal file
@@ -0,0 +1,71 @@
#!/usr/bin/env python3
'Harfbuzz support for fonttools'

import gi
gi.require_version('HarfBuzz', '0.0')
from gi.repository import HarfBuzz as hb
from gi.repository import GLib

class Glyph(object):
    def __init__(self, gid, **kw):
        self.gid = gid
        for k, v in kw.items():
            setattr(self, k, v)

    def __repr__(self):
        return "[{gid}@({offset[0]},{offset[1]})+({advance[0]},{advance[1]})]".format(**self.__dict__)

def shape_text(f, text, features=[], lang=None, dir="", script="", shapers=""):
    fontfile = f.reader.file
    fontfile.seek(0, 0)
    fontdata = fontfile.read()
    blob = hb.glib_blob_create(GLib.Bytes.new(fontdata))
    face = hb.face_create(blob, 0)
    del blob
    font = hb.font_create(face)
    upem = hb.face_get_upem(face)
    del face
    hb.font_set_scale(font, upem, upem)
    hb.ot_font_set_funcs(font)

    buf = hb.buffer_create()
    t = text.encode('utf-8')
    hb.buffer_add_utf8(buf, t, 0, -1)
    hb.buffer_guess_segment_properties(buf)
    if dir:
        hb.buffer_set_direction(buf, hb.direction_from_string(dir))
    if script:
        hb.buffer_set_script(buf, hb.script_from_string(script))
    if lang:
        hb.buffer_set_language(buf, hb.language_from_string(lang))

    feats = []
    if len(features):
        for feat_string in features:
            # NB: 'aFeats' is not defined anywhere in this module (carried over as-is),
            # so passing a non-empty features list would raise NameError here.
            if hb.feature_from_string(feat_string, -1, aFeats):
                feats.append(aFeats)
    if shapers:
        hb.shape_full(font, buf, feats, shapers)
    else:
        hb.shape(font, buf, feats)

    num_glyphs = hb.buffer_get_length(buf)
    info = hb.buffer_get_glyph_infos(buf)
    pos = hb.buffer_get_glyph_positions(buf)

    glyphs = []
    for i in range(num_glyphs):
        glyphs.append(Glyph(info[i].codepoint, cluster=info[i].cluster,
                            offset=(pos[i].x_offset, pos[i].y_offset),
                            advance=(pos[i].x_advance, pos[i].y_advance),
                            flags=info[i].mask))
    return glyphs

if __name__ == '__main__':
    import sys
    from fontTools.ttLib import TTFont
    font = sys.argv[1]
    text = sys.argv[2]
    f = TTFont(font)
    glyphs = shape_text(f, text)
    print(glyphs)
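The `Glyph` container above stores arbitrary keyword attributes via `setattr` and formats the usual positioning fields in `__repr__`. A self-contained usage sketch (the gid and metric values are made up):

```python
class Glyph(object):
    # Same minimal container as in harfbuzz.py above
    def __init__(self, gid, **kw):
        self.gid = gid
        for k, v in kw.items():
            setattr(self, k, v)

    def __repr__(self):
        # gid @ (x_offset, y_offset) + (x_advance, y_advance)
        return "[{gid}@({offset[0]},{offset[1]})+({advance[0]},{advance[1]})]".format(**self.__dict__)

g = Glyph(74, cluster=0, offset=(0, 0), advance=(520, 0), flags=0)
print(repr(g))  # → [74@(0,0)+(520,0)]
```

Because `__repr__` uses `**self.__dict__`, any glyph must carry `offset` and `advance` attributes for its repr to work; `cluster` and `flags` ride along untouched.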
135 src/silfont/ipython.py Normal file
@@ -0,0 +1,135 @@
#!/usr/bin/env python3
'IPython support for fonttools'

__all__ = ['displayGlyphs', 'loadFont', 'displayText', 'displayRaw']

from fontTools import ttLib
from fontTools.pens.basePen import BasePen
from fontTools.misc import arrayTools
from IPython.display import SVG, HTML
from defcon import Font
from ufo2ft import compileTTF

class SVGPen(BasePen):

    def __init__(self, glyphSet, scale=1.0):
        super(SVGPen, self).__init__(glyphSet)
        self.__commands = []
        self.__scale = scale

    def __str__(self):
        return " ".join(self.__commands)

    def scale(self, pt):
        return ((pt[0] or 0) * self.__scale, (pt[1] or 0) * self.__scale)

    def _moveTo(self, pt):
        self.__commands.append("M {0[0]} {0[1]}".format(self.scale(pt)))

    def _lineTo(self, pt):
        self.__commands.append("L {0[0]} {0[1]}".format(self.scale(pt)))

    def _curveToOne(self, pt1, pt2, pt3):
        self.__commands.append("C {0[0]} {0[1]} {1[0]} {1[1]} {2[0]} {2[1]}".format(
            self.scale(pt1), self.scale(pt2), self.scale(pt3)))

    def _closePath(self):
        self.__commands.append("Z")

    def clear(self):
        self.__commands = []

def _svgheader():
    # Note: the SVG/xlink namespace URIs must use http://, not https://;
    # XML namespaces are compared literally.
    return '''<?xml version="1.0"?>
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1">
'''

def _bbox(f, gnames, points, scale=1):
    gset = f.glyphSet
    bbox = (0, 0, 0, 0)
    for i, gname in enumerate(gnames):
        if hasattr(points, '__len__') and i == len(points):
            points.append((bbox[2] / scale, 0))
        pt = points[i] if i < len(points) else (0, 0)
        g = gset[gname]._glyph
        if g is None or not hasattr(g, 'xMin'):
            gbox = (0, 0, 0, 0)
        else:
            gbox = (g.xMin * scale, g.yMin * scale, g.xMax * scale, g.yMax * scale)
        bbox = arrayTools.unionRect(bbox, arrayTools.offsetRect(gbox, pt[0] * scale, pt[1] * scale))
    return bbox

glyphsetcount = 0

def _defglyphs(f, gnames, scale=1):
    global glyphsetcount
    glyphsetcount += 1
    gset = f.glyphSet
    p = SVGPen(gset, scale)
    res = "<defs><g>\n"
    for gname in sorted(set(gnames)):
        res += '<symbol overflow="visible" id="{}_{}">\n'.format(gname, glyphsetcount)
        g = gset[gname]
        p.clear()
        g.draw(p)
        res += '<path style="stroke:none;" d="' + str(p) + '"/>\n</symbol>\n'
    res += "</g></defs>\n"
    return res

def loadFont(fname):
    if fname.lower().endswith(".ufo"):
        ufo = Font(fname)
        f = compileTTF(ufo)
    else:
        f = ttLib.TTFont(fname)
    return f

def displayGlyphs(f, gnames, points=None, scale=None):
    if not hasattr(gnames, '__len__') or isinstance(gnames, str):
        gnames = [gnames]
    if points is None or not hasattr(points, '__len__'):
        points = []
    if not hasattr(f, 'glyphSet'):
        f.glyphSet = f.getGlyphSet()
    res = _svgheader()
    bbox = _bbox(f, gnames, points, scale or 1)
    maxh = 100.
    height = bbox[3] - (bbox[1] if bbox[1] < 0 else 0)
    if scale is None:  # ensure scale is numeric below, even for small glyphs
        scale = maxh / height if height > maxh else 1.0
        bbox = [x * scale for x in bbox]
    res += _defglyphs(f, gnames, scale)
    res += '<g id="surface1" transform="matrix(1,0,0,-1,{},{})">\n'.format(-bbox[0], bbox[3])
    res += ' <rect x="{}" y="{}" width="{}" height="{}" style="fill:white;stroke:none"/>\n'.format(
        bbox[0], bbox[1], bbox[2] - bbox[0], bbox[3])
    res += ' <g style="fill:black">\n'
    for i, gname in enumerate(gnames):
        pt = points[i] if i < len(points) else (0, 0)
        res += ' <use xlink:href="#{0}_{3}" x="{1}" y="{2}"/>\n'.format(gname, pt[0] * scale, pt[1] * scale, glyphsetcount)
    res += ' </g></g>\n</svg>\n'
    return SVG(data=res)

def displayText(f, text, features=[], lang=None, dir="", script="", shapers="", size=0):
    import harfbuzz
    glyphs = harfbuzz.shape_text(f, text, features, lang, dir, script, shapers)
    gnames = []
    points = []
    x = 0
    y = 0
    for g in glyphs:
        gnames.append(f.getGlyphName(g.gid))
        points.append((x + g.offset[0], y + g.offset[1]))
        x += g.advance[0]
        y += g.advance[1]
    if size == 0:
        scale = None
    else:
        upem = f['head'].unitsPerEm
        scale = 4. * size / (upem * 3.)
    return displayGlyphs(f, gnames, points, scale=scale)

def displayRaw(text):
    res = u"<html><body><p>" + text + u"</p></body></html>"
    return HTML(data=res)
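SVGPen above translates fontTools pen callbacks into an SVG path string. The idea can be shown without fontTools; MiniSVGPen below is a hypothetical, stripped-down stand-in (straight lines only, no BasePen inheritance), not the class the module defines:

```python
class MiniSVGPen:
    """Minimal stand-in (not fontTools) showing how pen callbacks
    accumulate SVG path commands, as SVGPen does above."""
    def __init__(self, scale=1.0):
        self.scale = scale
        self.commands = []

    def moveTo(self, pt):
        self.commands.append("M {} {}".format(pt[0] * self.scale, pt[1] * self.scale))

    def lineTo(self, pt):
        self.commands.append("L {} {}".format(pt[0] * self.scale, pt[1] * self.scale))

    def closePath(self):
        self.commands.append("Z")

    def __str__(self):
        return " ".join(self.commands)

# A simple triangle-ish contour at half scale
p = MiniSVGPen(scale=0.5)
p.moveTo((0, 0)); p.lineTo((100, 0)); p.lineTo((100, 200)); p.closePath()
print(str(p))  # M 0.0 0.0 L 50.0 0.0 L 50.0 100.0 Z
```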
src/silfont/scripts/__init__.py (new file, 0 lines)

src/silfont/scripts/psfaddanchors.py (new file, 76 lines)
@@ -0,0 +1,76 @@
#!/usr/bin/env python3
__doc__ = 'read anchor data from XML file and apply to UFO'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Rowe'

from silfont.core import execute
from xml.etree import ElementTree as ET

argspec = [
    ('ifont', {'help': 'Input UFO'}, {'type': 'infont'}),
    ('ofont', {'help': 'Output UFO', 'nargs': '?'}, {'type': 'outfont'}),
    ('-i', '--anchorinfo', {'help': 'XML file with anchor data'}, {'type': 'infile', 'def': '_anc.xml'}),
    ('-l', '--log', {'help': 'Log file'}, {'type': 'outfile', 'def': '_anc.log'}),
    ('-a', '--analysis', {'help': 'Analysis only; no output font generated', 'action': 'store_true'}, {}),
    ('-d', '--delete', {'help': 'Delete APs from a glyph before adding', 'action': 'store_true'}, {}),
    # 'choices' for -r should correspond to infont.logger.loglevels.keys()
    ('-r', '--report', {'help': 'Set reporting level for log', 'type': str, 'choices': ['X', 'S', 'E', 'P', 'W', 'I', 'V']}, {})
]

def doit(args):
    infont = args.ifont
    if args.report: infont.logger.loglevel = args.report
    glyphcount = 0

    try:
        for g in ET.parse(args.anchorinfo).getroot().findall('glyph'):
            glyphcount += 1
            gname = g.get('PSName')
            if gname not in infont.deflayer.keys():
                infont.logger.log("glyph element number " + str(glyphcount) + ": " + gname + " not in font, so skipping anchor data", "W")
                continue
            # anchors currently in font for this glyph
            glyph = infont.deflayer[gname]
            if args.delete:
                glyph['anchor'].clear()
            anchorsinfont = set([(a.element.get('name'), a.element.get('x'), a.element.get('y')) for a in glyph['anchor']])
            # anchors in XML file to be added
            anchorstoadd = set()
            for p in g.findall('point'):
                name = p.get('type')
                x = p[0].get('x')  # assume subelement location is first child
                y = p[0].get('y')
                if name and x and y:
                    anchorstoadd.add((name, x, y))
                else:
                    infont.logger.log("Incomplete information for anchor '" + name + "' for glyph " + gname, "E")
            # compare sets
            if anchorstoadd == anchorsinfont:
                if len(anchorstoadd) > 0:
                    infont.logger.log("Anchors in file already in font for glyph " + gname + ": " + str(anchorstoadd), "V")
                else:
                    infont.logger.log("No anchors in file or in font for glyph " + gname, "V")
            else:
                infont.logger.log("Anchors in file for glyph " + gname + ": " + str(anchorstoadd), "I")
                infont.logger.log("Anchors in font for glyph " + gname + ": " + str(anchorsinfont), "I")
                for name, x, y in anchorstoadd:
                    # if anchor being added exists in font already, delete it first
                    ancnames = [a.element.get('name') for a in glyph['anchor']]
                    infont.logger.log(str(ancnames), "V")
                    if name in ancnames:
                        infont.logger.log("removing anchor " + name + ", index " + str(ancnames.index(name)), "V")
                        glyph.remove('anchor', ancnames.index(name))
                    infont.logger.log("adding anchor " + name + ": (" + x + ", " + y + ")", "V")
                    glyph.add('anchor', {'name': name, 'x': x, 'y': y})
        # If analysis only, return without writing output font
        if args.analysis: return
        # Return changed font and let execute() write it out
        return infont
    except ET.ParseError as mess:
        infont.logger.log("Error parsing XML input file: " + str(mess), "S")
        return  # but really should terminate after logging Severe error above

def cmd(): execute("UFO", doit, argspec)
if __name__ == "__main__": cmd()
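The script above compares the XML's anchors against the font's as sets of (name, x, y) string tuples, so an anchor with the same name but different coordinates counts as a difference. The same comparison on plain data (the anchor names and coordinates here are hypothetical examples, not from any real font):

```python
# Hypothetical anchor data, as (name, x, y) string tuples like the script builds
anchorsinfont = {("above", "500", "1200"), ("below", "500", "0")}
anchorstoadd = {("above", "520", "1200"), ("below", "500", "0")}

if anchorstoadd == anchorsinfont:
    result = "no change needed"
else:
    # names whose font entry differs from the incoming entry get replaced
    changed = {a[0] for a in anchorstoadd} & {a[0] for a in anchorsinfont - anchorstoadd}
    result = "replace " + ", ".join(sorted(changed))
print(result)  # replace above
```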
src/silfont/scripts/psfbuildcomp.py (new file, 309 lines)
@@ -0,0 +1,309 @@
#!/usr/bin/env python3
__doc__ = '''Read Composite Definitions and add glyphs to a UFO font'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Rowe'

try:
    xrange
except NameError:
    xrange = range
from xml.etree import ElementTree as ET
import re
from silfont.core import execute
import silfont.ufo as ufo
from silfont.comp import CompGlyph
from silfont.etutil import ETWriter
from silfont.util import parsecolors

argspec = [
    ('ifont', {'help': 'Input UFO'}, {'type': 'infont'}),
    ('ofont', {'help': 'Output UFO', 'nargs': '?'}, {'type': 'outfont'}),
    ('-i', '--cdfile', {'help': 'Composite Definitions input file'}, {'type': 'infile', 'def': '_CD.txt'}),
    ('-l', '--log', {'help': 'Log file'}, {'type': 'outfile', 'def': '_CD.log'}),
    ('-a', '--analysis', {'help': 'Analysis only; no output font generated', 'action': 'store_true'}, {}),
    ('-c', '--color', {'help': 'Color cells of generated glyphs', 'action': 'store_true'}, {}),
    ('--colors', {'help': 'Color(s) to use when marking generated glyphs'}, {}),
    ('-f', '--force', {'help': 'Force overwrite of glyphs having outlines', 'action': 'store_true'}, {}),
    ('-n', '--noflatten', {'help': 'Do not flatten component references', 'action': 'store_true'}, {}),
    ('--remove', {'help': 'a regex matching anchor names that should always be removed from composites'}, {}),
    ('--preserve', {'help': 'a regex matching anchor names that, if present in glyphs about to be replaced, should not be overwritten'}, {})
]

glyphlist = []  # accessed as global by recursive function addtolist() and main function doit()

def doit(args):
    global glyphlist
    infont = args.ifont
    logger = args.logger
    params = infont.outparams

    removeRE = re.compile(args.remove) if args.remove else None
    preserveRE = re.compile(args.preserve) if args.preserve else None

    colors = None
    if args.color or args.colors:
        colors = args.colors if args.colors else "g_blue,g_purple"
        colors = parsecolors(colors, allowspecial=True)
        invalid = False
        for color in colors:
            if color[0] is None:
                invalid = True
                logger.log(color[2], "E")
        if len(colors) > 3:
            logger.log("A maximum of three colors can be supplied: " + str(len(colors)) + " supplied", "E")
            invalid = True
        if invalid: logger.log("Re-run with valid colors", "S")
        if len(colors) == 1: colors.append(colors[0])
        if len(colors) == 2: colors.append(colors[1])
    logstatuses = ("Glyph unchanged", "Glyph changed", "New glyph")

    ### temp section (these may someday be passed as optional parameters)
    RemoveUsedAnchors = True
    ### end of temp section

    cgobj = CompGlyph()

    for linenum, rawCDline in enumerate(args.cdfile):
        CDline = rawCDline.strip()
        if len(CDline) == 0 or CDline[0] == "#": continue
        logger.log("Processing line " + str(linenum + 1) + ": " + CDline, "I")
        cgobj.CDline = CDline
        try:
            cgobj.parsefromCDline()
        except ValueError as mess:
            logger.log("Parsing error: " + str(mess), "E")
            continue
        g = cgobj.CDelement

        # Collect target glyph information and construct list of component glyphs
        targetglyphname = g.get("PSName")
        targetglyphunicode = g.get("UID")
        glyphlist = []  # list of component glyphs
        lsb = rsb = 0
        adv = None
        for e in g:
            if e.tag == 'note': pass
            elif e.tag == 'property': pass  # ignore mark info
            elif e.tag == 'lsb': lsb = int(e.get('width'))
            elif e.tag == 'rsb': rsb = int(e.get('width'))
            elif e.tag == 'advance': adv = int(e.get('width'))
            elif e.tag == 'base':
                addtolist(e, None)
        logger.log(str(glyphlist), "V")

        # find each component glyph and compute x,y position
        xadvance = lsb
        componentlist = []
        targetglyphanchors = {}  # dictionary of {name: (xOffset, yOffset)}
        for currglyph, prevglyph, baseAP, diacAP, shiftx, shifty in glyphlist:
            # get current glyph and its anchor names from font
            if currglyph not in infont.deflayer:
                logger.log(currglyph + " not found in font", "E")
                continue
            cg = infont.deflayer[currglyph]
            cganc = [x.element.get('name') for x in cg['anchor']]
            diacAPx = diacAPy = 0
            baseAPx = baseAPy = 0
            if prevglyph is None:  # this is new 'base'
                xOffset = xadvance
                yOffset = 0
                # Find advance width of currglyph and add to xadvance
                if 'advance' in cg:
                    cgadvance = cg['advance']
                    if cgadvance is not None and cgadvance.element.get('width') is not None:
                        xadvance += int(float(cgadvance.element.get('width')))
            else:  # this is 'attach'
                if diacAP is not None:  # find diacritic Attachment Point in currglyph
                    if diacAP not in cganc:
                        logger.log("The AP '" + diacAP + "' does not exist on diacritic glyph " + currglyph, "E")
                    else:
                        i = cganc.index(diacAP)
                        diacAPx = int(float(cg['anchor'][i].element.get('x')))
                        diacAPy = int(float(cg['anchor'][i].element.get('y')))
                else:
                    logger.log("No AP specified for diacritic " + currglyph, "E")
                if baseAP is not None:  # find base character Attachment Point in targetglyph
                    if baseAP not in targetglyphanchors.keys():
                        logger.log("The AP '" + baseAP + "' does not exist on base glyph when building " + targetglyphname, "E")
                    else:
                        baseAPx = targetglyphanchors[baseAP][0]
                        baseAPy = targetglyphanchors[baseAP][1]
                        if RemoveUsedAnchors:
                            logger.log("Removing used anchor " + baseAP, "V")
                            del targetglyphanchors[baseAP]
                xOffset = baseAPx - diacAPx
                yOffset = baseAPy - diacAPy

            if shiftx is not None: xOffset += int(shiftx)
            if shifty is not None: yOffset += int(shifty)

            componentdic = {'base': currglyph}
            if xOffset != 0: componentdic['xOffset'] = str(xOffset)
            if yOffset != 0: componentdic['yOffset'] = str(yOffset)
            componentlist.append(componentdic)

            # Move anchor information to targetglyphanchors
            for a in cg['anchor']:
                dic = a.element.attrib
                thisanchorname = dic['name']
                if RemoveUsedAnchors and thisanchorname == diacAP:
                    logger.log("Skipping used anchor " + diacAP, "V")
                    continue  # skip this anchor
                # add anchor (adjusted for position in targetglyph)
                targetglyphanchors[thisanchorname] = (int(dic['x']) + xOffset, int(dic['y']) + yOffset)
                logger.log("Adding anchor " + thisanchorname + ": " + str(targetglyphanchors[thisanchorname]), "V")
            logger.log(str(targetglyphanchors), "V")

        if adv is not None:
            xadvance = adv  # if adv specified, then this advance value overrides calculated value
        else:
            xadvance += rsb  # adjust with rsb

        logger.log("Glyph: " + targetglyphname + ", " + str(targetglyphunicode) + ", " + str(xadvance), "V")
        for c in componentlist:
            logger.log(str(c), "V")

        # Flatten components unless -n set
        if not args.noflatten:
            newcomponentlist = []
            for compdic in componentlist:
                c = compdic['base']
                x = compdic.get('xOffset')
                y = compdic.get('yOffset')
                # look up component glyph
                g = infont.deflayer[c]
                # check if it has only components (that is, no contours) in outline
                if g['outline'] and g['outline'].components and not g['outline'].contours:
                    # for each component, get base, x1, y1 and create new entry with base, x+x1, y+y1
                    for subcomp in g['outline'].components:
                        componentdic = subcomp.element.attrib.copy()
                        x1 = componentdic.pop('xOffset', 0)
                        y1 = componentdic.pop('yOffset', 0)
                        xOffset = addtwo(x, x1)
                        yOffset = addtwo(y, y1)
                        if xOffset != 0: componentdic['xOffset'] = str(xOffset)
                        if yOffset != 0: componentdic['yOffset'] = str(yOffset)
                        newcomponentlist.append(componentdic)
                else:
                    newcomponentlist.append(compdic)
            if componentlist == newcomponentlist:
                logger.log("No changes to flatten components", "V")
            else:
                componentlist = newcomponentlist
                logger.log("Components flattened", "V")
                for c in componentlist:
                    logger.log(str(c), "V")

        # Check if this new glyph exists in the font already; if so, decide whether to replace, or issue warning
        preservedAPs = set()
        if targetglyphname in infont.deflayer.keys():
            logger.log("Target glyph, " + targetglyphname + ", already exists in font.", "V")
            targetglyph = infont.deflayer[targetglyphname]
            if targetglyph['outline'] and targetglyph['outline'].contours and not args.force:  # don't replace glyph with contours, unless -f set
                logger.log("Not replacing existing glyph, " + targetglyphname + ", because it has contours.", "W")
                continue
            else:
                logger.log("Replacing information in existing glyph, " + targetglyphname, "I")
                glyphstatus = "Replace"
                # delete information from existing glyph
                targetglyph.remove('outline')
                targetglyph.remove('advance')
                for i in xrange(len(targetglyph['anchor']) - 1, -1, -1):
                    aname = targetglyph['anchor'][i].element.attrib['name']
                    if preserveRE is not None and preserveRE.match(aname):
                        preservedAPs.add(aname)
                        logger.log("Preserving anchor " + aname, "V")
                    else:
                        targetglyph.remove('anchor', index=i)
        else:
            logger.log("Adding new glyph, " + targetglyphname, "I")
            glyphstatus = "New"
            # create glyph, using targetglyphname, targetglyphunicode
            targetglyph = ufo.Uglif(layer=infont.deflayer, name=targetglyphname)
            # actually add the glyph to the font
            infont.deflayer.addGlyph(targetglyph)

        if xadvance != 0: targetglyph.add('advance', {'width': str(xadvance)})
        if targetglyphunicode:  # remove any existing unicode value(s) before adding unicode value
            for i in xrange(len(targetglyph['unicode']) - 1, -1, -1):
                targetglyph.remove('unicode', index=i)
            targetglyph.add('unicode', {'hex': targetglyphunicode})
        targetglyph.add('outline')
        # to the outline element, add a component element for every entry in componentlist
        for compdic in componentlist:
            comp = ufo.Ucomponent(targetglyph['outline'], ET.Element('component', compdic))
            targetglyph['outline'].appendobject(comp, 'component')
        # copy anchors to new glyph from targetglyphanchors which has format {'U': (500,1000), 'L': (500,0)}
        for a in sorted(targetglyphanchors):
            if removeRE is not None and removeRE.match(a):
                logger.log("Skipping unwanted anchor " + a, "V")
                continue  # skip this anchor
            if a not in preservedAPs:
                targetglyph.add('anchor', {'name': a, 'x': str(targetglyphanchors[a][0]), 'y': str(targetglyphanchors[a][1])})
        # mark glyphs as being generated by setting cell mark color if -c or --colors set
        if colors:
            # Need to see if the target glyph has changed.
            if glyphstatus == "Replace":
                # Need to recreate the xml element then normalize it for comparison with original
                targetglyph["anchor"].sort(key=lambda anchor: anchor.element.get("name"))
                targetglyph.rebuildET()
                attribOrder = params['attribOrders']['glif'] if 'glif' in params['attribOrders'] else {}
                if params["sortDicts"] or params["precision"] is not None: ufo.normETdata(targetglyph.etree, params, 'glif')
                etw = ETWriter(targetglyph.etree, attributeOrder=attribOrder, indentIncr=params["indentIncr"],
                               indentFirst=params["indentFirst"], indentML=params["indentML"], precision=params["precision"],
                               floatAttribs=params["floatAttribs"], intAttribs=params["intAttribs"])
                newxml = etw.serialize_xml()
                if newxml == targetglyph.inxmlstr: glyphstatus = 'Unchanged'

            x = 0 if glyphstatus == "Unchanged" else 1 if glyphstatus == "Replace" else 2

            color = colors[x]
            lib = targetglyph["lib"]
            if color[0]:  # Need to set actual color
                if lib is None: targetglyph.add("lib")
                targetglyph["lib"].setval("public.markColor", "string", color[0])
                logger.log(logstatuses[x] + " - setting markColor to " + color[2], "I")
            elif x < 2:  # No need to log for new glyphs
                if color[1] == "none":  # Remove existing color
                    if lib is not None and "public.markColor" in lib: lib.remove("public.markColor")
                    logger.log(logstatuses[x] + " - Removing existing markColor", "I")
                else:
                    logger.log(logstatuses[x] + " - Leaving existing markColor (if any)", "I")

    # If analysis only, return without writing output font
    if args.analysis: return
    # Return changed font and let execute() write it out
    return infont

def addtolist(e, prevglyph):
    """Given an element ('base' or 'attach') and the name of previous glyph,
    add a tuple to the list of glyphs in this composite, including
    "at" and "with" attachment point information, and x and y shift values
    """
    global glyphlist
    subelementlist = []
    thisglyphname = e.get('PSName')
    atvalue = e.get("at")
    withvalue = e.get("with")
    shiftx = shifty = None
    for se in e:
        if se.tag == 'property': pass
        elif se.tag == 'shift':
            shiftx = se.get('x')
            shifty = se.get('y')
        elif se.tag == 'attach':
            subelementlist.append(se)
    glyphlist.append((thisglyphname, prevglyph, atvalue, withvalue, shiftx, shifty))
    for se in subelementlist:
        addtolist(se, thisglyphname)

def addtwo(a1, a2):
    """Take two items (string, number or None), convert to integer and return sum"""
    b1 = int(a1) if a1 is not None else 0
    b2 = int(a2) if a2 is not None else 0
    return b1 + b2

def cmd(): execute("UFO", doit, argspec)
if __name__ == "__main__": cmd()
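The flattening step above combines a component's own offset with the offsets of its sub-components, where each offset may be a string (straight from a GLIF attribute) or absent (None). A small self-contained sketch of that combination, with a hypothetical 'acute' component and made-up offset values; addtwo() here mirrors the helper defined at the end of the script:

```python
def addtwo(a1, a2):
    """Offsets arrive as strings or None; treat None as 0 and sum as ints."""
    return (int(a1) if a1 is not None else 0) + (int(a2) if a2 is not None else 0)

# Hypothetical nesting: a composite references a glyph at xOffset="50",
# and that glyph itself places 'acute' at xOffset="300".
# Flattening replaces the nested reference by 'acute' at the summed offset.
outer = {'base': 'acute', 'xOffset': '50'}
inner_x = '300'
flat_x = addtwo(outer.get('xOffset'), inner_x)
print(flat_x)  # 350
```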
src/silfont/scripts/psfbuildcompgc.py (new file, 45 lines)
@@ -0,0 +1,45 @@
#!/usr/bin/env python3
'''Uses the GlyphConstruction library to build composite glyphs.'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2018 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Victor Gaultney'

from silfont.core import execute
from glyphConstruction import ParseGlyphConstructionListFromString, GlyphConstructionBuilder

argspec = [
    ('ifont', {'help': 'Input font filename'}, {'type': 'infont'}),
    ('ofont', {'help': 'Output font file', 'nargs': '?'}, {'type': 'outfont'}),
    ('-i', '--cdfile', {'help': 'Composite Definitions input file'}, {'type': 'infile', 'def': 'constructions.txt'}),
    ('-l', '--log', {'help': 'Set log file name'}, {'type': 'outfile', 'def': '_gc.log'})]

def doit(args):
    font = args.ifont
    logger = args.logger

    constructions = ParseGlyphConstructionListFromString(args.cdfile)

    for construction in constructions:
        # Create a new constructed glyph object
        try:
            constructionGlyph = GlyphConstructionBuilder(construction, font)
        except ValueError as e:
            logger.log("Invalid CD line '" + construction + "' - " + str(e), "E")
        else:
            # Make a new glyph in target font with the new glyph name
            glyph = font.newGlyph(constructionGlyph.name)
            # Draw the constructed object onto the new glyph
            # This is rather odd in how it works
            constructionGlyph.draw(glyph.getPen())
            # Copy glyph metadata from constructed object
            glyph.name = constructionGlyph.name
            glyph.unicode = constructionGlyph.unicode
            glyph.note = constructionGlyph.note
            #glyph.markColor = constructionGlyph.mark
            glyph.width = constructionGlyph.width

    return font

def cmd(): execute("FP", doit, argspec)
if __name__ == "__main__": cmd()
src/silfont/scripts/psfbuildfea.py (new file, 89 lines)
@@ -0,0 +1,89 @@
#!/usr/bin/env python3
__doc__ = 'Build features.fea file into a ttf font'
# TODO: add conditional compilation, compare to fea, compile to ttf
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2017 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Martin Hosken'

from fontTools.feaLib.builder import Builder
from fontTools import configLogger
from fontTools.ttLib import TTFont
from fontTools.ttLib.tables.otTables import lookupTypes
from fontTools.feaLib.lookupDebugInfo import LookupDebugInfo

from silfont.core import execute

class MyBuilder(Builder):

    def __init__(self, font, featurefile, lateSortLookups=False, fronts=None):
        super(MyBuilder, self).__init__(font, featurefile)
        self.lateSortLookups = lateSortLookups
        self.fronts = fronts if fronts is not None else []

    def buildLookups_(self, tag):
        assert tag in ('GPOS', 'GSUB'), tag
        countFeatureLookups = 0
        fronts = set([l for k, l in self.named_lookups_.items() if k in self.fronts])
        for bldr in self.lookups_:
            bldr.lookup_index = None
            if bldr.table == tag and getattr(bldr, '_feature', "") != "":
                countFeatureLookups += 1
        lookups = []
        latelookups = []
        for bldr in self.lookups_:
            if bldr.table != tag:
                continue
            if self.lateSortLookups and getattr(bldr, '_feature', "") == "":
                if bldr in fronts:
                    latelookups.insert(0, bldr)
                else:
                    latelookups.append(bldr)
            else:
                bldr.lookup_index = len(lookups)
                lookups.append(bldr)
                bldr.map_index = bldr.lookup_index
        numl = len(lookups)
        for i, l in enumerate(latelookups):
            l.lookup_index = numl + i
            l.map_index = l.lookup_index
        for l in lookups + latelookups:
            self.lookup_locations[tag][str(l.lookup_index)] = LookupDebugInfo(
                location=str(l.location),
                name=self.get_lookup_name_(l),
                feature=None)
        return [b.build() for b in lookups + latelookups]

    def add_lookup_to_feature_(self, lookup, feature_name):
        super(MyBuilder, self).add_lookup_to_feature_(lookup, feature_name)
        lookup._feature = feature_name


#TODO: provide more argument info
argspec = [
    ('input_fea', {'help': 'Input fea file'}, {}),
    ('input_font', {'help': 'Input font file'}, {}),
    ('-o', '--output', {'help': 'Output font file'}, {}),
    ('-v', '--verbose', {'help': 'Repeat to increase verbosity', 'action': 'count', 'default': 0}, {}),
    ('-m', '--lookupmap', {'help': 'File into which place lookup map'}, {}),
    ('-l', '--log', {'help': 'Optional log file'}, {'type': 'outfile', 'def': '_buildfea.log', 'optlog': True}),
    ('-e', '--end', {'help': 'Push lookups not in features to the end', 'action': 'store_true'}, {}),
    ('-F', '--front', {'help': 'Pull named lookups to the front of unnamed list', 'action': 'append'}, {}),
]

def doit(args):
    levels = ["WARNING", "INFO", "DEBUG"]
    configLogger(level=levels[min(len(levels) - 1, args.verbose)])

    font = TTFont(args.input_font)
    builder = MyBuilder(font, args.input_fea, lateSortLookups=args.end, fronts=args.front)
    builder.build()
    if args.lookupmap:
        with open(args.lookupmap, "w") as outf:
            for n, l in sorted(builder.named_lookups_.items()):
                if l is not None:
                    outf.write("{},{},{}\n".format(n, l.table, l.map_index))
    font.save(args.output)

def cmd(): execute(None, doit, argspec)
if __name__ == '__main__': cmd()
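MyBuilder.buildLookups_ above reorders lookups in two passes: lookups attached to a feature keep their relative order and indices, while (with -e) featureless lookups are pushed to the end, and any lookups named with -F are placed first among those late ones. A self-contained sketch of just that ordering policy, with hypothetical (name, feature) tuples standing in for fontTools lookup builders:

```python
def order_lookups(lookups, late_sort, fronts):
    """Sketch of the ordering policy in MyBuilder.buildLookups_:
    feature-attached lookups keep their place; the rest move to the
    end, with 'front' lookups first among the late ones.
    Each lookup is a hypothetical (name, feature) tuple."""
    if not late_sort:
        return lookups
    kept, late = [], []
    for name, feature in lookups:
        if feature == "":
            if name in fronts:
                late.insert(0, (name, feature))
            else:
                late.append((name, feature))
        else:
            kept.append((name, feature))
    return kept + late

lks = [("A", "liga"), ("B", ""), ("C", "kern"), ("D", "")]
print(order_lookups(lks, True, fronts={"D"}))
# [('A', 'liga'), ('C', 'kern'), ('D', ''), ('B', '')]
```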
src/silfont/scripts/psfchangegdlnames.py (new file, 160 lines)
@@ -0,0 +1,160 @@
#!/usr/bin/env python3
__doc__ = '''Change graphite names within GDL based on a csv list in format
        old name, newname
Logs any names not in list
Also updates postscript names in postscript() statements based on psnames csv'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2016 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute
import os, re

argspec = [
    ('input', {'help': 'Input file or folder'}, {'type': 'filename'}),
    ('output', {'help': 'Output file or folder', 'nargs': '?'}, {}),
    ('-n', '--names', {'help': 'Names csv file'}, {'type': 'incsv', 'def': 'gdlmap.csv'}),
    ('--names2', {'help': '2nd names csv file', 'nargs': '?'}, {'type': 'incsv', 'def': None}),
    ('--psnames', {'help': 'PS names csv file'}, {'type': 'incsv', 'def': 'psnames.csv'}),
    ('-l', '--log', {'help': 'Log file'}, {'type': 'outfile', 'def': 'GDLchangeNames.log'})]

def doit(args):
    logger = args.paramsobj.logger

    exceptions = ("glyph", "gamma", "greek_circ")

    # Process input which may be a single file or a directory
    input = args.input
    gdlfiles = []

    if os.path.isdir(input):
        inputisdir = True
        indir = input
        for name in os.listdir(input):
            ext = os.path.splitext(name)[1]
            if ext in ('.gdl', '.gdh'):
                gdlfiles.append(name)
    else:
        inputisdir = False
        indir, inname = os.path.split(input)
        gdlfiles = [inname]

    # Process output file name - execute() will not have processed file/dir name at all
    output = "" if args.output is None else args.output
    outdir, outfile = os.path.split(output)
    if outfile != "" and os.path.splitext(outfile)[1] == "":  # if no extension on outfile, assume a dir was meant
        outdir = os.path.join(outdir, outfile)
        outfile = None
    if outfile == "": outfile = None
    if outfile and inputisdir: logger.log("Can't specify an output file when input is a directory", "S")
    outappend = None
    if outdir == "":
        if outfile is None:
            outappend = "_out"
        else:
            if outfile == gdlfiles[0]: logger.log("Specify a different output file", "S")
        outdir = indir
    else:
        if indir == outdir:
            if outfile:
                if outfile == gdlfiles[0]: logger.log("Specify a different output file", "S")
            else:
                logger.log("Specify a different output dir", "S")
    if not os.path.isdir(outdir): logger.log("Output directory does not exist", "S")

    # Process names csv file
    args.names.numfields = 2
    names = {}
    for line in args.names: names[line[0]] = line[1]

    # Process names2 csv if present
    names2 = args.names2
    if names2 is not None:
        names2.numfields = 2
        for line in names2:
            n1 = line[0]
            n2 = line[1]
            if n1 in names and n2 != names[n1]:
                logger.log(n1 + " in both names and names2 with different values", "E")
            else:
                names[n1] = n2

    # Process psnames csv file
    args.psnames.numfields = 2
    psnames = {}
    for line in args.psnames: psnames[line[1]] = line[0]

    missed = []
    psmissed = []
    for filen in gdlfiles:
        file = open(os.path.join(indir, filen), "r")
        if outappend:
            base, ext = os.path.splitext(filen)
            outfilen = base + outappend + ext
        else:
            outfilen = filen
        outfile = open(os.path.join(outdir, outfilen), "w")
        commentblock = False
        for line in file:
            line = line.rstrip()
            # Skip comment blocks
            if line[0:2] == "/*":
                outfile.write(line + "\n")
                if line.find("*/") == -1: commentblock = True
                continue
            if commentblock:
                outfile.write(line + "\n")
                if line.find("*/") != -1: commentblock = False
                continue
            # Scan for graphite names
            cpos = line.find("//")
            if cpos == -1:
                scan = line
                comment = ""
            else:
                scan = line[0:cpos]
                comment = line[cpos:]
            tmpline = ""
            while re.search(r'[\s(\[,]g\w+?[\s)\],?:;=]', " " + scan + " "):
                m = re.search(r'[\s(\[,]g\w+?[\s)\],?:;=]', " " + scan + " ")
                gname = m.group(0)[1:-1]
                if gname in names:
                    gname = names[gname]
                else:
                    if gname not in missed and gname not in exceptions:
                        logger.log(gname + " from '" + line.strip() + "' in " + filen + " missing from csv", "W")
                        missed.append(gname)  # only log each missed name once
                tmpline = tmpline + scan[0:m.start()] + gname
                scan = scan[m.end() - 2:]
            tmpline = tmpline + scan + comment

            # Scan for postscript statements
            scan = tmpline[0:tmpline.find("//")] if tmpline.find("//") != -1 else tmpline
|
||||
newline = ""
|
||||
lastend = 0
|
||||
|
||||
for m in re.finditer('postscript\(.+?\)',scan) :
|
||||
psname = m.group(0)[12:-2]
|
||||
if psname in psnames :
|
||||
psname = psnames[psname]
|
||||
else :
|
||||
if psname not in psmissed :
|
||||
logger.log(psname + " from '" + line.strip() + "' in " + filen + " missing from ps csv", "W")
|
||||
psmissed.append(psname) # only log each missed name once
|
||||
newline = newline + scan[lastend:m.start()+12] + psname
|
||||
lastend = m.end()-2
|
||||
|
||||
newline = newline + tmpline[lastend:]
|
||||
outfile.write(newline + "\n")
|
||||
file.close()
|
||||
outfile.close()
|
||||
if missed != [] : logger.log("Names were missed from the csv file - see log file for details","E")
|
||||
return
|
||||
|
||||
def cmd() : execute(None,doit,argspec)
|
||||
if __name__ == "__main__": cmd()
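The name-scanning loop above pads the line with a space on each side so the delimiter character classes can match a name at the very start or end of the line, then repeatedly consumes the matched prefix. A minimal standalone sketch of that technique (the `names` dictionary and GDL line here are made up for illustration; the real script also logs names missing from the csv):

```python
import re

def rename_gdl_names(scan, names):
    # Pad with spaces so names at the start/end of the line still have
    # a delimiter on each side for the character classes to match.
    pattern = r'[\s(\[,]g\w+?[\s)\],?:;=]'
    out = ""
    while re.search(pattern, " " + scan + " "):
        m = re.search(pattern, " " + scan + " ")
        gname = m.group(0)[1:-1]            # strip the two delimiter chars
        gname = names.get(gname, gname)     # rename if a mapping is known
        out += scan[0:m.start()] + gname    # m.start() in padded == name start in scan
        scan = scan[m.end() - 2:]           # resume at the trailing delimiter
    return out + scan

print(rename_gdl_names("gAlpha = (gBeta, gGamma);", {"gBeta": "gBetaNew"}))
# gAlpha = (gBetaNew, gGamma);
```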
35
src/silfont/scripts/psfchangettfglyphnames.py
Normal file
@ -0,0 +1,35 @@
#!/usr/bin/env python3
__doc__ = 'Rename the glyphs in a ttf file based on production names in a UFO'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2017 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Alan Ward'

# Rename the glyphs in a ttf file based on production names in a UFO
# using the same technique as fontmake.
# Production names come from the public.postscriptNames mapping in the
# UFO's lib.plist, according to ufo2ft comments

from silfont.core import execute
import defcon, fontTools.ttLib, ufo2ft

argspec = [
    ('iufo', {'help': 'Input UFO folder'}, {}),
    ('ittf', {'help': 'Input ttf file name'}, {}),
    ('ottf', {'help': 'Output ttf file name'}, {})]

def doit(args):
    ufo = defcon.Font(args.iufo)
    ttf = fontTools.ttLib.TTFont(args.ittf)

    args.logger.log('Renaming the input ttf glyphs based on production names in the UFO', 'P')
    postProcessor = ufo2ft.PostProcessor(ttf, ufo)
    ttf = postProcessor.process(useProductionNames=True, optimizeCFF=False)

    args.logger.log('Saving the output ttf file', 'P')
    ttf.save(args.ottf)

    args.logger.log('Done', 'P')

def cmd(): execute(None, doit, argspec)
if __name__ == '__main__': cmd()
68
src/silfont/scripts/psfcheckbasicchars.py
Normal file
@ -0,0 +1,68 @@
#!/usr/bin/env python3
__doc__ = '''Checks a UFO for the presence of glyphs that represent the
Recommended characters for Non-Roman fonts and warns if any are missing.
https://scriptsource.org/entry/gg5wm9hhd3'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2018 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Victor Gaultney'

from silfont.core import execute
from silfont.util import required_chars

argspec = [
    ('ifont', {'help': 'Input font file'}, {'type': 'infont'}),
    ('-r', '--rtl', {'help': 'Also include characters just for RTL scripts', 'action': 'store_true'}, {}),
    ('-s', '--silpua', {'help': 'Also include characters in SIL PUA block', 'action': 'store_true'}, {}),
    ('-l', '--log', {'help': 'Log file'}, {'type': 'outfile', 'def': '_checkbasicchars.log'})]

def doit(args) :
    font = args.ifont
    logger = args.logger

    rationales = {
        "A": "in Codepage 1252",
        "B": "in MacRoman",
        "C": "for publishing",
        "D": "for Non-Roman fonts and publishing",
        "E": "by Google Fonts",
        "F": "by TeX for visible space",
        "G": "for encoding conversion utilities",
        "H": "in case Variation Sequences are defined in future",
        "I": "to detect byte order",
        "J": "to render combining marks in isolation",
        "K": "to view sidebearings for every glyph using these characters"}

    charsets = ["basic"]
    if args.rtl: charsets.append("rtl")
    if args.silpua: charsets.append("sil")

    req_chars = required_chars(charsets)

    glyphlist = font.deflayer.keys()

    for glyphn in glyphlist :
        glyph = font.deflayer[glyphn]
        if len(glyph["unicode"]) == 1 :
            unival = glyph["unicode"][0].hex
            if unival in req_chars:
                del req_chars[unival]

    cnt = len(req_chars)
    if cnt > 0:
        for usv in sorted(req_chars.keys()):
            item = req_chars[usv]
            psname = item["ps_name"]
            gname = item["glyph_name"]
            name = psname if psname == gname else psname + ", " + gname
            logger.log("U+" + usv + " from the " + item["sil_set"] +
                       " set has no representative glyph (" + name + ")", "W")
            logger.log("Rationale: This character is needed " + rationales[item["rationale"]], "I")
            if item["notes"]:
                logger.log(item["notes"], "I")
        logger.log("There are " + str(cnt) + " required characters missing", "E")

    return

def cmd() : execute("UFO", doit, argspec)
if __name__ == "__main__": cmd()
142
src/silfont/scripts/psfcheckclassorders.py
Normal file
@ -0,0 +1,142 @@
#!/usr/bin/env python3
'''verify classes defined in xml have correct ordering where needed

Looks for comment lines in the classes.xml file that match the string:
    *NEXT n CLASSES MUST MATCH*
where n is the number of upcoming class definitions that must result in the
same glyph alignment when glyph names are sorted by TTF order (as described
in the glyph_data.csv file).
'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2019 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Bob Hallissy'

import re
import types
from xml.etree import ElementTree as ET
from silfont.core import execute

argspec = [
    ('classes', {'help': 'class definition in XML format', 'nargs': '?', 'default': 'classes.xml'}, {'type': 'infile'}),
    ('glyphdata', {'help': 'Glyph info csv file', 'nargs': '?', 'default': 'glyph_data.csv'}, {'type': 'incsv'}),
    ('--gname', {'help': 'Column header for glyph name', 'default': 'glyph_name'}, {}),
    ('--sort', {'help': 'Column header(s) for sort order', 'default': 'sort_final'}, {}),
    ]

# Dictionary of glyphName : sortValue
sorts = dict()

# Keep track of glyphs mentioned in classes but not in glyph_data.csv
missingGlyphs = set()

def doit(args):
    logger = args.logger

    # Read input csv to get glyph sort order
    incsv = args.glyphdata
    fl = incsv.firstline
    if fl is None: logger.log("Empty input file", "S")
    if args.gname in fl:
        glyphnpos = fl.index(args.gname)
    else:
        logger.log('No "' + args.gname + '" field in csv headers', "S")
    if args.sort in fl:
        sortpos = fl.index(args.sort)
    else:
        logger.log('No "' + args.sort + '" heading in csv headers', "S")
    next(incsv.reader, None)  # Skip the first line containing the headers
    for line in incsv:
        glyphn = line[glyphnpos]
        if len(glyphn) == 0:
            continue  # No need to include cases where name is blank
        sorts[glyphn] = float(line[sortpos])

    # RegEx we are looking for in comments
    matchCountRE = re.compile(r"\*NEXT ([1-9]\d*) CLASSES MUST MATCH\*")

    # parse classes.xml but include comments
    class MyTreeBuilder(ET.TreeBuilder):
        def comment(self, data):
            res = matchCountRE.search(data)
            if res:
                # record the count of classes that must match
                self.start(ET.Comment, {})
                self.data(res.group(1))
                self.end(ET.Comment)
    doc = ET.parse(args.classes, parser=ET.XMLParser(target=MyTreeBuilder())).getroot()

    # process results looking for both class elements and specially formatted comments
    matchCount = 0
    refClassList = None
    refClassName = None

    for child in doc:
        if isinstance(child.tag, types.FunctionType):
            # Special type used for comments
            if matchCount > 0:
                logger.log("Unexpected match request '{}': matching {} is not yet complete".format(child.text, refClassName), "E")
            ref = None
            matchCount = int(child.text)
            # print "Match count = {}".format(matchCount)

        elif child.tag == 'class':
            l = orderClass(child, logger)  # Do this so we record classes whether we match them or not.
            if matchCount > 0:
                matchCount -= 1
                className = child.attrib['name']
                if refClassName is None:
                    refClassList = l
                    refLen = len(refClassList)
                    refClassName = className
                else:
                    # compare ref list and l
                    if len(l) != refLen:
                        logger.log("Class {} (length {}) and {} (length {}) have unequal length".format(refClassName, refLen, className, len(l)), "E")
                    else:
                        errCount = 0
                        for i in range(refLen):
                            if l[i][0] != refClassList[i][0]:
                                logger.log("Class {} and {} inconsistent order glyphs {} and {}".format(refClassName, className, refClassList[i][2], l[i][2]), "E")
                                errCount += 1
                                if errCount > 5:
                                    logger.log("Abandoning compare between Classes {} and {}".format(refClassName, className), "E")
                                    break
                if matchCount == 0:
                    refClassName = None

    # List glyphs mentioned in classes.xml but not present in glyph_data:
    if len(missingGlyphs):
        logger.log('Glyphs mentioned in classes.xml but not present in glyph_data: ' + ', '.join(sorted(missingGlyphs)), 'W')


classes = {}  # Keep record of all classes we've seen so we can flatten references

def orderClass(classElement, logger):
    # returns a list of tuples, each containing (indexWithinClass, sortOrder, glyphName)
    # list is sorted by sortOrder
    glyphList = classElement.text.split()
    res = []
    for i in range(len(glyphList)):
        token = glyphList[i]
        if token.startswith('@'):
            # Nested class
            cname = token[1:]
            if cname in classes:
                res.extend(classes[cname])
            else:
                logger.log("Invalid fea: class {} referenced before being defined".format(cname), "S")
        else:
            # simple glyph name -- make sure it is in glyph_data:
            if token in sorts:
                res.append((i, sorts[token], token))
            else:
                missingGlyphs.add(token)

    classes[classElement.attrib['name']] = res
    return sorted(res, key=lambda x: x[1])


def cmd() : execute(None, doit, argspec)
if __name__ == "__main__": cmd()
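The "classes must match" rule above boils down to: sort each class's members by their glyph_data sort value and check that the members' original positions line up pairwise. A minimal standalone sketch of that comparison, with a made-up `sorts` table (the real script works from `(index, sortOrder, glyphName)` tuples built by `orderClass`):

```python
def order_class(glyph_list, sorts):
    # (indexWithinClass, sortOrder, glyphName) tuples, sorted by sortOrder
    res = [(i, sorts[g], g) for i, g in enumerate(glyph_list) if g in sorts]
    return sorted(res, key=lambda x: x[1])

def classes_match(a, b, sorts):
    # Two classes "match" when sorting both by TTF order leaves their
    # members aligned, i.e. the original indices line up pairwise.
    oa, ob = order_class(a, sorts), order_class(b, sorts)
    return len(oa) == len(ob) and all(x[0] == y[0] for x, y in zip(oa, ob))

sorts = {"a": 1, "b": 2, "c": 3, "A": 10, "B": 20, "C": 30}
print(classes_match(["b", "a", "c"], ["B", "A", "C"], sorts))  # True: same alignment
print(classes_match(["b", "a", "c"], ["A", "B", "C"], sorts))  # False: positions differ
```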
65
src/silfont/scripts/psfcheckftml.py
Normal file
@ -0,0 +1,65 @@
#!/usr/bin/env python3
'''Test structural integrity of one or more ftml files

Assumes ftml files have already validated against FTML.dtd, for example by using:
    xmllint --noout --dtdvalid FTML.dtd inftml.ftml

Verifies that:
  - silfont.ftml can parse the file
  - every stylename is defined in the <styles> list'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2021 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Bob Hallissy'

import glob
from silfont.ftml import Fxml, Ftest
from silfont.core import execute

argspec = [
    ('inftml', {'help': 'Input ftml filename pattern (default: *.ftml) ', 'nargs': '?', 'default': '*.ftml'}, {}),
    ]

def doit(args):
    logger = args.logger
    fnames = glob.glob(args.inftml)
    if len(fnames) == 0:
        logger.log(f'No files matching "{args.inftml}" found.', 'E')
    for fname in fnames:
        logger.log(f'checking {fname}', 'P')
        unknownStyles = set()
        usedStyles = set()

        # recursively find and check all <test> elements in a <testsgroup>
        def checktestgroup(testgroup):
            for test in testgroup.tests:
                # Not sure why, but sub-testgroups are also included in tests, so filter those out for now
                if isinstance(test, Ftest) and test.stylename:
                    sname = test.stylename
                    usedStyles.add(sname)
                    if sname is not None and sname not in unknownStyles and \
                            not (hasStyles and sname in ftml.head.styles):
                        logger.log(f'  stylename "{sname}" not defined in head/styles', 'E')
                        unknownStyles.add(sname)
            # recurse to nested testgroups if any:
            if testgroup.testgroups is not None:
                for subgroup in testgroup.testgroups:
                    checktestgroup(subgroup)

        with open(fname, encoding='utf8') as f:
            # Attempt to parse the ftml file
            ftml = Fxml(f)
            hasStyles = ftml.head.styles is not None  # Whether or not any styles are defined in head element

            # Look through all tests for undefined styles:
            for testgroup in ftml.testgroups:
                checktestgroup(testgroup)

            if hasStyles:
                # look for unused styles:
                for style in ftml.head.styles:
                    if style not in usedStyles:
                        logger.log(f'  defined style "{style}" not used in any test', 'W')

def cmd() : execute(None, doit, argspec)
if __name__ == "__main__": cmd()
133
src/silfont/scripts/psfcheckglyphinventory.py
Normal file
@ -0,0 +1,133 @@
#!/usr/bin/env python3
__doc__ = '''Warn for differences in glyph inventory and encoding between UFO and input file (e.g., glyph_data.csv).
Input file can be:
  - simple text file with one glyph name per line
  - csv file with headers, using headers "glyph_name" and, if present, "USV"'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2020-2023 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Bob Hallissy'

from silfont.core import execute

argspec = [
    ('ifont', {'help': 'Input UFO'}, {'type': 'infont'}),
    ('-i', '--input', {'help': 'Input text file, default glyph_data.csv in current directory', 'default': 'glyph_data.csv'}, {'type': 'incsv'}),
    ('--indent', {'help': 'size of indent (default 10)', 'type': int, 'default': 10}, {}),
    ('-l', '--log', {'help': 'Log file'}, {'type': 'outfile', 'def': '_checkinventory.log'})]

def doit(args):
    font = args.ifont
    incsv = args.input
    logger = args.logger
    indent = ' ' * args.indent

    if not (args.quiet or 'scrlevel' in args.paramsobj.sets['command line']):
        logger.raisescrlevel('W')  # Raise level to W if not already W or higher

    def csvWarning(msg, exception=None):
        m = f'glyph_data line {incsv.line_num}: {msg}'
        if exception is not None:
            m += '; ' + str(exception)
        logger.log(m, 'W')

    # Get glyph names and encoding from input file
    glyphFromCSVuid = {}
    uidsFromCSVglyph = {}

    # Identify file format (plain text or csv) from first line
    # If csv file, it must have a "glyph_name" header; a "USV" header is optional
    fl = incsv.firstline
    if fl is None: logger.log('Empty input file', 'S')
    numfields = len(fl)
    incsv.numfields = numfields
    usvCol = None  # Use this as a flag later to determine whether to check USV inventory
    if numfields > 1:  # More than 1 column, so must have headers
        # Required columns:
        try:
            nameCol = fl.index('glyph_name')
        except ValueError as e:
            logger.log('Missing csv input field: ' + str(e), 'S')
        except Exception as e:
            logger.log('Error reading csv input field: ' + str(e), 'S')
        # Optional columns:
        usvCol = fl.index('USV') if 'USV' in fl else None

        next(incsv.reader, None)  # Skip the first line containing the headers

        glyphList = set()
        for line in incsv:
            gname = line[nameCol]
            if len(gname) == 0 or line[0].strip().startswith('#'):
                continue  # No need to include cases where name is blank or comment
            if gname in glyphList:
                csvWarning(f'glyph name {gname} previously seen; ignored')
                continue
            glyphList.add(gname)

            if usvCol:
                # Process USV field, which can be:
                #   empty string -- unencoded glyph
                #   single USV -- encoded glyph
                #   USVs connected by '_' -- ligature (in glyph_data for test generation, not glyph encoding)
                # or a space-separated list of the above, where presence of multiple USVs indicates multiply-encoded glyph
                for usv in line[usvCol].split():
                    if '_' in usv:
                        # ignore ligatures -- these are for test generation, not encoding
                        continue
                    try:
                        uid = int(usv, 16)
                    except Exception as e:
                        csvWarning("invalid USV '%s' (%s); ignored" % (usv, str(e)))
                        continue

                    if uid in glyphFromCSVuid:
                        csvWarning('USV %04X previously seen; ignored' % uid)
                    else:
                        # Remember this glyph encoding
                        glyphFromCSVuid[uid] = gname
                        uidsFromCSVglyph.setdefault(gname, set()).add(uid)
    elif numfields == 1:  # Simple text file.
        glyphList = set(line[0] for line in incsv)
    else:
        logger.log('Invalid csv file', 'S')

    # Get the list of glyphs in the UFO
    ufoList = set(font.deflayer.keys())

    notInUFO = glyphList - ufoList
    notInGlyphData = ufoList - glyphList

    if len(notInUFO):
        logger.log('Glyphs present in glyph_data but missing from UFO:\n' + '\n'.join(indent + g for g in sorted(notInUFO)), 'W')

    if len(notInGlyphData):
        logger.log('Glyphs present in UFO but missing from glyph_data:\n' + '\n'.join(indent + g for g in sorted(notInGlyphData)), 'W')

    if len(notInUFO) == 0 and len(notInGlyphData) == 0:
        logger.log('No glyph inventory differences found', 'P')

    if usvCol:
        # We can check encoding of glyphs in common
        inBoth = glyphList & ufoList  # Glyphs we want to examine

        csvEncodings = set(f'{gname}|{uid:04X}' for gname in filter(lambda x: x in uidsFromCSVglyph, inBoth) for uid in uidsFromCSVglyph[gname])
        ufoEncodings = set(f'{gname}|{int(u.hex, 16):04X}' for gname in inBoth for u in font.deflayer[gname]['unicode'])

        notInUFO = csvEncodings - ufoEncodings
        notInGlyphData = ufoEncodings - csvEncodings

        if len(notInUFO):
            logger.log('Encodings present in glyph_data but missing from UFO:\n' + '\n'.join(indent + g for g in sorted(notInUFO)), 'W')

        if len(notInGlyphData):
            logger.log('Encodings present in UFO but missing from glyph_data:\n' + '\n'.join(indent + g for g in sorted(notInGlyphData)), 'W')

        if len(notInUFO) == 0 and len(notInGlyphData) == 0:
            logger.log('No glyph encoding differences found', 'P')

    else:
        logger.log('Glyph encodings not compared', 'P')


def cmd(): execute('UFO', doit, argspec)
if __name__ == '__main__': cmd()
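The encoding comparison above folds each glyph/codepoint pair into a single `"glyph|USV"` string so that set differences report name and codepoint mismatches together. A minimal standalone sketch of that key-building step (the glyph names and codepoints below are invented example data):

```python
def encoding_keys(encodings):
    # Build "glyph|USV" keys so name and codepoint are compared as a pair,
    # mirroring the csvEncodings/ufoEncodings sets built in the script.
    return set(f'{g}|{uid:04X}' for g, uids in encodings.items() for uid in uids)

csv_enc = {"a": {0x0061}, "apostrophe.alt": {0x02BC}}   # hypothetical glyph_data encodings
ufo_enc = {"a": {0x0061}, "apostrophe.alt": {0x2019}}   # hypothetical UFO encodings

print(sorted(encoding_keys(csv_enc) - encoding_keys(ufo_enc)))  # ['apostrophe.alt|02BC']
print(sorted(encoding_keys(ufo_enc) - encoding_keys(csv_enc)))  # ['apostrophe.alt|2019']
```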
75
src/silfont/scripts/psfcheckinterpolatable.py
Normal file
@ -0,0 +1,75 @@
#!/usr/bin/env python3
__doc__ = '''Check that the ufos in a designspace file are interpolatable'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2021 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute
from fontParts.world import OpenFont
import fontTools.designspaceLib as DSD

argspec = [
    ('designspace', {'help': 'Design space file'}, {'type': 'filename'}),
    ('-l', '--log', {'help': 'Log file'}, {'type': 'outfile', 'def': '_checkinterp.log'}),
    ]

def doit(args) :
    logger = args.logger

    ds = DSD.DesignSpaceDocument()
    ds.read(args.designspace)
    if len(ds.sources) == 1: logger.log("The design space file has only one source UFO", "S")

    # Find all the UFOs from the DS Sources. Where there are more than 2, the primary one will be considered to be
    # the one where info copy="1" is set (as per psfsyncmasters). If not set for any, use the first ufo.
    pufo = None
    otherfonts = {}
    for source in ds.sources:
        ufo = source.path
        try:
            font = OpenFont(ufo)
        except Exception as e:
            logger.log("Unable to open " + ufo, "S")
        if source.copyInfo:
            if pufo: logger.log('Multiple fonts with <info copy="1" />', "S")
            pufo = ufo
            pfont = font
        else:
            otherfonts[ufo] = font
    if pufo is None:  # If we can't identify the primary font by copyInfo, just use the first one
        pufo = ds.sources[0].path
        pfont = otherfonts[pufo]
        del otherfonts[pufo]

    pinventory = set(glyph.name for glyph in pfont)

    for oufo in otherfonts:
        logger.log(f'Comparing {pufo} with {oufo}', 'P')
        ofont = otherfonts[oufo]
        oinventory = set(glyph.name for glyph in ofont)

        if pinventory != oinventory:
            logger.log("The glyph inventories in the two UFOs differ", "E")
            for glyphn in sorted(pinventory - oinventory):
                logger.log(f'{glyphn} is only in {pufo}', "W")
            for glyphn in sorted(oinventory - pinventory):
                logger.log(f'{glyphn} is only in {oufo}', "W")
        else:
            logger.log("The UFOs have the same glyph inventories", "P")
        # Are the glyphs compatible for interpolation?
        incompatibles = {}
        for glyphn in pinventory & oinventory:
            compatible, report = pfont[glyphn].isCompatible(ofont[glyphn])
            if not compatible: incompatibles[glyphn] = report
        if incompatibles:
            logger.log(f'{len(incompatibles)} glyphs are not interpolatable', 'E')
            for glyphn in sorted(incompatibles):
                logger.log(f'{glyphn} is not interpolatable', 'W')
                logger.log(incompatibles[glyphn], "I")
            if logger.scrlevel == "W": logger.log("To see detailed reports run with scrlevel and/or loglevel set to I")
        else:
            logger.log("All the glyphs are interpolatable", "P")

def cmd() : execute(None, doit, argspec)
if __name__ == "__main__": cmd()
114
src/silfont/scripts/psfcheckproject.py
Normal file
@ -0,0 +1,114 @@
#!/usr/bin/env python3
__doc__ = '''Run project-wide checks. Currently just checking glyph inventory and unicode values for ufo sources in
the designspace files supplied, but may be expanded to do more checks later'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2022 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute, splitfn
import fontTools.designspaceLib as DSD
import glob, os
import silfont.ufo as UFO
import silfont.etutil as ETU

argspec = [
    ('ds', {'help': 'designspace files to check; wildcards allowed', 'nargs': "+"}, {'type': 'filename'})
    ]

## Quite a few things are being set and then not used at the moment - this is to allow for more checks to be added in the future.
# For example projectroot, psource

def doit(args):
    logger = args.logger

    # Open all the supplied DS files and ufos within them
    dsinfos = []
    failures = False
    for pattern in args.ds:
        cnt = 0
        for fullpath in glob.glob(pattern):
            cnt += 1
            logger.log(f'Opening {fullpath}', 'P')
            try:
                ds = DSD.DesignSpaceDocument.fromfile(fullpath)
            except Exception as e:
                logger.log(f'Error opening {fullpath}: {e}', 'E')
                failures = True
                break
            dsinfos.append({'dspath': fullpath, 'ds': ds})
        if not cnt: logger.log(f'No files matched {pattern}', "S")
    if failures: logger.log("Failed to open all the designspace files", "S")

    # Find the project root based on the first DS, assuming the project root is one level above a source directory containing the DS files
    path = dsinfos[0]['dspath']
    (path, base, ext) = splitfn(path)
    (parent, dir) = os.path.split(path)
    projectroot = parent if dir == "source" else None
    logger.log(f'Project root: {projectroot}', "V")

    # Find and open all the unique UFO sources in the DSs
    ufos = {}
    refufo = None
    for dsinfo in dsinfos:
        logger.log(f'Processing {dsinfo["dspath"]}', "V")
        ds = dsinfo['ds']
        for source in ds.sources:
            if source.path not in ufos:
                ufos[source.path] = Ufo(source, logger)
                if not refufo: refufo = source.path  # For now use the first found. Need to work out how to choose the best one

    refunicodes = ufos[refufo].unicodes
    refglyphlist = set(refunicodes)
    (path, refname) = os.path.split(refufo)

    # Now compare with other UFOs
    logger.log(f'Comparing glyph inventory and unicode values with those in {refname}', "P")
    for ufopath in ufos:
        if ufopath == refufo: continue
        ufo = ufos[ufopath]
        logger.log(f'Checking {ufo.name}', "I")
        unicodes = ufo.unicodes
        glyphlist = set(unicodes)
        missing = refglyphlist - glyphlist
        extras = glyphlist - refglyphlist
        both = glyphlist - extras
        if missing: logger.log(f'These glyphs are missing from {ufo.name}: {str(list(missing))}', 'E')
        if extras: logger.log(f'These extra glyphs are in {ufo.name}: {", ".join(extras)}', 'E')
        valdiff = [f'{g}: {str(unicodes[g])}/{str(refunicodes[g])}'
                   for g in both if refunicodes[g] != unicodes[g]]
        if valdiff:
            valdiff = "\n".join(valdiff)
            logger.log(f'These glyphs in {ufo.name} have different unicode values to those in {refname}:\n'
                       f'{valdiff}', 'E')

class Ufo(object):  # Read just the bits of the UFO needed for the current checks, for efficiency reasons
    def __init__(self, source, logger):
        self.source = source
        (path, self.name) = os.path.split(source.path)
        self.logger = logger
        self.ufodir = source.path
        self.unicodes = {}
        if not os.path.isdir(self.ufodir): logger.log(self.ufodir + " in designspace doc does not exist", "S")
        try:
            self.layercontents = UFO.Uplist(font=None, dirn=self.ufodir, filen="layercontents.plist")
        except Exception as e:
            logger.log("Unable to open layercontents.plist in " + self.ufodir, "S")
        for i in sorted(self.layercontents.keys()):
            layername = self.layercontents[i][0].text
            if layername != 'public.default': continue
            layerdir = self.layercontents[i][1].text
            fulldir = os.path.join(self.ufodir, layerdir)
            self.contents = UFO.Uplist(font=None, dirn=fulldir, filen="contents.plist")
            for glyphn in sorted(self.contents.keys()):
                glifn = self.contents[glyphn][1].text
                glyph = ETU.xmlitem(os.path.join(self.ufodir, layerdir), glifn, logger=logger)
                unicode = None
                for x in glyph.etree:
                    if x.tag == 'unicode':
                        unicode = x.attrib['hex']
                        break
                self.unicodes[glyphn] = unicode

def cmd(): execute('', doit, argspec)
if __name__ == '__main__': cmd()
65
src/silfont/scripts/psfcompdef2xml.py
Normal file
65
src/silfont/scripts/psfcompdef2xml.py
Normal file
|
@ -0,0 +1,65 @@
#!/usr/bin/env python3
__doc__ = 'convert composite definition file to XML format'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2015 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Rowe'

from silfont.core import execute
from silfont.etutil import ETWriter
from silfont.comp import CompGlyph
from xml.etree import ElementTree as ET

# specify three parameters: input file (single line format), output file (XML format), log file
# and optional -p indentFirst "  " -p indentIncr "  " -p "PSName,UID,with,at,x,y" for XML formatting.
argspec = [
    ('input', {'help': 'Input file of CD in single line format'}, {'type': 'infile'}),
    ('output', {'help': 'Output file of CD in XML format'}, {'type': 'outfile', 'def': '_out.xml'}),
    ('log', {'help': 'Log file'}, {'type': 'outfile', 'def': '_log.txt'}),
    ('-p', '--params', {'help': 'XML formatting parameters: indentFirst, indentIncr, attOrder', 'action': 'append'}, {'type': 'optiondict'})]

def doit(args):
    ofile = args.output
    lfile = args.log
    filelinecount = 0
    linecount = 0
    elementcount = 0
    cgobj = CompGlyph()
    f = ET.Element('font')
    for line in args.input.readlines():
        filelinecount += 1
        testline = line.strip()
        if len(testline) > 0 and testline[0:1] != '#':  # not whitespace or comment
            linecount += 1
            cgobj.CDline = line
            cgobj.CDelement = None
            try:
                cgobj.parsefromCDline()
                if cgobj.CDelement != None:
                    f.append(cgobj.CDelement)
                    elementcount += 1
            except ValueError as e:
                lfile.write("Line " + str(filelinecount) + ": " + str(e) + '\n')
    if linecount != elementcount:
        lfile.write("Lines read from input file: " + str(filelinecount) + '\n')
        lfile.write("Lines parsed (excluding blank and comment lines): " + str(linecount) + '\n')
        lfile.write("Valid glyphs found: " + str(elementcount) + '\n')
    # instead of simple serialization with: ofile.write(ET.tostring(f))
    # create ETWriter object and specify indentation and attribute order to get normalized output
    indentFirst = "  "
    indentIncr = "  "
    attOrder = "PSName,UID,with,at,x,y"
    for k in args.params:
        if k == 'indentIncr': indentIncr = args.params['indentIncr']
        elif k == 'indentFirst': indentFirst = args.params['indentFirst']
        elif k == 'attOrder': attOrder = args.params['attOrder']
    x = attOrder.split(',')
    attributeOrder = dict(zip(x, range(len(x))))
    etwobj = ETWriter(f, indentFirst=indentFirst, indentIncr=indentIncr, attributeOrder=attributeOrder)
    ofile.write(etwobj.serialize_xml())

    return

def cmd(): execute(None, doit, argspec)
if __name__ == "__main__": cmd()
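The `attOrder` handling above turns a comma-separated attribute list into a rank dictionary that `ETWriter` can use to order XML attributes. A minimal standalone sketch of that idea (the attribute values here are made up for illustration):

```python
# Build a rank dict from a comma-separated attribute list, then sort an
# element's attribute names by that rank (unlisted attributes sort last).
attOrder = "PSName,UID,with,at,x,y"
names = attOrder.split(',')
rank = dict(zip(names, range(len(names))))

attribs = {'x': '120', 'PSName': 'uni0041', 'UID': '0041'}
ordered = sorted(attribs, key=lambda a: rank.get(a, len(rank)))
print(ordered)  # ['PSName', 'UID', 'x']
```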
100  src/silfont/scripts/psfcompressgr.py  Normal file
@@ -0,0 +1,100 @@
#!/usr/bin/env python3
__doc__ = 'Compress Graphite tables in a font'
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2017 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Martin Hosken'

argspec = [
    ('ifont', {'help': 'Input TTF'}, {'type': 'infont'}),
    ('ofont', {'help': 'Output TTF', 'nargs': '?'}, {'type': 'outfont'}),
    ('-l', '--log', {'help': 'Optional log file'}, {'type': 'outfile', 'def': '_compressgr', 'optlog': True})
]

from silfont.core import execute
from fontTools.ttLib.tables.DefaultTable import DefaultTable
import lz4.block
import sys, struct

class lz4tuple(object):
    def __init__(self, start):
        self.start = start
        self.literal = start
        self.literal_len = 0
        self.match_dist = 0
        self.match_len = 0
        self.end = 0

    def __str__(self):
        return "lz4tuple(@{},{}+{},-{}+{})={}".format(self.start, self.literal, self.literal_len, self.match_dist, self.match_len, self.end)

def read_literal(t, dat, start, datlen):
    if t == 15 and start < datlen:
        v = ord(dat[start:start+1])
        t += v
        while v == 0xFF and start < datlen:
            start += 1
            v = ord(dat[start:start+1])
            t += v
        start += 1
    return (t, start)

def write_literal(num, shift):
    res = []
    if num > 14:
        res.append(15 << shift)
        num -= 15
        while num > 255:
            res.append(255)
            num -= 255
        res.append(num)
    else:
        res.append(num << shift)
    return bytearray(res)

def parseTuple(dat, start, datlen):
    res = lz4tuple(start)
    token = ord(dat[start:start+1])
    (res.literal_len, start) = read_literal(token >> 4, dat, start+1, datlen)
    res.literal = start
    start += res.literal_len
    res.end = start
    if start > datlen - 2:
        return res
    res.match_dist = ord(dat[start:start+1]) + (ord(dat[start+1:start+2]) << 8)
    start += 2
    (res.match_len, start) = read_literal(token & 0xF, dat, start, datlen)
    res.end = start
    return res

def compressGr(dat, version):
    if ord(dat[1:2]) < version:
        vstr = bytes([version]) if sys.version_info.major > 2 else chr(version)
        dat = dat[0:1] + vstr + dat[2:]
    datc = lz4.block.compress(dat[:-4], mode='high_compression', compression=16, store_size=False)
    # now find the final tuple
    end = len(datc)
    start = 0
    curr = lz4tuple(start)
    while curr.end < end:
        start = curr.end
        curr = parseTuple(datc, start, end)
    if curr.end > end:
        print("Sync error: {!s}".format(curr))
    newend = write_literal(curr.literal_len + 4, 4) + datc[curr.literal:curr.literal+curr.literal_len+1] + dat[-4:]
    lz4hdr = struct.pack(">L", (1 << 27) + (len(dat) & 0x7FFFFFF))
    return dat[0:4] + lz4hdr + datc[0:curr.start] + newend

def doit(args):
    infont = args.ifont
    for tag, version in (('Silf', 5), ('Glat', 3)):
        dat = infont.getTableData(tag)
        newdat = bytes(compressGr(dat, version))
        table = DefaultTable(tag)
        table.decompile(newdat, infont)
        infont[tag] = table
    return infont

def cmd(): execute('FT', doit, argspec)
if __name__ == "__main__": cmd()
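`read_literal` above decodes the LZ4 block length scheme: a 4-bit nibble, extended when it is 15 by additional bytes, where each 0xFF byte adds 255 and continues and a non-0xFF byte terminates. A standalone sketch of that decoding, not tied to the Graphite tables or to the exact argument convention of `read_literal`:

```python
def decode_len(nibble, data, pos):
    # LZ4 length encoding: a 4-bit value; 15 means "add the following bytes,
    # each 0xFF byte adds 255 and continues, a non-0xFF byte terminates".
    length = nibble
    if nibble == 15:
        while True:
            b = data[pos]
            pos += 1
            length += b
            if b != 0xFF:
                break
    return length, pos

# nibble 15, then 0xFF (255) and 3: 15 + 255 + 3 = 273, two bytes consumed
print(decode_len(15, bytes([0xFF, 3]), 0))  # (273, 2)
```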
243  src/silfont/scripts/psfcopyglyphs.py  Normal file
@@ -0,0 +1,243 @@
#!/usr/bin/env python3
__doc__ = """Copy glyphs from one UFO to another"""
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2018 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'Bob Hallissy'

from xml.etree import ElementTree as ET
from silfont.core import execute
from silfont.ufo import makeFileName, Uglif
import re

argspec = [
    ('ifont', {'help': 'Input font file'}, {'type': 'infont'}),
    ('ofont', {'help': 'Output font file', 'nargs': '?'}, {'type': 'outfont'}),
    ('-s', '--source', {'help': 'Font to get glyphs from'}, {'type': 'infont'}),
    ('-i', '--input', {'help': 'Input csv file'}, {'type': 'incsv', 'def': 'glyphlist.csv'}),
    ('-f', '--force', {'help': 'Overwrite existing glyphs in the font', 'action': 'store_true'}, {}),
    ('-l', '--log', {'help': 'Set log file name'}, {'type': 'outfile', 'def': '_copy.log'}),
    ('-n', '--name', {'help': 'Include glyph named name', 'action': 'append'}, {}),
    ('--rename', {'help': 'Rename glyphs to names in this column'}, {}),
    ('--unicode', {'help': 'Re-encode glyphs to USVs in this column'}, {}),
    ('--scale', {'type': float, 'help': 'Scale glyphs by this factor'}, {})
]

class Glyph:
    """details about a glyph we have, or need to, copy; mostly just for syntactic sugar"""

    # Glyphs that are used *only* as component glyphs may have to be renamed if there already exists a glyph
    # by the same name in the target font. We compute a new name by appending .copy1, .copy2, etc. until we get a
    # unique name. We keep track of the mapping from source font glyphname to target font glyphname using a dictionary.
    # For ease of use, glyphs named by the input file (which won't have their names changed, see --force) will also
    # be added to this dictionary because they can also be used as components.
    nameMap = dict()

    def __init__(self, oldname, newname="", psname="", dusv=None):
        self.oldname = oldname
        self.newname = newname or oldname
        self.psname = psname or None
        self.dusv = dusv or None
        # Keep track of old-to-new name mapping
        Glyph.nameMap[oldname] = self.newname


# Mapping from decimal USV to glyphname in target font
dusv2gname = None

# RE for parsing glyph names and peeling off the .copyX if present in order to search for a unique name to use:
gcopyRE = re.compile(r'(^.+?)(?:\.copy(\d+))?$')


def copyglyph(sfont, tfont, g, args):
    """copy glyph from source font to target font"""
    # Generally, 't' variables are target, 's' are source. E.g., tfont is target font.

    global dusv2gname
    if not dusv2gname:
        # Create mappings to find existing glyph name from decimal usv:
        dusv2gname = {int(unicode.hex, 16): gname for gname in tfont.deflayer for unicode in tfont.deflayer[gname]['unicode']}
        # NB: Assumes font is well-formed and has at most one glyph with any particular Unicode value.

    # The layer where we want the copied glyph:
    tlayer = tfont.deflayer

    # if new name present in target layer, delete it.
    if g.newname in tlayer:
        # New name is already in font:
        tfont.logger.log("Replacing glyph '{0}' with new glyph".format(g.newname), "V")
        glyph = tlayer[g.newname]
        # While here, remove from our mapping any Unicodes from the old glyph:
        for unicode in glyph["unicode"]:
            dusv = int(unicode.hex, 16)
            if dusv in dusv2gname:
                del dusv2gname[dusv]
        # Ok, remove old glyph from the layer
        tlayer.delGlyph(g.newname)
    else:
        # New name is not in the font:
        tfont.logger.log("Adding glyph '{0}'".format(g.newname), "V")

    # Create new glyph
    glyph = Uglif(layer=tlayer)
    # Set etree from source glyph
    glyph.etree = ET.fromstring(sfont.deflayer[g.oldname].inxmlstr)
    glyph.process_etree()
    # Rename the glyph if needed
    if glyph.name != g.newname:
        # Use super to bypass normal glyph renaming logic since it isn't yet in the layer
        super(Uglif, glyph).__setattr__("name", g.newname)
    # add new glyph to layer:
    tlayer.addGlyph(glyph)
    tfont.logger.log("Added glyph '{0}'".format(g.newname), "V")

    # todo: set psname if requested; adjusting any other glyphs in the font as needed.

    # Adjust encoding of new glyph
    if args.unicode:
        # First remove any encodings the copied glyph had in the source font:
        for i in range(len(glyph['unicode']) - 1, -1, -1):
            glyph.remove('unicode', index=i)
        if g.dusv:
            # we want this glyph to be encoded.
            # First remove this Unicode from any other glyph in the target font
            if g.dusv in dusv2gname:
                oglyph = tlayer[dusv2gname[g.dusv]]
                for unicode in oglyph["unicode"]:
                    if int(unicode.hex, 16) == g.dusv:
                        oglyph.remove("unicode", object=unicode)
                        tfont.logger.log("Removed USV {0:04X} from existing glyph '{1}'".format(g.dusv, dusv2gname[g.dusv]), "V")
                        break
            # Now add and record it:
            glyph.add("unicode", {"hex": '{:04X}'.format(g.dusv)})
            dusv2gname[g.dusv] = g.newname
            tfont.logger.log("Added USV {0:04X} to glyph '{1}'".format(g.dusv, g.newname), "V")

    # Scale glyph if desired
    if args.scale:
        for e in glyph.etree.iter():
            for attr in ('width', 'height', 'x', 'y', 'xOffset', 'yOffset'):
                if attr in e.attrib: e.set(attr, str(int(float(e.get(attr)) * args.scale)))

    # Look through components, adjusting names and finding out if we need to copy some.
    for component in glyph.etree.findall('./outline/component[@base]'):
        oldname = component.get('base')
        # Note: the following will cause recursion:
        component.set('base', copyComponent(sfont, tfont, oldname, args))


def copyComponent(sfont, tfont, oldname, args):
    """copy component glyph if not already copied; make sure name and psname are unique; return its new name"""
    if oldname in Glyph.nameMap:
        # already copied
        return Glyph.nameMap[oldname]

    # if oldname is already in the target font, make up a new name by adding ".copy1", incrementing as necessary
    if oldname not in tfont.deflayer:
        newname = oldname
        tfont.logger.log("Copying component '{0}' with existing name".format(oldname), "V")
    else:
        x = gcopyRE.match(oldname)
        base = x.group(1)
        try: i = int(x.group(2))
        except: i = 1
        while "{0}.copy{1}".format(base, i) in tfont.deflayer:
            i += 1
        newname = "{0}.copy{1}".format(base, i)
        tfont.logger.log("Copying component '{0}' with new name '{1}'".format(oldname, newname), "V")

    # todo: something similar to above but for psname

    # Now copy the glyph, giving it new name if needed.
    copyglyph(sfont, tfont, Glyph(oldname, newname), args)

    return newname

def doit(args):
    sfont = args.source  # source UFO
    tfont = args.ifont   # target UFO
    incsv = args.input
    logger = args.logger

    # Get headings from csvfile:
    fl = incsv.firstline
    if fl is None: logger.log("Empty input file", "S")
    numfields = len(fl)
    incsv.numfields = numfields
    # defaults for single column csv (no headers):
    nameCol = 0
    renameCol = None
    psCol = None
    usvCol = None
    if numfields > 1 or args.rename or args.unicode:
        # required columns:
        try:
            nameCol = fl.index('glyph_name')
            if args.rename:
                renameCol = fl.index(args.rename)
            if args.unicode:
                usvCol = fl.index(args.unicode)
        except ValueError as e:
            logger.log('Missing csv input field: ' + str(e), 'S')
        except Exception as e:
            logger.log('Error reading csv input field: ' + str(e), 'S')
        # optional columns
        psCol = fl.index('ps_name') if 'ps_name' in fl else None
    if 'glyph_name' in fl:
        next(incsv.reader, None)  # Skip first line with headers in

    # list of glyphs to copy
    glist = list()

    def checkname(oldname, newname=None):
        if not newname: newname = oldname
        if oldname in Glyph.nameMap:
            logger.log("Line {0}: Glyph '{1}' specified more than once; only the first kept".format(incsv.line_num, oldname), 'W')
        elif oldname not in sfont.deflayer:
            logger.log("Line {0}: Glyph '{1}' is not in source font; skipping".format(incsv.line_num, oldname), "W")
        elif newname in tfont.deflayer and not args.force:
            logger.log("Line {0}: Glyph '{1}' already present; skipping".format(incsv.line_num, newname), "W")
        else:
            return True
        return False

    # glyphs specified in csv file
    for r in incsv:
        oldname = r[nameCol]
        newname = r[renameCol] if args.rename else oldname
        psname = r[psCol] if psCol is not None else None
        if args.unicode and r[usvCol]:
            # validate USV:
            try:
                dusv = int(r[usvCol], 16)
            except ValueError:
                logger.log("Line {0}: Invalid USV '{1}'; ignored.".format(incsv.line_num, r[usvCol]), "W")
                dusv = None
        else:
            dusv = None

        if checkname(oldname, newname):
            glist.append(Glyph(oldname, newname, psname, dusv))

    # glyphs specified on the command line
    if args.name:
        for gname in args.name:
            if checkname(gname):
                glist.append(Glyph(gname))

    # Ok, now process them:
    if len(glist) == 0:
        logger.log("No glyphs to copy", "S")

    # copy glyphs by name
    while len(glist):
        g = glist.pop(0)
        tfont.logger.log("Copying source glyph '{0}' as '{1}'{2}".format(g.oldname, g.newname,
                         " (U+{0:04X})".format(g.dusv) if g.dusv else ""), "I")
        copyglyph(sfont, tfont, g, args)

    return tfont

def cmd(): execute("UFO", doit, argspec)
if __name__ == "__main__": cmd()
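The `gcopyRE` pattern above peels a trailing `.copyN` suffix off a glyph name so that a fresh unique `.copyN+1` name can be computed. Its behaviour in isolation:

```python
import re

# Same pattern as in psfcopyglyphs: group 1 is the base name,
# group 2 is the copy index if a ".copyN" suffix is present.
gcopyRE = re.compile(r'(^.+?)(?:\.copy(\d+))?$')

print(gcopyRE.match('acute').groups())        # ('acute', None)
print(gcopyRE.match('acute.copy2').groups())  # ('acute', '2')
```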
148  src/silfont/scripts/psfcopymeta.py  Normal file
@@ -0,0 +1,148 @@
#!/usr/bin/env python3
__doc__ = '''Copy metadata between fonts in different (related) families
Usually run against the master (regular) font in each family then data synced within family afterwards'''
__url__ = 'https://github.com/silnrsi/pysilfont'
__copyright__ = 'Copyright (c) 2017 SIL International (https://www.sil.org)'
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
__author__ = 'David Raymond'

from silfont.core import execute
import silfont.ufo as UFO
from xml.etree import ElementTree as ET

argspec = [
    ('fromfont', {'help': 'From font file'}, {'type': 'infont'}),
    ('tofont', {'help': 'To font file'}, {'type': 'infont'}),
    ('-l', '--log', {'help': 'Log file'}, {'type': 'outfile', 'def': '_copymeta.log'}),
    ('-r', '--reportonly', {'help': 'Report issues but no updating', 'action': 'store_true', 'default': False}, {})
]

def doit(args):

    fields = ["copyright", "openTypeNameDescription", "openTypeNameDesigner", "openTypeNameDesignerURL", "openTypeNameLicense",  # General fields
              "openTypeNameLicenseURL", "openTypeNameManufacturer", "openTypeNameManufacturerURL", "openTypeOS2CodePageRanges",
              "openTypeOS2UnicodeRanges", "openTypeOS2VendorID", "trademark",
              "openTypeNameVersion", "versionMajor", "versionMinor",  # Version fields
              "ascender", "descender", "openTypeHheaAscender", "openTypeHheaDescender", "openTypeHheaLineGap",  # Design fields
              "openTypeOS2TypoAscender", "openTypeOS2TypoDescender", "openTypeOS2TypoLineGap", "openTypeOS2WinAscent", "openTypeOS2WinDescent"]
    libfields = ["public.postscriptNames", "public.glyphOrder", "com.schriftgestaltung.glyphOrder"]

    fromfont = args.fromfont
    tofont = args.tofont
    logger = args.logger
    reportonly = args.reportonly

    updatemessage = " to be updated: " if reportonly else " updated: "
    precision = fromfont.paramset["precision"]
    # Increase screen logging level to W unless specific level supplied on command-line
    if not (args.quiet or "scrlevel" in args.paramsobj.sets["command line"]): logger.scrlevel = "W"

    # Process fontinfo.plist
    ffi = fromfont.fontinfo
    tfi = tofont.fontinfo
    fupdated = False
    for field in fields:
        if field in ffi:
            felem = ffi[field][1]
            ftag = felem.tag
            ftext = felem.text
            if ftag == 'real': ftext = processnum(ftext, precision)
            message = field + updatemessage

            if field in tfi:  # Need to compare values to see if update is needed
                telem = tfi[field][1]
                ttag = telem.tag
                ttext = telem.text
                if ttag == 'real': ttext = processnum(ttext, precision)

                if ftag in ("real", "integer", "string"):
                    if ftext != ttext:
                        if field == "openTypeNameLicense":  # Too long to display all
                            addmess = " Old: '" + ttext[0:80] + "...' New: '" + ftext[0:80] + "...'"
                        else: addmess = " Old: '" + ttext + "' New: '" + str(ftext) + "'"
                        telem.text = ftext
                        logger.log(message + addmess, "W")
                        fupdated = True
                elif ftag in ("true", "false"):
                    if ftag != ttag:
                        tfi.setelem(field, ET.fromstring("<" + ftag + "/>"))
                        logger.log(message + " Old: '" + ttag + "' New: '" + str(ftag) + "'", "W")
                        fupdated = True
                elif ftag == "array":  # Assume simple array with just values to compare
                    farray = []
                    for subelem in felem: farray.append(subelem.text)
                    tarray = []
                    for subelem in telem: tarray.append(subelem.text)
                    if farray != tarray:
                        tfi.setelem(field, ET.fromstring(ET.tostring(felem)))
                        logger.log(message + "Some values different Old: " + str(tarray) + " New: " + str(farray), "W")
                        fupdated = True
                else: logger.log("Non-standard fontinfo field type: " + ftag + " in " + field, "S")
            else:
                tfi.addelem(field, ET.fromstring(ET.tostring(felem)))
                logger.log(message + "is missing from destination font so will be copied from source font", "W")
                fupdated = True
        else:  # Field not in from font
            if field in tfi:
                logger.log(field + " is missing from source font but present in destination font", "E")
            else:
                logger.log(field + " is in neither font", "W")

    # Process lib.plist - currently just public.postscriptNames and glyph order fields which are all simple dicts or arrays
    flib = fromfont.lib
    tlib = tofont.lib
    lupdated = False
    for field in libfields:
        action = None
        if field in flib:
            if field in tlib:  # Need to compare values to see if update is needed
                if flib.getval(field) != tlib.getval(field):
                    action = "Updatefield"
            else:
                action = "Copyfield"
        else:
            action = "Error" if field in ("public.glyphOrder", "public.postscriptNames") else "Warn"
            issue = field + " not in source font lib.plist"

        # Process the actions, create log messages etc
        if action is None or action == "Ignore":
            pass
        elif action == "Warn":
            logger.log(field + " needs manual correction: " + issue, "W")
        elif action == "Error":
            logger.log(field + " needs manual correction: " + issue, "E")
        elif action in ("Updatefield", "Copyfield"):  # Updating actions
            lupdated = True
            message = field + updatemessage
            if action == "Copyfield":
                message = message + "is missing so will be copied from source font"
                tlib.addelem(field, ET.fromstring(ET.tostring(flib[field][1])))
            elif action == "Updatefield":
                message = message + "Some values different"
                tlib.setelem(field, ET.fromstring(ET.tostring(flib[field][1])))
            logger.log(message, "W")
        else:
            logger.log("Uncoded action: " + action + " - oops", "X")

    # Now update on disk
    if not reportonly:
        if fupdated:
            logger.log("Writing updated fontinfo.plist", "P")
            UFO.writeXMLobject(tfi, tofont.outparams, tofont.ufodir, "fontinfo.plist", True, fobject=True)
        if lupdated:
            logger.log("Writing updated lib.plist", "P")
            UFO.writeXMLobject(tlib, tofont.outparams, tofont.ufodir, "lib.plist", True, fobject=True)

    return


def processnum(text, precision):  # Apply the same processing to real numbers that normalization will
    if precision is not None:
        val = round(float(text), precision)
        if val == int(val): val = int(val)  # Remove trailing decimal .0
        text = str(val)
    return text


def cmd(): execute("UFO", doit, argspec)
if __name__ == "__main__": cmd()
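`processnum` above normalizes `<real>` values the same way UFO normalization does: round to the configured precision and drop an integral value's trailing `.0` so that, say, `700.0` in one font compares equal to `700` in the other. The function is self-contained, so its behaviour can be checked directly:

```python
def processnum(text, precision):
    # Round to the given precision and drop a trailing .0, as normalization does
    if precision is not None:
        val = round(float(text), precision)
        if val == int(val):
            val = int(val)
        text = str(val)
    return text

print(processnum("700.0", 6))       # '700'
print(processnum("0.12345678", 6))  # '0.123457'
print(processnum("1.5", None))      # '1.5' (unchanged when precision is None)
```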
228  src/silfont/scripts/psfcreateinstances.py  Normal file
@@ -0,0 +1,228 @@
|
|||
#!/usr/bin/env python3
|
||||
__doc__ = 'Generate instance UFOs from a designspace document and master UFOs'
|
||||
|
||||
# Python 2.7 script to build instance UFOs from a designspace document
|
||||
# If a file is given, all instances are built
|
||||
# A particular instance to build can be specified using the -i option
|
||||
# and the 'name' attribute value for an 'instance' element in the designspace file
|
||||
# Or it can be specified using the -a and -v options
|
||||
# to specify any attribute and value pair for an 'instance' in the designspace file
|
||||
# If more than one instances matches, all will be built
|
||||
# A prefix for the output path can be specified (for smith processing)
|
||||
# If the location of an instance UFO matches a master's location,
|
||||
# glyphs are copied instead of calculated
|
||||
# This allows instances to build with glyphs that are not interpolatable
|
||||
# An option exists to calculate glyphs instead of copying them
|
||||
# If a folder is given using an option, all instances in all designspace files are built
|
||||
# Specifying an instance to build or an output path prefix is not supported with a folder
|
||||
# Also, all glyphs will be calculated
|
||||
|
||||
__url__ = 'https://github.com/silnrsi/pysilfont'
|
||||
__copyright__ = 'Copyright (c) 2018 SIL International (https://www.sil.org)'
|
||||
__license__ = 'Released under the MIT License (https://opensource.org/licenses/MIT)'
|
||||
__author__ = 'Alan Ward'
|
||||
|
||||
import os, re
|
||||
from mutatorMath.ufo.document import DesignSpaceDocumentReader
|
||||
from mutatorMath.ufo.instance import InstanceWriter
|
||||
from fontMath.mathGlyph import MathGlyph
|
||||
from mutatorMath.ufo import build as build_designspace
|
||||
from silfont.core import execute
|
||||
|
||||
argspec = [
|
||||
('designspace_path', {'help': 'Path to designspace document (or folder of them)'}, {}),
|
||||
('-i', '--instanceName', {'help': 'Font name for instance to build'}, {}),
|
||||
('-a', '--instanceAttr', {'help': 'Attribute used to specify instance to build'}, {}),
|
||||
('-v', '--instanceVal', {'help': 'Value of attribute specifying instance to build'}, {}),
|
||||
('-f', '--folder', {'help': 'Build all designspace files in a folder','action': 'store_true'}, {}),
|
||||
('-o', '--output', {'help': 'Prepend path to all output paths'}, {}),
|
||||
('--forceInterpolation', {'help': 'If an instance matches a master, calculate glyphs instead of copying them',
|
||||
'action': 'store_true'}, {}),
|
||||
('--roundInstances', {'help': 'Apply integer rounding to all geometry when interpolating',
|
||||
'action': 'store_true'}, {}),
|
||||
('-l','--log',{'help': 'Log file (default: *_createinstances.log)'}, {'type': 'outfile', 'def': '_createinstances.log'}),
|
||||
('-W','--weightfix',{'help': 'Enable RIBBI style weight fixing', 'action': 'store_true'}, {}),
|
||||
]
|
||||
|
||||
# Class factory to wrap a subclass in a closure to store values not defined in the original class
|
||||
# that our method overrides will utilize
|
||||
# The class methods will fail unless the class is generated by the factory, which is enforced by scoping
|
||||
# Using class attribs or global variables would violate encapsulation even more
|
||||
# and would only allow for one instance of the class
|
||||
|
||||
weightClasses = {
|
||||
'bold': 700
|
||||
}
|
||||
|
||||
def InstanceWriterCF(output_path_prefix, calc_glyphs, fix_weight):
|
||||
|
||||
class LocalInstanceWriter(InstanceWriter):
|
||||
fixWeight = fix_weight
|
||||
|
||||
def __init__(self, path, *args, **kw):
|
||||
if output_path_prefix:
|
||||
path = os.path.join(output_path_prefix, path)
|
||||
return super(LocalInstanceWriter, self).__init__(path, *args, **kw)
|
||||
|
||||
# Override the method used to calculate glyph geometry
|
||||
# If copy_glyphs is true and the glyph being processed is in the same location
|
||||
# (has all the same axes values) as a master UFO,
|
||||
# then extract the glyph geometry directly into the target glyph.
|
||||
# FYI, in the superclass method, m = buildMutator(); m.makeInstance() returns a MathGlyph
|
||||
def _calculateGlyph(self, targetGlyphObject, instanceLocationObject, glyphMasters):
|
||||
# Search for a glyphMaster with the same location as instanceLocationObject
|
||||
found = False
|
||||
if not calc_glyphs: # i.e. if copying glyphs
|
||||
for item in glyphMasters:
|
||||
locationObject = item['location'] # mutatorMath Location
|
||||
if locationObject.sameAs(instanceLocationObject) == 0:
|
||||
found = True
|
||||
fontObject = item['font'] # defcon Font
|
||||
glyphName = item['glyphName'] # string
|
||||
glyphObject = MathGlyph(fontObject[glyphName])
|
||||
glyphObject.extractGlyph(targetGlyphObject, onlyGeometry=True)
|
||||
break
|
||||
|
||||
if not found: # includes case of calc_glyphs == True
|
||||
super(LocalInstanceWriter, self)._calculateGlyph(targetGlyphObject,
|
||||
instanceLocationObject,
|
||||
glyphMasters)
|
||||
|
||||
def _copyFontInfo(self, targetInfo, sourceInfo):
|
||||
super(LocalInstanceWriter, self)._copyFontInfo(targetInfo, sourceInfo)
|
||||
|
||||
if getattr(self, 'fixWeight', False):
|
||||
# fixWeight is True since the --weightfix (or -W) option was specified
|
||||
|
||||
# This mode is used for RIBBI font builds,
|
||||
# therefore the weight class can be determined
|
||||
# by the style name
|
||||
if self.font.info.styleMapStyleName.lower().startswith("bold"):
|
||||
weight_class = 700
|
||||
else:
|
||||
weight_class = 400
|
||||
else:
|
||||
# fixWeight is False (or None)
|
||||
|
||||
# This mode is used for non-RIBBI font builds,
|
||||
# therefore the weight class can be determined
|
||||
# by the weight axis map in the Designspace file
|
||||
foundmap = False
|
||||
weight = int(self.locationObject["weight"])
|
||||
for map_space in self.axes["weight"]["map"]:
|
||||
userspace = int(map_space[0]) # called input in the Designspace file
|
||||
designspace = int(map_space[1]) # called output in the Designspace file
|
||||
if designspace == weight:
|
||||
weight_class = userspace
|
||||
foundmap = True
|
||||
if not foundmap:
|
||||
weight_class = 399 # Dummy value designed to look non-standard
|
||||
logger.log(f'No entry in designspace axis mapping for {weight}; set to 399', 'W')
|
||||
setattr(targetInfo, 'openTypeOS2WeightClass', weight_class)
|
||||
|
||||
localinfo = {}
|
||||
for k in (('openTypeNameManufacturer', None),
|
||||
('styleMapFamilyName', 'familyName'),
|
||||
('styleMapStyleName', 'styleName')):
|
||||
localinfo[k[0]] = getattr(targetInfo, k[0], (getattr(targetInfo, k[1]) if k[1] is not None else ""))
|
||||
localinfo['styleMapStyleName'] = localinfo['styleMapStyleName'].title()
|
||||
localinfo['year'] = re.sub(r'^.*?([0-9]+)\s*$', r'\1', getattr(targetInfo, 'openTypeNameUniqueID'))
|
||||
uniqueID = "{openTypeNameManufacturer}: {styleMapFamilyName} {styleMapStyleName} {year}".format(**localinfo)
|
||||
setattr(targetInfo, 'openTypeNameUniqueID', uniqueID)
|
||||
|
||||
return LocalInstanceWriter
|
||||
|
||||
logger = None
|
||||
severe_error = False
|
||||
def progress_func(state="update", action=None, text=None, tick=0):
|
||||
global severe_error
|
||||
if logger:
|
||||
if state == 'error':
|
||||
if str(action) == 'unicodes':
|
||||
logger.log("%s: %s\n%s" % (state, str(action), str(text)), 'W')
|
||||
else:
|
||||
logger.log("%s: %s\n%s" % (state, str(action), str(text)), 'E')
|
||||
severe_error = True
|
||||
else:
|
||||
logger.log("%s: %s\n%s" % (state, str(action), str(text)), 'I')
|
||||
|
||||
def doit(args):
|
||||
global logger
|
||||
logger = args.logger
|
||||
|
||||
designspace_path = args.designspace_path
|
||||
instance_font_name = args.instanceName
|
||||
instance_attr = args.instanceAttr
|
||||
instance_val = args.instanceVal
|
||||
output_path_prefix = args.output
|
||||
calc_glyphs = args.forceInterpolation
|
||||
build_folder = args.folder
|
||||
round_instances = args.roundInstances
|
||||
|
||||
if instance_font_name and (instance_attr or instance_val):
|
||||
args.logger.log('--instanceName is mutually exclusive with --instanceAttr or --instanceVal','S')
|
||||
if (instance_attr and not instance_val) or (instance_val and not instance_attr):
|
||||
args.logger.log('--instanceAttr and --instanceVal must be used together', 'S')
|
||||
if (build_folder and (instance_font_name or instance_attr or instance_val
|
||||
or output_path_prefix or calc_glyphs)):
|
||||
args.logger.log('--folder cannot be used with options: -i, -a, -v, -o, --forceInterpolation', 'S')
|
||||
|
||||
    args.logger.log('Interpolating master UFOs from designspace', 'P')
    if not build_folder:
        if not os.path.isfile(designspace_path):
            args.logger.log('A designspace file (not a folder) is required', 'S')
        reader = DesignSpaceDocumentReader(designspace_path, ufoVersion=3,
                                           roundGeometry=round_instances,
                                           progressFunc=progress_func)
        # Assigning to an internal object variable is a kludge; subclassing would be cleaner
        reader._instanceWriterClass = InstanceWriterCF(output_path_prefix, calc_glyphs, args.weightfix)
        if calc_glyphs:
            args.logger.log('Interpolating glyphs where an instance font location matches a master', 'P')
        if instance_font_name or instance_attr:
            key_attr = instance_attr if instance_val else 'name'
            key_val = instance_val if instance_attr else instance_font_name
            reader.readInstance((key_attr, key_val))
        else:
            reader.readInstances()
    else:
        # The call below uses a utility function that is part of mutatorMath;
        # it accepts a folder and processes all designspace files found there
        args.logger.log('Interpolating glyphs where an instance font location matches a master', 'P')
        build_designspace(designspace_path,
                          outputUFOFormatVersion=3, roundGeometry=round_instances,
                          progressFunc=progress_func)

    if not severe_error:
        args.logger.log('Done', 'P')
    else:
        args.logger.log('Done with severe error', 'S')


def cmd(): execute(None, doit, argspec)
if __name__ == '__main__': cmd()

# Future development might use fonttools\Lib\fontTools\designspaceLib to read
# the designspace file (which is the most up-to-date approach), then pass that
# object to mutatorMath, but there's no way to do that today.


# For reference, from mutatorMath/ufo/__init__.py:
#
# build() is a convenience function for reading and executing a designspace file:
#   documentPath:           filepath to the .designspace document
#   outputUFOFormatVersion: UFO format for output
#   verbose:                True / False for lots or no feedback [to log file]
#   logPath:                filepath to a log file
#   progressFunc:           an optional callback to report progress;
#                           see mutatorMath.ufo.tokenProgressFunc
#
# class DesignSpaceDocumentReader(object):
#     def __init__(self, documentPath,
#                  ufoVersion,
#                  roundGeometry=False,
#                  verbose=False,
#                  logPath=None,
#                  progressFunc=None
#                  ):
#
#     def readInstance(self, key, makeGlyphs=True, makeKerning=True, makeInfo=True):
#     def readInstances(self, makeGlyphs=True, makeKerning=True, makeInfo=True):