Merged
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -176,7 +176,7 @@ jobs:
run: |
export PATH=$HOME/.local/bin:$PATH
python -m pip install --upgrade pip setuptools wheel
python -m pip install --upgrade pytest
python -m pip install --upgrade pytest pytest-benchmark

- name: Versions
run: |
7 changes: 7 additions & 0 deletions .gitignore
@@ -9,6 +9,13 @@
.venv/
.env/

# cache directories
.cache/
.mypy_cache/
.pytest_cache/
.coverage/
.benchmarks/

# doc generated
/docs/build
/docs/_build
37 changes: 36 additions & 1 deletion CONTRIBUTING.rst
@@ -19,7 +19,6 @@ It can be installed with ``pipx`` as shown below.
$ pipx upgrade poetry
$ pipx ensurepath


``runner`` Script
-----------------

@@ -45,3 +44,39 @@ See the help for various tasks to run.
.. code-block:: shell

$ ./runner help

Benchmarking
------------

The pytest suite includes some benchmarking tests.
These benchmarks are not run by default with the ``runner`` script, nor are they run in CI, since they are significantly slower than normal tests.
They are intended to help developers compare execution speeds when looking for performance improvements.
Benchmarks can be run by executing:

.. code-block:: shell

$ ./runner benchmark

Only relative comparisons are meaningful, since the computer hardware and other environmental factors affect the timings.
Thus, when comparing benchmarks, be sure to use the same computer and setup.
When running benchmarks repeatedly to compare code changes, it may be useful to run only a subset of benchmarks (or just one) so that each run completes more quickly.
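For example, a ``-k`` expression can be passed through to ``pytest`` to select a subset of benchmarks by name (the expression below is only illustrative; substitute a name from the actual test suite):

.. code-block:: shell

    $ ./runner benchmark -k "warping"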


Testing Arguments
-----------------

Testing is performed with ``pytest``, and when running ``./runner test`` or ``./runner benchmark``, any additional arguments are passed on to ``pytest``.
This can be useful for various purposes, such as running only a subset of test cases (or a single one).
A few examples are shown below; consult the ``pytest`` documentation for many more examples and details.

Skip all of the tests marked with the "slow" marker:

.. code-block:: shell

$ ./runner test -m "not slow"

Only run tests that match the expression "Rectangle":

.. code-block:: shell

$ ./runner test -k "Rectangle"
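Re-run only the tests that failed on the previous run, with verbose output (both are standard ``pytest`` options):

.. code-block:: shell

    $ ./runner test --last-failed -v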
8 changes: 0 additions & 8 deletions docs/source/rst/installation.rst
@@ -40,11 +40,3 @@ Once *meshpy* has been installed, the rest of the *sectionproperties* package can
be installed using the python package index::

$ pip install sectionproperties

Testing the Installation
------------------------

Python *unittest* modules are located in the *sectionproperties.tests* package.
To see if your installation is working correctly, run this simple test::

$ python -m unittest sectionproperties.tests.test_rectangle
61 changes: 60 additions & 1 deletion poetry.lock

Some generated files are not rendered by default.

56 changes: 35 additions & 21 deletions pyproject.toml
@@ -25,29 +25,30 @@ classifiers=[
include = ['CHANGELOG.rst']

[tool.poetry.dependencies]
python = '>=3.7,<4.0'
numpy = '^1.20'
scipy = '^1.6'
matplotlib = '^3.3'
shapely = '^1.7'
pybind11 = '^2.6'
MeshPy = '^2020.1'
python = ">=3.7,<4.0"
numpy = "^1.20"
scipy = "^1.6"
matplotlib = "^3.3"
shapely = "^1.7"
pybind11 = "^2.6"
MeshPy = "^2020.1"

[tool.poetry.dev-dependencies]
black = '20.8b1'
isort = '^5.6.4'
flake8 = '>=3.6.0'
flake8-docstrings = '^1.3.0'
flake8-bugbear = '^20.11.1'
flake8-rst-docstrings = '^0.0.14'
pylint = '^2.6.0'
rstcheck = '^3.3.1'
pytest = '^6.2.2'
coverage = { version = "^5.4", extras = ['toml'] }
pre-commit = '^2.10'
Sphinx = '^3.5'
sphinx-rtd-theme = '^0.5'
twine = '^3.4'
black = "20.8b1"
isort = "^5.6.4"
flake8 = ">=3.6.0"
flake8-docstrings = "^1.3.0"
flake8-bugbear = "^20.11.1"
flake8-rst-docstrings = "^0.0.14"
pylint = "^2.6.0"
rstcheck = "^3.3.1"
pytest = "^6.2.2"
pytest-benchmark = "^3.2.3"
coverage = { version = "^5.4", extras = ["toml"] }
pre-commit = "^2.10"
Sphinx = "^3.5"
sphinx-rtd-theme = "^0.5"
twine = "^3.4"

[tool.black]
line-length = 100
@@ -101,6 +102,19 @@ reports = 'no'
[tool.pytest.ini_options]
minversion = '6.2'
cache_dir = '.cache/pytest'
# benchmarks are very slow and so are disabled by default; run "pytest --benchmark-enable" to run them
addopts = '''
-x
--strict-markers
--benchmark-disable
--benchmark-storage=.cache/benchmarks
--benchmark-autosave
--benchmark-compare
--benchmark-group-by=name,fullname
'''
markers = [
"slow: marks tests as slow (deselect with '-m \"not slow\"')",
]
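With ``--benchmark-autosave`` configured above, each enabled benchmark run is stored under ``.cache/benchmarks``, so saved runs can later be compared against each other. A sketch using the ``pytest-benchmark`` command-line tool (assuming the default CLI shipped with the plugin):

.. code-block:: shell

    $ poetry run pytest-benchmark --storage .cache/benchmarks compare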

[tool.coverage]
[tool.coverage.run]
44 changes: 25 additions & 19 deletions runner
@@ -95,7 +95,7 @@ run_test() {
echo "--------------------------------------------------------------------"
echo "---[ Test with pytest and coverage ]--------------------------------"
echo "--------------------------------------------------------------------"
poetry run coverage run --parallel -m pytest
poetry run coverage run --parallel -m pytest "$@"
if [[ "${CI:-}" = "1" ]]; then
echo "skipping coverage combine and report until all jobs are done"
else
@@ -139,7 +139,7 @@ run_clean() {

echo "deleting temporary files and directories..."
find . -regex '^.*\(__pycache__\|\.py[co]\)$' -delete
rm -rf *.spec .eggs .tox build dist *.egg-info .cache site
rm -rf *.spec .eggs .tox build dist *.egg-info .cache site docs/build

if [ $all == "yes" ]; then
echo "deleting virtual environment..."
@@ -168,8 +168,13 @@ else
doc*)
run_docs
;;
test*)
run_test
pytest|test*)
shift 1
run_test "$@"
;;
benchmark*)
shift 1
run_test --benchmark-enable "$@"
;;
;;
build*)
run_build
@@ -186,21 +191,22 @@
echo ""
echo "Usage: run [command]"
echo ""
echo "- runner : (no arguments) to run each listed task:"
echo " - install"
echo " - format"
echo " - lint"
echo " - docs"
echo " - test"
echo " - build"
echo "- runner install : Create virtual environment and install pre-commit hooks"
echo "- runner format : Run formatters"
echo "- runner lint : Run linters/code analysis"
echo "- runner docs : Build the documentation"
echo "- runner test : Run pytest with coverage"
echo "- runner build : Build sdist and wheel distributions"
echo "- runner clean : Delete all temporary files and directories"
echo "- runner clean -a: Do clean plus delete the virutal environment"
echo "- runner : (no arguments) to run each listed task:"
echo " - install"
echo " - format"
echo " - lint"
echo " - docs"
echo " - test"
echo " - build"
echo "- runner install : Create virtual environment and install pre-commit hooks"
echo "- runner format : Run formatters"
echo "- runner lint : Run linters/code analysis"
echo "- runner docs : Build the documentation"
echo "- runner test : Run pytest with coverage"
echo "- runner build : Build sdist and wheel distributions"
echo "- runner benchmark : Run benchmarks to compare execution timings"
echo "- runner clean : Delete all temporary files and directories"
echo "- runner clean -a : Do clean plus delete the virtual environment"
echo ""
exit 1
esac