
Commit 93aa988

Merge pull request #6901 from RonnyPfannschmidt/regendoc-fix-simple
run and fix tox -e regen to prepare 5.4
2 parents 7996724 + 378a75d

17 files changed: +158 -13 lines

doc/en/assert.rst (+6)

@@ -47,6 +47,8 @@ you will see the return value of the function call:
     E        +  where 3 = f()

     test_assert1.py:6: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_assert1.py::test_function - assert 3 == 4
     ============================ 1 failed in 0.12s =============================

 ``pytest`` has support for showing the values of the most common subexpressions
@@ -208,6 +210,8 @@ if you run this module:
     E         Use -v to get the full diff

     test_assert2.py:6: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_assert2.py::test_set_comparison - AssertionError: assert {'0'...
     ============================ 1 failed in 0.12s =============================

 Special comparisons are done for a number of cases:
@@ -279,6 +283,8 @@ the conftest file:
     E         vals: 1 != 2

     test_foocompare.py:12: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_foocompare.py::test_compare - assert Comparing Foo instances:
     1 failed in 0.12s

 .. _assert-details:
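
For context, the module these three runs quote is the docs' assertion-introspection example; a minimal sketch consistent with the first output above (reconstructed from the traceback, so details may differ from the real `test_assert1.py`):

```python
# test_assert1.py -- sketch reconstructed from the output above
def f():
    return 3


def test_function():
    # fails; pytest's introspection reports "assert 3 == 4" and "where 3 = f()"
    assert f() == 4
```

With pytest 5.4, a failing run of this file now also ends with the `short test summary info` block added in these hunks, one `FAILED` line per failing test.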

doc/en/builtin.rst (+2)

@@ -137,9 +137,11 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
     tmpdir_factory [session scope]
         Return a :class:`_pytest.tmpdir.TempdirFactory` instance for the test session.

+
     tmp_path_factory [session scope]
         Return a :class:`_pytest.tmpdir.TempPathFactory` instance for the test session.

+
     tmpdir
         Return a temporary directory path object
         which is unique to each test function invocation,
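
As a usage note, the session-scoped factories listed in this hunk are what fixtures use to share temporary directories across tests; a minimal sketch (the fixture name `image_file` and its contents are illustrative, not from the docs):

```python
# conftest.py -- sketch: one shared temporary file per session
import pytest


@pytest.fixture(scope="session")
def image_file(tmp_path_factory):
    # mktemp() creates a fresh, uniquely named directory under the base temp dir
    fn = tmp_path_factory.mktemp("data") / "img.png"
    fn.write_bytes(b"fake image bytes")  # placeholder content for the example
    return fn
```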

doc/en/cache.rst (+15 -2)

@@ -75,6 +75,9 @@ If you run this for the first time you will see two failures:
     E       Failed: bad luck

     test_50.py:7: Failed
+    ========================= short test summary info ==========================
+    FAILED test_50.py::test_num[17] - Failed: bad luck
+    FAILED test_50.py::test_num[25] - Failed: bad luck
     2 failed, 48 passed in 0.12s

 If you then run it with ``--lf``:
@@ -86,7 +89,7 @@ If you then run it with ``--lf``:
     platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
     cachedir: $PYTHON_PREFIX/.pytest_cache
     rootdir: $REGENDOC_TMPDIR
-    collected 50 items / 48 deselected / 2 selected
+    collected 2 items
     run-last-failure: rerun previous 2 failures

     test_50.py FF                                                        [100%]
@@ -114,7 +117,10 @@ If you then run it with ``--lf``:
     E       Failed: bad luck

     test_50.py:7: Failed
-    ===================== 2 failed, 48 deselected in 0.12s =====================
+    ========================= short test summary info ==========================
+    FAILED test_50.py::test_num[17] - Failed: bad luck
+    FAILED test_50.py::test_num[25] - Failed: bad luck
+    ============================ 2 failed in 0.12s =============================

 You have run only the two failing tests from the last run, while the 48 passing
 tests have not been run ("deselected").
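
Per the surrounding docs, `test_50.py` is a parametrized module that fails for exactly two of fifty inputs; a sketch consistent with the `test_num[17]`/`test_num[25]` failures above:

```python
# test_50.py -- sketch matching the two failures shown in the output
import pytest


@pytest.mark.parametrize("i", range(50))
def test_num(i):
    if i in (17, 25):
        pytest.fail("bad luck")
```

The regenerated `--lf` output also records a behavior change: pytest 5.4 collects only the two previously failing tests ("collected 2 items") instead of collecting all 50 and deselecting 48.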
@@ -158,6 +164,9 @@ of ``FF`` and dots):
     E       Failed: bad luck

     test_50.py:7: Failed
+    ========================= short test summary info ==========================
+    FAILED test_50.py::test_num[17] - Failed: bad luck
+    FAILED test_50.py::test_num[25] - Failed: bad luck
     ======================= 2 failed, 48 passed in 0.12s =======================

 .. _`config.cache`:
@@ -230,6 +239,8 @@ If you run this command for the first time, you can see the print statement:
     test_caching.py:20: AssertionError
     -------------------------- Captured stdout setup ---------------------------
     running expensive computation...
+    ========================= short test summary info ==========================
+    FAILED test_caching.py::test_function - assert 42 == 23
     1 failed in 0.12s

 If you run it a second time, the value will be retrieved from
@@ -249,6 +260,8 @@ the cache and nothing will be printed:
     E       assert 42 == 23

     test_caching.py:20: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_caching.py::test_function - assert 42 == 23
     1 failed in 0.12s

 See the :fixture:`config.cache fixture <config.cache>` for more details.
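
The `test_caching.py` referenced here is the docs' `config.cache` example: a fixture computes a value once, caches it, and later runs read it back; a sketch consistent with the output (the cache key follows the docs' style):

```python
# test_caching.py -- sketch: cache an expensive value across pytest runs
import pytest


def expensive_computation():
    print("running expensive computation...")


@pytest.fixture
def mydata(request):
    # config.cache persists values in .pytest_cache between runs
    val = request.config.cache.get("example/value", None)
    if val is None:
        expensive_computation()
        val = 42
        request.config.cache.set("example/value", val)
    return val


def test_function(mydata):
    assert mydata == 23  # always fails; the second run prints nothing
```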

doc/en/capture.rst (+2)

@@ -100,6 +100,8 @@ of the failing function and hide the other one:
     test_module.py:12: AssertionError
     -------------------------- Captured stdout setup ---------------------------
     setting up <function test_func2 at 0xdeadbeef>
+    ========================= short test summary info ==========================
+    FAILED test_module.py::test_func2 - assert False
     ======================= 1 failed, 1 passed in 0.12s ========================

 Accessing captured output from a test function
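
The section this hunk leads into covers reading captured output from inside a test; a minimal sketch using the built-in `capsys` fixture:

```python
# sketch: snapshot captured stdout/stderr with the capsys fixture
def test_output(capsys):
    print("hello")
    captured = capsys.readouterr()  # everything captured since the last call
    assert captured.out == "hello\n"
```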

doc/en/example/markers.rst (+7)

@@ -715,6 +715,9 @@ We can now use the ``-m option`` to select one set:
     test_module.py:8: in test_interface_complex
         assert 0
     E   assert 0
+    ========================= short test summary info ==========================
+    FAILED test_module.py::test_interface_simple - assert 0
+    FAILED test_module.py::test_interface_complex - assert 0
     ===================== 2 failed, 2 deselected in 0.12s ======================

 or to select both "event" and "interface" tests:
@@ -743,4 +746,8 @@ or to select both "event" and "interface" tests:
     test_module.py:12: in test_event_simple
         assert 0
     E   assert 0
+    ========================= short test summary info ==========================
+    FAILED test_module.py::test_interface_simple - assert 0
+    FAILED test_module.py::test_interface_complex - assert 0
+    FAILED test_module.py::test_event_simple - assert 0
     ===================== 3 failed, 1 deselected in 0.12s ======================
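
For context, runs like the two above combine `@pytest.mark` decorators with `-m` selection expressions; a simplified sketch (the docs' actual example applies these markers from a `conftest.py` hook based on test names):

```python
# test_module.py -- sketch: tests selectable with -m
import pytest


@pytest.mark.interface
def test_interface_simple():
    assert 0


@pytest.mark.event
def test_event_simple():
    assert 0
```

Then `pytest -m interface` selects only the first test, and `pytest -m "interface or event"` selects both.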

doc/en/example/nonpython.rst (+4)

@@ -41,6 +41,8 @@ now execute the test specification:
     usecase execution failed
        spec failed: 'some': 'other'
        no further details known at this point.
+    ========================= short test summary info ==========================
+    FAILED test_simple.yaml::hello
     ======================= 1 failed, 1 passed in 0.12s ========================

 .. regendoc:wipe
@@ -77,6 +79,8 @@ consulted when reporting in ``verbose`` mode:
     usecase execution failed
        spec failed: 'some': 'other'
        no further details known at this point.
+    ========================= short test summary info ==========================
+    FAILED test_simple.yaml::hello
     ======================= 1 failed, 1 passed in 0.12s ========================

 .. regendoc:wipe
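
These runs come from the docs' non-Python collection example, where a `conftest.py` turns `test_*.yaml` files into test items; a condensed sketch of that pattern using the pytest 5.x collection API (requires PyYAML; details trimmed from the full example):

```python
# conftest.py -- sketch: collect test_*.yaml files as pytest items
import pytest


def pytest_collect_file(parent, path):
    if path.ext == ".yaml" and path.basename.startswith("test"):
        return YamlFile(path, parent)


class YamlFile(pytest.File):
    def collect(self):
        import yaml

        raw = yaml.safe_load(self.fspath.open())
        for name, spec in sorted(raw.items()):
            yield YamlItem(name, self, spec)


class YamlItem(pytest.Item):
    def __init__(self, name, parent, spec):
        super().__init__(name, parent)
        self.spec = spec

    def runtest(self):
        # each key/value pair is one assertion of the "usecase"
        for name, value in sorted(self.spec.items()):
            if name != value:
                raise YamlException(self, name, value)

    def repr_failure(self, excinfo):
        # produces the custom failure text quoted in the output above
        if isinstance(excinfo.value, YamlException):
            return "\n".join(
                [
                    "usecase execution failed",
                    "   spec failed: {1!r}: {2!r}".format(*excinfo.value.args),
                    "   no further details known at this point.",
                ]
            )

    def reportinfo(self):
        return self.fspath, 0, "usecase: {}".format(self.name)


class YamlException(Exception):
    """custom exception for error reporting"""
```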

doc/en/example/parametrize.rst (+10 -7)

@@ -73,6 +73,8 @@ let's run the full monty:
     E        assert 4 < 4

     test_compute.py:4: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_compute.py::test_compute[4] - assert 4 < 4
     1 failed, 4 passed in 0.12s

 As expected when running the full range of ``param1`` values
@@ -343,6 +345,8 @@ And then when we run the test:
     E           Failed: deliberately failing for demo purposes

     test_backends.py:8: Failed
+    ========================= short test summary info ==========================
+    FAILED test_backends.py::test_db_initialized[d2] - Failed: deliberately f...
     1 failed, 1 passed in 0.12s

 The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function has instantiated each of the DB values during the setup phase while the ``pytest_generate_tests`` generated two according calls to the ``test_db_initialized`` during the collection phase.
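
The `pytest_generate_tests` hook named in that paragraph parametrizes the `db` fixture indirectly at collection time; a sketch of the shape of that `conftest.py` (the `DB1`/`DB2` classes follow the docs' example):

```python
# conftest.py -- sketch: indirect parametrization of a fixture
import pytest


class DB1:
    """one database object"""


class DB2:
    """alternative database object"""


@pytest.fixture
def db(request):
    # runs at setup time, once per generated test
    if request.param == "d1":
        return DB1()
    elif request.param == "d2":
        return DB2()
    else:
        raise ValueError("invalid internal test config")


def pytest_generate_tests(metafunc):
    # runs at collection time; indirect=True routes each param through the fixture
    if "db" in metafunc.fixturenames:
        metafunc.parametrize("db", ["d1", "d2"], indirect=True)
```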
@@ -457,6 +461,8 @@ argument sets to use for each test function. Let's run it:
     E       assert 1 == 2

     test_parametrize.py:21: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_parametrize.py::TestClass::test_equals[1-2] - assert 1 == 2
     1 failed, 2 passed in 0.12s

 Indirect parametrization with multiple fixtures
@@ -478,11 +484,8 @@ Running it results in some skips if we don't have all the python interpreters in
 .. code-block:: pytest

     . $ pytest -rs -q multipython.py
-    ssssssssssss...ssssssssssss                                          [100%]
-    ========================= short test summary info ==========================
-    SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:29: 'python3.5' not found
-    SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:29: 'python3.7' not found
-    3 passed, 24 skipped in 0.12s
+    ...........................                                          [100%]
+    27 passed in 0.12s

 Indirect parametrization of optional implementations/imports
 --------------------------------------------------------------------
@@ -607,13 +610,13 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker:
     platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
     cachedir: $PYTHON_PREFIX/.pytest_cache
     rootdir: $REGENDOC_TMPDIR
-    collecting ... collected 17 items / 14 deselected / 3 selected
+    collecting ... collected 14 items / 11 deselected / 3 selected

     test_pytest_param_example.py::test_eval[1+7-8] PASSED               [ 33%]
     test_pytest_param_example.py::test_eval[basic_2+4] PASSED           [ 66%]
     test_pytest_param_example.py::test_eval[basic_6*9] XFAIL            [100%]

-    =============== 2 passed, 14 deselected, 1 xfailed in 0.12s ================
+    =============== 2 passed, 11 deselected, 1 xfailed in 0.12s ================

 As the result:

doc/en/example/reportingdemo.rst (+47 -2)

@@ -436,7 +436,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
         items = [1, 2, 3]
         print("items is {!r}".format(items))
     >   a, b = items.pop()
-    E   TypeError: 'int' object is not iterable
+    E   TypeError: cannot unpack non-iterable int object

     failure_demo.py:181: TypeError
     --------------------------- Captured stdout call ---------------------------
@@ -516,7 +516,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
         def test_z2_type_error(self):
             items = 3
     >       a, b = items
-    E       TypeError: 'int' object is not iterable
+    E       TypeError: cannot unpack non-iterable int object

     failure_demo.py:222: TypeError
     ______________________ TestMoreErrors.test_startswith ______________________
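
The wording change in these two hunks tracks CPython rather than pytest: newer Python 3 releases report failed unpacking of a non-iterable with a more specific message, so the regenerated output picks it up. A two-line illustration:

```python
items = 3
a, b = items  # newer Python 3: "TypeError: cannot unpack non-iterable int object"
```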
@@ -650,4 +650,49 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     E        +  where 1 = This is JSON\n{\n  'foo': 'bar'\n}.a

     failure_demo.py:282: AssertionError
+    ========================= short test summary info ==========================
+    FAILED failure_demo.py::test_generative[3-6] - assert (3 * 2) < 6
+    FAILED failure_demo.py::TestFailing::test_simple - assert 42 == 43
+    FAILED failure_demo.py::TestFailing::test_simple_multiline - assert 42 == 54
+    FAILED failure_demo.py::TestFailing::test_not - assert not 42
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_text - Asser...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_similar_text
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_multiline_text
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_long_text - ...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_long_text_multiline
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_list - asser...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_list_long - ...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_dict - Asser...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_set - Assert...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_longer_list
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_in_list - asser...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_multiline
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long_term
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_dataclass - ...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_attrs - Asse...
+    FAILED failure_demo.py::test_attribute - assert 1 == 2
+    FAILED failure_demo.py::test_attribute_instance - AssertionError: assert ...
+    FAILED failure_demo.py::test_attribute_failure - Exception: Failed to get...
+    FAILED failure_demo.py::test_attribute_multiple - AssertionError: assert ...
+    FAILED failure_demo.py::TestRaises::test_raises - ValueError: invalid lit...
+    FAILED failure_demo.py::TestRaises::test_raises_doesnt - Failed: DID NOT ...
+    FAILED failure_demo.py::TestRaises::test_raise - ValueError: demo error
+    FAILED failure_demo.py::TestRaises::test_tupleerror - ValueError: not eno...
+    FAILED failure_demo.py::TestRaises::test_reinterpret_fails_with_print_for_the_fun_of_it
+    FAILED failure_demo.py::TestRaises::test_some_error - NameError: name 'na...
+    FAILED failure_demo.py::test_dynamic_compile_shows_nicely - AssertionError
+    FAILED failure_demo.py::TestMoreErrors::test_complex_error - assert 44 == 43
+    FAILED failure_demo.py::TestMoreErrors::test_z1_unpack_error - ValueError...
+    FAILED failure_demo.py::TestMoreErrors::test_z2_type_error - TypeError: c...
+    FAILED failure_demo.py::TestMoreErrors::test_startswith - AssertionError:...
+    FAILED failure_demo.py::TestMoreErrors::test_startswith_nested - Assertio...
+    FAILED failure_demo.py::TestMoreErrors::test_global_func - assert False
+    FAILED failure_demo.py::TestMoreErrors::test_instance - assert 42 != 42
+    FAILED failure_demo.py::TestMoreErrors::test_compare - assert 11 < 5
+    FAILED failure_demo.py::TestMoreErrors::test_try_finally - assert 1 == 0
+    FAILED failure_demo.py::TestCustomAssertMsg::test_single_line - Assertion...
+    FAILED failure_demo.py::TestCustomAssertMsg::test_multiline - AssertionEr...
+    FAILED failure_demo.py::TestCustomAssertMsg::test_custom_repr - Assertion...
     ============================ 44 failed in 0.12s ============================

doc/en/example/simple.rst (+22 -1)

@@ -65,6 +65,8 @@ Let's run this without supplying our new option:
     test_sample.py:6: AssertionError
     --------------------------- Captured stdout call ---------------------------
     first
+    ========================= short test summary info ==========================
+    FAILED test_sample.py::test_answer - assert 0
     1 failed in 0.12s

 And now with supplying a command line option:
@@ -89,6 +91,8 @@ And now with supplying a command line option:
     test_sample.py:6: AssertionError
     --------------------------- Captured stdout call ---------------------------
     second
+    ========================= short test summary info ==========================
+    FAILED test_sample.py::test_answer - assert 0
     1 failed in 0.12s

 You can see that the command line option arrived in our test. This
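
The option these two runs exercise is defined in the docs' `conftest.py` via `pytest_addoption` and exposed as a fixture; a sketch of that pattern, including a test like the one producing the output above:

```python
# conftest.py -- sketch: add a CLI option and expose it as a fixture
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--cmdopt",
        action="store",
        default="type1",
        help="my option: type1 or type2",
    )


@pytest.fixture
def cmdopt(request):
    return request.config.getoption("--cmdopt")
```

```python
# test_sample.py -- sketch of the test reading the option
def test_answer(cmdopt):
    if cmdopt == "type1":
        print("first")
    elif cmdopt == "type2":
        print("second")
    assert 0  # fail on purpose so the captured print is shown
```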
@@ -261,6 +265,8 @@ Let's run our little function:
     E       Failed: not configured: 42

     test_checkconfig.py:11: Failed
+    ========================= short test summary info ==========================
+    FAILED test_checkconfig.py::test_something - Failed: not configured: 42
     1 failed in 0.12s

 If you only want to hide certain exceptions, you can set ``__tracebackhide__``
@@ -443,7 +449,7 @@ Now we can profile which test functions execute the slowest:
     ========================= slowest 3 test durations =========================
     0.30s call test_some_are_slow.py::test_funcslow2
     0.20s call test_some_are_slow.py::test_funcslow1
-    0.11s call test_some_are_slow.py::test_funcfast
+    0.10s call test_some_are_slow.py::test_funcfast
     ============================ 3 passed in 0.12s =============================

 incremental testing - test steps
@@ -461,6 +467,9 @@ an ``incremental`` marker which is to be used on classes:

     # content of conftest.py

+    from typing import Dict, Tuple
+    import pytest
+
     # store history of failures per test class name and per index in parametrize (if parametrize used)
     _test_failed_incremental: Dict[str, Dict[Tuple[int, ...], str]] = {}

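The imports added by this hunk belong to the docs' incremental-testing `conftest.py`; a condensed sketch of the machinery those names feed (trimmed from the full example, so treat it as an outline):

```python
# conftest.py -- condensed sketch of the "incremental" marker
from typing import Dict, Tuple

import pytest

# store history of failures per test class name and per parametrize index
_test_failed_incremental: Dict[str, Dict[Tuple[int, ...], str]] = {}


def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords and call.excinfo is not None:
        cls_name = str(item.cls)
        # index of the parametrized invocation, () if not parametrized
        param = tuple(item.callspec.indices.values()) if hasattr(item, "callspec") else ()
        _test_failed_incremental.setdefault(cls_name, {}).setdefault(
            param, item.originalname or item.name
        )


def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        cls_name = str(item.cls)
        param = tuple(item.callspec.indices.values()) if hasattr(item, "callspec") else ()
        test_name = _test_failed_incremental.get(cls_name, {}).get(param, None)
        if test_name is not None:
            # an earlier step failed for this class: xfail the remaining steps
            pytest.xfail("previous test failed ({})".format(test_name))
```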
@@ -669,6 +678,11 @@ We can run this:
     E       assert 0

     a/test_db2.py:2: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_step.py::TestUserHandling::test_modification - assert 0
+    FAILED a/test_db.py::test_a1 - AssertionError: <conftest.DB object at 0x7...
+    FAILED a/test_db2.py::test_a2 - AssertionError: <conftest.DB object at 0x...
+    ERROR b/test_error.py::test_root
     ============= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12s ==============

 The two test modules in the ``a`` directory see the same ``db`` fixture instance
@@ -758,6 +772,9 @@ and run them:
     E       assert 0

     test_module.py:6: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_module.py::test_fail1 - assert 0
+    FAILED test_module.py::test_fail2 - assert 0
     ============================ 2 failed in 0.12s =============================

 you will have a "failures" file which contains the failing test ids:
@@ -873,6 +890,10 @@ and run it:
     E       assert 0

     test_module.py:19: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_module.py::test_call_fails - assert 0
+    FAILED test_module.py::test_fail2 - assert 0
+    ERROR test_module.py::test_setup_fails - assert 0
     ======================== 2 failed, 1 error in 0.12s ========================

 You'll see that the fixture finalizers could use the precise reporting
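
The last two runs belong to the docs' post-processing example, where a hookwrapper appends failing test ids to a `failures` file; a trimmed sketch of that hook:

```python
# conftest.py -- sketch: record failing test ids in a "failures" file
import os.path

import pytest


@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    # only look at actual failing test calls, not setup/teardown
    if rep.when == "call" and rep.failed:
        mode = "a" if os.path.exists("failures") else "w"
        with open("failures", mode) as f:
            f.write(rep.nodeid + "\n")
```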
