
I want to optimize data generation speed for my unit tests. The `from_regex` and `dictionaries` strategies seem to take a long time to generate examples.

Here is an example I wrote to try to benchmark example generation:

from hypothesis import given
from hypothesis.strategies import (
    booleans,
    composite,
    dictionaries,
    from_regex,
    integers,
    lists,
    one_of,
    text,
)

param_names = from_regex(r"[a-z][a-zA-Z0-9]*(_[a-zA-Z0-9]+)*", fullmatch=True)
param_values = one_of(booleans(), integers(), text(), lists(text()))


@composite
def composite_params_dicts(draw, min_size=0):
    """Provides a dictionary of parameters."""
    params = draw(
        dictionaries(keys=param_names, values=param_values, min_size=min_size)
    )

    return params


params_dicts = dictionaries(keys=param_names, values=param_values)


@given(params=params_dicts)
def test_standard(params):
    assert params is not None


@given(params=composite_params_dicts(min_size=1))
def test_composite(params):
    assert len(params) > 0


@given(integer=integers(min_value=1))
def test_integer(integer):
    assert integer > 0

The `test_integer()` test serves as a reference, since it uses a simple strategy.

Because some long-running tests in one of my projects use regular expressions to generate parameter names and dictionaries to generate those parameters, I added two tests using these strategies.

`test_composite()` uses a composite strategy that takes an optional argument. `test_standard()` uses a similar strategy, but it is not composite.

Here are the test results:

> pytest hypothesis-sandbox/test_dicts.py --hypothesis-show-statistics
============================ test session starts =============================
platform linux -- Python 3.7.3, pytest-5.0.1, py-1.8.0, pluggy-0.12.0
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/damien/Sandbox/hypothesis/.hypothesis/examples')
rootdir: /home/damien/Sandbox/hypothesis
plugins: hypothesis-4.28.2
collected 3 items                                                                                                                                                       

hypothesis-sandbox/test_dicts.py ...                                    [100%]
=========================== Hypothesis Statistics ============================

hypothesis-sandbox/test_dicts.py::test_standard:

  - 100 passing examples, 0 failing examples, 1 invalid examples
  - Typical runtimes: 0-35 ms
  - Fraction of time spent in data generation: ~ 98%
  - Stopped because settings.max_examples=100
  - Events:
    * 2.97%, Retried draw from TupleStrategy((<hypothesis._strategies.CompositeStrategy object at 0x7f72108b9630>,
    one_of(booleans(), integers(), text(), lists(elements=text()))))
    .filter(lambda val: all(key(val) not in seen 
    for (key, seen) in zip(self.keys, seen_sets))) to satisfy filter

hypothesis-sandbox/test_dicts.py::test_composite:

  - 100 passing examples, 0 failing examples, 1 invalid examples
  - Typical runtimes: 0-47 ms
  - Fraction of time spent in data generation: ~ 98%
  - Stopped because settings.max_examples=100

hypothesis-sandbox/test_dicts.py::test_integer:

  - 100 passing examples, 0 failing examples, 0 invalid examples
  - Typical runtimes: < 1ms
  - Fraction of time spent in data generation: ~ 57%
  - Stopped because settings.max_examples=100

========================== 3 passed in 3.17 seconds ==========================

Are composite strategies slow?

How can I optimize custom strategies?


1 Answer


Composite strategies are just as fast as any other way of generating the same data, but people tend to use them for large and complex inputs, which are slower to generate than small and simple inputs.

Optimization tips for strategies basically come down to "don't do slow things", because there is no way to make generation itself faster:

  • Minimize use of `.filter(...)`, because retrying a draw is slower than not retrying.
  • Cap sizes, especially for nested things.
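
The first point can be illustrated with a minimal sketch (the strategies below are illustrative, not taken from the question): rejecting drawn values with a predicate forces Hypothesis to retry, while encoding the constraint in the strategy itself never does.

```python
from hypothesis import given
from hypothesis import strategies as st

# Filtering discards any draw that fails the predicate, then retries:
slow_positive = st.integers().filter(lambda n: n > 0)

# Encoding the constraint directly produces only valid values, no retries:
fast_positive = st.integers(min_value=1)


@given(n=fast_positive)
def test_positive(n):
    assert n > 0
```

Both strategies describe the same set of values; the second simply avoids the rejection-and-retry loop.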

So for your example, it would probably be faster if you capped the size of the lists, but otherwise it will remain slow(ish!) simply because you are generating a lot of data and not doing much with it.
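
As a sketch of that advice applied to the question's strategies, the `max_size` values below are illustrative choices, not prescribed limits:

```python
from hypothesis import given
from hypothesis.strategies import (
    booleans,
    dictionaries,
    from_regex,
    integers,
    lists,
    one_of,
    text,
)

# Same strategies as in the question, but with sizes capped so that
# each example stays small (max_size values are arbitrary examples):
param_names = from_regex(r"[a-z][a-zA-Z0-9]*(_[a-zA-Z0-9]+)*", fullmatch=True)
param_values = one_of(
    booleans(),
    integers(),
    text(max_size=20),
    lists(text(max_size=20), max_size=5),
)

capped_params_dicts = dictionaries(
    keys=param_names, values=param_values, max_size=5
)


@given(params=capped_params_dicts)
def test_capped(params):
    assert params is not None
```

Capping the dictionary and the nested lists bounds the amount of data drawn per example, which is where nearly all of the runtime in the question's statistics was being spent.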

Answered 2019-07-28T06:27:58.713