Internal modules

Internal modules are part of the catcher-core package. They become available as soon as you install catcher.

check - collection of different checks

class catcher.steps.check.All(body: dict, negative=False)[source]

Bases: catcher.steps.check.Operator

Fail if any check on the iterable fails.

Input:
Of:The source to check. Can be list or dictionary.
<check>:Check to perform on each element of the iterable.
Examples:

Pass if all elements of var have k == a

check:
    all:
        of: '{{ var }}'
        equals: {the: '{{ ITEM.k }}', is: 'a'}
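
In Python terms, the all check behaves like the built-in all() applied to the inner check per element; a sketch with hypothetical data:

```python
# Hypothetical data matching the example above.
var = [{'k': 'a'}, {'k': 'a'}, {'k': 'a'}]

# `all` passes only when the inner check holds for every ITEM.
assert all(item['k'] == 'a' for item in var)

# A single failing element fails the whole check.
var.append({'k': 'b'})
assert not all(item['k'] == 'a' for item in var)
```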
class catcher.steps.check.And(body: dict, negative=False)[source]

Bases: catcher.steps.check.Operator

Fail if any of the conditions fails.

Input:The list of other checks.
Examples:

This is the same as 1 in list and list[1] != ‘b’ and list[2] > 2

check:
    and:
        - contains: {the: 1, in: '{{ list }}'}
        - equals: {the: '{{ list[1] }}', is_not: 'b'}
        - equals: {the: '{{ list[2] > 2 }}', is_not: true}
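
A sketch of the same expression in Python, with a hypothetical list that satisfies all three sub-checks:

```python
# Hypothetical list: contains 1, second element is not 'b', third is > 2.
lst = [1, 'c', 3]
assert 1 in lst and lst[1] != 'b' and lst[2] > 2

# A single failing condition fails the whole `and`.
lst2 = [1, 'b', 3]
assert not (1 in lst2 and lst2[1] != 'b' and lst2[2] > 2)
```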
class catcher.steps.check.Any(body: dict, negative=False)[source]

Bases: catcher.steps.check.All

Fail if all checks on the iterable fail.

Input:
Of:The source to check. Can be list or dictionary.
<check>:Check to perform on each element of the iterable.
Examples:

Fail if var doesn’t contain element with k == a

check:
    any:
        of: '{{ var }}'
        equals: {the: '{{ ITEM.k }}', is: 'a'}
class catcher.steps.check.Contains(body: dict, negative=False)[source]

Bases: catcher.steps.check.Operator

Fail if a list or dictionary doesn’t contain the value

Input:
The:value to contain
In:variable to check
Not_in:inverted in. Only one can be used at a time.
Examples:

Check ‘a’ not in variable ‘list’

check:
    contains: {the: 'a', not_in: '{{ list }}'}

Check variable ‘dict’ has key a.

check:
    contains: {the: 'a', in: '{{ dict }}'}
class catcher.steps.check.Equals(body: dict, negative=False)[source]

Bases: catcher.steps.check.Operator

Fail if elements are not equal

Input:
The:value
Is:variable to compare
Is_not:inverted is. Only one can be used at a time.
Examples:

Check ‘bar’ equals variable ‘foo’

check: {equals: {the: 'bar', is: '{{ foo }}'}}

Check list’s third element is not greater than 2.

check: {equals: {the: '{{ list[2] > 2 }}', is_not: true}}
class catcher.steps.check.Or(body: dict, negative=False)[source]

Bases: catcher.steps.check.And

Fail if all conditions fail.

Input:The list of other checks.
Examples:

This is the same as 1 in list or list[1] != ‘b’ or list[2] > 2

check:
    or:
        - contains: {the: 1, in: '{{ list }}'}
        - equals: {the: '{{ list[1] }}', is_not: 'b'}
        - equals: {the: '{{ list[2] > 2 }}', is_not: true}

echo - write data to stdout or file

class catcher.steps.echo.Echo(_path: str = None, _body=None, to=None, from_file=None, **kwargs)[source]

Print a string constant, variable or file to the console or file.

Input:
From:data source. Can be variable or constant string
From_file:file in resources.
To:output file. Optional. If not set, stdout will be used. Not resources-related

Has a short form which just prints a variable to stdout.

Examples:

Use short form to print variable to stdout

echo: '{{ var }}'

Print constant + variable to file

echo: {from: 'constant and {{ var }}', to: debug.output}

Use echo to register new variable

echo: {from: '{{ RANDOM_STR }}@test.com', register: {user_email: '{{ OUTPUT }}'}}

Read file content to a variable

echo: {from_file: debug.output, to: '{{ user_email }}'}

loop - loop over the data

class catcher.steps.loop.Loop(_get_action=None, _get_actions=None, **kwargs)[source]

Repeat one or several actions till the condition is true or for each element of the collection. It is useful when you need to wait for some process to start or for an async execution to finish.

Input:
While:perform the action while the condition is true
  • if: your condition. It can be in the short format: if: ‘{{ counter < 10 }}’ or the
    long one: if: {equals: {the: ‘{{ counter }}’, is_not: 10000}}. The clause format is the same as in [checks](checks.md)
  • do: the action to be performed. Can be a list of actions or a single one.
  • max_cycle: the limit of iterations. Optional, the default is no limit.
Foreach:iterate over a data structure
  • in: a variable or a static list. The ITEM variable can be used to access each element of the data structure.
    The data structure can be a list, dict or any other python structure which supports iteration.
  • do: the action to be performed. Can be a list of actions or a single one.
Examples:

Perform a single echo while the counter is less than 10

loop:
    while:
        if: '{{ counter < 10 }}'
        do:
            echo: {from: '{{ counter + 1 }}', register: {counter: '{{ OUTPUT }}'}}
        max_cycle: 100000
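
In plain Python the loop above behaves roughly like this sketch (counter and the max_cycle cap come from the example):

```python
counter = 0
cycles = 0
MAX_CYCLE = 100000  # the max_cycle safety cap from the example

# Repeat while the condition holds, but never more than MAX_CYCLE times.
while counter < 10 and cycles < MAX_CYCLE:
    counter = counter + 1  # the echo step registering counter + 1
    cycles += 1

assert counter == 10
```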

Perform two actions: consume a message from kafka and send the token via http POST. Repeat until the server returns passed: true in the http response.

loop:
    while:
        if:
            equals: {the: '{{ passed }}', is_not: True}
        do:
            - kafka:
                  consume:
                      server: '127.0.0.1:9092'
                      group_id: 'test'
                      topic: 'test_consume_with_timestamp'
                      timeout: {seconds: 5}
                      where:
                          equals: '{{ MESSAGE.timestamp > 1000 }}'
                  register: {token: '{{ OUTPUT.data.token }}'}
            - http:
                post:
                  headers: {Content-Type: 'application/json'}
                  url: 'http://test.com/check_my_token'
                  body: {'token': '{{ token }}'}
                register: {passed: '{{ OUTPUT.passed }}'}

Iterate over iterator variable, produce each element to kafka as json and debug it to file.

loop:
    foreach:
        in: '{{ iterator }}'
        do:
            - kafka:
                  produce:
                      server: '127.0.0.1:9092'
                      topic: 'test_produce_json'
                      data: '{{ ITEM|tojson }}'
            - echo: {from: '{{ ITEM }}', to: '{{ ITEM["filename"] }}.output'}

Iterate over several different configurations.

variables:
    db_1: 'test:test@localhost:5433/db1'
    db_2:
        url: 'test:test@localhost:5434/db2'
        type: postgres
    db_3:
        dbname: 'db3'
        user: 'test'
        password: 'test'
        host: 'localhost'
        port: 5435
        type: 'postgres'
steps:
    - loop:
        foreach:
            in: '["{{ db_1 }}", {{ db_2 }}, {{ db_3 }}]'
            do:
                - postgres:
                    request:
                        conf: '{{ ITEM }}'
                        query: 'select count(*) from test'
                    register: {documents: '{{ OUTPUT }}'}
                - check:
                    equals: {the: '{{ documents.count }}', is: 2}

Note that the db_1 template has additional quotes: "{{ db_1 }}". Your in value should render to a valid object. As db_1 is just a string, it must be put in quotes; otherwise the in value will be corrupted. Always make sure your in value is valid. Passing json as a string may cause similar difficulties.
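
A sketch of why the quoting matters, using ast.literal_eval as a stand-in for Catcher's template parsing (which may differ):

```python
import ast

# Hypothetical values mirroring db_1 and db_2 above.
db_1 = 'test:test@localhost:5433/db1'
db_2 = {'url': 'test:test@localhost:5434/db2', 'type': 'postgres'}

# With quotes around the string variable the rendered text is a valid list literal.
rendered = '["{}", {}]'.format(db_1, db_2)
items = ast.literal_eval(rendered)
assert items == [db_1, db_2]

# Without the quotes the rendered text is not a valid literal and parsing fails.
try:
    ast.literal_eval('[{}, {}]'.format(db_1, db_2))
    assert False, 'should not parse'
except (ValueError, SyntaxError):
    pass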

http - perform http request

class catcher.steps.http.Http(**kwargs)[source]

Perform an http request: from just getting the information from the server to pushing a file to it.

Input:
<method>:http method. The most frequent are get/post/put/delete. See docs for details
  • headers: Dictionary with custom headers. Optional
  • url: url to call
  • response_code: Code to await. Use ‘x’ for a wildcard or ‘-’ to set a range between 2 codes.
    Optional, the default is 200.
  • body: body to send. Optional.
  • body_from_file: File can be used as a data source. Optional.
  • files: send a file from resources (only for methods which support it). Optional
  • verify: Verify the SSL Certificate in case of https. Optional. Default is true.
  • should_fail: true if this request should fail, e.g. to test connection refused. The step fails the test if the request succeeds.
  • session: http session name. Cookies are saved between sessions. Optional. The default session is ‘default’.
    If set to null there will be no session.
  • fix_cookies: if true, will make cookies secure if you use https and insecure if you don’t. Optional.
    Default is true. Useful when you don’t have tls for your test env but can’t change the infra.
  • timeout: number of seconds to wait for a response. Optional. Default is no timeout (wait forever)
Files:a single file or a list of files, where <file_param> is the name of the request param. If you don’t specify headers, ‘multipart/form-data’ will be set automatically.
  • <file_param>: path to the file
  • type: file mime type
Cookies:All requests run in a session, sharing cookies received from previous requests. If you wish to start a new empty session use session. If you don’t want a session to be saved use session: null
Examples:

Put data to server and await 200-299 code

http:
  put:
    url: 'http://test.com?user_id={{ user_id }}'
    body: {'foo': bar}
    response_code: 2XX

Put data to server and await 201-3XX code

http:
  put:
    url: 'http://test.com?user_id={{ user_id }}'
    body: {'foo': bar}
    response_code: 201-3xx
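
One way the wildcard and range codes above (‘2XX’, ‘201-3xx’) could be matched; a sketch for illustration, not Catcher's actual implementation:

```python
def code_matches(spec: str, code: int) -> bool:
    """Match an http status code against a spec like '200', '2XX' or '201-3xx'."""
    def bound(part: str, low: bool) -> int:
        # 'x'/'X' wildcards expand to 0 for the lower bound, 9 for the upper.
        return int(part.lower().replace('x', '0' if low else '9'))
    if '-' in spec:
        lo, hi = spec.split('-')
        return bound(lo, True) <= code <= bound(hi, False)
    return bound(spec, True) <= code <= bound(spec, False)

assert code_matches('2XX', 204)        # 200..299
assert code_matches('201-3xx', 302)    # 201..399
assert not code_matches('201-3xx', 200)
```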

Post data to server with custom header

http:
  post:
    headers: {Content-Type: 'application/json', Authorization: '{{ token }}'}
    url: 'http://test.com?user_id={{ user_id }}'
    body: {'foo': bar}

Post file to remote server

http:
  post:
    url: 'http://test.com'
    body_from_file: "data/answers.json"

SSL without verification

http:
  post:
    url: 'https://my_server.de'
    body: {'user':'test'}
    verify: false

Manually set a json body: tojson will convert ‘var’ to a json string

http:
  post:
    url: 'http://test.com?user_id={{ user_id }}'
    body: '{{ var |tojson }}'

Set json by providing json headers and passing a python object as body

http:
  post:
    url: 'http://test.com?user_id={{ user_id }}'
    headers: {Content-Type: 'application/json'}
    body: '{{ var  }}'

Send file with a post request

http:
  post:
    url: 'http://example.com/upload'
    files:
        file: 'subdir/my_file_in_resources.csv'
        type: 'text/csv'

Send multiple files with a single post request

http:
  post:
    url: 'http://example.com/upload'
    files:
        - my_csv_file: 'one.csv'
          type: 'text/csv'
        - my_json_file: 'two.json'
          type: 'application/json'

Test disconnected service:

steps:
- docker:
    disconnect:
        hash: '{{ my_container }}'
- http:
    get:
        url: '{{ my_container_url }}'
        should_fail: true

Test correct and incorrect login (clear cookies):

steps:
    - http:
        post:
            url: 'http://test.com/login.php?user_id={{ user_id }}'
            body: {'pwd': secret}
            response_code: 2XX
            session: 'user1'
        name: "Do a login"
    - http:
        get:
            url: 'http://test.com/protected_path'
            response_code: 2XX
            session: 'user1'
        name: "Logged-in user can access protected_path"
    - http:
        get:
            url: 'http://test.com/protected_path'
            response_code: 401
            session: 'user2'
        name: "protected_path can't be accessed without login"

grpc - perform a remote procedure call request

class catcher.steps.grpc_step.GRPC(call=None, **kwargs)[source]

Perform a remote procedure call with protobuffers layer.

Input:
Call:Make a remote procedure call
  • url: server url
  • function: service and method you are going to call separated by dot. Case insensitive (MyClass.my_function)
  • schema: path to the .proto resource file. Optional. Ignore it if reflection is configured on the server side
  • data: data to pass. Optional
Examples:

calculator.proto

message Number {
    float value = 1;
}

service Calculator {
    rpc SquareRoot(Number) returns (Number) {}
}

test

grpc:
    call:
        url: 'localhost:50051'
        function: calculator.squareroot
        schema: 'calculator.proto'
        data: {'value': 2}
    register: {'my_value': '{{ OUTPUT.value }}'}

Complex schema case:

grpc:
    call:
        url: 'localhost:50051'
        function: greeter.greet
        schema: 'greeter.proto'
        data:
            result:
                url: '{{ my_url }}'
                title: 'test'
                snippets: 'test2'
    register: {value: '{{ OUTPUT.name }}'}

Useful tip: if you’d like to use templates in your .proto file, do not do it in the original resources, as Catcher shouldn’t modify them. Use the echo step to fill in the template and create another .proto file for you.

sh - run shell command

class catcher.steps.sh_step.Sh(command=None, path=None, return_code=0, **kwargs)[source]

Run shell command and return output.

Input:
  • command: Command to run.
  • path: Path to be used as a root for the command. Optional.
  • return_code: expected return code. Optional. 0 is default.
Examples:

List current directory

- sh:
    command: 'ls -la'

Determine if running in docker

variables:
    docker: true
steps:
    - sh:
        command: "grep -E 'docker|lxc' /proc/1/cgroup"
        return_code: 1
        ignore_errors: true
        register: {docker: false}
    - echo: {from: 'In docker: {{ docker }}'}

run another testcase

class catcher.steps.run.Run(ignore_errors=False, _body=None, run=None, include=None, tag=None, variables=None, **kwargs)[source]

Run another, included, test. It is useful when you need to run the same code from different tests or to repeat the same steps inside one test but with different input variables.

Input:
Include:include name. If it contains a dot, everything after the dot is considered a tag. In case of multiple dots the last one is used as the tag.
Variables:Variables to override. Optional
Examples:

Use short form to run sign_up

include:
    file: register_user.yaml
    as: sign_up
steps:
    # .... some steps
    - run: sign_up
    # .... some steps

Run sign_up include test twice for 2 different users

include:
    file: register_user.yaml
    as: sign_up
variables:
    users: ['{{ random("email") }}', '{{ random("email") }}']
steps:
    # .... some steps
    - run:
        include: sign_up
        variables:
            user: '{{ users[0] }}'
    # .... some steps
    - run:
        include: sign_up
        variables:
            user: '{{ users[1] }}'

Include sign_up and run all steps with tag register from it.

include:
    file: register_and_login.yaml
    as: sign_up
steps:
    - run:
        include: sign_up.register
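
The ‘last dot wins’ rule for splitting the include name from the tag can be sketched as:

```python
def split_include(name: str):
    """Split 'include.tag' on the LAST dot; no dot means no tag (a sketch)."""
    if '.' not in name:
        return name, None
    include, tag = name.rsplit('.', 1)
    return include, tag

assert split_include('sign_up.register') == ('sign_up', 'register')
assert split_include('sign_up') == ('sign_up', None)
assert split_include('a.b.c') == ('a.b', 'c')  # last dot wins
```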

Include one.yaml from main and run only its before tag. The include chain is main.yaml -> one.yaml -> two.yaml. main.yaml:

include:
    file: one.yaml
    as: one
steps:
    - run: 'one.before'

one.yaml

include:
    file: two.yaml
    as: run_me
steps:
    - run:
        include: two.run_me
        tag: before
    - echo: {from: '{{ bar }}', to: after.output, tag: after}

two.yaml

steps:
    - echo: {from: '1', to: foo.output, tag: run_me}
    - echo: {from: '2', to: baz.output, tag: two}
    - echo: {from: '3', to: bar.output, tag: three}

stop - stop testcase execution

class catcher.steps.stop.Stop(**kwargs)[source]

Stop a test without error

Input:
If:condition
Examples:

Stop execution if migration was applied.

steps:
    - postgres:
        request:
            conf: '{{ migrations_postgres }}'
            query: "select count(*) from migration where hash = '{{ TEST_NAME }}';"
        register: {result: '{{ OUTPUT }}'}
        tag: check
        name: 'check_migration_{{ TEST_NAME }}'
    - stop:
        if:
            equals: {the: '{{ result }}', is: 1}
    - postgres:
        request:
            conf: '{{ migrations_postgres }}'
            query: "insert into migration(id, hash) values(1, '{{ TEST_NAME }}');"

wait - delay testcase execution

class catcher.steps.wait.Wait(_get_action=None, _get_actions=None, **kwargs)[source]

Wait for a static delay or until some substep finishes successfully. It is extremely useful for testing async systems or when you are waiting for some service to launch.

Input:
Days:number of days
Hours:number of hours
Minutes:number of minutes
Seconds:number of seconds
Microseconds:number of microseconds
Milliseconds:number of milliseconds
Nanoseconds:number of nanoseconds
For:(list of actions) will repeat them till they all finish successfully. Will fail if time runs out.
Examples:

Wait for 1 minute 30 seconds

wait: {minutes: 1, seconds: 30}
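
The same delay expressed with Python's standard library (note that timedelta has no nanoseconds field, so those would need separate handling):

```python
from datetime import timedelta

delay = timedelta(minutes=1, seconds=30)  # the example's wait duration
assert delay.total_seconds() == 90.0
```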

Wait for http to be ready. Will repeat the inner http step till it succeeds, or fail after 5 seconds

wait:
    seconds: 5
    for:
        http:
            put:
                url: 'http://localhost:8000/mockserver/expectation'
                body:
                    httpRequest: {'path': '/some/path'}
                    httpResponse: {'body': 'hello world'}
                response_code: 201

Wait for postgres to be populated

wait:
    seconds: 30
    for:
        - postgres:
              request:
                  conf: '{{ pg_conf }}'
                  query: 'select count(*) from users'
              register: {documents: '{{ OUTPUT }}'}
        - check: {equals: {the: '{{ documents }}', is_not: 0}}

External modules

External modules are a part of the catcher-modules package. Each of them should be installed separately. Each module can have its own dependencies and libraries.

redis - works with redis cache

Put value to Redis cache or get it, increment/decrement or delete:

Set, decrement, increment by 5 and delete:

    - redis:
        request:
            set:
                'foo': 11
    - redis:
        request:
            decr: baz
    - redis:
        request:
            incrby:
                foo: 5
    - redis:
        request:
            delete:
                - baz

Get value by key ‘key’ and register in variable ‘var’:

redis:
  request:
    get: 'key'
  register: {var: '{{ OUTPUT }}'}

For the full documentation see catcher-modules Redis.

couchbase - works with couchbase nosql database

Allows you to perform put/get/delete/query operations in Couchbase:

couchbase:
  request:
    conf:
        bucket: test
        user: test
        password: test
        host: localhost
    query: "select `baz` from test where `foo` = 'bar'"

For the full documentation see catcher-modules Couchbase.

Dependencies:

  • libcouchbase library is required to run this step.

postgres - works with postgres sql database

Allows you to run sql queries in Postgres. Supports both string and object configuration. Execute the ddl resource resources/my_script.sql:

postgres:
  request:
    conf: 'postgresql://user:password@localhost:5432/test'
    sql: 'my_script.sql'

Fetch a document and check if its id is equal to the num variable:

- postgres:
    request:
        conf:
            dbname: test
            user: user
            password: password
            host: localhost
            port: 5433
        sql: select * from test where id={{ id }}
    register: {document: '{{ OUTPUT }}'}
- check:
    equals: {the: '{{ document.id }}', is: '{{ num }}'}

For the full documentation see catcher-modules Postgres.

mongo - works with mongodb nosql database

Allows you to run different commands in MongoDB. Find one post for author Mike and register it as a document variable:

mongo:
    request:
        conf: 'mongodb://username:password@host'
        collection: 'your_collection'
        find_one: {'author': 'Mike'}
    register: {document: '{{ OUTPUT }}'}

Chain operations db.collection.find().sort().count() to find all posts of author Mike, sort them by title and count:

mongo:
    request:
        conf: 'mongodb://username:password@host'
        collection: 'your_collection'
        find: {'author': 'Mike'}
        next:
          sort: 'title'
          next: 'count'

For the full documentation see catcher-modules Mongo.

oracle - works with oracle sql database

Allows you to run sql queries in OracleDB. Supports both string and object configuration. Insert into test table one row:

oracle:
    request:
        conf: 'user:password@localhost:1521/test'
        sql: 'insert into test(id, num) values(3, 3);'

For the full documentation see catcher-modules Oracle.

Dependencies:

  • libclntsh.dylib is required for oracle. Read more here.

sqlite - works with sqlite sql embedded database

Allows you to create SQLite database on your local filesystem and work with it. Supports both string and object configuration.

Important: for a relative path use one slash /. For an absolute path, use two //. Select all from test, using a relative path:

sqlite:
  request:
      conf: '/foo.db'
      sql: 'select count(*) as count from test'
  register: {documents: '{{ OUTPUT }}'}
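
One way to read the slash rule; a sketch of the interpretation, not Catcher's actual code:

```python
def sqlite_path(conf: str) -> str:
    """'/foo.db' -> relative 'foo.db'; '//abs/foo.db' -> absolute '/abs/foo.db'."""
    if conf.startswith('//'):
        return conf[1:]      # absolute path: keep one leading slash
    return conf.lstrip('/')  # relative path: drop the marker slash

assert sqlite_path('/foo.db') == 'foo.db'
assert sqlite_path('//absolute/path/to/foo.db') == '/absolute/path/to/foo.db'
```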

Insert into test, using absolute path (with 2 slashes):

sqlite:
  request:
      conf: '//absolute/path/to/foo.db'
      sql: 'insert into test(id, num) values(3, 3);'

For the full documentation see catcher-modules SQLite.

mysql - works with mysql sql database

Allows you to run queries on MySQL (and all mysql compatible databases like MariaDB). Supports both string and object configuration. Insert one row in test table:

mysql:
  request:
      conf: 'user:password@localhost:3306/test'
      sql: 'insert into test(id, num) values({{ id }}, {{ num }});'

For the full documentation see catcher-modules MySQL.

mssql - works with mssql sql database

Allows you to run queries on Microsoft SQL Server. Supports both string and object configuration. Count all rows in test, specify driver manually:

mssql:
  request:
      conf:
          dbname: test
          user: user
          password: password
          host: localhost
          port: 1433
          driver: ODBC Driver 17 for SQL Server
      sql: 'select count(*) as count from test'
  register: {documents: '{{ OUTPUT }}'}

Insert row in test table. Use default ODBC Driver 17 for SQL Server driver name:

mssql:
  request:
      conf: 'user:password@localhost:5432/test'
      sql: 'insert into test(id, num) values(3, 3);'

Use pymssql library instead of odbc driver:

mssql:
  request:
      conf: 'mssql+pymssql://user:password@localhost:5432/test'
      sql: 'insert into test(id, num) values(3, 3);'

Dependencies:

  • mssql driver is required for mssql. Read more here.

For the full documentation see catcher-modules MSSQL.

selenium - run prepared selenium test for front-end testing

This complex step consists of two parts. First - you need to create a Selenium script and put it in the Catcher’s resources directory. Second - run the step in Catcher.

Catcher variables can be accessed from Selenium script via environment variables. All output from Selenium script is routed to Catcher OUTPUT variable.

If you specify java/kotlin source file as a Selenium script - Catcher will try to compile it using system’s compiler.

Use geckodriver to run python-selenium test my_test.py from resources directory:

- selenium:
    test:
        driver: '/opt/bin/geckodriver'
        file: 'my_test.py'

You can read more on Catcher and Selenium integration in a separate document.

Dependencies:

  • Selenium browser drivers
  • Selenium client libraries
  • NodeJS for running JS Selenium steps
  • Java for running all Jar-precompiled Selenium steps
  • JDK if you wish to compile Java source code
  • Kotlin compiler if you wish to compile Kotlin source code

For the module documentation see catcher-modules Selenium.

marketo - interact with Adobe Marketo marketing automation tool

Allows you to read/write/delete leads in Adobe Marketo marketing automation tool.

Read id, email and custom_field_1 fields from lead found by custom_id field having my_value_1 or my_value_2 values:

marketo:
    read:
        conf:
            munchkin_id: '{{ marketo_munchkin_id }}'
            client_id: '{{ marketo_client_id }}'
            client_secret: '{{ marketo_client_secret }}'
        fields: ['id', 'email', 'custom_field_1']
        filter_key: 'custom_id'
        filter_value: ['my_value_1', 'my_value_2']
    register: {leads: '{{ OUTPUT }}'}

Update leads by custom_id field:

marketo:
    write:
        conf:
            munchkin_id: '{{ marketo_munchkin_id }}'
            client_id: '{{ marketo_client_id }}'
            client_secret: '{{ marketo_client_secret }}'
        action: 'updateOnly'
        lookupField: 'custom_id'
        leads:
            - custom_id: 14
              email: 'foo@bar.baz'
              custom_field_1: 'some value'
            - custom_id: 15
              email: 'foo2@bar.baz'
              custom_field_1: 'some other value'

For the module documentation see catcher-modules Marketo.

kafka - consume/produce in the kafka message queue

Allows you to consume/produce messages from/to Apache Kafka

Read message from test_consume_with_timestamp topic with timestamp field > 1000:

kafka:
    consume:
        server: '127.0.0.1:9092'
        group_id: 'test'
        topic: 'test_consume_with_timestamp'
        timeout: {seconds: 5}
        where:
            equals: '{{ MESSAGE.timestamp > 1000 }}'

Produce data variable as json message to the topic test_produce_json:

kafka:
    produce:
        server: '127.0.0.1:9092'
        topic: 'test_produce_json'
        data: '{{ data|tojson }}'

For the module documentation see catcher-modules Kafka.

rabbit - consume/produce in the rabbit message queue

Allows you to consume/produce messages from/to RabbitMQ

Publish resources/path/to/file.json file to the test.catcher.exchange exchange:

rabbit:
    publish:
        config:
            server: 127.0.0.1:5672
            username: 'guest'
            password: 'guest'
        exchange: 'test.catcher.exchange'
        routing_key: 'catcher.routing.key'
        data_from_file: 'path/to/file.json'

Consume message from my_queue and register it as a message variable. Configuration is stored in variable:

rabbit:
    consume:
        config: '{{ rabbit_conf }}'
        queue: 'my_queue'
    register: {message: '{{ OUTPUT }}'}

For the module documentation see catcher-modules Rabbit.

docker - interact with docker containers

Allows you to start/stop/disconnect/connect/exec commands, get logs and statuses of Docker containers. It is very useful when you need to run something like Mockserver and/or simulate network disconnects.

Run blocking command echo hello world in a new alpine container. Register output as a logs variable:

docker:
    start:
        image: 'alpine'
        cmd: 'echo hello world'
        detached: false
    register: {logs: '{{ OUTPUT.strip() }}'}

Start named container detached with volumes and environment:

docker:
    start:
        image: 'my-backend-service'
        name: 'mock server'
        ports:
            '1080/tcp': 8000
        environment:
            POOL_SIZE: 20
            OTHER_URL: '{{ service1.url }}'
        volumes:
            '{{ CURRENT_DIR }}/data': '/data'
            '/tmp/logs': '/var/log/service'

For the module documentation see catcher-modules Docker.

elastic - run queries on elasticsearch

Allows you to get data from Elasticsearch. Useful when your services push their logs there and you need to check the logs automatically from the test.

Get only the name field of all documents containing three in the payload, register as a doc variable:

elastic:
    search:
        url: 'http://127.0.0.1:9200'
        index: test
        query:
            match: {payload : "three"}
        _source: ['name']
    register: {doc: '{{ OUTPUT }}'}

Get all documents which have a round shape and red or blue color, register as a doc variable:

elastic:
    search:
        url: 'http://127.0.0.1:9200'
        index: test
        query:
            bool:
                must:
                    - term: {shape: "round"}
                    - bool:
                        should:
                            - term: {color: "red"}
                            - term: {color: "blue"}
    register: {doc: '{{ OUTPUT }}'}

For the module documentation see catcher-modules Elastic.

s3 - work with files in aws s3

Allows you to get/put/list/delete files in Amazon S3

Useful hint: for local testing you can use Minio running in docker, as it is S3 API compatible.

Put file resources/my_file as /foo/file.txt:

s3:
    put:
        config:
            url: http://127.0.0.1:9001
            key_id: minio
            secret_key: minio123
        path: /foo/file.txt
        content_resource: 'my_file'

Put variable content as a file /foo/file.txt:

s3:
    put:
        config: '{{ s3_config }}'
        path: /foo/file.txt
        content: '{{ content }}'

Get file /foo/baz/bar/file.txt from S3 and put it in data variable:

s3:
    get:
        config: '{{ s3_config }}'
        path: /foo/baz/bar/file.txt
    register: {data: '{{ OUTPUT }}'}

For the module documentation see catcher-modules S3.

prepare - allows you to generate and push data to the database

Used for bulk actions to prepare test data. It is useful when you need to prepare a lot of data. This step consists of 3 parts:

  1. write sql ddl schema file (optional) - describe all tables/schemas/privileges needed to be created
  2. prepare data in a csv file (optional)
  3. call Catcher’s prepare step to populate csv content into the database

Both the sql schema and the csv file support templates.

Create resources/schema.sql:

CREATE TABLE foo(
                user_id      integer    primary key,
                email        varchar(36)    NOT NULL
            );

Create resources/foo.csv which generates rows for all users in users variable:

user_id,email
{%- for user in users %}
{{ user.uuid }},{{ user.email }}
{%- endfor -%}
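
With a hypothetical users list, the template above renders to plain CSV rows; the equivalent rendering in Python:

```python
# Hypothetical users, standing in for the `users` variable.
users = [
    {'uuid': 1, 'email': 'one@test.com'},
    {'uuid': 2, 'email': 'two@test.com'},
]

# Header plus one row per user, as the jinja2 for-loop would produce.
rows = ['user_id,email'] + ['{},{}'.format(u['uuid'], u['email']) for u in users]
csv_text = '\n'.join(rows)
assert csv_text == 'user_id,email\n1,one@test.com\n2,two@test.com'
```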

Call prepare step and tell it to create foo table and use foo.csv to populate it:

prepare:
  populate:
    mysql:
      conf: '{{ mysql_conf }}'
      schema: schema.sql
      data:
        foo: foo.csv

Important:

  • populate step is designed to be supported by all steps (in the future). Currently it is supported only by Postgres/Oracle/MSSql/MySql/SQLite steps.
  • to populate json as the Postgres json data type you need to use the use_json: true flag

Schema resources/pg_schema.sql:

CREATE TABLE my_table(
                user_id      integer    primary key,
                payload      json       NOT NULL
            );

Data file resources/json_table.csv:

user_id,payload\n
1,{\"date\": \"1990-07-20\"}

Postgres prepare step:

prepare:
    populate:
        postgres:
            conf: '{{ postgres }}'
            schema: pg_schema.sql
            data:
                my_table: json_table.csv
            use_json: true

Hint: You can specify multiple tables and databases:

prepare:
    populate:
        postgres:
            conf: '{{ postgres }}'
            schema: pg_schema.sql
            data:
                table1: resource1.csv
                table2: resource2.csv
        mysql:
            conf: '{{ mysql_conf }}'
            schema: schema.sql
            data:
                foo: foo.csv

You can find more information in a separate document prepare

For the module documentation see catcher-modules prepare step.

expect - allows you to bulk-check data

This is the opposite of prepare. It compares expected data from a csv file to what you have in the database. The csv file supports templates.

Important:

  • the compare step is designed to be supported by all steps (in the future). Currently it is supported only by Postgres/Oracle/MSSql/MySql/SQLite steps.
  • Schema comparison is not implemented.
  • You can use strict comparison (only data from csv should be in the table, in the same order as csv) or the default one (just check if the data is there)
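
The difference between the two comparison modes can be sketched as follows; an illustration, not the module's actual code:

```python
def compare(expected, actual, strict=False):
    """Default: every expected row is present; strict: exact rows in order (a sketch)."""
    if strict:
        return expected == actual
    return all(row in actual for row in expected)

# Hypothetical table content.
table = [('u1', 'a@b.c'), ('u2', 'b@b.c'), ('u3', 'c@b.c')]

assert compare([('u2', 'b@b.c')], table)                   # default: a subset is enough
assert not compare([('u2', 'b@b.c')], table, strict=True)  # strict needs the exact table
assert compare(table, table, strict=True)
assert not compare([('zz', 'x@x.x')], table)               # missing row fails either way
```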

Create resources/foo.csv expected file:

user_id,email
{%- for user in users %}
{{ user.uuid }},{{ user.email }}
{%- endfor -%}

Run expect step for both tables:

expect:
    compare:
        postgres:
            conf: 'test:test@localhost:5433/test'
            data:
                foo: foo.csv

Hint: You can specify multiple tables and databases:

expect:
    compare:
        postgres:
            conf: '{{ postgres }}'
            data:
                foo: foo.csv
                bar: bar.csv
        mysql:
            conf: '{{ mysql_conf }}'
            data:
                foo: foo.csv
                bar: bar.csv

You can find more information in a separate document expect

For the module documentation see catcher-modules expect step.

email - send/receive emails

Allows you to send emails and receive them via the IMAP protocol.

Find an unread message containing the blog name in the subject, mark it as read and register it in the mail variable:

email:
  receive:
      config:
        host: 'imap.google.com'
        user: 'my_user@google.com'
        pass: 'my_pass'
      filter: {unread: true, subject: 'justtech.blog'}
      ack: true
      limit: 1
  register: {mail: '{{ OUTPUT }}'}

Send message:

email:
  send:
      config: '{{ email_conf }}'
      to: 'test@test.com'
      from: 'me@test.com'
      subject: 'test_subject'
      html: '
      <html>
          <body>
            <p>Hi,<br>
               How are you?<br>
               <a href="http://example.com">Link</a>
            </p>
          </body>
      </html>'

For the module documentation see catcher-modules email.

airflow - interact with Apache Airflow

Allows you to run a dag sync/async, get xcom values and populate connections in the Apache Airflow workflow management platform.

Run a dag and wait for it to complete; fail after 50 seconds or if the dag fails:

- airflow:
    run:
        config:
            db_conf: 'airflow:airflow@localhost:5433/airflow'
            url: 'http://127.0.0.1:8080'
        dag_id: 'init_data_sync'
        sync: true
        wait_timeout: 50

Hint: if you’d like to populate Airflow connections based on Catcher’s inventory file use populate_connections flag.

For more information see separate document airflow. You may also find catcher-airflow-example github repo useful.