Version: 0.17.19

expectation.py

class great_expectations.expectations.expectation.BatchExpectation(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None)

Base class for BatchExpectations.

BatchExpectations answer a semantic question about a Batch of data.

For example, expect_table_column_count_to_equal and expect_table_row_count_to_equal answer how many columns and rows are in your table.

BatchExpectations must implement a _validate(…) method containing logic for determining whether the Expectation is successfully validated.

BatchExpectations may optionally provide implementations of validate_configuration, which should raise an error if the configuration will not be usable for the Expectation.

Raises:

InvalidExpectationConfigurationError – The configuration does not contain the values required by the Expectation.

Parameters:

domain_keys (tuple) – A tuple of the keys used to determine the domain of the expectation.
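
For orientation, the following is a minimal sketch of a custom BatchExpectation. It assumes the 0.17-style _validate signature and the built-in table.row_count metric; the class itself is hypothetical and not part of the library.

from typing import Dict, Optional

from great_expectations.core import ExpectationConfiguration
from great_expectations.execution_engine import ExecutionEngine
from great_expectations.expectations.expectation import BatchExpectation


class ExpectTableToHaveRows(BatchExpectation):
    """Hypothetical Expectation: succeed when the Batch contains at least one row."""

    # Built-in Batch-level metric this Expectation depends on.
    metric_dependencies = ("table.row_count",)
    success_keys = ()

    def _validate(
        self,
        configuration: ExpectationConfiguration,
        metrics: Dict,
        runtime_configuration: Optional[dict] = None,
        execution_engine: Optional[ExecutionEngine] = None,
    ):
        row_count = metrics["table.row_count"]
        return {"success": row_count > 0, "result": {"observed_value": row_count}}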

domain_type: ClassVar = 'table'
get_success_kwargs(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) → Dict[str, Any]

Retrieve the success kwargs.

Parameters:

configuration – The ExpectationConfiguration that contains the kwargs. If no configuration arg is provided, the success kwargs from the configuration attribute of the Expectation instance will be returned.
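
A brief usage sketch, assuming the built-in ExpectColumnValuesToBeBetween Expectation (the column name and bounds are made up):

from great_expectations.core import ExpectationConfiguration
from great_expectations.expectations.core.expect_column_values_to_be_between import (
    ExpectColumnValuesToBeBetween,
)

config = ExpectationConfiguration(
    expectation_type="expect_column_values_to_be_between",
    kwargs={"column": "passenger_count", "min_value": 0, "max_value": 6},
)
expectation = ExpectColumnValuesToBeBetween(config)

# Returns only the kwargs that determine success (e.g. min_value / max_value here),
# with defaults filled in for anything unspecified.
print(expectation.get_success_kwargs())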

print_diagnostic_checklist(diagnostics: Optional[great_expectations.core.expectation_diagnostics.expectation_diagnostics.ExpectationDiagnostics] = None, show_failed_tests: bool = False, backends: Optional[List[str]] = None, show_debug_messages: bool = False) → str

Runs self.run_diagnostics and generates a diagnostic checklist.

This method is a thin wrapper around ExpectationDiagnostics.generate_checklist(). It is experimental.

Parameters:
  • diagnostics (optional[ExpectationDiagnostics]) – If not provided, diagnostics will be run on self.

  • show_failed_tests (bool) – If True, failing tests will be printed.

  • backends – A list of backends to pass to run_diagnostics.

  • show_debug_messages (bool) – If True, create a logger and pass it to run_diagnostics.

run_diagnostics(raise_exceptions_for_backends: bool = False, ignore_suppress: bool = False, ignore_only_for: bool = False, for_gallery: bool = False, debug_logger: Optional[logging.Logger] = None, only_consider_these_backends: Optional[List[str]] = None, context: Optional[AbstractDataContext] = None) → ExpectationDiagnostics

Produce a diagnostic report about this Expectation.

This method's output is currently used to populate the Public Expectation Gallery (via its JSON structure) and to enable a fast development loop in which contributors can quickly check the completeness of their Expectations.

The contents of the report are captured in the ExpectationDiagnostics dataclass. You can see some examples in test_expectation_diagnostics.py.

Some components of the diagnostic report (e.g. description, examples, library_metadata) can be introspected directly from the Expectation class. Other components (e.g. metrics, renderers, executions) depend at least in part on instantiating, validating, and/or executing the Expectation class. For these components, at least one test case with include_in_gallery=True must be present in the examples to produce the metrics, renderers, and execution engines parts of the report. This is because get_validation_dependencies requires expectation_config as an argument.

If errors are encountered while running the diagnostics, they are assumed to be due to incompleteness of the Expectation's implementation (e.g., declaring a dependency on Metrics that do not exist). These errors are added under the "errors" key in the report.

Parameters:
  • raise_exceptions_for_backends – If True, raise an exception when a backend fails to connect.

  • ignore_suppress – If True, ignore the suppress_test_for list on the Expectation's sample tests.

  • ignore_only_for – If True, ignore the only_for list on the Expectation's sample tests.

  • for_gallery – If True, create empty arrays to use as examples for the Expectation Diagnostics.

  • debug_logger (optional[logging.Logger]) – Logger object used for sending debug messages.

  • only_consider_these_backends (optional[List[str]]) –

  • context (optional[AbstractDataContext]) – An instance of any child of the AbstractDataContext class.

Returns:

An Expectation Diagnostics report object
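
A typical development-loop usage, assuming the hypothetical ExpectTableToHaveRows class sketched above:

expectation = ExpectTableToHaveRows()

# Machine-readable diagnostics (the ExpectationDiagnostics object described above).
diagnostics = expectation.run_diagnostics()

# Human-readable checklist; reuses the diagnostics already computed and shows failing tests.
print(expectation.print_diagnostic_checklist(diagnostics, show_failed_tests=True))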

validate(validator: Validator, configuration: Optional[ExpectationConfiguration] = None, evaluation_parameters: Optional[dict] = None, interactive_evaluation: bool = True, data_context: Optional[AbstractDataContext] = None, runtime_configuration: Optional[dict] = None) → ExpectationValidationResult

Validates the expectation against the provided data.

Parameters:
  • validator – A Validator object that can be used to create Expectations, validate Expectations, and get Metrics for Expectations.

  • configuration – Defines the parameters and name of a specific expectation.

  • evaluation_parameters – Dictionary of dynamic values used during Validation of an Expectation.

  • interactive_evaluation – Setting the interactive_evaluation flag on a DataAsset makes it possible to declare and store Expectations without immediately evaluating them.

  • data_context – An instance of a GX DataContext.

  • runtime_configuration – The runtime configuration for the Expectation.

Returns:

An ExpectationValidationResult object
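
In everyday use, validate() is invoked for you when you call validator.expect_*(...) methods. A direct call looks roughly like the sketch below; the fluent pandas_default data source, the CSV path, and the ExpectTableToHaveRows class are assumptions.

import great_expectations as gx
from great_expectations.core import ExpectationConfiguration

context = gx.get_context()
# Fluent pandas data source; "my_data.csv" is a placeholder path.
validator = context.sources.pandas_default.read_csv("my_data.csv")

config = ExpectationConfiguration(expectation_type="expect_table_to_have_rows", kwargs={})
result = ExpectTableToHaveRows(config).validate(validator=validator, data_context=context)
print(result.success)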

validate_configuration(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) → None

Validates the configuration for the Expectation.

For all expectations, the configuration’s expectation_type needs to match the type of the expectation being configured. This method is meant to be overridden by specific expectations to provide additional validation checks as required. Overriding methods should call super().validate_configuration(configuration).

Raises:

InvalidExpectationConfigurationError – The configuration does not contain the values required by the Expectation.
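
A minimal sketch of an override following the guidance above; the class and its required min_value kwarg are hypothetical.

from typing import Optional

from great_expectations.core import ExpectationConfiguration
from great_expectations.exceptions import InvalidExpectationConfigurationError
from great_expectations.expectations.expectation import BatchExpectation


class ExpectTableRowCountToBeAtLeast(BatchExpectation):
    metric_dependencies = ("table.row_count",)
    success_keys = ("min_value",)

    def validate_configuration(
        self, configuration: Optional[ExpectationConfiguration] = None
    ) -> None:
        # Always delegate to the parent implementation first, as recommended above.
        super().validate_configuration(configuration)
        configuration = configuration or self.configuration
        if "min_value" not in configuration.kwargs:
            raise InvalidExpectationConfigurationError(
                "'min_value' is required for this Expectation"
            )

    def _validate(self, configuration, metrics, runtime_configuration=None, execution_engine=None):
        row_count = metrics["table.row_count"]
        min_value = self.get_success_kwargs(configuration)["min_value"]
        return {"success": row_count >= min_value, "result": {"observed_value": row_count}}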

class great_expectations.expectations.expectation.ColumnAggregateExpectation(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None)

Base class for column aggregate Expectations.

These types of Expectation produce an aggregate metric for a column, such as the mean, standard deviation, number of unique values, column type, etc.

Relevant Documentation Links
Parameters:
  • domain_keys (tuple) – A tuple of the keys used to determine the domain of the expectation.

  • success_keys (tuple) – A tuple of the keys used to determine the success of the expectation.

  • default_kwarg_values (optional[dict]) –

    Optional. A dictionary that will be used to fill unspecified kwargs from the Expectation Configuration.

    • A “column” key is required for column expectations.

Raises:

InvalidExpectationConfigurationError – If no column is specified
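
A minimal sketch of a custom column aggregate Expectation, assuming the built-in column.mean metric; the class and its success criterion are hypothetical.

from great_expectations.expectations.expectation import ColumnAggregateExpectation


class ExpectColumnMeanToBeNonNegative(ColumnAggregateExpectation):
    # Aggregate metric computed over the column named in the "column" kwarg.
    metric_dependencies = ("column.mean",)
    success_keys = ("column",)

    def _validate(self, configuration, metrics, runtime_configuration=None, execution_engine=None):
        column_mean = metrics["column.mean"]
        return {"success": column_mean >= 0, "result": {"observed_value": column_mean}}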

domain_type: ClassVar = 'column'
get_success_kwargs(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) → Dict[str, Any]

Retrieve the success kwargs.

Parameters:

configuration – The ExpectationConfiguration that contains the kwargs. If no configuration arg is provided, the success kwargs from the configuration attribute of the Expectation instance will be returned.

print_diagnostic_checklist(diagnostics: Optional[great_expectations.core.expectation_diagnostics.expectation_diagnostics.ExpectationDiagnostics] = None, show_failed_tests: bool = False, backends: Optional[List[str]] = None, show_debug_messages: bool = False) → str

Runs self.run_diagnostics and generates a diagnostic checklist.

This method is a thin wrapper around ExpectationDiagnostics.generate_checklist(). It is experimental.

Parameters:
  • diagnostics (optional[ExpectationDiagnostics]) – If not provided, diagnostics will be run on self.

  • show_failed_tests (bool) – If True, failing tests will be printed.

  • backends – A list of backends to pass to run_diagnostics.

  • show_debug_messages (bool) – If True, create a logger and pass it to run_diagnostics.

run_diagnostics(raise_exceptions_for_backends: bool = False, ignore_suppress: bool = False, ignore_only_for: bool = False, for_gallery: bool = False, debug_logger: Optional[logging.Logger] = None, only_consider_these_backends: Optional[List[str]] = None, context: Optional[AbstractDataContext] = None) → ExpectationDiagnostics

Produce a diagnostic report about this Expectation.

This method's output is currently used to populate the Public Expectation Gallery (via its JSON structure) and to enable a fast development loop in which contributors can quickly check the completeness of their Expectations.

The contents of the report are captured in the ExpectationDiagnostics dataclass. You can see some examples in test_expectation_diagnostics.py.

Some components of the diagnostic report (e.g. description, examples, library_metadata) can be introspected directly from the Expectation class. Other components (e.g. metrics, renderers, executions) depend at least in part on instantiating, validating, and/or executing the Expectation class. For these components, at least one test case with include_in_gallery=True must be present in the examples to produce the metrics, renderers, and execution engines parts of the report. This is because get_validation_dependencies requires expectation_config as an argument.

If errors are encountered while running the diagnostics, they are assumed to be due to incompleteness of the Expectation's implementation (e.g., declaring a dependency on Metrics that do not exist). These errors are added under the "errors" key in the report.

Parameters:
  • raise_exceptions_for_backends – If True, raise an exception when a backend fails to connect.

  • ignore_suppress – If True, ignore the suppress_test_for list on the Expectation's sample tests.

  • ignore_only_for – If True, ignore the only_for list on the Expectation's sample tests.

  • for_gallery – If True, create empty arrays to use as examples for the Expectation Diagnostics.

  • debug_logger (optional[logging.Logger]) – Logger object used for sending debug messages.

  • only_consider_these_backends (optional[List[str]]) –

  • context (optional[AbstractDataContext]) – An instance of any child of the AbstractDataContext class.

Returns:

An Expectation Diagnostics report object

validate(validator: Validator, configuration: Optional[ExpectationConfiguration] = None, evaluation_parameters: Optional[dict] = None, interactive_evaluation: bool = True, data_context: Optional[AbstractDataContext] = None, runtime_configuration: Optional[dict] = None) → ExpectationValidationResult

Validates the expectation against the provided data.

Parameters:
  • validator – A Validator object that can be used to create Expectations, validate Expectations, and get Metrics for Expectations.

  • configuration – Defines the parameters and name of a specific expectation.

  • evaluation_parameters – Dictionary of dynamic values used during Validation of an Expectation.

  • interactive_evaluation – Setting the interactive_evaluation flag on a DataAsset makes it possible to declare and store Expectations without immediately evaluating them.

  • data_context – An instance of a GX DataContext.

  • runtime_configuration – The runtime configuration for the Expectation.

Returns:

An ExpectationValidationResult object

class great_expectations.expectations.expectation.ColumnExpectation(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None)

Base class for column aggregate Expectations.

These types of Expectation produce an aggregate metric for a column, such as the mean, standard deviation, number of unique values, column type, etc.

WARNING: This class will be deprecated in favor of ColumnAggregateExpectation, and removed in a future release. If you’re using this class, please update your code to use ColumnAggregateExpectation instead. There is no change in functionality between the two classes; just a name change for clarity.

Relevant Documentation Links
Parameters:
  • domain_keys (tuple) – A tuple of the keys used to determine the domain of the expectation.

  • success_keys (tuple) – A tuple of the keys used to determine the success of the expectation.

  • default_kwarg_values (optional[dict]) –

    Optional. A dictionary that will be used to fill unspecified kwargs from the Expectation Configuration.

    • A “column” key is required for column expectations.

Raises:

InvalidExpectationConfigurationError – If no column is specified

domain_type: ClassVar = 'column'
get_success_kwargs(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) → Dict[str, Any]

Retrieve the success kwargs.

Parameters:

configuration – The ExpectationConfiguration that contains the kwargs. If no configuration arg is provided, the success kwargs from the configuration attribute of the Expectation instance will be returned.

print_diagnostic_checklist(diagnostics: Optional[great_expectations.core.expectation_diagnostics.expectation_diagnostics.ExpectationDiagnostics] = None, show_failed_tests: bool = False, backends: Optional[List[str]] = None, show_debug_messages: bool = False) → str

Runs self.run_diagnostics and generates a diagnostic checklist.

This method is a thin wrapper around ExpectationDiagnostics.generate_checklist(). It is experimental.

Parameters:
  • diagnostics (optional[ExpectationDiagnostics]) – If not provided, diagnostics will be run on self.

  • show_failed_tests (bool) – If True, failing tests will be printed.

  • backends – A list of backends to pass to run_diagnostics.

  • show_debug_messages (bool) – If True, create a logger and pass it to run_diagnostics.

run_diagnostics(raise_exceptions_for_backends: bool = False, ignore_suppress: bool = False, ignore_only_for: bool = False, for_gallery: bool = False, debug_logger: Optional[logging.Logger] = None, only_consider_these_backends: Optional[List[str]] = None, context: Optional[AbstractDataContext] = None) → ExpectationDiagnostics

Produce a diagnostic report about this Expectation.

This method's output is currently used to populate the Public Expectation Gallery (via its JSON structure) and to enable a fast development loop in which contributors can quickly check the completeness of their Expectations.

The contents of the report are captured in the ExpectationDiagnostics dataclass. You can see some examples in test_expectation_diagnostics.py.

Some components of the diagnostic report (e.g. description, examples, library_metadata) can be introspected directly from the Expectation class. Other components (e.g. metrics, renderers, executions) depend at least in part on instantiating, validating, and/or executing the Expectation class. For these components, at least one test case with include_in_gallery=True must be present in the examples to produce the metrics, renderers, and execution engines parts of the report. This is because get_validation_dependencies requires expectation_config as an argument.

If errors are encountered while running the diagnostics, they are assumed to be due to incompleteness of the Expectation's implementation (e.g., declaring a dependency on Metrics that do not exist). These errors are added under the "errors" key in the report.

Parameters:
  • raise_exceptions_for_backends – If True, raise an exception when a backend fails to connect.

  • ignore_suppress – If True, ignore the suppress_test_for list on the Expectation's sample tests.

  • ignore_only_for – If True, ignore the only_for list on the Expectation's sample tests.

  • for_gallery – If True, create empty arrays to use as examples for the Expectation Diagnostics.

  • debug_logger (optional[logging.Logger]) – Logger object used for sending debug messages.

  • only_consider_these_backends (optional[List[str]]) –

  • context (optional[AbstractDataContext]) – An instance of any child of the AbstractDataContext class.

Returns:

An Expectation Diagnostics report object

validate(validator: Validator, configuration: Optional[ExpectationConfiguration] = None, evaluation_parameters: Optional[dict] = None, interactive_evaluation: bool = True, data_context: Optional[AbstractDataContext] = None, runtime_configuration: Optional[dict] = None) → ExpectationValidationResult

Validates the expectation against the provided data.

Parameters:
  • validator – A Validator object that can be used to create Expectations, validate Expectations, and get Metrics for Expectations.

  • configuration – Defines the parameters and name of a specific expectation.

  • evaluation_parameters – Dictionary of dynamic values used during Validation of an Expectation.

  • interactive_evaluation – Setting the interactive_evaluation flag on a DataAsset makes it possible to declare and store Expectations without immediately evaluating them.

  • data_context – An instance of a GX DataContext.

  • runtime_configuration – The runtime configuration for the Expectation.

Returns:

An ExpectationValidationResult object

class great_expectations.expectations.expectation.ColumnMapExpectation(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None)

Base class for ColumnMapExpectations.

ColumnMapExpectations are evaluated for a column and ask a yes/no question about every row in the column. Based on the result, they then calculate the percentage of rows that gave a positive answer. If the percentage is high enough, the Expectation considers that data valid.

ColumnMapExpectations must implement a _validate(…) method containing logic for determining whether the Expectation is successfully validated.

ColumnMapExpectations may optionally provide implementations of validate_configuration, which should raise an error if the configuration will not be usable for the Expectation. By default, the validate_configuration method will raise an error if column is missing from the configuration.

Raises:

InvalidExpectationConfigurationError – If column is missing from configuration.

Parameters:
  • domain_keys (tuple) – A tuple of the keys used to determine the domain of the expectation.

  • success_keys (tuple) – A tuple of the keys used to determine the success of the expectation.

  • default_kwarg_values (optional[dict]) – Optional. A dictionary that will be used to fill unspecified kwargs from the Expectation Configuration.
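
Because ColumnMapExpectation already supplies a suitable _validate, a custom subclass can often just point map_metric at a row-wise metric. A minimal sketch reusing the built-in column_values.nonnull map metric (the class is hypothetical; custom map metrics are normally defined via a ColumnMapMetricProvider):

from great_expectations.expectations.expectation import ColumnMapExpectation


class ExpectColumnValuesToBePresent(ColumnMapExpectation):
    # Row-wise metric that answers the yes/no question for each row.
    map_metric = "column_values.nonnull"
    # "mostly" sets the minimum fraction of rows that must answer "yes".
    success_keys = ("mostly",)
    default_kwarg_values = {"mostly": 1.0}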

domain_type: ClassVar = 'column'
get_success_kwargs(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) → Dict[str, Any]

Retrieve the success kwargs.

Parameters:

configuration – The ExpectationConfiguration that contains the kwargs. If no configuration arg is provided, the success kwargs from the configuration attribute of the Expectation instance will be returned.

print_diagnostic_checklist(diagnostics: Optional[great_expectations.core.expectation_diagnostics.expectation_diagnostics.ExpectationDiagnostics] = None, show_failed_tests: bool = False, backends: Optional[List[str]] = None, show_debug_messages: bool = False) → str

Runs self.run_diagnostics and generates a diagnostic checklist.

This method is a thin wrapper around ExpectationDiagnostics.generate_checklist(). It is experimental.

Parameters:
  • diagnostics (optional[ExpectationDiagnostics]) – If not provided, diagnostics will be run on self.

  • show_failed_tests (bool) – If True, failing tests will be printed.

  • backends – A list of backends to pass to run_diagnostics.

  • show_debug_messages (bool) – If True, create a logger and pass it to run_diagnostics.

run_diagnostics(raise_exceptions_for_backends: bool = False, ignore_suppress: bool = False, ignore_only_for: bool = False, for_gallery: bool = False, debug_logger: Optional[logging.Logger] = None, only_consider_these_backends: Optional[List[str]] = None, context: Optional[AbstractDataContext] = None) → ExpectationDiagnostics

Produce a diagnostic report about this Expectation.

This method's output is currently used to populate the Public Expectation Gallery (via its JSON structure) and to enable a fast development loop in which contributors can quickly check the completeness of their Expectations.

The contents of the report are captured in the ExpectationDiagnostics dataclass. You can see some examples in test_expectation_diagnostics.py.

Some components of the diagnostic report (e.g. description, examples, library_metadata) can be introspected directly from the Expectation class. Other components (e.g. metrics, renderers, executions) depend at least in part on instantiating, validating, and/or executing the Expectation class. For these components, at least one test case with include_in_gallery=True must be present in the examples to produce the metrics, renderers, and execution engines parts of the report. This is because get_validation_dependencies requires expectation_config as an argument.

If errors are encountered while running the diagnostics, they are assumed to be due to incompleteness of the Expectation's implementation (e.g., declaring a dependency on Metrics that do not exist). These errors are added under the "errors" key in the report.

Parameters:
  • raise_exceptions_for_backends – If True, raise an exception when a backend fails to connect.

  • ignore_suppress – If True, ignore the suppress_test_for list on the Expectation's sample tests.

  • ignore_only_for – If True, ignore the only_for list on the Expectation's sample tests.

  • for_gallery – If True, create empty arrays to use as examples for the Expectation Diagnostics.

  • debug_logger (optional[logging.Logger]) – Logger object used for sending debug messages.

  • only_consider_these_backends (optional[List[str]]) –

  • context (optional[AbstractDataContext]) – An instance of any child of the AbstractDataContext class.

Returns:

An Expectation Diagnostics report object

validate(validator: Validator, configuration: Optional[ExpectationConfiguration] = None, evaluation_parameters: Optional[dict] = None, interactive_evaluation: bool = True, data_context: Optional[AbstractDataContext] = None, runtime_configuration: Optional[dict] = None) → ExpectationValidationResult

Validates the expectation against the provided data.

Parameters:
  • validator – A Validator object that can be used to create Expectations, validate Expectations, and get Metrics for Expectations.

  • configuration – Defines the parameters and name of a specific expectation.

  • evaluation_parameters – Dictionary of dynamic values used during Validation of an Expectation.

  • interactive_evaluation – Setting the interactive_evaluation flag on a DataAsset makes it possible to declare and store Expectations without immediately evaluating them.

  • data_context – An instance of a GX DataContext.

  • runtime_configuration – The runtime configuration for the Expectation.

Returns:

An ExpectationValidationResult object

class great_expectations.expectations.expectation.ColumnPairMapExpectation(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None)

Base class for ColumnPairMapExpectations.

ColumnPairMapExpectations are evaluated for a pair of columns and ask a yes/no question about the row-wise relationship between those two columns. Based on the result, they then calculate the percentage of rows that gave a positive answer. If the percentage is high enough, the Expectation considers that data valid.

ColumnPairMapExpectations must implement a _validate(…) method containing logic for determining whether the Expectation is successfully validated.

ColumnPairMapExpectations may optionally provide implementations of validate_configuration, which should raise an error if the configuration will not be usable for the Expectation. By default, the validate_configuration method will raise an error if column_A and column_B are missing from the configuration.

Raises:

InvalidExpectationConfigurationError – If column_A and column_B parameters are missing from the configuration.

Parameters:
  • domain_keys (tuple) – A tuple of the keys used to determine the domain of the expectation.

  • success_keys (tuple) – A tuple of the keys used to determine the success of the expectation.

  • default_kwarg_values (optional[dict]) – Optional. A dictionary that will be used to fill unspecified kwargs from the Expectation Configuration.
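
The pattern mirrors ColumnMapExpectation. A minimal sketch reusing the built-in column_pair_values.equal map metric (the class is hypothetical):

from great_expectations.expectations.expectation import ColumnPairMapExpectation


class ExpectColumnPairValuesToMatch(ColumnPairMapExpectation):
    # Row-wise metric comparing column_A and column_B in each row.
    map_metric = "column_pair_values.equal"
    success_keys = ("mostly",)
    default_kwarg_values = {"mostly": 1.0, "ignore_row_if": "both_values_are_missing"}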

domain_type: ClassVar = 'column_pair'
get_success_kwargs(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) → Dict[str, Any]

Retrieve the success kwargs.

Parameters:

configuration – The ExpectationConfiguration that contains the kwargs. If no configuration arg is provided, the success kwargs from the configuration attribute of the Expectation instance will be returned.

print_diagnostic_checklist(diagnostics: Optional[great_expectations.core.expectation_diagnostics.expectation_diagnostics.ExpectationDiagnostics] = None, show_failed_tests: bool = False, backends: Optional[List[str]] = None, show_debug_messages: bool = False) → str

Runs self.run_diagnostics and generates a diagnostic checklist.

This method is a thin wrapper around ExpectationDiagnostics.generate_checklist(). It is experimental.

Parameters:
  • diagnostics (optional[ExpectationDiagnostics]) – If not provided, diagnostics will be run on self.

  • show_failed_tests (bool) – If True, failing tests will be printed.

  • backends – A list of backends to pass to run_diagnostics.

  • show_debug_messages (bool) – If True, create a logger and pass it to run_diagnostics.

run_diagnostics(raise_exceptions_for_backends: bool = False, ignore_suppress: bool = False, ignore_only_for: bool = False, for_gallery: bool = False, debug_logger: Optional[logging.Logger] = None, only_consider_these_backends: Optional[List[str]] = None, context: Optional[AbstractDataContext] = None) → ExpectationDiagnostics

Produce a diagnostic report about this Expectation.

This method's output is currently used to populate the Public Expectation Gallery (via its JSON structure) and to enable a fast development loop in which contributors can quickly check the completeness of their Expectations.

The contents of the report are captured in the ExpectationDiagnostics dataclass. You can see some examples in test_expectation_diagnostics.py.

Some components of the diagnostic report (e.g. description, examples, library_metadata) can be introspected directly from the Expectation class. Other components (e.g. metrics, renderers, executions) depend at least in part on instantiating, validating, and/or executing the Expectation class. For these components, at least one test case with include_in_gallery=True must be present in the examples to produce the metrics, renderers, and execution engines parts of the report. This is because get_validation_dependencies requires expectation_config as an argument.

If errors are encountered while running the diagnostics, they are assumed to be due to incompleteness of the Expectation's implementation (e.g., declaring a dependency on Metrics that do not exist). These errors are added under the "errors" key in the report.

Parameters:
  • raise_exceptions_for_backends – If True, raise an exception when a backend fails to connect.

  • ignore_suppress – If True, ignore the suppress_test_for list on the Expectation's sample tests.

  • ignore_only_for – If True, ignore the only_for list on the Expectation's sample tests.

  • for_gallery – If True, create empty arrays to use as examples for the Expectation Diagnostics.

  • debug_logger (optional[logging.Logger]) – Logger object used for sending debug messages.

  • only_consider_these_backends (optional[List[str]]) –

  • context (optional[AbstractDataContext]) – An instance of any child of the AbstractDataContext class.

Returns:

An Expectation Diagnostics report object

validate(validator: Validator, configuration: Optional[ExpectationConfiguration] = None, evaluation_parameters: Optional[dict] = None, interactive_evaluation: bool = True, data_context: Optional[AbstractDataContext] = None, runtime_configuration: Optional[dict] = None) → ExpectationValidationResult

Validates the expectation against the provided data.

Parameters:
  • validator – A Validator object that can be used to create Expectations, validate Expectations, and get Metrics for Expectations.

  • configuration – Defines the parameters and name of a specific expectation.

  • evaluation_parameters – Dictionary of dynamic values used during Validation of an Expectation.

  • interactive_evaluation – Setting the interactive_evaluation flag on a DataAsset makes it possible to declare and store Expectations without immediately evaluating them.

  • data_context – An instance of a GX DataContext.

  • runtime_configuration – The runtime configuration for the Expectation.

Returns:

An ExpectationValidationResult object

class great_expectations.expectations.expectation.Expectation(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None)

Base class for all Expectations.

Expectation classes must have the following attributes set:
  1. domain_keys: a tuple of the keys used to determine the domain of the expectation

  2. success_keys: a tuple of the keys used to determine the success of the expectation.

In some cases, subclasses of Expectation (such as BatchExpectation) can inherit these properties from their parent class.

They may optionally override runtime_keys and default_kwarg_values, and may optionally set an explicit value for expectation_type.

  1. runtime_keys lists the keys that can be used to control output but will not affect the actual success value of the expectation (such as result_format).

  2. default_kwarg_values is a dictionary that will be used to fill unspecified kwargs from the Expectation Configuration.

Expectation classes must implement the following:
  1. _validate

  2. get_validation_dependencies

In some cases, subclasses of Expectation, such as ColumnMapExpectation, will already have correct implementations that may simply be inherited.

Additionally, they may provide implementations of:
  1. validate_configuration, which should raise an error if the configuration will not be usable for the Expectation

  2. Data Docs rendering methods decorated with the @renderer decorator.
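
Putting the attributes and methods above together, a bare skeleton looks like the sketch below. In practice you would usually subclass one of the domain-specific base classes documented on this page rather than Expectation directly; all names and key choices here are hypothetical.

from great_expectations.expectations.expectation import Expectation


class ExpectSomethingAboutMyData(Expectation):
    # 1. Keys that determine the Expectation's domain (which slice of data it applies to).
    domain_keys = ("batch_id", "table", "row_condition", "condition_parser")
    # 2. Keys that determine success.
    success_keys = ("threshold",)
    # Optional: keys that shape output without affecting success, plus default kwargs.
    runtime_keys = ("result_format",)
    default_kwarg_values = {"threshold": 1.0, "result_format": "BASIC"}

    def _validate(self, configuration, metrics, runtime_configuration=None, execution_engine=None):
        # Concrete subclasses compute success from the metrics they declared as dependencies.
        raise NotImplementedError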

get_success_kwargs(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) → Dict[str, Any]

Retrieve the success kwargs.

Parameters:

configuration – The ExpectationConfiguration that contains the kwargs. If no configuration arg is provided, the success kwargs from the configuration attribute of the Expectation instance will be returned.

print_diagnostic_checklist(diagnostics: Optional[great_expectations.core.expectation_diagnostics.expectation_diagnostics.ExpectationDiagnostics] = None, show_failed_tests: bool = False, backends: Optional[List[str]] = None, show_debug_messages: bool = False) → str

Runs self.run_diagnostics and generates a diagnostic checklist.

This method is a thin wrapper around ExpectationDiagnostics.generate_checklist(). It is experimental.

Parameters:
  • diagnostics (optional[ExpectationDiagnostics]) – If not provided, diagnostics will be run on self.

  • show_failed_tests (bool) – If True, failing tests will be printed.

  • backends – A list of backends to pass to run_diagnostics.

  • show_debug_messages (bool) – If True, create a logger and pass it to run_diagnostics.

run_diagnostics(raise_exceptions_for_backends: bool = False, ignore_suppress: bool = False, ignore_only_for: bool = False, for_gallery: bool = False, debug_logger: Optional[logging.Logger] = None, only_consider_these_backends: Optional[List[str]] = None, context: Optional[AbstractDataContext] = None) → ExpectationDiagnostics

Produce a diagnostic report about this Expectation.

This method's output is currently used to populate the Public Expectation Gallery (via its JSON structure) and to enable a fast development loop in which contributors can quickly check the completeness of their Expectations.

The contents of the report are captured in the ExpectationDiagnostics dataclass. You can see some examples in test_expectation_diagnostics.py.

Some components of the diagnostic report (e.g. description, examples, library_metadata) can be introspected directly from the Expectation class. Other components (e.g. metrics, renderers, executions) depend at least in part on instantiating, validating, and/or executing the Expectation class. For these components, at least one test case with include_in_gallery=True must be present in the examples to produce the metrics, renderers, and execution engines parts of the report. This is because get_validation_dependencies requires expectation_config as an argument.

If errors are encountered while running the diagnostics, they are assumed to be due to incompleteness of the Expectation's implementation (e.g., declaring a dependency on Metrics that do not exist). These errors are added under the "errors" key in the report.

Parameters:
  • raise_exceptions_for_backends – If True, raise an exception when a backend fails to connect.

  • ignore_suppress – If True, ignore the suppress_test_for list on the Expectation's sample tests.

  • ignore_only_for – If True, ignore the only_for list on the Expectation's sample tests.

  • for_gallery – If True, create empty arrays to use as examples for the Expectation Diagnostics.

  • debug_logger (optional[logging.Logger]) – Logger object used for sending debug messages.

  • only_consider_these_backends (optional[List[str]]) –

  • context (optional[AbstractDataContext]) – An instance of any child of the AbstractDataContext class.

Returns:

An Expectation Diagnostics report object

validate(validator: Validator, configuration: Optional[ExpectationConfiguration] = None, evaluation_parameters: Optional[dict] = None, interactive_evaluation: bool = True, data_context: Optional[AbstractDataContext] = None, runtime_configuration: Optional[dict] = None) → ExpectationValidationResult

Validates the expectation against the provided data.

Parameters:
  • validator – A Validator object that can be used to create Expectations, validate Expectations, and get Metrics for Expectations.

  • configuration – Defines the parameters and name of a specific expectation.

  • evaluation_parameters – Dictionary of dynamic values used during Validation of an Expectation.

  • interactive_evaluation – Setting the interactive_evaluation flag on a DataAsset makes it possible to declare and store Expectations without immediately evaluating them.

  • data_context – An instance of a GX DataContext.

  • runtime_configuration – The runtime configuration for the Expectation.

Returns:

An ExpectationValidationResult object

validate_configuration(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) → None

Validates the configuration for the Expectation.

For all expectations, the configuration’s expectation_type needs to match the type of the expectation being configured. This method is meant to be overridden by specific expectations to provide additional validation checks as required. Overriding methods should call super().validate_configuration(configuration).

Raises:

InvalidExpectationConfigurationError – The configuration does not contain the values required by the Expectation.

class great_expectations.expectations.expectation.MulticolumnMapExpectation(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None)

Base class for MulticolumnMapExpectations.

MulticolumnMapExpectations are evaluated for a set of columns and ask a yes/no question about the row-wise relationship between those columns. Based on the result, they then calculate the percentage of rows that gave a positive answer. If the percentage is high enough, the Expectation considers that data valid.

MulticolumnMapExpectations must implement a _validate(…) method containing logic for determining whether the Expectation is successfully validated.

MulticolumnMapExpectations may optionally provide implementations of validate_configuration, which should raise an error if the configuration will not be usable for the Expectation. By default, the validate_configuration method will raise an error if column_list is missing from the configuration.

Raises:

InvalidExpectationConfigurationError – If column_list is missing from configuration.

Parameters:
  • domain_keys (tuple) – A tuple of the keys used to determine the domain of the expectation.

  • success_keys (tuple) – A tuple of the keys used to determine the success of the expectation.

  • default_kwarg_values (optional[dict]) – Optional. A dictionary that will be used to fill unspecified kwargs from the Expectation Configuration.
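
Mirroring the single-column case, a minimal sketch reusing the built-in select_column_values.unique.within_record map metric (the class is hypothetical):

from great_expectations.expectations.expectation import MulticolumnMapExpectation


class ExpectRowValuesToBeDistinctAcrossColumns(MulticolumnMapExpectation):
    # Row-wise metric evaluated over the columns named in the "column_list" kwarg.
    map_metric = "select_column_values.unique.within_record"
    success_keys = ("mostly",)
    default_kwarg_values = {"mostly": 1.0, "ignore_row_if": "all_values_are_missing"}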

domain_type: ClassVar = 'multicolumn'
get_success_kwargs(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) → Dict[str, Any]

Retrieve the success kwargs.

Parameters:

configuration – The ExpectationConfiguration that contains the kwargs. If no configuration arg is provided, the success kwargs from the configuration attribute of the Expectation instance will be returned.

print_diagnostic_checklist(diagnostics: Optional[great_expectations.core.expectation_diagnostics.expectation_diagnostics.ExpectationDiagnostics] = None, show_failed_tests: bool = False, backends: Optional[List[str]] = None, show_debug_messages: bool = False) → str

Runs self.run_diagnostics and generates a diagnostic checklist.

This method is a thin wrapper around ExpectationDiagnostics.generate_checklist(). It is experimental.

Parameters:
  • diagnostics (optional[ExpectationDiagnostics]) – If not provided, diagnostics will be run on self.

  • show_failed_tests (bool) – If True, failing tests will be printed.

  • backends – A list of backends to pass to run_diagnostics.

  • show_debug_messages (bool) – If True, create a logger and pass it to run_diagnostics.

run_diagnostics(raise_exceptions_for_backends: bool = False, ignore_suppress: bool = False, ignore_only_for: bool = False, for_gallery: bool = False, debug_logger: Optional[logging.Logger] = None, only_consider_these_backends: Optional[List[str]] = None, context: Optional[AbstractDataContext] = None) → ExpectationDiagnostics

Produce a diagnostic report about this Expectation.

This method's output is currently used to populate the Public Expectation Gallery (via its JSON structure) and to enable a fast development loop in which contributors can quickly check the completeness of their Expectations.

The contents of the report are captured in the ExpectationDiagnostics dataclass. You can see some examples in test_expectation_diagnostics.py.

Some components of the diagnostic report (e.g. description, examples, library_metadata) can be introspected directly from the Expectation class. Other components (e.g. metrics, renderers, executions) depend at least in part on instantiating, validating, and/or executing the Expectation class. For these components, at least one test case with include_in_gallery=True must be present in the examples to produce the metrics, renderers, and execution engines parts of the report. This is because get_validation_dependencies requires expectation_config as an argument.

If errors are encountered while running the diagnostics, they are assumed to be due to incompleteness of the Expectation's implementation (e.g., declaring a dependency on Metrics that do not exist). These errors are added under the "errors" key in the report.

Parameters:
  • raise_exceptions_for_backends – If True, raise an exception when a backend fails to connect.

  • ignore_suppress – If True, ignore the suppress_test_for list on the Expectation's sample tests.

  • ignore_only_for – If True, ignore the only_for list on the Expectation's sample tests.

  • for_gallery – If True, create empty arrays to use as examples for the Expectation Diagnostics.

  • debug_logger (optional[logging.Logger]) – Logger object used for sending debug messages.

  • only_consider_these_backends (optional[List[str]]) –

  • context (optional[AbstractDataContext]) – An instance of any child of the AbstractDataContext class.

Returns:

An Expectation Diagnostics report object

validate(validator: Validator, configuration: Optional[ExpectationConfiguration] = None, evaluation_parameters: Optional[dict] = None, interactive_evaluation: bool = True, data_context: Optional[AbstractDataContext] = None, runtime_configuration: Optional[dict] = None) → ExpectationValidationResult

Validates the expectation against the provided data.

Parameters:
  • validator – A Validator object that can be used to create Expectations, validate Expectations, and get Metrics for Expectations.

  • configuration – Defines the parameters and name of a specific expectation.

  • evaluation_parameters – Dictionary of dynamic values used during Validation of an Expectation.

  • interactive_evaluation – Setting the interactive_evaluation flag on a DataAsset makes it possible to declare and store Expectations without immediately evaluating them.

  • data_context – An instance of a GX DataContext.

  • runtime_configuration – The runtime configuration for the Expectation.

Returns:

An ExpectationValidationResult object

class great_expectations.expectations.expectation.QueryExpectation(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None)

Base class for QueryExpectations.

QueryExpectations facilitate the execution of SQL or Spark-SQL queries as the core logic for an Expectation.

QueryExpectations must implement a _validate(…) method containing logic for determining whether data returned by the executed query is successfully validated.

Query Expectations may optionally provide implementations of:

  1. validate_configuration, which should raise an error if the configuration will not be usable for the Expectation.

  2. Data Docs rendering methods decorated with the @renderer decorator.

QueryExpectations may optionally define a query attribute, and specify that query as a default in default_kwarg_values.

Doing so precludes the need to pass a query into the Expectation. This default will be overridden if a query is passed in.

Parameters:
  • domain_keys (tuple) – A tuple of the keys used to determine the domain of the expectation.

  • success_keys (tuple) – A tuple of the keys used to determine the success of the expectation.

  • runtime_keys (optional[tuple]) – Optional. A tuple of the keys that can be used to control output but will not affect the actual success value of the expectation (such as result_format).

  • default_kwarg_values (optional[dict]) – Optional. A dictionary that will be used to fill unspecified kwargs from the Expectation Configuration.

  • query (optional[str]) – Optional. A SQL or Spark-SQL query to be executed. If not provided, a query must be passed into the QueryExpectation.

Relevant Documentation Links
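
A minimal sketch of a QueryExpectation with a default query. The class, the query, and the exact shape of the query.table metric result are assumptions; {active_batch} is the placeholder that GX substitutes with the active Batch's table or view.

from great_expectations.expectations.expectation import QueryExpectation


class ExpectQueriedTableToHaveNoNegativeAmounts(QueryExpectation):
    metric_dependencies = ("query.table",)

    # Default query; overridden if a "query" kwarg is passed to the Expectation.
    query = "SELECT COUNT(*) FROM {active_batch} WHERE amount < 0"

    success_keys = ("query",)
    default_kwarg_values = {"query": query, "result_format": "BASIC"}

    def _validate(self, configuration, metrics, runtime_configuration=None, execution_engine=None):
        # query.table returns the rows produced by the query; exact row types vary by backend.
        rows = metrics.get("query.table") or []
        negative_count = list(rows[0])[0] if rows else 0
        return {"success": negative_count == 0, "result": {"observed_value": negative_count}}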
domain_type: ClassVar = 'table'
get_success_kwargs(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) → Dict[str, Any]

Retrieve the success kwargs.

Parameters:

configuration – The ExpectationConfiguration that contains the kwargs. If no configuration arg is provided, the success kwargs from the configuration attribute of the Expectation instance will be returned.

print_diagnostic_checklist(diagnostics: Optional[great_expectations.core.expectation_diagnostics.expectation_diagnostics.ExpectationDiagnostics] = None, show_failed_tests: bool = False, backends: Optional[List[str]] = None, show_debug_messages: bool = False) → str

Runs self.run_diagnostics and generates a diagnostic checklist.

This method is a thin wrapper around ExpectationDiagnostics.generate_checklist(). It is experimental.

Parameters:
  • diagnostics (optional[ExpectationDiagnostics]) – If not provided, diagnostics will be run on self.

  • show_failed_tests (bool) – If True, failing tests will be printed.

  • backends – A list of backends to pass to run_diagnostics.

  • show_debug_messages (bool) – If True, create a logger and pass it to run_diagnostics.

run_diagnostics(raise_exceptions_for_backends: bool = False, ignore_suppress: bool = False, ignore_only_for: bool = False, for_gallery: bool = False, debug_logger: Optional[logging.Logger] = None, only_consider_these_backends: Optional[List[str]] = None, context: Optional[AbstractDataContext] = None) → ExpectationDiagnostics

Produce a diagnostic report about this Expectation.

This method's output is currently used to populate the Public Expectation Gallery (via its JSON structure) and to enable a fast development loop in which contributors can quickly check the completeness of their Expectations.

The contents of the report are captured in the ExpectationDiagnostics dataclass. You can see some examples in test_expectation_diagnostics.py.

Some components of the diagnostic report (e.g. description, examples, library_metadata) can be introspected directly from the Expectation class. Other components (e.g. metrics, renderers, executions) depend at least in part on instantiating, validating, and/or executing the Expectation class. For these components, at least one test case with include_in_gallery=True must be present in the examples to produce the metrics, renderers, and execution engines parts of the report. This is because get_validation_dependencies requires expectation_config as an argument.

If errors are encountered while running the diagnostics, they are assumed to be due to incompleteness of the Expectation's implementation (e.g., declaring a dependency on Metrics that do not exist). These errors are added under the "errors" key in the report.

Parameters:
  • raise_exceptions_for_backends – If True, raise an exception when a backend fails to connect.

  • ignore_suppress – If True, ignore the suppress_test_for list on the Expectation's sample tests.

  • ignore_only_for – If True, ignore the only_for list on the Expectation's sample tests.

  • for_gallery – If True, create empty arrays to use as examples for the Expectation Diagnostics.

  • debug_logger (optional[logging.Logger]) – Logger object used for sending debug messages.

  • only_consider_these_backends (optional[List[str]]) –

  • context (optional[AbstractDataContext]) – An instance of any child of the AbstractDataContext class.

Returns:

An Expectation Diagnostics report object

validate(validator: Validator, configuration: Optional[ExpectationConfiguration] = None, evaluation_parameters: Optional[dict] = None, interactive_evaluation: bool = True, data_context: Optional[AbstractDataContext] = None, runtime_configuration: Optional[dict] = None) → ExpectationValidationResult

Validates the expectation against the provided data.

Parameters:
  • validator – A Validator object that can be used to create Expectations, validate Expectations, and get Metrics for Expectations.

  • configuration – Defines the parameters and name of a specific expectation.

  • evaluation_parameters – Dictionary of dynamic values used during Validation of an Expectation.

  • interactive_evaluation – Setting the interactive_evaluation flag on a DataAsset makes it possible to declare and store Expectations without immediately evaluating them.

  • data_context – An instance of a GX DataContext.

  • runtime_configuration – The runtime configuration for the Expectation.

Returns:

An ExpectationValidationResult object

class great_expectations.expectations.expectation.TableExpectation(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None)

Base class for TableExpectations.

WARNING: TableExpectation will be deprecated in a future release. Please use BatchExpectation instead.

TableExpectations answer a semantic question about the table itself.

For example, expect_table_column_count_to_equal and expect_table_row_count_to_equal answer how many columns and rows are in your table.

TableExpectations must implement a _validate(…) method containing logic for determining whether the Expectation is successfully validated.

TableExpectations may optionally provide implementations of validate_configuration, which should raise an error if the configuration will not be usable for the Expectation.

Raises:

InvalidExpectationConfigurationError – The configuration does not contain the values required by the Expectation.

Parameters:

domain_keys (tuple) – A tuple of the keys used to determine the domain of the expectation.

domain_type: ClassVar = 'table'
get_success_kwargs(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) → Dict[str, Any]

Retrieve the success kwargs.

Parameters:

configuration – The ExpectationConfiguration that contains the kwargs. If no configuration arg is provided, the success kwargs from the configuration attribute of the Expectation instance will be returned.

print_diagnostic_checklist(diagnostics: Optional[great_expectations.core.expectation_diagnostics.expectation_diagnostics.ExpectationDiagnostics] = None, show_failed_tests: bool = False, backends: Optional[List[str]] = None, show_debug_messages: bool = False) → str

Runs self.run_diagnostics and generates a diagnostic checklist.

This method is a thin wrapper around ExpectationDiagnostics.generate_checklist(). It is experimental.

Parameters:
  • diagnostics (optional[ExpectationDiagnostics]) – If not provided, diagnostics will be run on self.

  • show_failed_tests (bool) – If True, failing tests will be printed.

  • backends – A list of backends to pass to run_diagnostics.

  • show_debug_messages (bool) – If True, create a logger and pass it to run_diagnostics.

run_diagnostics(raise_exceptions_for_backends: bool = False, ignore_suppress: bool = False, ignore_only_for: bool = False, for_gallery: bool = False, debug_logger: Optional[logging.Logger] = None, only_consider_these_backends: Optional[List[str]] = None, context: Optional[AbstractDataContext] = None) → ExpectationDiagnostics

Produce a diagnostic report about this Expectation.

This method's output is currently used to populate the Public Expectation Gallery (via its JSON structure) and to enable a fast development loop in which contributors can quickly check the completeness of their Expectations.

The contents of the report are captured in the ExpectationDiagnostics dataclass. You can see some examples in test_expectation_diagnostics.py.

Some components of the diagnostic report (e.g. description, examples, library_metadata) can be introspected directly from the Expectation class. Other components (e.g. metrics, renderers, executions) depend at least in part on instantiating, validating, and/or executing the Expectation class. For these components, at least one test case with include_in_gallery=True must be present in the examples to produce the metrics, renderers, and execution engines parts of the report. This is because get_validation_dependencies requires expectation_config as an argument.

If errors are encountered while running the diagnostics, they are assumed to be due to incompleteness of the Expectation's implementation (e.g., declaring a dependency on Metrics that do not exist). These errors are added under the "errors" key in the report.

Parameters:
  • raise_exceptions_for_backends – If True, raise an exception when a backend fails to connect.

  • ignore_suppress – If True, ignore the suppress_test_for list on the Expectation's sample tests.

  • ignore_only_for – If True, ignore the only_for list on the Expectation's sample tests.

  • for_gallery – If True, create empty arrays to use as examples for the Expectation Diagnostics.

  • debug_logger (optional[logging.Logger]) – Logger object used for sending debug messages.

  • only_consider_these_backends (optional[List[str]]) –

  • context (optional[AbstractDataContext]) – An instance of any child of the AbstractDataContext class.

Returns:

An Expectation Diagnostics report object

validate(validator: Validator, configuration: Optional[ExpectationConfiguration] = None, evaluation_parameters: Optional[dict] = None, interactive_evaluation: bool = True, data_context: Optional[AbstractDataContext] = None, runtime_configuration: Optional[dict] = None) → ExpectationValidationResult

Validates the expectation against the provided data.

Parameters:
  • validator – A Validator object that can be used to create Expectations, validate Expectations, and get Metrics for Expectations.

  • configuration – Defines the parameters and name of a specific expectation.

  • evaluation_parameters – Dictionary of dynamic values used during Validation of an Expectation.

  • interactive_evaluation – Setting the interactive_evaluation flag on a DataAsset makes it possible to declare and store Expectations without immediately evaluating them.

  • data_context – An instance of a GX DataContext.

  • runtime_configuration – The runtime configuration for the Expectation.

Returns:

An ExpectationValidationResult object

validate_configuration(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) → None

Validates the configuration for the Expectation.

For all expectations, the configuration’s expectation_type needs to match the type of the expectation being configured. This method is meant to be overridden by specific expectations to provide additional validation checks as required. Overriding methods should call super().validate_configuration(configuration).

Raises:

InvalidExpectationConfigurationError – The configuration does not contain the values required by the Expectation.

great_expectations.expectations.expectation.render_evaluation_parameter_string(render_func) → Callable

Decorator for Expectation classes that renders evaluation parameters as strings.

This allows Expectations that use Evaluation Parameters to render the values of those parameters along with the rest of the output.

Parameters:

render_func – The render method of the Expectation class.

Raises:

GreatExpectationsError – If runtime_configuration with evaluation_parameters is not provided.
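
A hedged sketch of how the decorator is applied, following the pattern used by built-in Expectations. The Expectation class and the plain-string output are illustrative only; real renderers typically return RenderedStringTemplateContent objects.

from great_expectations.expectations.expectation import (
    ColumnMapExpectation,
    render_evaluation_parameter_string,
)
from great_expectations.render.renderer.renderer import renderer


class ExpectColumnValuesToLookRight(ColumnMapExpectation):
    map_metric = "column_values.nonnull"  # placeholder map metric
    success_keys = ("mostly",)

    @classmethod
    @renderer(renderer_type="renderer.prescriptive")
    @render_evaluation_parameter_string
    def _prescriptive_renderer(
        cls, configuration=None, result=None, runtime_configuration=None, **kwargs
    ):
        # Evaluation Parameter values are appended to the rendered output by the decorator;
        # this body only builds the base message.
        column = configuration.kwargs.get("column") if configuration else None
        return [f"values in column '{column}' must not be null"]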