SetBasedColumnMapExpectation
- class great_expectations.expectations.set_based_column_map_expectation.SetBasedColumnMapExpectation(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None)#
Base class for SetBasedColumnMapExpectations.
SetBasedColumnMapExpectations facilitate set-based comparisons as the core logic for a Map Expectation.
Example Definition:
class ExpectColumnValuesToBeInSolfegeScaleSet(SetBasedColumnMapExpectation):
    set_camel_name = "SolfegeScale"
    set_ = ['do', 're', 'mi', 'fa', 'so', 'la', 'ti']
    set_semantic_name = "the Solfege scale"
    map_metric = SetBasedColumnMapExpectation.register_metric(
        set_camel_name=set_camel_name,
        set_=set_
    )
- Parameters
set_camel_name (str) – A name describing a set of values, in camel case.
set_ (str) – A value set.
set_semantic_name (optional[str]) – A name for the semantic type representing the set being validated.
map_metric (str) – The name of an ephemeral metric, as returned by register_metric(…).
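Once a subclass like ExpectColumnValuesToBeInSolfegeScaleSet above is defined, it is registered under its snake_case name and can be invoked through a Validator. The sketch below assumes a fluent pandas datasource (pandas_default) and an illustrative "notes" column; these names are assumptions for the example, not part of this class's API.

    import pandas as pd
    import great_expectations as gx

    context = gx.get_context()

    # Illustrative data: Solfege notes plus one out-of-set value.
    validator = context.sources.pandas_default.read_dataframe(
        pd.DataFrame({"notes": ["do", "re", "mi", "xx"]})
    )

    # The snake_case method name is derived from the Expectation class name.
    result = validator.expect_column_values_to_be_in_solfege_scale_set(
        column="notes", mostly=0.75
    )
    print(result.success)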
- domain_type = 'column'#
- get_success_kwargs(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) Dict[str, Any] #
Retrieve the success kwargs.
- Parameters
configuration – The ExpectationConfiguration that contains the kwargs. If no configuration arg is provided, the success kwargs from the configuration attribute of the Expectation instance will be returned.
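A minimal sketch of retrieving the success kwargs from an instance of the example Expectation above; the expectation_type string and the "column"/"mostly" kwargs are illustrative assumptions.

    from great_expectations.core.expectation_configuration import ExpectationConfiguration

    config = ExpectationConfiguration(
        expectation_type="expect_column_values_to_be_in_solfege_scale_set",
        kwargs={"column": "notes", "mostly": 0.9},
    )
    expectation = ExpectColumnValuesToBeInSolfegeScaleSet(config)

    # With no configuration argument, the instance's own configuration is used.
    success_kwargs = expectation.get_success_kwargs()
    print(success_kwargs)  # the kwargs relevant to judging success, e.g. mostly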
- print_diagnostic_checklist(diagnostics: Optional[great_expectations.core.expectation_diagnostics.expectation_diagnostics.ExpectationDiagnostics] = None, show_failed_tests: bool = False) str #
Runs self.run_diagnostics and generates a diagnostic checklist.
The output from this method is a thin wrapper around ExpectationDiagnostics.generate_checklist(). This method is experimental.
- Parameters
diagnostics (optional[ExpectationDiagnostics]) – If diagnostics are not provided, diagnostics will be run on self.
show_failed_tests (bool) – If true, failing tests will be printed.
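A minimal sketch of how this is typically called during Expectation development, e.g. at the bottom of the module that defines the example Expectation above:

    if __name__ == "__main__":
        # Runs diagnostics on the Expectation and generates a checklist showing
        # how complete the implementation is, including any failing example tests.
        ExpectColumnValuesToBeInSolfegeScaleSet().print_diagnostic_checklist(
            show_failed_tests=True
        )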
- run_diagnostics(raise_exceptions_for_backends: bool = False, ignore_suppress: bool = False, ignore_only_for: bool = False, for_gallery: bool = False, debug_logger: Optional[logging.Logger] = None, only_consider_these_backends: Optional[List[str]] = None, context: Optional[AbstractDataContext] = None) ExpectationDiagnostics #
Produce a diagnostic report about this Expectation.
The output of this method is currently used to populate the Public Expectation Gallery (via its JSON structure) and to enable a fast dev loop for developing new Expectations, letting contributors quickly check the completeness of their Expectations.
The contents of the report are captured in the ExpectationDiagnostics dataclass. You can see some examples in test_expectation_diagnostics.py
Some components of the diagnostic report (e.g. description, examples, library_metadata) can be introspected directly from the Expectation class. Other components (e.g. metrics, renderers, executions) depend at least in part on instantiating, validating, and/or executing the Expectation class. For these components, at least one test case with include_in_gallery=True must be present in the examples in order to produce the metrics, renderers, and execution engines parts of the report, because get_validation_dependencies requires expectation_config as an argument.
If errors are encountered while running the diagnostics, they are assumed to be due to incompleteness of the Expectation's implementation (e.g., declaring a dependency on Metrics that do not exist). These errors are added under the "errors" key in the report.
- Parameters
raise_exceptions_for_backends – Bool object that when True will raise an Exception if a backend fails to connect.
ignore_suppress – Bool object that when True will ignore the suppress_test_for list on Expectation sample tests.
ignore_only_for – Bool object that when True will ignore the only_for list on Expectation sample tests.
for_gallery – Bool object that when True will create empty arrays to use as examples for the Expectation Diagnostics.
debug_logger (optional[logging.Logger]) – Logger object to use for sending debug messages to.
only_consider_these_backends (optional[List[str]]) –
context (optional[AbstractDataContext]) – Instance of any child of "AbstractDataContext" class.
- Returns
An Expectation Diagnostics report object
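A rough sketch of running diagnostics and inspecting the report; restricting the run to the pandas backend is an illustrative choice, not a requirement.

    expectation = ExpectColumnValuesToBeInSolfegeScaleSet()
    diagnostics = expectation.run_diagnostics(
        only_consider_these_backends=["pandas"],
    )

    # Problems encountered while running the diagnostics are collected under "errors".
    print(diagnostics.errors)
    print(diagnostics.generate_checklist())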
- validate(validator: Validator, configuration: Optional[ExpectationConfiguration] = None, evaluation_parameters: Optional[dict] = None, interactive_evaluation: bool = True, data_context: Optional[AbstractDataContext] = None, runtime_configuration: Optional[dict] = None) ExpectationValidationResult #
Validates the expectation against the provided data.
- Parameters
validator – A Validator object that can be used to create Expectations, validate Expectations, and get Metrics for Expectations.
configuration – Defines the parameters and name of a specific expectation.
evaluation_parameters – Dictionary of dynamic values used during Validation of an Expectation.
interactive_evaluation – Setting the interactive_evaluation flag on a DataAsset makes it possible to declare and store Expectations without immediately evaluating them.
data_context – An instance of a GX DataContext.
runtime_configuration – The runtime configuration for the Expectation.
- Returns
An ExpectationValidationResult object
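A rough sketch of validating the example Expectation directly, assuming a fluent pandas datasource and an illustrative "notes" column:

    import pandas as pd
    import great_expectations as gx
    from great_expectations.core.expectation_configuration import ExpectationConfiguration

    context = gx.get_context()
    validator = context.sources.pandas_default.read_dataframe(
        pd.DataFrame({"notes": ["do", "re", "mi", "fa"]})
    )

    expectation = ExpectColumnValuesToBeInSolfegeScaleSet(
        ExpectationConfiguration(
            expectation_type="expect_column_values_to_be_in_solfege_scale_set",
            kwargs={"column": "notes"},
        )
    )

    result = expectation.validate(validator=validator, data_context=context)
    print(result.success)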