Understanding the data quality capability

Note

The Data Playbook defines a set of capabilities that represent the conceptual building blocks used to build data-related solutions. See Defining data capabilities for the full set of capabilities defined in the playbook.

Data quality is essential for ensuring data is accurate, complete, timely, and consistent, in line with organizational requirements. It's crucial for maintaining the reliability and trustworthiness of data-driven applications.

We can define and measure the quality of data against a set of standards called metrics. Common data quality metrics include accuracy, completeness, consistency, validity, uniqueness, and timeliness.
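
To make these metrics concrete, the sketch below computes completeness, uniqueness, and validity for a small, hypothetical orders dataset using pandas. The column names and the validity rule (non-negative amounts) are assumptions for illustration, not prescriptions from the playbook.

```python
import pandas as pd

# Hypothetical orders dataset; the column names and values are invented for this example.
orders = pd.DataFrame({
    "order_id": [1, 2, 2, 4, None],
    "amount": [10.0, -5.0, 20.0, None, 30.0],
})

# Completeness: share of non-null values in each column.
completeness = orders.notna().mean()

# Uniqueness: share of distinct, non-null values in the key column.
uniqueness = orders["order_id"].nunique(dropna=True) / len(orders)

# Validity: share of rows that satisfy a simple business rule (amount >= 0).
# Rows with a null amount do not satisfy the rule and count as invalid.
validity = (orders["amount"] >= 0).mean()

print("Completeness per column:", completeness.to_dict())
print(f"Uniqueness (order_id): {uniqueness:.2f}")
print(f"Validity (amount >= 0): {validity:.2f}")
```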

Monitoring data quality involves a process that evaluates the suitability of data by verifying that it can fulfill your business, system, and technical requirements.

Understanding data quality characteristics

Data quality is defined by several key characteristics that ensure its effectiveness and reliability in various applications:

  • Metrics-Based Quality Assessment: Uses metrics like accuracy, completeness, consistency, validity, uniqueness, and timeliness to define data quality.
  • Best Practices Alignment: Integrates with an organization's Data Governance strategy, with metrics set by Data Owners.
  • Effect on Decision-Making: Crucial for accurate reporting, operational efficiency, and compliance in regulated industries.
  • Machine Learning Implications: Directly affects the performance and reliability of artificial intelligence and machine learning models.
  • Remediation Techniques: After identifying data quality issues, it's crucial to decide on the appropriate actions for remediation, such as:
    • Root Cause Analysis: Investigates the origin of data quality issues by examining upstream data sources, schema changes, operational environments, and the code for transformation logic.
    • Interpolation (Imputation): Employs statistical methods to estimate and fill in missing or null values in datasets, and is especially applicable to time series data; a minimal sketch follows this list.
    • Schema Evolution: Dynamically adjusts the data schema to accommodate changes in the source dataset over time, ensuring continued data quality and compatibility.
  • Data Quality for Downstream Use Cases: Tailors data quality metrics to suit specific end-user and application requirements, ensuring data suitability for diverse downstream applications like analytics and reporting.
  • Data Quality for Machine Learning: Involves thorough dataset evaluation through feasibility studies and exploratory analysis to ensure suitability for machine learning model training, focusing on data adequacy and freedom from biases and inaccuracies.
  • Data Quality Monitoring: Establishes processes similar to DevOps and Site Reliability Engineering (SRE) for consistent data quality tracking, using dashboards and alerts. Neglecting data quality monitoring can waste time, create redundant work, cause data pipelines to fail, and even pollute downstream datasets used to make business decisions.
    • Monitoring Data in Flight: Monitors dataset quality during processing, including identifying and tracking errors in the data pipeline. Business and technical teams should observe important dataset characteristics such as null percentages, streaming anomalies, late-arriving data, and schema changes over time.
    • Monitoring Data at Rest: Continuously monitors data stores for quality, serving as a feedback loop for data engineers to refine upstream checks. It's vital for scenarios where full dataset access is necessary to identify and manage data quality issues.
    • Anomaly Detection: Uses algorithms to identify deviations from expected data behavior in data pipelines, facilitating proactive identification of issues in real time at each data lifecycle stage; a minimal sketch follows this list.
    • Data Validation: Focuses on identifying and neutralizing fatal errors that could disrupt downstream processing. Key errors include unexpected schema changes, incorrect data type formats, and unexpected nulls.
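
As referenced in the remediation list above, interpolation (imputation) can fill gaps in time series data. The following minimal sketch assumes a pandas Series of daily sensor readings with a datetime index; the values and frequency are invented for illustration.

```python
import pandas as pd

# Hypothetical daily sensor readings with two missing observations.
readings = pd.Series(
    [10.0, None, None, 16.0, 18.0],
    index=pd.date_range("2024-01-01", periods=5, freq="D"),
)

# Time-based linear interpolation estimates the missing points from
# the surrounding observations (here: 12.0 and 14.0).
imputed = readings.interpolate(method="time")

print(imputed)
```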

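The anomaly detection step above can be as simple as flagging values that deviate sharply from recent history. The sketch below is a minimal rolling z-score approach over a hypothetical series of daily row counts; the window size and threshold are assumptions, not recommendations.

```python
import pandas as pd

# Hypothetical daily row counts for a dataset; the spike on the last day is the
# kind of deviation a data quality monitor should surface.
row_counts = pd.Series([1000, 1020, 980, 1010, 995, 5000])

# Rolling mean and standard deviation of previous observations
# (shifted so the current point is excluded from its own baseline).
baseline = row_counts.rolling(window=5, min_periods=3).agg(["mean", "std"]).shift(1)

# Flag points more than 3 standard deviations away from the recent mean.
z_scores = (row_counts - baseline["mean"]) / baseline["std"]
anomalies = row_counts[z_scores.abs() > 3]

print(anomalies)  # Only the final value (5000) is flagged.
```
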
Learn more about data quality in Microsoft Fabric

Microsoft Fabric improves data quality through a range of features, including streamlined data integration, data cleansing, data validation, and data monitoring. As an end-to-end analytics platform, it ensures that data quality can be maintained throughout the entire data lifecycle, from collection to analysis to visualization.

Implementations

Examples

  • The MDW Repo: Parking Sensors sample shows how to use the Great Expectations Python framework in a larger data pipeline built with Azure Databricks and Azure Data Factory.

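For orientation, the sketch below shows the kind of in-pipeline check that sample builds on, written against the legacy pandas-DataFrame API of Great Expectations (the pre-1.0 releases); the column names, expectations, and failure handling are illustrative assumptions, and newer Great Expectations releases use a different, context-based API.

```python
import great_expectations as ge
import pandas as pd

# Hypothetical batch produced by an upstream pipeline step.
batch = ge.from_pandas(pd.DataFrame({
    "sensor_id": ["a1", "a2", None],
    "occupancy": [0.4, 1.7, 0.9],
}))

# Declare expectations against the batch (legacy pandas-dataset API).
batch.expect_column_values_to_not_be_null("sensor_id")
batch.expect_column_values_to_be_between("occupancy", min_value=0, max_value=1)

# Validate and fail the pipeline step if any expectation is not met.
results = batch.validate()
if not results.success:
    raise ValueError("Data quality checks failed; inspect the validation results for details.")
```
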
See the For more information section for data quality tools like Great Expectations, dbt Unit Testing, and Apache Griffin.

For more information