If you are using the DAX Query View Testing Pattern, you can also automate deployment and testing with Azure DevOps. The following instructions show you how to set up an Azure DevOps pipeline that automates the deployment of Power BI reports/semantic models and automates testing. In addition, test results can be sent to OneLake in your Fabric Capacity for processing.
Figure 1 -- High-level diagram of automated deployment of PBIP and automated testing with the DAX Query View Testing Pattern
In the pattern depicted in Figure 1, your team saves their Power BI work in the PBIP (Power BI Project) format and commits those changes to Azure DevOps.
Then an Azure Pipeline is triggered to validate the content of your Power BI semantic models and reports by performing the following:
1. The semantic model changes are identified using the "git diff" command (a rough sketch of this detection step appears after this list). Semantic models that have changed are published to a premium-backed workspace using Rui Romano's Fabric-PBIP script. The question now is, which workspace do you deploy to? I typically promote to a Build workspace first, which provides an area to validate the content of the semantic model before promoting to a Development workspace that is shared by others on the team. This reduces the chance that a team member introduces an error into the Development workspace that could hinder the work being done by others in that workspace.
2. With the semantic models published to a workspace, the report changes are identified using the "git diff" command. Report changes are evaluated against their "definition.pbir" configuration. If the byConnection property is null (meaning the report is not a thin report), the script identifies the local semantic model (example in Figure 2). If byConnection is not null, the report is assumed to be a thin report that is already configured appropriately. Each report that has been updated is then published to the same workspace.
3. For the semantic models published in step 1, the script then validates the functionality of each semantic model through a synchronous refresh using Invoke-SemanticModelRefresh. Using the native v1.0 API directly would be problematic because it is asynchronous: if you issue a refresh, you only know that the refresh has kicked off, not whether it was successful. To make it synchronous, I've written a module that issues an enhanced refresh request to get a request identifier (a GUID). That request identifier can then be passed as a parameter to the Get Refresh Execution Details endpoint to check on that specific request's status and find out whether or not the refresh completed successfully (a polling sketch appears after this list). If the refresh is successful, we move to step 4. Note: the first time a new semantic model is placed in the workspace, the refresh will fail. You have to "prime" the pipeline and set the data source credentials manually. As of April 2024, this is not fully automatable, and the Fabric team at Microsoft has written about this limitation.
4. For each semantic model, Invoke-DQVTesting is called to run the DAX queries that follow the DAX Query View Testing Pattern. Results are then logged to the Azure DevOps pipeline (Figure 3). Any failed test will fail the pipeline.
Figure 3 - Example of test results logged by Invoke-DQVTesting
5. The results of the tests collected by Invoke-DQVTesting are also sent to OneLake, where they reside in a Lakehouse on your Fabric Capacity. They can then be used for processing, analyses, and notifications.
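As a rough illustration of the change-detection in step 1, the PowerShell sketch below lists the semantic model folders touched by the latest commit. The variable names and folder-matching logic are mine, not the exact implementation of Rui Romano's Fabric-PBIP script; it assumes the PBIP convention of a ".SemanticModel" folder per model and that the checkout has at least one prior commit to diff against.

```powershell
# Compare the latest commit with the previous one to find changed files.
# Assumes the pipeline checkout has enough history to diff (fetchDepth greater than 1).
$changedFiles = git diff --name-only HEAD~1 HEAD

# PBIP stores each semantic model in a '<name>.SemanticModel' folder;
# reduce the changed files to the distinct set of model folders that were touched.
$changedModels = $changedFiles |
    Where-Object { $_ -match '\.SemanticModel/' } |
    ForEach-Object { ($_ -split '\.SemanticModel/')[0] + '.SemanticModel' } |
    Select-Object -Unique

foreach ($model in $changedModels) {
    Write-Host "Semantic model changed: $model"
    # The actual pipeline would call the Fabric-PBIP script here to publish
    # the model to the Build workspace.
}
```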
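To make the synchronous refresh idea in step 3 concrete, here is a minimal polling sketch against the Power BI REST API. It assumes you already have a bearer token in $token and the workspace and semantic model IDs in $workspaceId and $datasetId; the header name and status values reflect the enhanced refresh documentation, and the actual logic inside Invoke-SemanticModelRefresh may differ.

```powershell
# Placeholders: $token holds a bearer token for the Power BI REST API,
# $workspaceId and $datasetId identify the target workspace and semantic model.
$headers = @{ Authorization = "Bearer $token" }
$base    = "https://api.powerbi.com/v1.0/myorg/groups/$workspaceId/datasets/$datasetId/refreshes"

# Issue an enhanced refresh; the request identifier is returned in the response headers.
$body      = '{ "type": "full", "commitMode": "transactional" }'
$response  = Invoke-WebRequest -Method Post -Uri $base -Headers $headers -Body $body -ContentType 'application/json'
$requestId = $response.Headers['x-ms-request-id'] | Select-Object -First 1

# Poll the Get Refresh Execution Details endpoint until the refresh finishes.
do {
    Start-Sleep -Seconds 30
    $details = Invoke-RestMethod -Method Get -Uri "$base/$requestId" -Headers $headers
    Write-Host "Refresh status: $($details.status)"
} while ($details.status -eq 'Unknown')   # 'Unknown' means the refresh is still in progress

if ($details.status -ne 'Completed') {
    throw "Semantic model refresh did not complete successfully (status: $($details.status))."
}
```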
To follow these instructions, make sure the following prerequisites are in place:

- You have an Azure DevOps project and at least Project or Build Administrator rights for that project.
- You have connected a Fabric-backed capacity workspace to your repository in your Azure DevOps project. Instructions are provided at this link.
- Your Power BI tenant has XMLA Read/Write enabled.
- You have a service principal or account. If you are using a service principal, you will need to make sure the Power BI tenant allows service principals to use the Fabric APIs. The service principal or account will need at least the Member role in the workspace.
- You have an existing Lakehouse created. Instructions can be found at this link.
With the prerequisites in place, set up the Lakehouse and Notebook that will process the test results:

1. Navigate to the Lakehouse in the Fabric workspace.
2. Inspect the URL and capture the Workspace ID and Lakehouse ID. Copy them locally to a text file (such as Notepad).
3. Access the Files' properties by hovering over the Files label, selecting the '...' option, and selecting Properties.
4. Copy the URL to your local machine temporarily (in Notepad) and append the text 'DQVTesting/raw' to it. This allows us to ship the test results to a specific folder in your Lakehouse. For example, if the copied URL ends in /Files, the appended URL should end in /Files/DQVTesting/raw (see the sketch after these steps).
5. Download the Notebook locally from this location.
6. Navigate to the Data Engineering screen and import the Notebook.
7. Open the notebook and update the parameterized cell's workspace_id and lakehouse_id with the IDs you captured in Step 2.
8. If you have not connected the Notebook to the appropriate lakehouse, please do so. Instructions are provided here.
9. Run the Notebook. This will create the folders for processing the test results.
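If it helps to see Steps 2 and 4 in one place, this hedged sketch pulls the workspace and lakehouse IDs out of a Lakehouse URL and builds the DQVTesting/raw endpoint. The URL shapes and GUIDs below are placeholders, not values from your tenant; adjust the parsing to match what your browser actually displays.

```powershell
# Hypothetical browser URL for the Lakehouse; your real URL contains the workspace
# (group) id and lakehouse id in these positions. The GUIDs are placeholders.
$lakehouseUrl = 'https://app.powerbi.com/groups/00000000-0000-0000-0000-000000000000/lakehouses/11111111-1111-1111-1111-111111111111'

if ($lakehouseUrl -match 'groups/(?<ws>[0-9a-f\-]+)/lakehouses/(?<lh>[0-9a-f\-]+)') {
    $workspaceId = $Matches['ws']   # value for the notebook's workspace_id parameter
    $lakehouseId = $Matches['lh']   # value for the notebook's lakehouse_id parameter
}

# Append 'DQVTesting/raw' to the Files URL copied from the Properties pane so the
# pipeline ships test results into that folder. The Files URL is also a placeholder.
$filesUrl        = 'https://<files-url-copied-from-properties>/Files'
$oneLakeEndpoint = "$filesUrl/DQVTesting/raw"
Write-Host "ONELAKE_ENDPOINT value: $oneLakeEndpoint"
```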
Next, create the variable group that stores the pipeline's credentials and OneLake endpoint:

- In your Azure DevOps project, navigate to the Pipelines->Library section.
- Select the "Add Variable Group" button.
- Create a variable group called "TestingCredentialsLogShipping" and create the following variables (a sketch of how these credentials are typically used follows these steps):
  - ONELAKE_ENDPOINT - Copy the URL from Step 4 (the one ending in DQVTesting/raw) into this variable.
  - CLIENT_ID - The service principal's application/client id or universal provider name for the account.
  - CLIENT_SECRET - The client secret or password for the service principal or account, respectively.
  - TENANT_ID - The Tenant GUID. You can locate it by following the instructions at this link.
- Save the variable group.
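For context on how these credentials are typically used, below is a hedged sketch of acquiring a Power BI/Fabric access token with the client credentials flow. It assumes the variable group values are exposed to the script as environment variables (the template YAML takes care of passing them to the tasks), and it applies to service principals rather than user accounts.

```powershell
# Assumes CLIENT_ID, CLIENT_SECRET, and TENANT_ID are exposed to the script as
# environment variables (for example, mapped from the variable group in the YAML).
$tokenRequest = @{
    client_id     = $env:CLIENT_ID
    client_secret = $env:CLIENT_SECRET
    grant_type    = 'client_credentials'
    scope         = 'https://analysis.windows.net/powerbi/api/.default'
}

$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$($env:TENANT_ID)/oauth2/v2.0/token" `
    -Body $tokenRequest

# The bearer token can then be used for REST calls such as the refresh polling sketch above.
$token = $tokenResponse.access_token
```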
- Navigate to the pipeline interface.
- Select the "New Pipeline" button.
- Select the Azure Repos Git option.
- Select the repository to which you have connected the workspace via Git Integration.
- Copy the contents of the template YAML file located at this link into the code editor.
- Update the default workspace name located on line 5 with the workspace you will typically use to conduct testing.
- Select the 'Save and Run' button.
- You will be prompted to commit to the main branch. Select the 'Save and Run' button.
- You will be redirected to the first pipeline run and asked to authorize the pipeline to access the variable group created previously. Select the 'View' button.
- A pop-up window will appear. Select the 'Permit' button.
- You will be asked to confirm. Select the 'Permit' button.
- This will kick off the automated deployment and testing as described above.
- Select the "Automated Deployment and Testing Job".
- You will see a log of DAX Queries that end in .Tests or .Test running against their respective semantic models in your workspace.
- Any failed tests will be logged to the job, and the pipeline will also fail.
- You will also see any test results in your lakehouse as a CSV file. Please see CSV Format for more details on the file format.
- Run the notebook. When it completes, the files should be moved into the processed folder and the following tables are created in the lakehouse:
  - Calendar - The date range of the test results.
  - ProjectInformation - A table containing information about the Azure DevOps project and pipeline used to execute the tests.
  - TestResults - A table containing the test results.
  - Time - Used to support time-based calculations.
- Schedule the notebook to run on a regular interval as needed. Instructions can be found at this link.
It's essential to monitor the Azure DevOps pipeline for any failures. I've also written about some best practices for setting that up in this article.
The following describes the CSV file columns for each version of Invoke-DQVTesting (a short example of reading these columns follows the list).
- Message - The message logged for each step of testing.
- LogType - Will be one of the following values:
  - Debug - Informational purposes.
  - Error - A test has failed.
  - Failed - One or more tests failed.
  - Success - All tests passed.
  - Passed - Test result passed.
- IsTestResult - Will be "True" if the record is a test result; "False" otherwise.
- DataSource - The XMLA endpoint for the semantic model.
- ModelName - The name of the semantic model.
- BranchName - The name of the branch of the repository this testing occurred in.
- RespositoryName - The name of the repository this testing occurred in.
- ProjectName - The name of the Azure DevOps project this testing occurred in.
- UserName - The initiator of the test results in Azure DevOps.
- RunID - A globally unique identifier (GUID) identifying the set of tests conducted.
- Order - Integer representing the order in which each record was created.
- RunDateTime - The date and time the tests were initiated, in ISO 8601 format.
- InvokeDQVTestingVersion - The version of Invoke-DQVTesting used to conduct the tests.
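If you want to inspect a results file outside the lakehouse, the columns above can be read directly with Import-Csv. The file name below is a placeholder, and the filter relies on the LogType semantics described above.

```powershell
# Load a test results file produced by Invoke-DQVTesting (the file name is a placeholder).
$results = Import-Csv -Path '.\DQVTestResults.csv'

# Keep only the records that represent test outcomes and list any failures.
# Per the column descriptions above, LogType 'Error' marks a failed test.
$failedTests = $results |
    Where-Object { $_.IsTestResult -eq 'True' -and $_.LogType -eq 'Error' } |
    Select-Object ModelName, Message, RunDateTime

$failedTests | Format-Table -AutoSize
```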
The pipeline leverages two PowerShell modules called Invoke-DQVTesting and Invoke-SemanticModelRefresh. For more information, please see Invoke-DQVTesting and Invoke-SemanticModelRefresh respectively.
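If you want to experiment with the modules outside the pipeline, they can be installed locally. This assumes both modules are available from the PowerShell Gallery under these names; the pipeline template installs them automatically during a run.

```powershell
# Install the modules locally (assumes both are published to the PowerShell Gallery
# under these names; the pipeline template installs them for you during a run).
Install-Module -Name Invoke-DQVTesting -Scope CurrentUser
Install-Module -Name Invoke-SemanticModelRefresh -Scope CurrentUser

# Inspect the commands each module exposes before wiring them into your own scripts.
Get-Command -Module Invoke-DQVTesting, Invoke-SemanticModelRefresh
```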
Git Logo provided by Git - Logo Downloads (git-scm.com)