
Microsoft Fabric: Top Practices for Optimal Usage

In the rapidly evolving world of web development, Microsoft’s Fluent UI, formerly known as Microsoft Fabric, has emerged as a powerful toolset for building robust and visually appealing user interfaces. This article aims to guide you through the best practices of using Microsoft Fabric to create efficient, user-friendly, and aesthetically pleasing applications.


Microsoft Fabric is a collection of UX frameworks that developers can use to build Fluent experiences that work seamlessly across Microsoft products. It includes a variety of versatile components, robust styling options, and utilities designed to simplify the process of creating consistent, accessible, and responsive designs.


Best Practices of using Microsoft Fabric

Before we explore the specifics, it’s important to understand that the effectiveness of Microsoft Fabric, like any other tool, largely depends on how well it’s used. Therefore, adhering to best practices is not just recommended, but essential for leveraging the full potential of this powerful library.



In the following sections, we will explore these best practices in detail, providing practical examples and tips to help you make the most of Microsoft Fabric in your projects.


Let’s get started!


Best Practice 1: Use Azure Resource Manager Templates and the Service Fabric PowerShell Module to Create Secure Clusters

Azure Resource Manager (ARM) templates and the Service Fabric PowerShell module are powerful tools that can help automate the process of setting up and managing your Service Fabric clusters.


An ARM template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which means you describe your intended deployment without writing the sequence of programming commands to create it.
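To make the declarative idea concrete, here is a minimal skeleton of an ARM template (the `$schema` URL is the standard deployment-template schema; the real secure-cluster template defines many more parameters and resources):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "clusterName": { "type": "string" }
  },
  "variables": {},
  "resources": [],
  "outputs": {}
}
```

Everything you deploy goes into the `resources` array; Azure Resource Manager works out the creation order from the dependencies you declare, rather than from a scripted sequence of commands.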


To create a Service Fabric cluster using an ARM template in Microsoft Fabric, you would typically follow these steps:


STEP 1: Install Modules

Install the Service Fabric SDK and PowerShell module.


STEP 2: Download resources:

Clone or download the Azure Resource Manager Quickstart Templates repo.

Alternatively, download these files from the service-fabric-secure-cluster-5-node-1-nodetype folder:

  • New-ServiceFabricClusterCertificate.ps1

  • azuredeploy.json

  • azuredeploy.parameters.json


STEP 3: Sign in to Azure:

Sign in to your Azure account using Connect-AzAccount -SubscriptionId "<subscription ID>" (replace <subscription ID> with your actual ID).


STEP 4: Create a self-signed certificate (for development only):

Production environments require a trusted certificate authority (CA). Here, we'll create a self-signed certificate for testing purposes.


Define unique names for your resources (these must be unique within the cloudapp.azure.com domain):

  • $resourceGroupName: Name for your resource group (e.g., SFQuickstartRG)

  • $keyVaultName: Name for your Key Vault (e.g., SFQuickstartKV)


Create a resource group:

New-AzResourceGroup -Name $resourceGroupName -Location SouthCentralUS

Create a Key Vault enabled for deployment:

New-AzKeyVault -VaultName $keyVaultName -ResourceGroupName $resourceGroupName -Location SouthCentralUS -EnabledForDeployment

Generate and upload a certificate to Key Vault:

.\scripts\New-ServiceFabricClusterCertificate.ps1

The script will prompt you for details; modify CertDNSName and KeyVaultName to match your own names.


After completion, the script provides values for template deployment:

  • $sourceVaultId: Source Vault resource ID

  • $certUrlValue: Certificate URL

  • $certThumbprint: Certificate thumbprint


Note: Store these values carefully for deploying your Service Fabric cluster template.
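Putting the steps together, the flow looks roughly like this in PowerShell (the resource names are the example values above, and the final deployment command is a sketch of the natural next step, not part of the numbered steps):

```powershell
# Sign in and select the subscription
Connect-AzAccount -SubscriptionId "<subscription ID>"

# Example names from the steps above
$resourceGroupName = "SFQuickstartRG"
$keyVaultName = "SFQuickstartKV"

# Create the resource group and a deployment-enabled Key Vault
New-AzResourceGroup -Name $resourceGroupName -Location SouthCentralUS
New-AzKeyVault -VaultName $keyVaultName -ResourceGroupName $resourceGroupName `
    -Location SouthCentralUS -EnabledForDeployment

# Generate the self-signed certificate and upload it to Key Vault;
# the script prints $sourceVaultId, $certUrlValue, and $certThumbprint
.\scripts\New-ServiceFabricClusterCertificate.ps1

# Deploy the cluster template, supplying the values the script printed
New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json
```

The parameters file is where the certificate thumbprint, source vault ID, and certificate URL from the script output are plugged in before deployment.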


Best Practice 2: Use X.509 certificates

X.509 certificates provide a secure method for authenticating clients and servers. They are widely supported across different platforms and technologies.

When used for authentication in Service Fabric, a certificate can be used to prove the following claims:

  • The presenter of the certificate credential has possession of the certificate’s private key.

  • The certificate’s SHA-1 hash (‘thumbprint’) matches a declaration included in the cluster definition.

  • The certificate’s distinguished Subject Common Name matches a declaration included in the cluster definition, and the certificate’s issuer is known or trusted.


In the context of Service Fabric, you add certificates to a cluster Resource Manager template by referencing the key vault that contains the certificate keys. This includes both the cluster certificate and any application security certificates that you plan to use for your applications.
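In a cluster template, that Key Vault reference typically appears in the virtual machine scale set’s osProfile. A sketch of the relevant fragment (the parameter names are illustrative; your template may use different ones):

```json
"osProfile": {
  "secrets": [
    {
      "sourceVault": { "id": "[parameters('sourceVaultValue')]" },
      "vaultCertificates": [
        {
          "certificateStore": "My",
          "certificateUrl": "[parameters('certificateUrlValue')]"
        }
      ]
    }
  ]
}
```

At deployment time, Azure pulls the certificate out of the referenced vault and installs it into the specified store on every node, so the cluster can use it for node-to-node and client-to-node security.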



Best Practice 3: Configure Security Policies

Configuring security policies is a critical aspect of safeguarding your data and clusters in Microsoft Fabric. These policies govern who can access your data, what they can do with it, and how it’s protected.

  1. Authentication: Every interaction with Fabric, including logging in, using the Power BI mobile app, and running SQL queries through SQL Server Management Studio (SSMS), is authenticated using Microsoft Entra ID. This allows users to safely connect to Fabric from their corporate office, when working at home, or from a remote location.

  2. Network Security: Interactions within Fabric use the internal Microsoft network and traffic outside of the service is protected by default. Your organization might want to restrict and secure the network traffic coming into Fabric based on your organization’s requirements.

  3. Conditional Access: You can configure conditional access policies in Microsoft Fabric to control access based on conditions. For example, you can create a policy that requires multi-factor authentication when users access Fabric from outside your corporate network.

  4. Role-Based Access Control (RBAC): RBAC allows you to grant permissions to users, groups, and applications at a certain scope. You can assign roles at different scopes, such as the management group, subscription, resource group, or individual resources, to provide the exact access that users need to perform their tasks.

  5. Data Protection: Fabric ensures that built-in security and reliability features secure your data at rest and in transit. Microsoft also makes sure that your data is recoverable in cases of infrastructure failures or disasters.



Best Practice 4: Implement the Reliable Actors Security Configuration

The Reliable Actors model in Azure Service Fabric is a programming model that provides a high-level abstraction for writing distributed applications. It’s built on top of Service Fabric Reliable Services, which makes it easy to package, deploy, and manage scalable and reliable microservices.


The Reliable Actors model is an application framework that simplifies the building of distributed, scalable applications in the cloud. It’s based on the actor design pattern, where each actor is a unit of computation and state, with single-threaded execution.



Implementing the Reliable Actors security configuration is considered a best practice for several reasons:

  1. Simplicity and Abstraction: The Reliable Actors model provides a high-level abstraction for writing distributed applications, simplifying the development process. This allows developers to focus on the business logic of their applications, rather than the complexities of distributed systems.

  2. Built-in Security Features: The Reliable Actors model includes built-in features for managing security. This means that it provides mechanisms to ensure that only authorized entities can access or modify an actor’s state. This is crucial in a distributed system where actors can be spread across many nodes.

  3. Scalability and Performance: The Reliable Actors model is designed to be highly scalable, making it suitable for applications that need to handle large volumes of data or high levels of traffic. It also provides reliable messaging and state management, ensuring that your application can maintain high performance even under heavy load.

  4. Flexibility: The Reliable Actors model is flexible and can be used to implement a wide range of application patterns. This includes everything from simple stateless services to complex, stateful services with sophisticated workflows.


To implement the Reliable Actors security configuration, you would typically define the actor interface and implement the actor class. The actor interface defines the actor contract that is shared by the actor implementation and the clients calling the actor. The actor class is a class that derives from the base type Actor and implements the interfaces defined in the actor interface. Any additional actor implementations added to the project must also be registered in the Main() method.
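As a sketch in C#, this looks roughly as follows (the counter actor and its method names are hypothetical; the IActor marker interface, Actor base class, and ActorRuntime registration come from the Service Fabric Reliable Actors SDK):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Runtime;

// The actor contract, shared by the implementation and its clients.
public interface ICounterActor : IActor
{
    Task<int> IncrementAsync();
}

// The implementation derives from Actor and uses the built-in,
// single-threaded state manager for its persisted state.
[StatePersistence(StatePersistence.Persisted)]
internal class CounterActor : Actor, ICounterActor
{
    public CounterActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId) { }

    public async Task<int> IncrementAsync()
    {
        int count = await this.StateManager.GetOrAddStateAsync("count", 0);
        count++;
        await this.StateManager.SetStateAsync("count", count);
        return count;
    }
}

// Every actor implementation in the project must be registered in Main().
internal static class Program
{
    private static void Main()
    {
        ActorRuntime.RegisterActorAsync<CounterActor>(
            (context, actorType) => new ActorService(context, actorType))
            .GetAwaiter().GetResult();

        Thread.Sleep(Timeout.Infinite);
    }
}
```

Because each actor instance executes single-threaded, the increment above needs no explicit locking; the runtime serializes calls to the same actor for you.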


Best Practice 5: Configure TLS for Azure Service Fabric

Transport Layer Security (TLS) is a protocol that provides privacy and data integrity between two communicating applications. It’s used for web browsers and other applications that require data to be securely exchanged over a network.


In the context of Azure Service Fabric, TLS is used to provide secure communication between the nodes of a Service Fabric cluster. This helps to protect your data from eavesdropping and tampering as it moves across the network.


Configuring Transport Layer Security (TLS) for Azure Service Fabric is considered a best practice for the following reasons:

  1. TLS provides encryption for data in transit, helping to protect your data from eavesdropping and tampering. This is particularly important in a distributed system like Service Fabric where data is often transmitted across different nodes in the cluster.

  2. TLS also provides a mechanism for server and client authentication. This ensures that the data is being sent to and received from the intended parties, adding a layer of security.

  3. TLS ensures the integrity of data. This means that it provides a mechanism to detect if data has been tampered with during transit.

  4. Many industry regulations and standards require data to be encrypted during transit. By configuring TLS, you can meet these compliance requirements.


Here’s how you can configure TLS for Azure Service Fabric:

  1. Obtain a Certificate: The first step is to obtain a certificate that can be used to establish the TLS connection. This could be a self-signed certificate or a certificate issued by a trusted Certificate Authority.

  2. Upload the Certificate to Azure Key Vault: Once you have a certificate, you need to upload it to Azure Key Vault. This provides a secure way to store and manage the certificate.

  3. Configure the Service Fabric Cluster: The next step is to configure your Service Fabric cluster to use the certificate from Azure Key Vault for establishing TLS connections. This involves updating the cluster’s security configuration to reference the certificate.

  4. Update the Service Manifest: Finally, you need to update the service manifest of any services that need to use the TLS connection. This involves adding an endpoint to the service manifest that uses the certificate for communication.
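Step 4 might look like this in ServiceManifest.xml (the endpoint name and port are illustrative; the certificate itself is bound to the endpoint in the application manifest via an EndpointBindingPolicy):

```xml
<Resources>
  <Endpoints>
    <!-- An HTTPS input endpoint; traffic on this port is protected by TLS. -->
    <Endpoint Name="ServiceEndpointHttps" Protocol="https" Type="Input" Port="443" />
  </Endpoints>
</Resources>
```

Declaring the endpoint here lets Service Fabric reserve the port and expose it to your service code at runtime, while the binding policy ties it to the certificate uploaded in step 2.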


Best Practice 6: Use Dynamic Management Views (DMVs) to Monitor Query Execution

Dynamic Management Views (DMVs) are a feature of SQL Server that provide valuable insights into the state of your SQL Server system, including detailed statistics about query execution.


DMVs return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. For example, you can use DMVs to find out what queries are running, which indexes are being used, what the most expensive queries are, and much more.


You can use DMVs for:

  1. Monitoring and Diagnostics: DMVs provide a wealth of information about the state of your SQL Server system, including detailed statistics about query execution. This allows you to monitor the performance of your queries and diagnose any issues that might be affecting their performance.

  2. Performance Tuning: By providing insights into query execution, DMVs can help you identify performance bottlenecks and optimize your queries for better performance. This can lead to significant improvements in the overall performance of your applications.

  3. Capacity Planning: DMVs can also provide valuable insights for capacity planning. By understanding your query workloads and how they impact your system, you can make more informed decisions about capacity planning and scaling.

  4. Security and Compliance: DMVs can also help with security and compliance efforts. For example, you can use DMVs to monitor login and user activity, identify potential security risks, and ensure that your system is in compliance with relevant regulations.


Here’s how you can use DMVs to monitor query execution:

  1. Identify the Query: Use the sys.dm_exec_requests DMV to identify the query you wish to investigate. This DMV contains information about each request that is executing within SQL Server.

  2. Investigate the Query Plan: Use the sys.dm_exec_query_plan DMV to retrieve the query plan for the query. This DMV returns the showplan in XML format for the batch specified by the plan_handle.

  3. Investigate SQL on the Distributed Databases: If you’re using distributed databases, you can use the sys.dm_exec_sql_text DMV to get the text of the SQL batch that is identified by the specified sql_handle.

  4. Investigate Data Movement on the Distributed Databases: If you’re using distributed databases, you can use the sys.dm_pdw_dms_workers DMV to monitor data movement steps across the distributed databases.
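For example, on SQL Server the first two steps can be combined into a single query (the DMV names and columns below are as documented for SQL Server; the exact columns available vary by version and platform):

```sql
-- Currently executing requests, with their SQL text and query plans
SELECT
    r.session_id,
    r.status,
    r.total_elapsed_time,
    t.text        AS sql_text,
    p.query_plan  AS showplan_xml
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS p
ORDER BY r.total_elapsed_time DESC;
```

Sorting by elapsed time surfaces the longest-running requests first, which is usually where an investigation starts.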


By using DMVs in Microsoft Fabric, you can gain deeper insights into your data and the performance of your queries, leading to more efficient and effective data management.


Best Practice 7: Manually Update Statistics after Data Modifications

In Microsoft Fabric, statistics play a crucial role in optimizing the performance of your queries. They provide the query optimizer with metadata about the distribution of data in a table, which helps the optimizer estimate the cost of different query plans and choose the most efficient one.


When you make significant changes to your data, such as inserting, updating, or deleting a large number of rows, the distribution of data in the table can change. This means that the existing statistics may no longer accurately represent the current state of the data.


If the query optimizer uses outdated statistics to plan queries, it may choose a suboptimal query plan, which can lead to poor performance. Therefore, after making significant changes to your data, it’s a good idea to manually update these statistics to ensure that the query optimizer has the most accurate information.


Here’s how you can manually update statistics in Microsoft Fabric:

-- To create statistics on the dbo.DimCustomer table, based on all the rows in a column CustomerKey:
CREATE STATISTICS DimCustomer_CustomerKey_FullScan ON dbo.DimCustomer (CustomerKey) WITH FULLSCAN;

-- To manually update the statistics object DimCustomer_CustomerKey_FullScan, perhaps after a large data update:
UPDATE STATISTICS dbo.DimCustomer (DimCustomer_CustomerKey_FullScan) WITH FULLSCAN;

-- To show information about the statistics object:
DBCC SHOW_STATISTICS ("dbo.DimCustomer", "DimCustomer_CustomerKey_FullScan");

-- To show only information about the histogram of the statistics object:
DBCC SHOW_STATISTICS ("dbo.DimCustomer", "DimCustomer_CustomerKey_FullScan") WITH HISTOGRAM;

-- To manually drop the statistics object DimCustomer_CustomerKey_FullScan:
DROP STATISTICS dbo.DimCustomer.DimCustomer_CustomerKey_FullScan;

By regularly updating statistics after significant data modifications, you can help ensure that the query optimizer has the most up-to-date information, leading to more efficient query plans and better overall performance.


Best Practice 8: Group INSERT Statements into Batches (Avoid Trickle Inserts)

In Microsoft Fabric, when you’re inserting a large amount of data into a table, it’s often more efficient to group your INSERT statements into batches. This is because each INSERT statement incurs a certain amount of overhead to process.


By grouping multiple INSERT statements into a single batch, you can amortize this overhead over multiple rows, which can significantly improve the performance of your data modifications.

Here’s how you can group INSERT statements into batches in Microsoft Fabric:

-- Start a transaction
BEGIN TRANSACTION;

-- Insert the first batch of rows
INSERT INTO MyTable (Column1, Column2) VALUES ('Value1', 'Value2'), ('Value3', 'Value4'), ...;

-- Insert the next batch of rows
INSERT INTO MyTable (Column1, Column2) VALUES ('Value5', 'Value6'), ('Value7', 'Value8'), ...;

-- Commit the transaction
COMMIT TRANSACTION;

In this example, each INSERT statement inserts a batch of rows into MyTable. The rows to be inserted are specified as a list of values in the VALUES clause.


This approach is particularly beneficial when you’re inserting a large amount of data. However, for smaller amounts of data, the difference in performance may not be noticeable.


Best Practice 9: Create (UNENFORCED) Primary Key, Foreign Key and Unique Constraints

In Microsoft Fabric, creating unenforced Primary Key, Foreign Key, and Unique Constraints is considered a best practice to ensure data integrity and improve query performance.

  1. Data Integrity: These constraints help ensure the integrity of your data by enforcing certain rules on the data in your tables. For example, a primary key constraint ensures that each row in a table has a unique identifier, while a foreign key constraint ensures that a column value matches a value in another table.

  2. Query Performance: These constraints can also improve query performance. The query optimizer can use these constraints to make assumptions about your data, which can lead to more efficient query plans.

  3. Flexibility: The NOT ENFORCED option provides flexibility in managing these constraints. When a constraint is created as NOT ENFORCED, the engine does not check it when data is modified. This can be useful in certain scenarios, such as when you’re loading data into a table and want to avoid the cost of constraint checking.


Primary Key: A primary key constraint ensures that a column (or a set of columns) always has a unique value and never contains NULL. This uniquely identifies each row in the table. Here’s how you can create a primary key constraint in Microsoft Fabric:

-- Schema: sales
-- Table: customer
-- primary key column: customerid
ALTER TABLE sales.customer ADD CONSTRAINT PK_customer_customerid PRIMARY KEY NONCLUSTERED (customerid) NOT ENFORCED;

Foreign Key: A foreign key constraint ensures that the values in a column (or a set of columns) match the values in another table’s primary key. Note that Microsoft Fabric currently supports foreign key constraints only when NOT ENFORCED is used.
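For example (the sales.orders table here is hypothetical, referencing the sales.customer table from the primary key example above):

```sql
ALTER TABLE sales.orders
ADD CONSTRAINT FK_orders_customerid
FOREIGN KEY (customerid) REFERENCES sales.customer (customerid) NOT ENFORCED;
```

As with the primary key, the constraint documents the relationship and informs the optimizer, but it is your loading process, not the engine, that must keep the data consistent.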


Unique Constraints: A unique constraint ensures that all values in a column (or a set of columns) are unique. This can be used to prevent duplicate entries in a column that should contain unique values. Here’s how you can create a unique constraint in Microsoft Fabric:

ALTER TABLE t1 ADD CONSTRAINT unique_t1_a1 UNIQUE (a1) NOT ENFORCED;

These constraints not only help to ensure the integrity of your data but also allow the query optimizer to make certain assumptions about your data, which can lead to more efficient query plans.



Best Practice 10: Choose the Best Data Type for Performance

In Microsoft Fabric, the data type you choose for a column can significantly impact the performance of your queries.

  1. Storage Requirements: Different data types require different amounts of storage. For example, an INT data type requires 4 bytes of storage, while a BIGINT requires 8 bytes. By choosing the smallest data type that can hold the range of values for a column, you can reduce the amount of storage required for your data, which can improve query performance.

  2. Data Processing: The data type of a column also affects how quickly the data can be processed. For example, operations on smaller data types are generally faster than operations on larger data types.

  3. Index Performance: The data type of a column can affect the performance of indexes on that column. Smaller data types usually result in smaller index sizes, which can lead to faster index scans and seeks.

Therefore, when defining your tables in Microsoft Fabric, it’s a good practice to use the smallest data type that supports your data, as doing so will improve query performance. This applies to character types too: define CHAR and VARCHAR columns using the smallest length that fits your data.
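For example, when a column’s range of values is known, declare it with the smallest type that fits (the table and ranges below are illustrative):

```sql
CREATE TABLE dbo.PageView
(
    PageViewId   BIGINT       NOT NULL,  -- row count may exceed 2^31, so BIGINT
    CountryCode  CHAR(2)      NOT NULL,  -- fixed-length two-letter ISO code
    HttpStatus   SMALLINT     NOT NULL,  -- 100-599 fits comfortably in SMALLINT
    PageUrl      VARCHAR(500) NOT NULL   -- sized to the data, not VARCHAR(8000)
);
```

Each choice shaves bytes off every row, which compounds into smaller storage, smaller indexes, and faster scans at warehouse scale.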


Conclusion

Microsoft Fabric is a powerful toolset that can significantly enhance the user interface and user experience of your applications when used correctly. The best practices outlined in this article are designed to help you leverage the full potential of Microsoft Fabric, ensuring that your applications are not only visually appealing but also efficient and user-friendly.


Remember, the key to effectively using Microsoft Fabric lies in understanding its capabilities and applying the best practices that align with the specific needs of your project. As with any tool, continuous learning and practice are essential to mastering its use.
