4 September 2025

Qlik vs. Power BI: Part 2 – Back-End & Data Modeling 

In the first part, we introduced this series and set out what to expect in the coming weeks. In this post, I compare Qlik and Power BI on data preparation and modeling, starting with the back end: the environment where data is prepared and loaded into the model. 

In Qlik, that environment is the Data Load Editor; in Power BI, it’s Power Query. For completeness, Qlik also offers other data preparation options, some of which are more UI-driven (Data Manager), have previews (Script), offer UI transformations (Table Recipe), or provide a flow-like interface (Data flow). These options were not part of the project I worked on and, apart from the Data Manager, are still relatively new. When it comes down to it, the Qlik Data Load Editor remains the most widely used tool among Qlik users worldwide, although I won’t rule out that some of these newer options will take the stage in the future.  

Similarly, Microsoft has additional data preparation options, like Apache Spark notebooks and Dataflow Gen2, but they were out of scope in this project and belong to Microsoft Fabric rather than Power BI. In general, Power BI users will use Power Query, although that might change with Microsoft’s continued effort to build upon features in Fabric. 

Power Query is known for its user-friendly, point-and-click interface: most transformations can be done through the UI and are recorded as sequential steps. For more advanced logic, you can write M code, the scripting language behind Power Query. Note that Power BI uses a different language (DAX) for front-end calculations; mastering both is essential if you want to master Power BI. 

One of Power Query’s strengths is its accessibility. Since it’s also available in Excel and uses a familiar ribbon interface like other Microsoft Office tools, it’s approachable for users without a BI or SQL background. Common tasks like renaming fields, changing data types, and merging tables can be done with just a few clicks.

Microsoft Power Query ribbon

Most new users will use Power Query within Power BI Desktop, but the same transformation logic is available in the Power BI service, as Microsoft has been adding most of Desktop’s data modeling features to the cloud service as well.  

A useful feature in Power Query is column profiling, which shows value distribution, null counts, and uniqueness, helpful for data quality checks. Another standout feature is the preview pane, which shows the result of each transformation step. This is incredibly helpful for debugging and understanding your data as you go, something I often miss in Qlik. However, these previews can also slow down development, especially with large datasets or slower connections, as you have to wait for each preview to render before you can add the next transformation step.  

Power Query’s preview pane and column profiling 

To work around this, you can load your raw data into a Power BI dataflow, a Power Query environment in the Power BI service. This lets you upload untransformed data to the cloud and build your model on top of it. Depending on your data volume and requirements, this can be done with a Pro license, though larger setups may require Dataflow Gen2, which requires Fabric capacity.  

In fact, dataflows are the closest Power BI equivalent to Qlik’s QVD-based architecture. They allow you to centralize and reuse transformation logic across multiple semantic models. However, governance and manageability can be more challenging, especially in larger environments. In the environment we worked in, the transition would not have been so smooth if it hadn’t been for the Databricks platform that both Qlik and Power BI plug into, where most transformations are done centrally. Migrating all of the Qlik script to Power Query, without Databricks, MS Fabric, or something similar, while maintaining the principle of re-using the transformation outcomes, would have been next to impossible.  

As your Power Query queries grow more complex, maintaining structure becomes a challenge. Without discipline, your transformation steps can quickly become a tangled mess.  

Very messy transformation steps in Power Query

Power Query steps can get messy. Now where did I rename that Year column?  

To keep things manageable, you should: 

  • Use folders to group queries 
  • Add comments to each step 
  • Combine transformations into fewer steps (this often requires writing M code) 

If the ship has already sailed, you can view the underlying M code of all of a table’s transformation steps in the Advanced Editor and copy-paste it into a text editor to search for the step you’re after.  
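As a sketch of what that combining looks like in practice (table, column, and step names here are made up), two UI-recorded steps can often be collapsed into one hand-written, commented M step:

```m
// Instead of two recorded steps:
//   #"Renamed Columns" = Table.RenameColumns(Source, {{"Yr", "Year"}}),
//   #"Changed Type" = Table.TransformColumnTypes(#"Renamed Columns", {{"Year", Int64.Type}}),
// ...one combined step that renames and retypes in a single pass:
#"Cleaned Year" = Table.TransformColumnTypes(
    Table.RenameColumns(Source, {{"Yr", "Year"}}),
    {{"Year", Int64.Type}}
)
```

Fewer, well-named steps also make the Applied Steps pane read like documentation instead of a change log.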

One feature I particularly appreciate is selective table refresh. If you modify just one table in your model, Power Query allows you to reload only that table, saving time during development. You can also trigger this from the data view in the front end if you need fresh data from your source. 

In Qlik, data preparation happens in the Data Load Editor, where you write Qlik script. Unlike Power Query’s step-by-step approach, Qlik script builds tables column by column, more like traditional SQL. 
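For readers who have never seen Qlik script, a minimal load statement might look like this (the connection path and field names are illustrative, not from the project discussed here):

```
Sales:
LOAD
    OrderID,
    CustomerID,
    Date(OrderDate) as OrderDate,
    Quantity * UnitPrice as LineAmount    // derived column, defined inline
FROM [lib://DataFiles/sales.csv]
(txt, utf8, embedded labels, delimiter is ',');
```

Each LOAD builds a complete table in one statement, which is why the experience feels closer to SQL than to Power Query’s recorded steps.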

Qlik Data Load Editor


The Qlik Data load editor  

This scripting approach comes with a steeper learning curve, especially for users coming from Excel or non-technical backgrounds. However, those familiar with SQL will find the syntax somewhat familiar. Compared to Power Query, Qlik script is harder to learn but easier to master, not least because the same scripting language is used in both the back end and the front end. 

Anyone can make a mess in any environment, but Qlik makes it easier to keep your code clean and modular. QVD files let you split your ETL process into reusable layers. This approach is powerful for small to medium-sized organizations that use Qlik for both ETL and visualization. In many cases, it even removes the need for a separate data warehouse.
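A hedged sketch of that layering, with made-up library paths: an extract script stores raw source data as a QVD, and a separate transform script builds on top of it, so the expensive source read happens only once:

```
// Extract layer: pull raw data from the source and persist it as a QVD
RawSales:
LOAD * FROM [lib://Source/sales.csv]
(txt, utf8, embedded labels, delimiter is ',');

STORE RawSales INTO [lib://QVD/Extract/Sales.qvd] (qvd);
DROP TABLE RawSales;

// Transform layer (typically a separate app): build on the QVD,
// which loads far faster than the original source
Sales:
LOAD *,
    Quantity * UnitPrice as LineAmount
FROM [lib://QVD/Extract/Sales.qvd] (qvd);
```

Several downstream apps can then read the same Sales.qvd, which is what makes the layered approach reusable.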

Qlik has recently introduced data previews in the script editor, but they require a partial load and are limited to the top X rows. Unlike Power Query, you can’t preview transformations step by step as you write them in the Data Load Editor. Just last week, Qlik introduced a new feature, Table Recipe, where you transform step by step with data previews along the way. This is, however, only available in Qlik Cloud (for now) and is a separate feature alongside the Data Load Editor.   

Another limitation is that reloading just one table is a hassle. Qlik offers a feature called partial reloads, but it requires you to add prefixes to the LOAD statements of all your tables: if you just want to reload that one DimCustomer table, you have to make script additions. While the partial reload has its uses, I have rarely seen it used for this scenario in actual production environments, as the script logic becomes too complex. Of course, if you have neatly separated your transformation logic from your front end, you can refresh one table from the source, but you’d still have to reload the complete front-end data model to see the update there.  
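For completeness, this is roughly what that script addition looks like (table name and path are illustrative). The `Replace Only` prefix makes the statement execute only during a partial reload, replacing the table’s contents while the rest of the script is skipped:

```
// Executed only during a partial reload; a full reload ignores this statement
DimCustomer:
Replace Only LOAD
    CustomerID,
    CustomerName
FROM [lib://QVD/DimCustomer.qvd] (qvd);
```

Multiply this by every table in a large model and it becomes clear why the approach rarely survives in production.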

When exploring your data, you’ll often need a separate Qlik app to inspect source or destination tables. In Qlik Client-Managed, this often means using an external QVD viewer or having multiple apps open side by side to see both the source and the results of your transformations.  

Some Qlik functions simply don’t exist in Power Query.

A few notable examples: 

  • AutoNumber() – Automatically generates unique integers for strings, useful for optimizing joins and reducing memory usage. 
  • Where Exists() – Filters data during load based on whether a key already exists in another table. 
  • ApplyMap() – A mapping function similar to Excel’s VLOOKUP, used to enrich data without performing joins. 
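A small illustration of all three functions in one script (library paths and field names are made up):

```
// ApplyMap needs a mapping table, loaded with the MAPPING prefix
CountryMap:
MAPPING LOAD CountryCode, CountryName
FROM [lib://QVD/Countries.qvd] (qvd);

Sales:
LOAD
    CustomerID,
    AutoNumber(OrderID & '|' & LineNo) as LineKey,            // compact integer surrogate key
    ApplyMap('CountryMap', CountryCode, 'Unknown') as Country  // lookup without a join
FROM [lib://QVD/Sales.qvd] (qvd);

// Where Exists: keep only customers whose CustomerID was already loaded above
Customers:
LOAD *
FROM [lib://QVD/Customers.qvd] (qvd)
Where Exists(CustomerID);
```

The third argument of ApplyMap() is the fallback value when no match is found, which is exactly the behavior you’d otherwise need a left join plus null handling for.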

In Power BI, you’d need to use joins or lookup tables to replicate these behaviors, often at a performance cost. Qlik’s variable concept also has no real counterpart in Power Query. While Power Query supports parameters, they can’t be updated dynamically during reloads, making loops (like for..each) difficult to implement. 

On the flip side, Power Query includes some built-in functions and capabilities that are harder to replicate in Qlik: 

  • HTML parsing: Removing HTML tags is straightforward with Power Query’s Html.Table() function. In Qlik, this requires a combination of SubField() and mapping tables, or the newer RegEx functions available in Qlik Cloud. 
  • List transformations: Power Query supports powerful list-based operations. For example, you can easily generate a running total using List.Accumulate(), or create a dynamic date range using List.Generate(). In Qlik, these tasks are possible but require more effort, such as using loops for date generation or sorting and scripting for cumulative calculations. Senior developers will use them a lot, but for beginners, Power BI’s approach will be more accessible.  
  • Pivoting and unpivoting: Power Query allows you to reshape data with a few clicks straight from the UI. In the Qlik Data Load Editor, this involves using Crosstable() or Generic Load, both of which require you to write the code yourself. In the past, QlikView did have a built-in wizard for unpivoting in the script editor but in Qlik Sense this has been moved into the Data Manager.
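To give a flavour of those list functions, here is a small M sketch (the dates and values are arbitrary):

```m
// Dynamic date list: every day from a start date up to today
DateList = List.Generate(
    () => #date(2024, 1, 1),
    each _ <= Date.From(DateTime.LocalNow()),
    each Date.AddDays(_, 1)
),

// Running total over a list of values
RunningTotal = List.Accumulate(
    {10, 20, 30},
    {},
    (state, current) =>
        state & { (if List.IsEmpty(state) then 0 else List.Last(state)) + current }
)
// RunningTotal = {10, 30, 60}
```

In Qlik, the date list would typically be a WHILE loop or an IterNo() load, and the running total a Peek() against a sorted table, both more verbose than the one-liners above.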

The data model is the result of your back-end transformations and the foundation for your front-end visualizations. While both tools aim to deliver performant, flexible models, they take different approaches. 

Qlik automatically links tables based on field names. If the field CustomerID exists in both tables Sales and Customer, Qlik links them. If the field is named differently, you’ll need to rename it in your script.  

Qlik also creates synthetic keys when multiple fields with the same name exist across tables. These should be avoided by combining fields into a single key. 
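For example, if two tables share both Year and Region (field names are hypothetical), concatenating them into one explicit key avoids the synthetic key:

```
// Sales and Budget both contain Year and Region; left as-is, Qlik would
// build a synthetic key on the pair. Combine them into one explicit key:
Sales:
LOAD
    Year & '|' & Region as YearRegionKey,
    Year,                                  // keep the descriptive fields in one table only
    Region,
    Amount
FROM [lib://QVD/Sales.qvd] (qvd);

Budget:
LOAD
    Year & '|' & Region as YearRegionKey,  // same composite key, no Year/Region duplicates
    BudgetAmount
FROM [lib://QVD/Budget.qvd] (qvd);
```

Keeping the descriptive fields in only one of the two tables ensures the association runs solely over the composite key.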

All relationships in Qlik are bidirectional. This means you can’t filter two fact tables by shared keys without creating a circular reference, which Qlik blocks. To resolve this, you often need to restructure your model by either combining fact tables or duplicating dimensions. Unlike Power BI, Qlik can also create Many-to-Many relationships without the developer noticing right away. More often than not, you’d want to prevent this when building a data model.  

This limitation may frustrate developers, but it also pushes them to think carefully about their model design, which benefits them in the long run. It will guide you away from a tangled web of data-model spaghetti, but it won’t prevent one if you haven’t given the data model enough thought or are trying to cram too many business functions into one dashboard.  

Qlik’s Associative Engine has more perks than just associating field names, but we will get to that in part 3 of this series. For the data modeling part, we can conclude that it is powerful, but it demands discipline in naming conventions and structure.  

Power BI offers a flexible modeling environment. Relationships between tables can be created manually or suggested automatically, and you’re free to link tables on any field, regardless of naming.

You can also define: 

  • Cardinality (One-to-One, One-to-Many, Many-to-Many) 
  • Filter direction (Single or Bidirectional) 

You typically do this in the model view in Power BI Desktop, after the Power Query transformation steps.  

These features allow for modeling scenarios such as multiple fact tables filtered by shared dimensions, without causing loops. While this can be very powerful, I have also seen scenarios where it leads to data-model spaghetti that is very hard to untangle. In most cases I’d advise building a star schema in Power BI as well, to make sure your data model is robust and has the fewest possible steps between tables.  

Model View in Power BI Desktop

The Model view in Power BI desktop.  

In Power BI, you can build a report on top of a semantic model published from another report. Users often separate front end from back end so that multiple front-end reports share one semantic model. Power BI even allows you to build a report on multiple semantic models.

One feature I particularly appreciate is Power BI’s visual relationship view, which shows cardinality and filter direction at a glance. You can define these explicitly, and if the data violates the expected relationship (e.g., a Many-to-One becomes Many-to-Many), Power BI will throw an error, alerting the developer before incorrect data reaches the end user. 

Power BI separates the Model View from the Table View, allowing you to: 

  • Browse full tables 
  • Filter and sort data 
  • Inspect columns and measures 
  • Perform quick data quality checks 

This makes it easy to validate your model and troubleshoot issues during development. 

In Qlik, you can preview tables in the Data Model Viewer. The view shows only the top 100 rows and doesn’t support filtering or sorting.

To influence anything, you’ll need to adapt the script, and for deeper inspection you’ll often need to: 

  • Add a temporary sheet with a table object 
  • Use a separate app to explore source or destination tables 
  • Or, in Qlik Client-Managed, use an external QVD viewer 

Data Model view in Qlik

The Qlik Data Model viewer 

When it comes to the back-end and data modeling layers, both Qlik Sense and Power BI offer powerful capabilities, but they approach the problem from very different angles. 

  • Power BI shines with flexibility, visual relationship management, and error detection features. It’s accessible and easier to learn for Excel or non-technical users, but its forgiving modeling environment can also create messy transformations if not managed well.
     
  • Qlik Sense, on the other hand, enforces structure through naming conventions and its associative model. While it has a steeper learning curve and the most common data preparation environment lacks visual transformation features, it encourages clean, reusable, and scalable design, especially when leveraging QVDs.  

In the next post, on 11 September 2025, we’ll move from the engine room to the showroom: the front-end. I’ll compare how both tools handle: 

  • Visualizations and interactivity 
  • Expression languages (DAX vs. Qlik expressions) 
  • Performance and responsiveness 
  • Developer experience and collaboration 
  • End-user experience and self-service capabilities 

If you’ve ever wondered why a dashboard “feels” different in Power BI versus Qlik, even when showing the same data, you won’t want to miss it. 

Stay tuned! 

How can we help?

Barry has over 20 years’ experience as a Data & Analytics architect, developer, trainer, and author. He will gladly help you with any questions you may have.