Archive | November 2013

Behind my QVD file

Have you ever considered your QVD files to be more than data files? If your answer is no, then you should keep reading this post.

I was working on my laptop when I found, by chance, a QVD file that should not have been there. I usually store everything on a server and work there, so I wondered why that file was on my laptop.

When I opened it, I recognized the data in it, but I couldn’t remember why I had copied that file from the server to my laptop, or why I had created it in the first place.

It turns out that QVD files are more than data files. They contain an XML header that can be very useful. This XML structure can tell you the following about your QVD file:

  • Creation date
  • QVW file that created the QVD file
  • QVD store path
  • Table name used to create this QVD file
  • Field names, formats, lengths, etc.
  • Query used to pull data before creating the QVD file
  • QlikView LOAD statement used in the QVW file
  • And more

As you can see, you can get a full understanding of that QVD file without even having QlikView installed in your environment.

So, how do we see all this information? Simply change the file extension from .qvd to .xml. You can then open your file with any text or XML editor.
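To give an idea of what you will find, here is a simplified sketch of the header (the tag names come from the QVD format; all the values, such as paths, dates and counts, are made up for illustration):

```xml
<QvdTableHeader>
  <CreatorDoc>C:\Extracts\LoadSales.qvw</CreatorDoc>
  <CreateUtcTime>2013-11-04 10:15:22</CreateUtcTime>
  <TableName>Sales</TableName>
  <Fields>
    <QvdFieldHeader>
      <FieldName>Date</FieldName>
      <BitOffset>0</BitOffset>
      <BitWidth>8</BitWidth>
    </QvdFieldHeader>
    <!-- one QvdFieldHeader per field in the table -->
  </Fields>
  <NoOfRecords>125000</NoOfRecords>
</QvdTableHeader>
```

The binary data follows this header, so only the top of the file is readable text.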


You can also load this structure into a QlikView application and play with the data you get from it. To do this, you should:

  • Double click your QVD file
  • In the File Wizard window, select XML in the File Type menu.
  • Finish and load your application
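The wizard then generates a load statement for you. As a rough sketch (the file name is a placeholder, and the exact field list depends on what you select in the wizard), it looks something like this:

```
Fields:
LOAD FieldName,
     BitOffset,
     BitWidth
FROM [MyFile.xml]
(XmlSimple, Table is [QvdTableHeader/Fields/QvdFieldHeader]);
```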


Write back to database

It might take some time to figure out how to write back to a database from QlikView, so here’s my input.

In the script, create a connection to the database you want to work with:
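For example, with an ODBC data source (the DSN name is a placeholder):

```
ODBC CONNECT TO [MyDSN];
```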

We can then create an INSERT statement to test whether it works:
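As a sketch, assuming a table TestTable with columns ID and Name exists in the target database (both names are placeholders):

```
SQL INSERT INTO TestTable (ID, Name) VALUES (1, 'Hello QlikView');
```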

If we load the script, we’ll see the following error message:

This error message is basically telling us that the connection to the database was opened as read-only. To make a read/write connection, we add “mode is write” to the connection statement. In the Settings tab of the Script Editor, we also check the options “Open Databases in Read and Write mode” and “Can Execute External Programs”, although the latter is not mandatory.
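Putting it together, the write-enabled connection and the insert would look something like this (DSN, table and column names are placeholders):

```
ODBC CONNECT TO [MyDSN] (mode is write);
SQL INSERT INTO TestTable (ID, Name) VALUES (1, 'Hello QlikView');
```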

Now reload the application and you will be able to write back to the database.


Working with data at different levels of granularity

I’m currently working on a project for a client that reports on data at different levels of granularity. Of course this becomes a problem, as the architecture gets a bit messy. However, it is really interesting how much you can do in QlikView even with a denormalized schema.
I’ll try to explain what I want to achieve with an example.
Let’s suppose we have two fact tables. I will not include any dimension tables, to keep the example simple, but I will keep the dimension values within the fact tables.
My first fact table contains data at the day/city level. It looks like this:
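Since the screenshot isn’t reproduced here, a sketch of that table as an inline load (the field names match the ones used later in this post; the values are made up):

```
Downloads:
LOAD * INLINE [
DateDownload, City, Country, NumOfDownloads
2013-11-01, Boston, USA, 10
2013-11-01, Buenos Aires, Argentina, 7
2013-11-02, Boston, USA, 4
];
```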


From this table I know the number of downloads of a song I uploaded to my site, by day and city. Having the city, I can then build a hierarchy up to the continent level.

My second table, on the other hand, holds the number of times my song was played online (streamed) on my website. This is by day and country.
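Again as a sketch (the field names DatePlayed and NumOfTimesPlayed are the ones used in the expression later; the values are made up, with dates that deliberately differ from the first table):

```
Played:
LOAD * INLINE [
DatePlayed, Country, NumOfTimesPlayed
2013-11-01, USA, 25
2013-11-03, Argentina, 12
];
```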


Knowing that this last table cannot be disaggregated to the city/day level, I needed to find a way to show the number of times my song was played when a user selects a continent.

Joining the tables by day is not an option, as we might have data in the first table for a certain day that is missing from the second, and vice versa. For this example, we assume both tables cover the same countries.

Within our physical model, there’s no connection between the first and second fact tables. The good thing is that QlikView allows us to create logical data models.

A logical data model is based on business requirements and doesn’t have to follow strict database design rules. To clarify: joining two tables with different levels of granularity is not correct in a physical model, but it can be done in a logical one.
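Assuming the two fact tables are loaded as Downloads and Played (placeholder names), the join in the script could look like this:

```
// Join the streaming facts into the downloads table.
// QlikView joins on the fields the tables share, here Country.
Join (Downloads)
LOAD DatePlayed, Country, NumOfTimesPlayed
RESIDENT Played;

DROP TABLE Played;
```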

After joining my two tables I have this:

And the data in my tables would look like this:

If I select America, both tables will filter down to USA and Argentina, but I will see different dates. What I need is for my second table to show exactly the same dates and countries as my first one.

Naturally, I can’t achieve what I want with this data model as-is, so I need a way to make the second table reflect whatever I select in the first. For this we can use the P() function, which returns the set of possible values of the field passed as its parameter.

I will now change the expression in my second chart to use the P() function:

Sum({<DatePlayed = P(DateDownload)>} NumOfTimesPlayed)

My model will now respond as expected and the second table will show only those countries and dates filtered in my first table.


Ideally, our model would contain data at a single level of granularity, but in many cases that won’t happen. QlikView allows us to easily find workarounds to manage these complex, denormalized models.