- The in-memory engine is really powerful, making it possible to handle huge data volumes on the fly (more than 400 million records in a table). I have built data models with over 1 billion records and the performance is good.
- In-memory, on-the-fly calculations make it possible to build charts, tables, etc. directly in the user interface (UI).
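As an illustration of an on-the-fly UI calculation, a QlikView chart expression can use set analysis; the field and value names below are hypothetical:

```
// Sum of sales for 2016 only, computed in memory when the chart renders;
// {<Year={2016}>} is QlikView set analysis restricting the selection state.
Sum({<Year={2016}>} Sales)
```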
- Cubes are built in the UI. There is no need to pre-aggregate data, although it is of course possible to do so in the script (which I would not recommend); a better option is to deliver the data as a SQL view.
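A minimal sketch of what delivering data as a SQL view looks like in the QlikView load script; the DSN, view, and field names here are hypothetical:

```
// Hypothetical ODBC connection; the DSN must be configured on the server.
ODBC CONNECT TO DWH_DSN;

// Load raw rows from a database view instead of pre-aggregating in the
// script; aggregation then happens on the fly in the UI.
Sales:
LOAD OrderID,
     CustomerID,
     Amount;
SQL SELECT OrderID, CustomerID, Amount
FROM dbo.v_SalesFacts;
```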
- Scripting possibilities to build complex applications and functionality.
- Web integration capabilities; for example, you can include JavaScript objects or R for better visualization.
Improvements to My Organization:
We are service providers and build applications for our customers, not for ourselves. Our customers are able to evaluate their marketing campaigns at the aggregated and customer levels. They can use micro-segmentation to select leads.
Room for Improvement:
We currently use QlikView, which does not offer easy-to-use framework management. There is a deployment framework, but I do not find it easy to use and it is somewhat unstable. I hope QlikView can improve this.
Use of Solution:
I have been using this solution for 3.5 years now.
There is no general rule, but we use the deployment framework, which allows us to build a QV application using four different levels of deployment:
- The first level extracts the data (1:1).
- In the second level, we take the original tables and, based on the user requirements, build the data model and make the necessary transformations.
- In the third level, we produce an intermediary state of the data model to be loaded into the end-user application.
- The fourth level contains the end-user application. We do a “binary” load, a statement in the QlikView script that copies the data model 1:1 from the application it points to, in this case, from the third level. We keep development separate from production and testing, in a highly available (clustered) environment.
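The fourth-level script can be sketched as follows; in QlikView, the BINARY statement must be the very first statement in the script, and the path below is hypothetical:

```
// Copies the entire in-memory data model 1:1 from the level-three document.
Binary [..\Level3\SalesModel.qvw];

// The script may continue after the binary load (e.g., variables, section
// access), but the core data model comes from the third level.
```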
It is stable, but of course stability depends on your IT infrastructure and the deployment you choose.
Data model, data size, RAM size, hardware CPUs, and UI expressions/charts are strongly related. It is possible to scale QV without problems, but the issue is more complex than simply not having enough RAM. Complex expressions at the UI level, single-threaded object calculations, complex data models, huge data sizes, etc., can degrade the performance of a QV application and make us think we need to scale. There are many tools to measure QV performance and to try to keep it at an optimal level. Otherwise, if everything else is fine, we need to increase RAM. Currently, we have 1 TB of RAM on our production server.
Technical support is good, but could be improved.
Initial setup was straightforward. You need a server; install QlikView with its web server option and then you can start building applications.
For more complex architectures (clustered, with two or more servers), you will probably need Qlik Tech support.
We contacted Qlik Tech directly for a cluster implementation (at my current company).
Cost and Licensing Advice:
Pricing/licensing depends on your company size, the uses of QV, and the number of users, among other factors. Named CALs, at approximately EUR 1,200 (see the Qlik website), are required if you have permanent users with write access to different applications, while Document CALs are a good fit for users who only need to see a single document to perform their jobs. There is good, simple documentation explaining which pricing/licensing model fits best.
Other Solutions Considered:
For my previous company, I compared TIBCO, QlikView, Tableau, and other tools, as indicated in the Gartner Magic Quadrant. We did workshops with them and were happy with QV because you can build something immediately. In the case of Tableau, you need really well-established ETL and/or views; otherwise, it is complicated for a normal user to build correct charts, since users normally have very little understanding of the data structure. Other tools need pre-aggregation and a long development process. With QV, it is possible to use views, pre-aggregated data, or raw data, and you can still manipulate the data in the script, i.e., do ETL (see Deployment Framework).
- Check your user and business requirements first, and check whether QV can help you solve your user/business needs.
- If yes, check the potential uses of QV and describe the environment: company or unit size, number of users, number of applications, number of KPIs, and data volume. After this first check, you should be able to determine how big your applications will be, and thus estimate current and future RAM, the necessary IT infrastructure, the number of servers, etc. I would talk to IT to see how to integrate QV into your IT environment. Many people start with an isolated QV implementation in their unit, which is fast, but then you have no single point of truth (this may discourage adoption because users do not trust the data). But it depends on your goals.
- Start working and promote user adoption, showing good functionality, fast implementation, and reliable data. QV can then become the main BI tool in the company.
- Looking forward, QV can be extended with mash-ups, JavaScript extensions, and analytics with R, so you can build on it in the future. Qlik Sense is a newer Qlik Tech product, which offers many new possibilities.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Jul 11 2016