MicroStrategy Review

Issues with reports, however, can be quickly fixed

A common problem MicroStrategy users face is slow report performance. However, with the simple tuning techniques the product provides, this can be fixed in no time.
I would like to share the techniques I have worked with.

First, I would like to talk about caching.

Caching improves performance in response to report queries. Because the cache is stored in memory, repeated queries against the same report fetch their data faster.
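The mechanism can be illustrated with a minimal, tool-agnostic sketch. This is not MicroStrategy code; the `report_cache` dictionary and `fetch_from_warehouse` function are illustrative stand-ins for Intelligence Server's result cache and a warehouse query:

```python
# Minimal sketch of report caching: the first run hits the "warehouse";
# repeat runs with the same report key are served from memory.

report_cache = {}

def fetch_from_warehouse(report_id):
    # Stand-in for an expensive SQL query against the warehouse.
    return f"rows for {report_id}"

def run_report(report_id):
    if report_id in report_cache:            # cache hit: no database round trip
        return report_cache[report_id], "cache"
    rows = fetch_from_warehouse(report_id)   # cache miss: query the database
    report_cache[report_id] = rows           # store the result for later runs
    return rows, "database"

first = run_report("sales_by_region")    # served from the database
second = run_report("sales_by_region")   # served from the cache
```

The second run returns the same rows without touching the database, which is where the speed-up comes from.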

Caching comes in two forms: project/report-level caching and template caching.

1. Caching can be enabled or disabled at the project level or the report level; the report-level setting overrides the project-level setting. At the project level, caching is enabled through Project Configuration; at the report level, through the report's caching options. This type of caching is useful only when the data is preloaded in the database and there is no incremental refresh of the database while users are running reports. Caching helps in a data warehouse, as opposed to an OLTP (Online Transaction Processing) system where data is frequently refreshed.

2. Templates can also be used for caching. Building a family of related reports from a common template helps fetch report data faster.

In projects I have worked on, we implemented template-level caching: one template is built with a common set of attributes and metrics, the reports are built using the template as a shortcut, and the template itself is cached. When the first report built from the template runs, the full template data set is fetched and cached in memory; subsequent reports hit the cache instead of the database, which improves performance.

Next, I would like to talk about Intelligent Cubes. These are a form of in-memory cache.

Rather than returning data from the data warehouse for a single report, you can return sets of data from your data warehouse and save them directly to Intelligence Server memory. These sets of data can be shared as a single in-memory copy, to be used by many different reports created by multiple users.
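The idea of one shared in-memory data set serving many reports can be sketched as follows. Again, this is a conceptual illustration, not the MicroStrategy API; `load_cube` and the row layout are assumptions made for the example:

```python
# Sketch of an Intelligent Cube: one data set is loaded into memory once,
# then sliced by many different reports without touching the warehouse again.

def load_cube():
    # One bulk query replaces many per-report queries.
    return [
        {"region": "Asia", "metric": 100},
        {"region": "EMEA", "metric": 200},
        {"region": "Asia", "metric": 50},
    ]

cube = load_cube()  # loaded once into server memory, shared by all users

def report(region):
    # Each report answers its question by filtering the shared in-memory copy.
    return sum(row["metric"] for row in cube if row["region"] == region)

asia_total = report("Asia")   # answered from the cube
emea_total = report("EMEA")   # same cube, no new warehouse query
```

Both reports are satisfied from the single in-memory copy, which is the advantage the cube provides over per-report caches.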

Intelligent Cubes are gaining importance because they have several advantages over report caches, which expire or become invalid in certain circumstances.
A few instances are:

1. When changes are made to objects in the data warehouse, existing caches can be configured to become invalid when they hit certain warehouse tables. Further report executions will then bypass the cache.

2. When the definition of an application object changes (such as a report definition, report template, or metric definition), the related report cache is marked invalid.

3. When there is a need to control the growth of caches in Intelligence Server memory, old caches need to expire automatically.

Intelligent Cubes can be refreshed daily, weekly, monthly, quarterly, or yearly. In my past experience working with a financial firm, we maintained cubes for different regions (Asia, EMEA, and Tokyo) and developed separate cubes for daily and monthly data. Hitting the database directly caused performance issues, and storing all the data in a single cube had its own problems.
Also, recent versions of the tool introduced a concept called incremental refresh, whereby a cube loaded with 1 lakh (100,000) records need not be fully refreshed just to insert or update a few rows. With this feature, it is sufficient to build an incremental refresh report with the required criteria to refresh the cube. For example: if data for Region A was loaded into the cube from the database at 9:00 AM and data for Region B became available only at 12:00 PM, an incremental refresh report can be created on top of the cube with a single filter condition (Region = B) and the "insert records" option, so that it does not overwrite the existing data.
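The Region B example above can be sketched like this. The cube representation and `incremental_refresh` function are assumptions made for illustration; they are not MicroStrategy APIs:

```python
# Sketch of an incremental refresh with the "insert records" option:
# only rows matching the filter (region == "B") are appended to the cube;
# the rows already loaded for region A are left untouched.

cube = [{"region": "A", "value": 10}]  # loaded at 9:00 AM

def incremental_refresh(cube, new_rows, region_filter):
    # Insert-only refresh: append matching rows, never overwrite existing ones.
    cube.extend(row for row in new_rows if row["region"] == region_filter)

noon_rows = [{"region": "B", "value": 20}, {"region": "C", "value": 30}]
incremental_refresh(cube, noon_rows, "B")  # only region B's rows are added
# cube now holds region A's original row plus region B's new row
```

The filter plays the role of the incremental refresh report's criteria: it limits the refresh to the slice of data that actually changed.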

Disclosure: I am a real user, and this review is based on my own experience and opinions.

it_user4635 (Consultant at an insurance company with 501-1,000 employees)

It looks like the author of the article is talking only about caching techniques and I-Cubes to speed up reports, which could cover a large part of the performance issues. However, even the greatest cache strategy won't solve the inherent performance problems caused by a wrong design. Good performance is assured from the earliest stages of design and shouldn't be a post-development concern. Performance requires a deep understanding of the RDBMS, the data architecture, and the BI architecture. Once this is guaranteed, there are some tools that MSTR offers, such as a set of VLDB properties, to produce good (non-harming) SQL.

it_user4401 (Developer at a transportation company with 1,001-5,000 employees)

MicroStrategy provides more control over the solution being developed, so the user knows what work the development requires and can keep a checklist of work done. MicroStrategy can monitor the quality of coding conventions, style, and structure. I have also noticed that it requires a lot of time and indirect cost.

it_user1068 (Tech Support Staff at a tech company with 1,001-5,000 employees)

When issues with reports are encountered, time is bound to be lost.

it_user8547 (Consultant at a financial services firm with 501-1,000 employees)

Cubing and caching generally work for OLAP databases, where the frequency of data updates is low. For OLTP-like systems, these tunings won't work. I agree that with cubing and caching the reports fly in seconds, but there are various other performance-tuning techniques that can be used in MicroStrategy.

General tuning includes tuning your DB driver, which is provided by DataDirect for MSTR; this is very useful if you want to increase network throughput or use a clustered DB. There are a lot of quick tunings possible in MicroStrategy, and I believe they deserve a separate post.

In case you want to explore some tuning options I would be happy to assist.
Feel free to reach out to me at arpitagrawal9@yahoo.co.in for any queries.