The speed of the solution and the clarity it offers are its most valuable features.
The functionality of the solution is very good.
The SAP HANA® platform helps you reimagine business by combining a robust database with services for creating innovative applications. It enables real-time business by converging transactions and analytics on one in-memory platform. Running on premise or in the cloud, SAP HANA untangles IT complexity, bringing huge savings in data management and empowering decision makers everywhere with new insight and predictive power.
SAP HANA is also known as SAP High-Performance Analytic Appliance, HANA.
Unilever, NHS 24, adidas Group, CHIO Aachen, Hamburg Port Authority (HPA), Bangkok Airways Public Company Limited
The challenge right now is getting all the databases onto the S/4HANA architecture. We're running HANA, but not all of the functionality is available yet. If they could speed up getting all the databases onto S/4HANA, that would help.
Production seems to be stable, but outside of that, the QA environment doesn't appear 100% stable. We're still assessing whether it's a network issue or a server problem. That said, it is pretty stable in production.
We have been in touch with technical support, but not for any major issues. I'd rate them seven out of ten.
The initial setup was a bit complex for us because we had a lot of other add-on solutions, like the Open Expanding Invoice Management solution. We had to upgrade it because the version we had, 10.5, wasn't compatible with the upgraded S/4HANA.
I found the process complex because I had to re-implement all of Open Expanding Invoice Management. We were using ICC, which wasn't compatible either, so we had to switch over to BCC and re-implement and reconfigure the whole system.
I'd rate the solution eight out of ten.
Right now, they say the solution is S/4HANA, but not everything on there is S/4HANA, so it is confusing that they say they're moving us to S/4HANA when we are not really on it. Because of that, we're not 100% happy. Once everything is properly moved over, it might be better.
I am the technical consultant at our firm and our primary use case of the solution is to manage our databases.
We don't really use the solution for code integration purposes, but one feature I find very valuable is the response time of the application against the in-memory database.
If the developers were to enhance or improve the application logic for processing transactions, that would be great. For example, accessing a transaction takes about 10 seconds, and the logic behind the transaction is usually part of the development and product code, not the database itself.
The solution is currently very stable. We have about 80,000 people in our company's underlying database.
The technical support is good and they were very helpful.
The initial setup was complex and we had to contact the vendor a couple of times. We also had a couple of issues with the licensing. We did the deployment ourselves, as a team.
On a scale from one to ten, I rate this solution an eight. In the future, I would like to see the application's response time be much faster than it currently is, so that we don't have to wait 10 seconds for each task.
We're using the on-premises deployment model.
This solution did improve my organization. We made some changes to better our organization, especially in the procurement process. Here in the Middle East, we have some processes that do not follow standard business processes. Through our change management process, we had to redesign our processes to adapt them to SAP.
Integration is the most valuable feature we use SAP HANA for.
FI, the financial module of SAP, has room for improvement. It needs better localization for the Middle East, especially regarding taxes and the letter of credit cycle. I would also like to see better localization in the HCM module.
We are satisfied with the solution's stability.
This solution scales fine. The conversion went great.
We have a technical support contract with a subcontractor of SAP in the form of an SLA, a service-level agreement, divided into four categories: first, second, third, and fourth lines of support. We are satisfied with the technical support.
It was a little bit complex in the beginning, but after gaining experience and training on the business structures, it is now straightforward for us. Especially as we are building our internal team, it is becoming easier and easier.
It took us eight months to deploy because we are running five modules. In some cases, it may take even longer than that.
The biggest lesson learned was that we started late. We all should have started earlier.
Out of ten, I would rate this solution as eleven.
We are using on-premises, but I have also done some research in the last six months into moving towards the cloud. We want to upgrade because we did the same thing with another company we work with, which uses Sage X3 Cloud. We started with Sage Evolution, but now we are also moving to Sage X3 Cloud.
It helped us because some of the people supporting us are not local. We opened SAP HANA up more for management. We even won some tenders because we were able to have documents submitted online and sent to our servers. The key value is that we can win more tenders because of the lower cost while providing a better product or service. This is possible only because of our use of SAP HANA.
The most value for us was in terms of using it to issue tenders online. We host our server, but it is open to the public, so clients who want to buy those tenders were able to go online, put their tender documents up, and we could evaluate them using SAP. We were basically able to do pre-qualifications using SAP. After that, we could send notifications to people who qualified and go through the non-qualified people using SAP. That feature is very effective in terms of supplier relationship management. We can issue tenders and people put their big documents through SAP HANA, which helps with communication and gives them notifications.
One issue is the menu. There is a part of the menu where there should be a "reject" button, for example, for a tender that does not meet a specific qualification. The interface is a little bit hard to customize; you almost have to consult the original SAP developers to change it, so now we have to consult SAP just to make some interface changes. You expect it to be easy to get into the menu, but you can't. Basically, the customization of the interface needs to be more user-friendly.
I think we are also going towards mobile technology, so I would like to see the integration of a mobile app.
It's quite stable. There haven't been many cases of bugs, crashing, or freezing. It has been quite stable.
Its scalability is good, in terms of meeting our needs.
I think technical support is okay. They should focus more on knowledge transfer for people who have experience with SAP in general but need to adapt that knowledge to the local client. This part is a little bit challenging.
We used another solution, but it was more of a client-oriented system, where you get developers to make and customize them for you. It's more local or in-house than regular IT systems. When you only have one company developer to make some products for you and he is the only one who can support you, it's a little bit of a challenge. With SAP HANA, if you get stuck somewhere you can call any other SAP HANA partner.
It was easy for us to set up, because we had that QA code, in terms of the system analysis and system requirements. Once we got the system requirements, we were able to connect to the hardware and software. We could make sure before we did the implementation that we had the right environment.
The main lesson is the importance of ERP capability, stability, and speed. The other lesson is about knowledge transfer because that is how you learn.
At the end of the day, I like it because it's one of the more affordable ERP systems. I would rate it as eight out of ten.
We use a hybrid deployment model for this solution. Our primary use case of SAP HANA is for business intelligence.
In terms of improvement, the speed is not as good as we thought it would be. That is why we are trying different solutions that will be built with different technologies.
Also, the cost is an issue. SAP HANA is extremely expensive, especially in the cloud. That has changed now because you can purchase modules of the right size but, for example, two years ago, when we had a database of 10 terabytes, we had to purchase the hardware on our own and then place it in the vendor's cloud location. It runs on our own software that we have purchased; it's just placed in the same location as the rest of the vendor's cloud.
They should improve the speed and scalability.
If you want to scale the entire size of the database then that is difficult and has an impact on the speed. If you want to scale with new processes and new reports, that's fairly easy.
We have more than 1,000 users using this solution.
The initial setup, from what I recall, was complex. I remember we had a lot of issues to tackle when we set this up and with upgrading.
We used a partner for the implementation. We had mixed feelings about our experience with them. It wasn't bad. It wasn't exceptionally good.
We're moving away from HANA and are currently implementing a new solution which is not yet in production. Only the first part of it has gone live, so I can't really say whether it's better or worse. During testing, we can see it's faster than HANA and provides the same data, which is promising. I would refrain from providing any recommendations because that might give a false impression.
I would not recommend SAP HANA because it has some issues with speed and with scaling the size of the database. It's also extremely expensive, probably the most expensive solution of all, and you could expect more from it. On the other hand, we don't have much experience with other solutions yet, so it is very difficult to give a real recommendation.
I would rate it a seven out of ten.
We use this solution for database storage.
I am an SAP developer and consultant at my company. I examine the client's system and propose solutions that will ease their processes or make them faster. This involves programming, as well as other kinds of development.
We are using the on-premise deployment model.
This solution is very fast.
The backup solution and time machine should be more accurate, reliable, and comfortable to use. The inclusion of a well-performing Time Machine is vital.
If the interface were more comfortable and easier to use, it would be excellent. Sometimes an incorrect request is taken to production and it corrupts everything in the production database.
When there are a large number of records to process in a transaction, it is not any faster than Oracle.
This solution is very stable. We have been using it for one year and there have been no problems with the database.
I was not involved in the setup of this solution. I only installed SAP HANA Express on my laptop, which was easy. The full version requires professional knowledge. It's not something you can install, like Microsoft Office, on any laptop.
We hired a consulting firm in Turkey to set up our solution. The two machines were configured by SAP Turkey.
I have more than nineteen years of experience with the Oracle database, from version 7.2 through to RAC. I know the administration, as well as backup and recovery very well.
There are not many differences between Oracle and HANA. As an example, for transactional purposes, it is very similar to Oracle.
We switched to HANA from Oracle because SAP systems are moving entirely to the HANA platform. There will be no support for SAP using Oracle.
We do not use the HANA features, for example, embedded scripts. This is something that we may use in the future.
My advice to anybody looking to implement a relational database is to use Oracle, rather than HANA. HANA consultants are very rare and therefore costly. My testing has also shown that Oracle in memory is much faster than HANA.
This is a good solution, but the vendor inaccurately promises that the database is ten-thousand times faster than Oracle.
I would rate this solution a seven out of ten.
It is an in-memory database that holds the entire content of the database. Once the database is started, it is loaded into the server's RAM. It has very high bandwidth and data transfer rates. When you run queries against this database, you get results very fast; you can get real-time output. This aspect is very helpful to me.
From the deployment-side, I don't have any issues with the solution and haven't heard of any problems from clients.
The solution is very expensive, however. The pricing depends on the number of users and many other factors that affect licensing.
It is very stable. I use the Linux operating system and find it to be quite stable.
The solution is scalable, both horizontally and vertically. You can upgrade the server itself if its memory is at capacity. If the resources of one server are not enough because the database is big, you can expand by adding more servers into one big cluster, according to your requirements.
I don't go through the official support team from SAP, but most of the time I use the website to find the answers I need. It's very detailed and most of the problems that I've faced in the past while handling the implementations I can find on the website or on the internet.
Before using SAP HANA, we used other SAP products.
The initial setup is straightforward. For one standalone system, implementation from scratch takes about four to five days. I often handle implementation, so for me it's straightforward because I have some experience in this area. You do need a skilled team: if you want to deploy it yourself, you have to understand many areas, including storage, networking, and operating systems.
I know SAP itself recommends that you have to have a certificate or a certified person that can deploy SAP HANA.
We are an integrator, so we handle the installation for clients.
The SAP portfolio is huge and covers all industries and fields; it is very wide both horizontally and vertically. It has modules for every industry, field, and department: accounting, HR, production, and so on. They have a solution for each industry and for each department in any organization.
There are some applications that are very sensitive to the delay or the latency so for these types of applications I would recommend SAP HANA. However, if these are not concerns, there may be other database technologies that would be more cost-effective than HANA.
I would rate this solution eight out of ten.
Provides us with predictive capabilities for asset maintenance and real-time forecasts.
Real-time database, near zero downtime for production business.
Graphical programming without coding.
System recovery in version 1.0 failed due to corrupt log files. Version 2.0 is stable now.
Should have scalability from terabytes to petabytes/zettabytes/yottabytes for both scale-up and scale-out, with a multi-tenancy approach.
Gradual deployment from straightforward to complex, on-premise and then to cloud platform.
Set up a consortium of consulting partners and hardware vendors to define your technology landscape TCO (total cost of ownership), and then approach the OEM for pricing (on-premise, cloud, or a hybrid model).
Check if you can bring your own licenses for some of the existing application licenses on the new platform, to reduce TCO.
Product was the first of its kind for us. However, we later evaluated other products: Oracle Exadata, Exalytics, Teradata, Hadoop, MongoDB.
By now most of us are well aware of the data explosion, that businesses are creating more data than they can effectively manage. This is not a new problem. Throughout history societies have always made efforts to create repositories to organize, analyze and store documents (recorded knowledge). Some of these ancient repositories still exist today in the form of “brick and mortar” libraries. But just like anything else in a consumer’s market, demand (Time-To-Solution) eventually becomes greater than the supply (Information Available/Accessible).
The global economy is currently undergoing a fundamental transformation. Market dynamics and business rules are changing at an ever increasing speed. Those responsible for keeping the company on track for the future have a massive need for high-quality data--both from inside and outside the company. Technology decision makers are facing the challenge of having to create infrastructures that leverage speed, scale and availability.
Data technology must assist in the removal of silos and support collaboration and the sharing of expertise across the company and with business partners. Successful companies will need access not only to their own "Data repository" but to data from various heterogeneous sources. Today, finding mission-critical data or even being aware of all potential sources is more a question of luck and intuition than anything else.
How important is your data to your organization? How does your organization use its data? How do they access and interact with it? Are the decisions being made from data, innovative or disruptive in nature? What’s the value and impact?
According to a Forbes article written by Caroline Howard, “People are sometimes confused about the difference between innovation and disruption. It’s not exactly black and white, but there are real distinctions, and it’s not just splitting hairs. Think of it this way: Disruptors are innovators, but not all innovators are disruptors — in the same way that a square is a rectangle but not all rectangles are squares”.
Database accessibility is critical for rapid but sensible, innovative and disruptive decision making. A business database management system must be able to process both transactional and analytical workloads fully in memory. By bringing together OLAP and OLTP in a single database, your organization can benefit from a dramatically lower total cost up front, while gaining incredible speed that will accelerate business processes and custom applications.
SAP HANA DB takes advantage of the low cost of main memory (RAM), data processing abilities of multicore processors and the fast data access of solid-state drives relative to traditional hard drives to deliver better performance of analytical and transactional applications.
Fusing SAP HANA with a scalable shared memory platform will enable businesses and government agencies running high-volume databases and multitenant environments to utilize high-performance DRAM that can offer up to 200 times the performance of flash memory to help deliver faster insight.
Here’s my analogy: people go to the “Super Bowl” for one of two reasons: to watch or to participate. To be successful in today’s global market, companies must effectively participate or risk sitting on the sidelines watching.
Since its introduction in 2011, SAP has pushed HANA very heavily, and there is a lot of marketing buzz around this product. For a freelance consultant focused on SAP Sybase database products, like me, it is next to impossible to ignore HANA in 2013. So I decided not to rely on marketing slogans and to check what HANA is, what it can do, and, importantly, what HANA is NOT. I am putting my first impressions into this blog post; hopefully other HANA-related posts will follow. Note that I’m not a HANA expert (yet!) and I’m writing these lines as a person with a lot of experience with IQ and some other RDBMSs who is trying to learn HANA.
So, why compare HANA and IQ? Both are designed for data warehouse environments, both are column-based (with some support for row-based data), both provide data compression out of the box, and both are highly parallel. Years ago, much like SAP does for HANA today, Sybase claimed that IQ processed data so fast that aggregation tables were not really needed, because aggregations could simply be performed on the fly. Experience with a number of big projects showed me how problematic that statement was, and it is only a single example.
According to SAP, the strong point of HANA is its ability to utilize the CPU cache, which is much faster than accessing main memory (0.5-15 ns vs. 100 ns). Currently, IQ and other Sybase RDBMSs lack this capability. Therefore, I decided to build a test environment that allows performing queries that satisfy a number of conditions:
Some notes about the test environment:
For IQ, I used 16-core RHEL server with hyper-threading turned on (32 cores visible to OS) and 140GB RAM available. I used IQ 16.0 SP01 for my tests.
For HANA, I had to use HANA SPS6 Developer Edition on a Cloudshare VM, which provides HANA on a Linux server with 24GB RAM. However, only 19.5GB is actually available from the Linux point of view (per the free -m output), and most of this memory is allocated by various HANA processes. In fact, less than 3GB of RAM is available for user data in HANA. I only wish that SAP would allow us to download HANA and install it on any server that meets HANA's CPU requirements, but it seems that SAP's policy is to distribute HANA as part of appliances only, so I don't expect a free HANA download any time soon.
This brings us to an additional requirement for the test: the test dataset should be relatively small, because of the severe RAM restrictions imposed by the HANA Developer Edition on Cloudshare.
Finally, I decided to base my tests on a relatively narrow table that represents information about phone calls (for those involved in Telecom industry, it is like short and very much simplified CDRs). Here is the structure of the table:
create table CDRs (
    CDR_ID    unsigned bigint, -- Phone call ID
    CC_ORIG   varchar(3),      -- Country code of the call originator
    AC_ORIG   varchar(2),      -- Area code of the call originator
    NUM_ORIG  varchar(15),     -- Phone number of the call originator
    CC_DEST   varchar(3),      -- Country code of the call destination
    AC_DEST   varchar(2),      -- Area code of the call destination
    NUM_DEST  varchar(15),     -- Phone number of the call destination
    STARTTIME datetime,        -- Start time of the conversation
    ENDTIME   datetime,        -- End time of the conversation
    DURATION  unsigned int     -- Duration of the conversation in seconds
);
I developed a stored procedure that fills this table in SAP Sybase ASE row by row according to some meaningful logic, and I prepared delimited files for IQ and HANA. The input files are available upon request. At first, I planned to run tests on a dataset with 900 million rows, but I finally discovered that I had to go down to 15 million rows because of the VM memory limitations mentioned above.
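For readers who want to rebuild a similar dataset without ASE, here is a rough Python sketch of the generation logic. This is only an illustration of the approach, not the actual stored procedure; the value distributions, the pipe delimiter, and the use of plain seconds instead of datetimes are all simplifications of my own.

```python
import csv
import random

def generate_cdrs(n, out, seed=42):
    """Write n pseudo-random CDR rows to a writable file object,
    pipe-delimited, matching the 10-column CDRs table layout."""
    rnd = random.Random(seed)
    # a fixed pool of phone numbers so that call chains can occur
    phones = ["%010d" % rnd.randrange(10**10) for _ in range(1000)]
    w = csv.writer(out, delimiter="|")
    for cdr_id in range(1, n + 1):
        start = rnd.randrange(0, 86400)        # seconds since midnight (simplified)
        duration = rnd.randrange(1, 3600)
        w.writerow([
            cdr_id,                            # CDR_ID
            rnd.choice(["1", "44", "972"]),    # CC_ORIG (illustrative codes)
            "%02d" % rnd.randrange(100),       # AC_ORIG
            rnd.choice(phones),                # NUM_ORIG
            rnd.choice(["1", "44", "972"]),    # CC_DEST
            "%02d" % rnd.randrange(100),       # AC_DEST
            rnd.choice(phones),                # NUM_DEST
            start,                             # STARTTIME (seconds, not datetime)
            start + duration,                  # ENDTIME
            duration,                          # DURATION
        ])
```

A small, shared pool of phone numbers matters here: with fully random numbers the self-join below would almost never find matching call chains.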
An important note about terminology. In IQ, inserting data from a delimited file into a database table is called LOAD, and retrieving data from a table into a delimited file is called EXTRACT. In HANA, inserting is called IMPORT and retrieving is called EXPORT. The term LOAD in HANA has a totally different meaning: it means loading a whole table, or some of its columns, from disk into memory when the data is already in the database.
The IMPORT functionality in HANA is not similar to IQ at all. It actually consists of two phases: IMPORT and MERGE. During the first phase, the data is imported into a "delta store" in uncompressed form. Then the data from the delta store is merged into the "main store", where the table data actually resides. The merge is performed automatically when a configurable threshold is crossed (for example, when the delta store becomes too big). To ensure that the imported data is fully inside the main store, a manual MERGE may be required. The memory requirements during the MERGE process are quite interesting; maybe I will write about them in a different post. It is quite possible that you will be able to IMPORT the data but will not have enough memory to MERGE it; this happened to me a number of times during my tests. I recommend reading more about the HANA architecture here: http://www.saphana.com/docs/DOC-1073, Chapter 9.
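To make the two-phase write path concrete, here is a toy model of the delta/main split. This is my own illustrative Python sketch, not HANA code; a real merge also rewrites the compressed columns on disk and handles concurrent readers.

```python
class ToyColumnStore:
    """Toy model of HANA's write path for one column: inserts land in
    an uncompressed delta store; a merge folds them into the
    dictionary-encoded main store."""

    def __init__(self):
        self.main_dict = []   # sorted distinct values (the dictionary)
        self.main_ids = []    # per-row value IDs pointing into main_dict
        self.delta = []       # raw, uncompressed recent inserts

    def insert(self, value):
        # Inserts are cheap: nothing is re-encoded at insert time.
        self.delta.append(value)

    def merge(self):
        # Rebuild the dictionary over old + new values, re-encode all rows.
        values = [self.main_dict[i] for i in self.main_ids] + self.delta
        self.main_dict = sorted(set(values))
        pos = {v: i for i, v in enumerate(self.main_dict)}
        self.main_ids = [pos[v] for v in values]
        self.delta = []

    def scan(self):
        # Reads must consult both stores until a merge happens.
        return [self.main_dict[i] for i in self.main_ids] + self.delta
```

The sketch shows why a merge can need extra memory: during merge(), the old encoded data and the rebuilt representation coexist, which mirrors the IMPORT-succeeds-but-MERGE-fails situation described above.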
Given the significant difference between the test systems (a powerful dedicated server for IQ vs. a small VM for HANA), I didn't plan to compare data load performance between IQ and HANA. However, so far I have seen HANA perform the IMPORT using no more than 1.5 of the 4 available cores, thus underutilizing the hardware. The MERGE phase, though, is executed in a much more parallel way. The bottom line is that IQ seems to outperform HANA in data loading, possibly by quite a lot. I will probably return to this topic in one of the following posts; additional tests with a larger dataset are required.
Now we come to data compression. Since IQ and HANA approach indexing quite differently, I chose to compare compression without non-default indexes in either IQ or HANA. It appears that IQ provides better data compression: it needs 591MB to store 15,000,000 rows, while HANA needs 748MB to store the same data. HANA provides a number of compression algorithms for columns, which are chosen automatically according to the data type and data distribution. However, it seems that none of the compression algorithms offered by HANA includes the LZW-like compression used by IQ. I'd prefer to test the compression on a more representative data set (15,000,000 rows is way too small) and to play with different HANA compression algorithms. I hope one of the future posts will be dedicated to this topic.
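The kind of dictionary encoding HANA applies to low-cardinality columns such as CC_ORIG is easy to reproduce in miniature. The following Python sketch is purely illustrative: the byte counting is a crude proxy, and HANA's actual encodings (run-length, sparse, prefix, etc.) are far more sophisticated.

```python
def dictionary_encode(column):
    """Replace each value with a small integer ID plus a shared
    dictionary -- the core idea of columnar compression."""
    dictionary = sorted(set(column))
    pos = {v: i for i, v in enumerate(dictionary)}
    ids = [pos[v] for v in column]
    return dictionary, ids

def rough_size(values):
    # Crude proxy: one byte per character of stored data.
    return sum(len(str(v)) for v in values)

# A skewed country-code column, as in real CDR data.
column = ["972"] * 9000 + ["44"] * 500 + ["1"] * 500
dictionary, ids = dictionary_encode(column)
raw = rough_size(column)                      # every value stored verbatim
encoded = rough_size(dictionary) + len(ids)   # dictionary + 1 byte per ID
```

With only three distinct values, the encoded form is a fraction of the raw column, which is why the choice of algorithm per column (and the data distribution) drives the IQ-vs-HANA size difference measured above.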
Finally, the data is inside the database and we are ready to query it. To satisfy the test conditions mentioned above, I chose the following query:
select
    a.CDR_ID CDR_ID_1, b.CDR_ID CDR_ID_2,
    a.NUM_ORIG NUM_A, a.NUM_DEST NUM_B, a.STARTTIME STARTTIME_1, a.ENDTIME ENDTIME_1,
    b.NUM_DEST NUM_C, b.STARTTIME STARTTIME_2, b.ENDTIME ENDTIME_2
from CDRs a, CDRs b
where a.NUM_DEST = b.NUM_ORIG
  and datediff(ss, a.ENDTIME, b.STARTTIME) between 5 and 60
order by a.STARTTIME;
This query finds cases where a person A called person B and then person B called person C almost immediately (within 60 seconds). By its very definition, this query has to perform a lot of logical I/O. With my test data set, it returns 31 rows.
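The semantics of this self-join are easy to sanity-check outside the database. The following is a Python sketch of the same logic over in-memory tuples; it uses a hash on NUM_ORIG rather than the sort-merge plan IQ chose, and timestamps are plain seconds for simplicity.

```python
def chained_calls(cdrs, min_gap=5, max_gap=60):
    """Find pairs (a_id, b_id) where call a's destination equals call b's
    origin and b starts 5-60 seconds after a ends -- the self-join above.
    Each CDR is a tuple: (cdr_id, num_orig, num_dest, starttime, endtime)."""
    by_orig = {}
    for row in cdrs:
        by_orig.setdefault(row[1], []).append(row)   # index on NUM_ORIG
    result = []
    for a in cdrs:
        for b in by_orig.get(a[2], []):              # a.NUM_DEST = b.NUM_ORIG
            gap = b[3] - a[4]                        # b.STARTTIME - a.ENDTIME
            if min_gap <= gap <= max_gap:
                result.append((a[0], b[0]))
    return sorted(result)
```

The nested loop over the hash buckets makes the logical-I/O argument visible: every row is probed against every row sharing its destination number, which is exactly the work the database engines are racing to do.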
In IQ, this query takes 6.6 seconds when executed fully in memory with all relevant indexes in place. The query uses a sort-merge join and runs with a relatively high degree of parallelism, using about 60% of the 32 available CPU cores.
In HANA, the same query takes only 1 second with no indexes in place! Remember that in my tests HANA is running on a small VM with just 4 virtual CPU cores! The query finishes so fast that I cannot measure the degree of parallelism. Creating indexes on NUM_ORIG and NUM_DEST reduces the response time to 900 ms.
A note about indexes in HANA: HANA offers only two index types and, by default, it chooses the index type automatically. In my tests, I found that indexes improve query performance in HANA, sometimes significantly. Unfortunately, I have not found any indication of index usage in HANA query plans, even when some indexes were certainly used by the query. The role of optimizer statistics in query plan generation is also not very clear to me. I hope to prepare a separate post about query processing in HANA; stay tuned!
Another amazing and totally unexpected finding in HANA: creating an index on NUM_DEST (varchar(15)) takes 194 ms, and an index on DURATION (int) is created in 12 ms!
My conclusions so far:
Update: see IQ query plan for my test case here: Download ABC_15mln_fully_in_memory
I’m not a great fan of SAP, or Oracle for that matter, but SAP’s HANA architecture is an unexpected innovation from a company that is rooted in serving the dull administrative needs of large organisations. In a nutshell HANA is an in-memory database capable of handling very large amounts of data with frightening speed. This is very timely, and more importantly will serve the needs of organisations for decades to come. While the focus is currently on the ability of HANA to address real-time analytics, the capability offered by HANA will serve us as we move into the feedback and control (cybernetics) era which has yet to unfold.
The current preoccupation with all forms of analytics (data mining, statistics, text mining, optimisation) and big data are predicated on very fast database systems. Traditional disk based technology is typically too slow and SAP has taken a simple idea – placing all data in much faster memory – and made it a reality. The idea is simple, but making it a reality is not. HANA enables many forms of business activity that were simply not possible before – real-time recommendations for customers, real-time tracking of very large distribution networks – and so on. This alone is enough to make HANA important for many businesses.
On the horizon however, and virtually unseen by most commentators, is the need to implement real-time feedback and control systems. It’s all very well to analyse current activity, but at which point is action called for, and what type of action will rectify a situation? Recommending additional purchases to customers in real time might not be optimal, and the response rate might start to drop off. At what point is remedial action needed, and how should the algorithms be modified? This is where we are headed – not just analysis, but analysis of analysis – a level of awareness within systems.
Massive computing ability is needed and there simply is no way that slow disk-based technology will deliver the goods. HANA is a foundation for this move into a brave new world, and there are no real alternatives. There is a saying in technology markets that 'if it works it's already obsolete'; I would make HANA an exception to this rule. For many organisations it will be a solid investment that will see them move into an age of real-time, intelligent business systems. Who would have thought that such an innovation would come from a German software company rooted in dull software applications that serve the needs of business administration?