Senior Database Consultant at Performing Databases
Consultant
It helps to build successful mixed-workload environments

What is our primary use case?

We are using Oracle Database In-Memory as an indirect approach to improving response times. In mixed-workload environments, we use the In-Memory column store to support OLAP-type queries without harming the latency-critical OLTP operations the systems "earn money with". This was successful for many customers throughout 12.2 and 18c.

How has it helped my organization?

It helps to build successful mixed-workload environments. For smaller setups, one database is enough instead of two, which also saves an interface between them.

What is most valuable?

In recent versions, Oracle implemented storing the In-Memory column store contents in the database, so the IMCS can be repopulated more quickly and in a repeatable way.
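The feature being described is In-Memory FastStart (introduced in 12.2); a minimal sketch of enabling it, with a hypothetical tablespace name:

```sql
-- Tablespace to hold the persisted columnar data (name is an example).
CREATE TABLESPACE faststart_tbs
  DATAFILE 'faststart01.dbf' SIZE 10G;

-- Designate it as the FastStart area; after a restart the column
-- store is repopulated from these checkpointed columnar units
-- instead of being rebuilt entirely from the row store.
EXEC DBMS_INMEMORY_ADMIN.FASTSTART_ENABLE('FASTSTART_TBS');
```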

What needs improvement?

One very nice side effect is the in-memory index. If this were developed a bit further into being configurable, users could use it as a kind of in-memory partitioning, which opens up a big field of possible use cases.

Buyer's Guide
Oracle Database In-Memory
April 2024
Learn what your peers think about Oracle Database In-Memory. Get advice and tips from experienced pros sharing their opinions. Updated: April 2024.
768,857 professionals have used our research since 2012.

For how long have I used the solution?

One to three years.

What do I think about the stability of the solution?

Very stable.

What do I think about the scalability of the solution?

In my experience, it scales quite well. Unfortunately, decent scale-out with RAC only works on Exadata, since Oracle relies on RDMA, which is only available over InfiniBand.

How are customer service and support?

"It depends." If you get a good support engineer, it is a dream.

But most of the time, unfortunately, it is not.

Which solution did I use previously and why did I switch?

No, since there was no other solution offering in-memory without changing the SQL syntax.
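This transparency is the point: opting a table into the column store is a table attribute, and the application SQL is unchanged (the table and column names here are hypothetical):

```sql
-- Opt the table in; no application code changes.
ALTER TABLE orders INMEMORY;

-- The same analytic query as before now scans the columnar copy
-- whenever the optimizer judges it cheaper than the row store.
SELECT region, SUM(amount)
FROM   orders
GROUP  BY region;
```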

How was the initial setup?

We grew into it during beta and initial releases, so I can't answer this.

What about the implementation team?

We do implementations ourselves, so I can't answer this.

What was our ROI?

If you can save yourself setting up an additional interface and a second DB server, the investment should return immediately.

What's my experience with pricing, setup cost, and licensing?

The setup cost is not a big factor, but the engineer should have decent experience with Oracle's In-Memory system.

License cost is a factor; the benefit has to be carefully evaluated.

Which other solutions did I evaluate?

We tried several ways to offload OLAP queries from the database, especially using a second DB system.

We evaluated this product throughout the beta 1 and beta 2 phases.

What other advice do I have?

It is always worth testing or running a proof of concept to check its value.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
GM & CTO (In charge) at CCBL
Real User
Top 20
Helps to handle fast transactions with low latency
Pros and Cons
  • "We use the tool for real-time data transfer for risk management purposes. In a trading system, conversions happen fast. We use the product to handle fast transactions with low latency."
  • "I would like Oracle Database In-Memory to include a data replication feature."

What is our primary use case?

We use the tool for real-time data transfer for risk management purposes. In a trading system, conversions happen fast. We use the product to handle fast transactions with low latency. 

What needs improvement?

I would like Oracle Database In-Memory to include a data replication feature. 

For how long have I used the solution?

I have been working with the solution since 2005. 

What do I think about the stability of the solution?

Oracle Database In-Memory is very stable. 

What do I think about the scalability of the solution?

I would rate the product's scalability a nine out of ten. 

How was the initial setup?

Oracle Database In-Memory's setup takes six months to complete. You need 10-15 resources to handle the deployment of the whole system. 

What's my experience with pricing, setup cost, and licensing?

Oracle Database In-Memory is expensive. 

What other advice do I have?

I would rate the solution a ten out of ten. If you do not have high-frequency transactions, then Oracle Database In-Memory is not for you. You would require it for mission-critical, high-frequency transactions.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user521652 - PeerSpot reviewer
Senior Oracle Consultant at a retailer with 1,001-5,000 employees
Real User
It converts the data to a columnar format, compresses it, and puts it into memory; there's a lot of compression.

What is most valuable?

Ours is a DW environment; ETL extracts data from SAP and loads it into the reporting database. The DBA objective was optimal performance for both the nightly batch and reporting. We used the features below, which significantly helped improve performance.

We used HCC with QUERY HIGH to compress all the fact tables, and interval-partitioned the fact tables with daily partitions.

Most of the financial reports go back at most two months, so we scheduled a stored procedure to load the last 60 days of partitions into In-Memory. We also loaded highly used dimension tables into In-Memory.
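As a sketch of the layout described above (all names, dates, and sizes are hypothetical): HCC on the fact table, daily interval partitions, and a scheduled job marking the recent partitions and hot dimensions for In-Memory:

```sql
-- Fact table: HCC compression plus daily interval partitioning.
CREATE TABLE sales_fact (
  sale_day  DATE,
  store_id  NUMBER,
  amount    NUMBER
)
COMPRESS FOR QUERY HIGH
PARTITION BY RANGE (sale_day)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(PARTITION p0 VALUES LESS THAN (DATE '2016-01-01'));

-- In the scheduled procedure: flag a recent partition and the
-- heavily used dimension tables for the In-Memory column store.
ALTER TABLE sales_fact
  MODIFY PARTITION FOR (DATE '2016-06-01')
  INMEMORY MEMCOMPRESS FOR QUERY HIGH PRIORITY HIGH;

ALTER TABLE store_dim INMEMORY MEMCOMPRESS FOR QUERY HIGH;
```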

Extended statistics were the key to performance; we achieved good report performance by gathering extended stats. Histograms were helpful too, but extended stats and histograms don't go well together. In our testing phase, we tested both and chose the better one. Over time, as the data changed, we switched between the two.
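The extended-statistics setup can be sketched like this (schema, table, and column names are hypothetical); the METHOD_OPT setting is the knob used when switching between extended stats and histograms:

```sql
-- Column group: tells the optimizer STORE_ID and REGION correlate.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         ownname   => 'DWH',
         tabname   => 'SALES_FACT',
         extension => '(STORE_ID, REGION)')
FROM   dual;

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'DWH',
    tabname    => 'SALES_FACT',
    -- SIZE 1 suppresses histograms; SIZE AUTO lets Oracle build them.
    method_opt => 'FOR ALL COLUMNS SIZE 1');
END;
/
```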

DBRM (Database Resource Manager) was another key to making sure all the consumer groups get the resources they require.

Forcing the optimizer to use Bloom filters boosted performance to a significant extent. We achieved this by getting rid of quite a few indexes, using parallel processing, and tuning optimizer statistics.

Tablespaces with the NOLOGGING option. The nightly batch process also performs very well with NOLOGGING tablespaces, parallel DML, and insert append (direct-path load).
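The NOLOGGING plus direct-path pattern amounts to something like the following (names and sizes are hypothetical):

```sql
-- Tablespace created NOLOGGING so bulk loads generate minimal redo.
CREATE TABLESPACE batch_tbs
  DATAFILE 'batch01.dbf' SIZE 50G
  NOLOGGING;

-- Direct-path, parallel load: APPEND inserts above the high-water
-- mark, bypassing the buffer cache and conventional redo logging.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(sales_fact, 8) */ INTO sales_fact
SELECT sale_day, store_id, amount
FROM   staging_sales;

COMMIT;
```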

Dynamic sampling set to 4. This was the value that gave us consistent performance across most of the reporting.

Disabling optimizer_adaptive_features. With this feature turned on, report performance fluctuated. We achieved consistent performance by turning it off.
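Those two settings amount to the following (12.1 parameter names; in 12.2 `optimizer_adaptive_features` was split into `optimizer_adaptive_plans` and `optimizer_adaptive_statistics`):

```sql
-- Fixed dynamic-sampling level that gave consistent report timings.
ALTER SYSTEM SET optimizer_dynamic_sampling = 4 SCOPE = BOTH;

-- Stop on-the-fly plan changes (12.1 name of the parameter).
ALTER SYSTEM SET optimizer_adaptive_features = FALSE SCOPE = BOTH;
```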

Cognos Dynamic Cubes. This is a feature at the Cognos layer that helped increase report performance.

How has it helped my organization?

This data is for our BI reporting, which is open to all the warehouse managers so they can see their financial status, especially at period end. We have very tight schedules; every report has to complete in seconds, some in milliseconds, and the SLAs are very tight. In-Memory, in combination with partitioning, HCC, and the offloading feature, helped us achieve these SLAs.

We use lots of aggregations and a lot of transformations that happen beforehand. We use Informatica, and then the data comes into Oracle. Cognos actually runs those reports. That was a very big challenge for us. We didn't use In-Memory before.

For most of the tables, we use partitions and HCC, but even then we could not get through the day with that level of performance. What we did was make sure that the latest partitions, on which most of the reports run, are put into In-Memory and very highly compressed. We moved some of the key tables (master tables especially) and some of the fact tables into In-Memory with very high compression ratios. After that, we saw a really dramatic improvement in performance; we are doing much better than the SLAs require. Most of our reports complete in 1, 2, or 3 seconds, and most are below 5, unless we have a stats issue or something like that, in which case they take longer. After we started using the In-Memory product, we saw really dramatic figures.
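The "very highly compressed" population described here maps to the higher MEMCOMPRESS levels, which trade some scan speed for ratio; a sketch with hypothetical names:

```sql
-- CAPACITY HIGH gives the best in-memory compression ratio at some
-- cost in scan speed; QUERY HIGH favors scan speed instead.
ALTER TABLE master_items INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;

ALTER TABLE sales_fact
  MODIFY PARTITION FOR (DATE '2016-06-01')
  INMEMORY MEMCOMPRESS FOR CAPACITY HIGH PRIORITY CRITICAL;
```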

What needs improvement?

For some reason, the optimizer's adaptive features don't work well. We actually disabled some of them. On the fly, the optimizer changes the execution plan, and that feature is really not working well, so we had to disable it.

The plans were not stable when we enabled it, so we had to disable it. We had to lock the stats on some of the master tables, because plan instability is the actual problem; we don't want plan instability, and we saw it quite often. We had to disable some of Oracle's new features that are not quite mature. That's one of the problems we have seen.

For how long have I used the solution?

We have been using this solution for about a year now. The performance is good and is within the desired SLAs.

What was my experience with deployment of the solution?

No, we did not encounter any issues with the deployment.

What do I think about the stability of the solution?

In-Memory stability is really very good.

What do I think about the scalability of the solution?

From a scalability perspective, the concurrency we expect is 2,000 users. We're only keeping two years' worth of data, not more than that. New data comes in and the old data goes out to the archives, so only recent data is there. The old data is in SAP, so we don't maintain it here; for reporting purposes we don't need it, and we only keep two years' worth.

The only issue is with concurrency, and we tested it with 2,000 users, so it's fine.

How are customer service and technical support?

Customer Service:

5 out of 10.

Technical Support:

Oracle tech support is really not very good; I’m not pleased. For 90% of my tickets, I raise the ticket, and then I work on it and resolve it myself. Oracle provides a solution for only 10% of my tickets. I'm really not happy from a support perspective.

Which solution did I use previously and why did I switch?

No, we built this on Oracle.

How was the initial setup?

Not applicable.

What about the implementation team?

We did this in-house. We of course hired some contractors who were good at performance tuning.

What's my experience with pricing, setup cost, and licensing?

Pricing and licensing are not something I deal with, but since we have a ULA, we did not have to worry about it.

Which other solutions did I evaluate?

No, we did not evaluate other options; however, we are now moving from this option to SAP HANA.

What other advice do I have?

You need to understand the data; you do not want to use In-Memory for all of it. Understand the data, understand the queries, and understand which data you actually want to put into In-Memory. You should not put all the data in there.

We did not use the In-Memory Advisor in Oracle; however, this choice was fairly obvious for us.

Which data has to go into In-Memory, and how to get the most benefit out of In-Memory, is something you only learn from experience. Get access to the Oracle Learning Library; there are some good videos there on In-Memory. They're really awesome.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Paresh-Nayak - PeerSpot reviewer
Enterprise Data Architect at Link Group (LNK), Digital Solutions
Real User
Top 10
Fast, reliable solution with zero data loss
Pros and Cons
  • "The most valuable feature is that Database-In-Memory is more consistent and faster than traditional databases as it requires fewer CPUs to process instructions."
  • "Oracle should include column store or advanced query optimization so a database can be optimized by enabling analytic queries to run faster."

What is our primary use case?

I mainly use this solution for business analytics and intelligence.

What is most valuable?

The most valuable feature is that Database In-Memory is more consistent and faster than traditional databases, as it requires fewer CPU resources to process instructions. Another valuable feature is zero data loss: unlike a purely in-memory database, nothing is lost when there is a power failure or the RAM crashes.

What needs improvement?

In the next release, Oracle should include column store or advanced query optimization so a database can be optimized by enabling analytic queries to run faster.

For how long have I used the solution?

I've been using Database In-Memory for two to three years.

What do I think about the stability of the solution?

Database In-Memory is reliable, performs very well, and requires very little maintenance.

What do I think about the scalability of the solution?

This solution is easy to scale.

How are customer service and support?

Oracle's technical support is very good.

How would you rate customer service and support?

Positive

How was the initial setup?

The initial setup was simple.

What about the implementation team?

We used an in-house team along with some documentation and help from Oracle's website.

What's my experience with pricing, setup cost, and licensing?

Database In-Memory is priced a bit higher than competitors like Microsoft. The Enterprise edition has no additional costs; the in-memory features come included.

What other advice do I have?

I would rate Database In-Memory nine out of ten.

Which deployment model are you using for this solution?

Hybrid Cloud
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Independent Consultant at Unaikui
Real User
Top 5Leaderboard
Has a simple setup process and efficient features for columnar storage
Pros and Cons
  • "We can integrate it with any data sources as well."
  • "The platform’s pricing needs improvement."

What is our primary use case?

We use Oracle Database In-Memory for reporting and analytics.

What is most valuable?

The platform's most valuable features are query response time, columnar storage, and data cube setup.

What needs improvement?

The platform’s pricing needs improvement.

What do I think about the stability of the solution?

The platform is stable. I rate the stability a nine out of ten.

What do I think about the scalability of the solution?

We have more than 50 Oracle Database In-Memory users in our organization. While occasional usage may increase, the technical infrastructure effectively manages these fluctuations. It demonstrates negligible impact on performance, even with incremental user growth. I rate the scalability a nine out of ten.

How are customer service and support?

We have received technical support for reported bugs, and the team promptly releases patches.

Which solution did I use previously and why did I switch?

Earlier, we utilized the Oracle Database Appliance for data warehousing purposes. Additionally, we relied on SQL databases, particularly older versions managed by Oracle as well as the old SAP DB. However, we transitioned to Oracle Database In-Memory to address the specific needs of our environment, particularly in scenarios where we needed to report data across distributed sites efficiently.

How was the initial setup?

The initial setup process is simple.

What other advice do I have?

The platform provides the best performance in terms of database analytics. It efficiently serves as a data lake. We can integrate it with any data sources as well.

I recommend it to others and rate it a ten out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user452352 - PeerSpot reviewer
Strategic Solutions Architect at OnX Enterprise Solutions
Video Review
MSP
Offers the ability for analytic queries to run extremely quickly because essentially, it's employing what's called the columnar format.

Valuable Features:

I'm going to be discussing Database In-Memory and Multitenant in Oracle 12c. 

The multitenant features offer some really excellent flexibility, especially for an organization that's looking to consolidate databases from smaller servers, possibly to "de-siloize," as I like to say, and bring them up under one larger database instance. It also makes it really easy to clone a database, for either read-only or read/write purposes, extremely quickly, usually in less than a few seconds.

Database In-Memory, to me, is the most compelling reason to go to Oracle 12c, release 1. 12.1.0.2 is the beginning release for that. Database In-Memory offers the ability for analytic queries to run extremely quickly because essentially, it's employing what's called the columnar format. We've never had that in Oracle until 12.1.0.2. The idea behind that, especially if you have a table that's really wide and with several hundred million or even billions of rows, you can scan that table and filter it extremely quickly, even in a data warehousing environment, essentially join it extremely quickly from a fact table to multiple dimension tables in unbelievable speed. The idea there is that, because it's in a columnar format instead of a row major format, you can access the data much more quickly, especially if you have a very wide table because you can eliminate a lot of the intervening columns.

Improvements to My Organization:

Multitenant is going to get much better in the next release, but that doesn't mean you can't adopt it right now. The major advantage of multitenant, as I see it, is to be able to share memory on a larger server with larger amounts of CPU and especially larger amounts of memory. If you have an engineered system, especially an Oracle engineered system, you can take advantage of the extreme amounts of memory and CPU capacity and essentially share that memory across smaller databases by consolidating them. That's why they're called pluggable databases being plugged into a consolidated, as we call them, CDBs and PDBs. The advantages are that you can leverage that huge amount of memory and compute power for smaller databases.

Another major advantage of it is that you can also quickly clone a production database into a test or a dev database. That's extremely important in today's world where DevOps is happening so quickly and customers need to sometimes get a test database or a DevOps database, an exact copy of production, at almost the same point in time. It doesn't necessarily have to be synced perfectly with production, but that it can be very, very close to what you have right now and be able to test extremely quickly because the cloning mechanism is so blindingly fast with pluggable databases when you're in a 12c environment.

Room for Improvement:

The In-Memory database features are probably going to double in capacity and in power in the upcoming release; also, with pluggable and container databases. They're going to become much more flexible. Most of this is already public knowledge. I'm just not discussing details at this point.

The one feature that I'd like to see more emphasis on is security, because we are moving toward the Oracle Public Cloud; it's here already. The biggest concern, and we heard this in many of the talks here at Collaborate 16 this week, but also among my clients and customers, is: is it secure? That's an excellent question, and the answer is that it absolutely is, because you can't not encrypt. You must encrypt your data when it's placed inside the Oracle Public Cloud. Now the wonderful thing about that is that you hold the private key, the private half of the public-key infrastructure, so there is no way that a government agency, for example, could come to you and say, "I want all this data." All they would get would be the encrypted files. That may not necessarily be true in other cloud implementations. I'll leave it at that.

Stability Issues:

The stability of these solutions is one of the more impressive things. Having come from Oracle 8i in my first Oracle DBA job, when a new release came out there were always tremendous numbers of bugs. That's not the case with these 12.1 releases; they're extremely well-tested. I'm not going to say there aren't any bugs, but they're fewer and farther between than in earlier releases. There's a significant advantage to getting on the 12.1 juggernaut, if you will, because these releases have been very well shaken out.

Scalability Issues:

I'm not going to get into too much because I haven't tried that, I'll be honest. It offers quite a bit of scalability in terms of scaling out and scaling up, because all of these features, pluggable databases (that's the multitenant feature) and the Database In-Memory feature we talked about, scale extremely well, especially in a Real Application Clusters (RAC) database environment. Container databases do take a little bit more thinking. You have a limited number, only 254 possible pluggable databases, inside a container database, and you are essentially creating a different kind of Oracle database, container versus, as we call them, non-container or non-CDB. When you're building that initial container database, you do have to think about which way you are going. The good news is there's a very simple procedure to take a non-CDB and make it into a PDB underneath a CDB, with a very limited amount of downtime on the source non-CDB.

From the In-Memory side, for 12.1.0.2, setting up the In-Memory column store is literally: change one initialization parameter, bounce the instance, and you're ready to go. It's extremely simple. There is also a really good tool that helps figure out which objects should be inside the In-Memory column store, called the In-Memory Advisor. It has matured since about a year ago when it was first released, and it's much easier to figure that type of thing out now. In terms of implementation, it's really quick and almost painless, is what it comes down to.
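The single parameter in question is `inmemory_size`; a minimal sketch of the setup being described (the size and table name are hypothetical):

```sql
-- Carve out part of the SGA for the In-Memory column store; the
-- instance must be bounced because the SGA layout changes.
ALTER SYSTEM SET inmemory_size = 16G SCOPE = SPFILE;

-- (SQL*Plus) bounce the instance, then mark objects for population:
SHUTDOWN IMMEDIATE
STARTUP

ALTER TABLE sales_fact INMEMORY;
```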

Other Advice:

I would have to say the In-Memory column store is definitely a 9.5. It's really great right now in 12.1.0.2, and it's going to get even better in the next release. Multitenant, to be honest, is a little adolescent at this point. There are some things I wish it could do right out of the box, but what I'm seeing coming in the next release will probably get it up into the 8.0 or 8.5 range. In terms of security, I'd have to say some of the features we discussed are actually quite good right now; I'd put them towards a 9.0. Oracle Public Cloud needs some more work. As an Oracle ACE Director, I'm working very closely with the team, and my company, OnX Enterprise Solutions, is also working very closely with Oracle to make Oracle Public Cloud better, stronger, and more easily deployable. For all of these, I'd have to say probably 8.5 to 9 on a scale of 1 to 10.

My recommendation is, if you're going to go to 12.1, immediately take a look at the In-Memory column store. It's the biggest bang for your buck. I'm not going to discuss the L word, licensing, while we're here. It is definitely not a free feature but, in my opinion, if it's implemented properly, it might save the cost of an extra DBA. A good DBA (I'm from Chicago) might cost somewhere between $110,000 and $130,000 a year in salary, not counting benefits. You could save that amount, if you will, of OpEx with some CapEx. That's a compelling story. The ability to take a query that you might have dedicated a full DBA to making run faster, and instead make it run faster simply by throwing a switch, is a compelling case to me as a DBA, but also as someone who deals with C-level executives and DBA managers all the time.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer: We're platinum partners.
PeerSpot user
Solution Architect at ixtel
MSP
Simple to implement, very fast, and easily scalable
Pros and Cons
  • "The solution is very fast."
  • "The solution is quite expensive."

What is our primary use case?

We primarily use the solution as a warehouse. In the front end we are using MicroStrategy, and we are using Oracle as a database.

What is most valuable?

The warehouse is the solution's most valuable aspect.

The solution is very fast.

What needs improvement?

I'm not sure about the improvements needed in the solution.

The solution is quite expensive.

For how long have I used the solution?

I've been using the solution for two years.

What do I think about the stability of the solution?

The solution is stable.

What do I think about the scalability of the solution?

The solution is scalable. Almost everyone in the company uses the solution.

How are customer service and technical support?

The solution's technical support is good.

How was the initial setup?

I did not install it myself, but the database administration team did, and they found it simple. In one day they were able to get the database stack up and running.

What about the implementation team?

Our team handled the implementation.

What's my experience with pricing, setup cost, and licensing?

Oracle is a costly product.

Which other solutions did I evaluate?

Choosing this solution was the choice of the database administrator. I didn't participate in the process, so I'm not sure what was used before or if other solutions were evaluated.

What other advice do I have?

We use the on-premises deployment model.

I'd advise others to look at the solution, but to be mindful that it is costly. Whether it is right for a company depends on the requirements. If they have the budget, they should go for Oracle. If they do not, I'd suggest they look at something open-source, like MySQL.

I'd rate the solution nine out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user522219 - PeerSpot reviewer
IT Director at a manufacturing company with 1,001-5,000 employees
Real User
It scales really well.​​​

What is most valuable?

Performance. That's probably the number one.

When we use it for OLTP, which is Online Transaction Processing, the response for the end-user is pretty fast, which is a good thing. Especially if the user is looking at a website, the response time is just milliseconds, as opposed to waiting a few seconds for that page to load.

How has it helped my organization?

I think it still goes back to the user benefiting the most from this; it's basically a good customer experience with the product.

What needs improvement?

Nothing right now.

What do I think about the stability of the solution?

It is pretty stable.

What do I think about the scalability of the solution?

It scales really well.

We had to scale it along with the rest of the ecosystem, not just this in particular, but the infrastructure as well. We had to scale both of them.

How are customer service and technical support?

We had to involve them just because we ran into a couple of issues, and they were resolved in a timely manner. Pretty good.

Which solution did I use previously and why did I switch?

No. We always needed something like this, and I've looked at Microsoft; they have a similar solution.

How was the initial setup?

It's pretty straightforward. It's actually more of a built-in core capability, as opposed to requiring us to go through an implementation process.

Which other solutions did I evaluate?

We did evaluate a few other vendors, but again, this wasn't a product on its own; it's part of the bigger ecosystem, so the decisions were made for other reasons.

What other advice do I have?

Try it out first. See if it meets your expectations and go from there.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user