Oracle Exadata Review
v1 and v2 on Linux: its large memory capacity and parallelism potential have given a big boost


On a scale from 1-5 (1=worst, 5=best), how would you rate this product overall compared to similar products?
- 4. I have not used anything similar, but our environment is mainly OLTP, so we have to figure out how to take advantage of the normal 11gR2 features and make optimal use of the Exadata structure and storage cells.

For how long have you used this product?
- 15 months

Which features of this product are most valuable to you?
- Parallelism
- Memory
- Storage Cell “intervention” for query performance
- If we can modify all of our code to use /*+ APPEND */ or /*+ APPEND_VALUES */, the compression savings would be significant. Exadata seems to be a "version 1.0" for OLTP-centric applications. On the other hand, we also need to rethink the database, the application architecture, and deployment.
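For illustration, the direct-path loads that those hints request might look like the sketch below; the table and column names are hypothetical, not from our application:

```sql
-- Hypothetical example: direct-path inserts, which is what allows
-- compression to be applied as rows are loaded.
-- /*+ APPEND */ requests a direct-path load for INSERT ... SELECT;
-- /*+ APPEND_VALUES */ (11gR2 onward) does the same for INSERT ... VALUES.

INSERT /*+ APPEND */ INTO documents_archive
SELECT * FROM documents_staging;

INSERT /*+ APPEND_VALUES */ INTO documents_archive (doc_id, doc_text)
VALUES (:doc_id, :doc_text);

-- A direct-path load must be committed before the session can query
-- the table again.
COMMIT;
```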

Have been finding that in some situations with partitioned tables, ignoring the local index (on the partition key) and doing a full table scan is a lot faster! For example:

select /*+ parallel(t1,8) */
       column_x
from   partition_table t1
where  partition_key_column >= trunc(sysdate-30)
and    partition_key_column <= trunc(sysdate-1)
and    other things

performs poorly (effectively hours!), and the execution plan shows the use of the index associated with partition_key_column.

Adding the hint full(t1) gets the result back in less than 10 seconds.
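The hinted version, using the same illustrative table and column names as the query above, would look roughly like this:

```sql
-- Same query with a full-scan hint added. A parallel full scan lets the
-- Exadata storage cells filter rows (smart scan) instead of the database
-- doing single-block index reads.
select /*+ full(t1) parallel(t1,8) */
       column_x
from   partition_table t1
where  partition_key_column >= trunc(sysdate-30)
and    partition_key_column <= trunc(sysdate-1);
```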

Can you give an example of how this product has improved the way your organization functions?
- We have a lot of documents to index and search. Exadata, with its large memory capacity and parallelism potential, has given a big boost (300+%) to indexing throughput. We can also see potential for user query performance improvements, but this needs a "re-factoring" (re-write?) of the code.

What areas of this product have room for improvement?
- There seems to be no built-in feature to keep the CPU from becoming overwhelmed and crashing the system. Oracle knows (or should know!) how much CPU and RAM the product needs in order to keep working. Why not just reserve this "minimum" capacity?

Did you encounter any issues with deployment, stability or scalability?
- Spurious shutdowns, snapshots stop working.

Did you previously use a different solution and if so, why did you switch?
- The physics of the old system could not be improved. We wanted an "out-of-the-box" solution and, probably more importantly, upgrades controlled and managed by the vendor.

Did you implement through a vendor team or an in-house one? If through a vendor team, how would you rate their level of expertise?
- Combination

What advice would you give to others looking into implementing this product?
- If you cannot consolidate into a single database you are effectively spending a lot of money to get no further ahead.

I am beginning to wonder whether Exadata X5 will be the end of the road, and whether for seriously challenging data volumes the shift will be towards the newer Big Data Appliance. There are also the new advanced analytic SQL functions in 12c.

Disclosure: I am a real user, and this review is based on my own experience and opinions.

3 Comments

kapilmalik1983 (Consultant)

What backup solution do you use to backup databases?

26 October 14
Amin Adatia (Consultant)

Daily incremental backup. The goal is to have Data Guard.
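A daily incremental backup of this kind is typically scripted with RMAN; a minimal sketch (assuming a level 0 base backup already exists) might be:

```sql
-- RMAN commands (run from the rman command-line client, not SQL*Plus).
-- Hypothetical daily job: a level 1 incremental on top of an existing
-- level 0 base backup.
BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- Sweep archived redo logs into the backup and free the disk space.
BACKUP ARCHIVELOG ALL DELETE INPUT;
```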

26 October 14
Amin Adatia (Consultant)

Have been finding that in some situations with partitioned tables, ignoring the local index (on the partition key) and doing a full table scan is a lot faster! For example:

select /*+ parallel(t1,8) */
       column_x
from   partition_table t1
where  partition_key_column >= trunc(sysdate-30)
and    partition_key_column <= trunc(sysdate-1)
and    other things
/

performs poorly (effectively hours!), and the execution plan shows the use of the index associated with partition_key_column.

Adding the hint full(t1) gets the result back in less than 10 seconds.

16 February 15