Valuable Features:
Unlike other engineered systems, which are tailored either only for databases or only for applications, the SuperCluster is engineered to run both databases and enterprise applications. The ability to consolidate all of our databases is a big plus.
The SuperCluster provides the best of both worlds: Oracle Exadata functionality along with virtualization at both the firmware and the OS kernel layers.
On the Exadata side, the most valuable features are Hybrid Columnar Compression (HCC) for both data warehouse and OLTP workloads, storage indexes for Smart Scan, the ability to use Flash Cache for database storage, and more.
Improvements to My Organization:
With SuperCluster and Exadata, all of the servers and storage are integrated within the same rack. This reduces the configuration and setup time, increases performance, and makes maintenance and patching easy. We are able to consolidate all of the databases and application stack on a single SuperCluster with Exadata.
Room for Improvement:
In the area of Solaris zone-level virtualization, it would be good to have memory capping as a memory-management tool. Also, for 11g databases running on Exadata, Smart Scans against hybrid columnar compressed (HCC) tables can currently result in errors. It would be nice to have a patch for this rather than the current workaround of upgrading the databases to 12c.
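For context, standalone Solaris 11 zones can cap physical memory through zonecfg's capped-memory resource; the ask above is presumably equivalent, supported tooling for SuperCluster DB zones. A minimal sketch using standard Solaris 11 zonecfg syntax (the zone name and cap sizes are hypothetical):

```shell
# Hedged sketch: standard Solaris 11 zonecfg memory cap.
# "dbzone1" and the 64g/80g values are hypothetical examples.
zonecfg -z dbzone1 <<'EOF'
add capped-memory
set physical=64g
set swap=80g
end
commit
EOF
```

The physical cap is enforced by the resource capping daemon (rcapd), while the swap cap is a hard limit on virtual memory reserved by the zone.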
Use of Solution:
Currently, we have a half-rack SuperCluster with two SPARC T5-8 compute nodes and four Exadata storage servers. The current Exadata storage software version is 188.8.131.52.3. The compute nodes run Solaris 11.2 with Oracle 11g 184.108.40.206 databases. We've been using it for close to three years.
We had a few issues while deploying DB zones; this virtualization has to be carried out differently from DB LDOM virtualization.
Stability Issues:
There have been no major problems so far with stability.

Scalability Issues:
There have been no major problems so far with scalability.
Customer Service:
Overall, it has been good so far.

Technical Support:
Overall, it has been good so far. In the case of engineered systems like SuperCluster/Exadata, patching has to go through the support team, and there is definitely room for improvement in this area.
Previous Solutions:
Previously, we had multiple servers, both with and without physical partitioning. The storage for all of the servers had to be zoned to a SAN. The servers and the storage were from different vendors, and we had to integrate them ourselves. Other product offerings were evaluated, but with all of those we would have had to explicitly integrate the compute, storage, and networking components. In addition, we could not have gotten the benefits of Exadata's database optimizations, and we would have paid a penalty for virtualization overhead and for network traffic between the compute and storage layers.
Initial Setup:
There is a certain degree of complexity in the initial design of the Exadata storage cell disks and grid disks to meet the customer's application needs. This is especially true when migrating from an existing setup. Care has to be taken with the initial domain configuration, since it determines the LDOM and zone-level virtualization; the Exadata disks have to be exposed to both the DB LDOMs and the DB zones.

The initial setup was performed by the vendor team (as with any Oracle engineered system), but we then had to continue the setup ourselves to cater to our application and business needs. Prior to implementation, all of the IP allocations must be completed for the three network layers: the client/public network, the InfiniBand private network, and the management network.
ROI:
At this time, I do not have the actual numbers, but I would rate the ROI as pretty good.
Cost and Licensing Advice:
In the long run, one can consolidate the various DB-related licenses. The number of cores required to run the DBs and applications is much lower on a SuperCluster with Exadata, thereby leading to fewer licenses and reduced cost. Since the entire stack is owned by one vendor (in this case, Oracle), the core factor for licensing is 0.5, leading to fewer licenses for the software components.
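As a rough illustration of that licensing math (the 32-core figure is a hypothetical example; actual counts should be confirmed against Oracle's current core factor table):

```shell
# Hedged sketch: processor licenses ≈ physical cores × core factor,
# rounded up. The 0.5 factor is from this review; 32 cores is hypothetical.
CORES=32
# Integer arithmetic: multiply by 5 and divide by 10 (i.e., × 0.5), rounding up.
LICENSES=$(( (CORES * 5 + 9) / 10 ))
echo "$LICENSES processor licenses"   # 32 cores × 0.5 → 16
```

At a 1.0 core factor, the same 32 cores would need twice as many licenses, which is where the consolidation savings come from.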
Other Advice:
In addition to Exadata storage, the SuperCluster also comes with a ZFS storage cluster. Since the compute nodes, storage, and networking components are fully integrated over an InfiniBand I/O fabric, performance between the various components is very high. It also has built-in hardware encryption to provide data security.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Aug 16 2016