
Red Hat Ceph Storage Overview

Red Hat Ceph Storage is the #3 ranked solution among top File and Object Storage tools and the #6 ranked solution among top Software Defined Storage (SDS) tools. IT Central Station users give Red Hat Ceph Storage an average rating of 8 out of 10. Red Hat Ceph Storage is most commonly compared to MinIO. The top industry researching this solution is communications service providers, accounting for 28% of all views.
What is Red Hat Ceph Storage?
Red Hat Ceph Storage is an enterprise open source platform that provides unified software-defined storage on standard, economical servers and disks. With block, object, and file storage combined into one platform, Red Hat Ceph Storage efficiently and automatically manages all your data.

Red Hat Ceph Storage was previously known as Ceph.

Buyer's Guide

Download the Software Defined Storage (SDS) Buyer's Guide including reviews and more. Updated: November 2021

Red Hat Ceph Storage Customers
Dell, DreamHost

Archived Red Hat Ceph Storage Reviews (more than two years old)

SB
Senior Software Test Engineer at a tech vendor with 201-500 employees
Real User
Replicated and erasure coded pools allow multiple copies, easy scale-out of nodes

What is our primary use case?

  • File Services NFS and CIFS (RBD images)
  • Object Services S3 and Swift (radosgw)
  • I/O into replicated and erasure coded pools (librados)
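
The trade-off between the replicated and erasure coded pools mentioned above comes down to raw-capacity overhead. A minimal sketch of that arithmetic (the k=4/m=2 erasure-code profile is an illustrative choice, not one stated in the review):

```python
# Sketch: raw-capacity overhead of replicated vs erasure-coded pools.
# Profiles below are illustrative; real pools set these at creation time.

def replicated_overhead(size: int) -> float:
    """A replicated pool stores `size` full copies of every object,
    so raw usage is `size` bytes per logical byte."""
    return float(size)

def erasure_overhead(k: int, m: int) -> float:
    """An erasure-coded pool splits objects into k data chunks plus
    m coding chunks, so raw usage is (k + m) / k per logical byte."""
    return (k + m) / k

# Default 3x replication: 3 bytes of raw capacity per logical byte.
print(replicated_overhead(3))   # 3.0
# A common EC profile, k=4 m=2: tolerates 2 lost chunks at 1.5x overhead.
print(erasure_overhead(4, 2))   # 1.5
```

This is why erasure coded pools are attractive for capacity-heavy workloads, while replicated pools are simpler and faster to recover.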

How has it helped my organization?

Replicated and erasure coded pools have allowed for multiple copies to be kept, easy scale-out of additional nodes, and easy replacement of failed hard drives. The solution continues working even when there are errors.

What is most valuable?

I/O into replicated and erasure coded pools (librados).

What needs improvement?

It needs a better UI for easier installation and management.

For how long have I used the solution?

One to three years.

What do I think about the stability of the solution?

It has provided retention of all customer sites on which it has been installed in the last three years, with no catastrophic failures so far.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user860847
Systems Engineer at a marketing services firm with 51-200 employees
Real User
Simplifies my storage integration by replacing multiple storage systems

Pros and Cons

  • "Ceph has simplified my storage integration. I no longer need two or three storage systems, as Ceph can support all my storage needs. I no longer need OpenStack Swift for REST object storage access, I no longer need NFS or GlusterFS for filesystem sharing, and most importantly, I no longer need LVM or DRBD for my virtual machines in OpenStack."
  • "I have encountered issues with stability when replication factor was not 3, which is the default and recommended value. Go below 3 and problems will arise."

What is our primary use case?

I am involved with Ceph and OpenStack as an integrator. I set it up or consult with clients for private cloud deployments. Ceph is my storage of choice for OpenStack and general object-storage needs.

How has it helped my organization?

Ceph has simplified my storage integration. I no longer need two or three storage systems, as Ceph can support all my storage needs. I no longer need OpenStack Swift for REST object storage access, I no longer need NFS or GlusterFS for filesystem sharing, and most importantly, I no longer need LVM or DRBD for my virtual machines in OpenStack.

What is most valuable?

The ability to present a REST API, POSIX filesystem, and block devices from the same distributed object storage back-end (RADOS) is of major value to me.

What needs improvement?

Ceph falls a little short only in performance. It needs to scale a lot, and needs very fast, well-orchestrated and well-configured hardware for best performance. This is not a downside, though; it is a challenge. Ceph only improves on the hardware it is given.

For how long have I used the solution?

One to three years.

What do I think about the stability of the solution?

I have encountered issues with stability when replication factor was not 3, which is the default and recommended value. Go below 3 and problems will arise.
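
The reviewer's warning can be reasoned about through Ceph's size/min_size pool semantics: a pool keeps `size` replicas of each placement group and blocks I/O once fewer than `min_size` survive. A simplified model of those states (illustrative, not actual Ceph code):

```python
def pool_state(size: int, min_size: int, failed_replicas: int) -> str:
    """Classify a placement group by how many replicas survive,
    mirroring Ceph's size/min_size semantics in simplified form."""
    surviving = size - failed_replicas
    if surviving <= 0:
        return "data lost"      # every copy of the PG is gone
    if surviving < min_size:
        return "I/O blocked"    # below min_size: writes stop, data intact
    if surviving < size:
        return "degraded"       # still serving I/O while recovering
    return "healthy"

# Default size=3, min_size=2: even two failures leave the data intact.
print(pool_state(3, 2, 2))   # I/O blocked
# size=2, min_size=1: the same two failures destroy the data.
print(pool_state(2, 1, 2))   # data lost
```

With three replicas, losing two still leaves a copy to recover from; with fewer, the same failures cost data, which matches the reviewer's experience.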

What do I think about the scalability of the solution?

Ceph has no issues with scalability but needs proper planning regarding the hardware.

How are customer service and technical support?

Technical support from the mailing lists is very good but you always need to get your hands dirty. However, if you pay for a product like Red Hat Ceph Storage, support is a big advantage.

Which solution did I use previously and why did I switch?

I have used many solutions but not as extensively as Ceph. I switched to Ceph because of its architecture and because it is a one-stop shop when it comes to storage.

How was the initial setup?

Ceph is complex, as with all distributed systems. It has a long learning curve but it pays for itself after that.

What's my experience with pricing, setup cost, and licensing?

If you can afford a product like Red Hat Ceph Storage then go for it. If you cannot, then you need to test Ceph and get your hands dirty.

What other advice do I have?

I have been using Ceph since 2015, in both SOHO and bigger private cloud installations.

Ceph, as a distributed storage solution, is amazing and I can only rate it a 10 out of 10. However, being distributed and complex, Ceph needs engineers with a good understanding of its internals and Linux, for it to shine.

Overall, it is a great product.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user860841
Unix Solutions Manager at a tech vendor with 501-1,000 employees
Real User
Data redundancy means no data or service loss during disk/server failure

What is our primary use case?

We used Ceph as back-end storage for our internal cloud, based on OpenStack.

Our environment was a small open-source OpenStack cloud, on 10 Dell servers, each with two 1TB disks. We installed Ceph on each server to handle those disks and integrated OpenStack Cinder with Ceph to be used as cloud storage instances.
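
Assuming the default 3x replication (an assumption; the review does not state the replication factor), the usable capacity of the cluster described works out roughly as:

```python
# Back-of-envelope usable capacity for the cluster described above:
# 10 Dell servers, each with two 1 TB disks, assuming 3x replication.
raw_tb = 10 * 2 * 1          # total raw capacity in TB
replication = 3              # assumed default replication factor
usable_tb = raw_tb / replication
print(round(usable_tb, 2))   # 6.67
```

Roughly 6.7 TB usable from 20 TB raw, which is the price of surviving disk and server failures without data loss.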

How has it helped my organization?

Scale out storage and storage redundancy.

What is most valuable?

Data redundancy, since it can survive failures (disks/servers). We didn’t lose our data or have a service interruption during server/disk failures.

What needs improvement?

Rebalancing and recovery are a bit slow.

For how long have I used the solution?

One to three years.

What do I think about the stability of the solution?

I used it for about two years. We had no service interruption and no loss of data.

Which other solutions did I evaluate?

I have used (or know) the following distributed/shared file systems: Gluster, NFS, GFS2, and Ceph. For me, Ceph is one of the best.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
SS
IT Engineer at a tech services company with 51-200 employees
Consultant
Without any extra costs, I was able to provide a redundant environment

Pros and Cons

  • "We are using inexpensive internal disks with Ceph data redundancy, without spending extra money on external storage."
  • "Without any extra costs, I was able to provide a redundant environment."
  • "This product uses a lot of CPU and network bandwidth. It needs some deduplication features and to use delta for rebalancing."

What is our primary use case?

We are using it in a Proxmox Cluster.

How has it helped my organization?

Without any extra costs, I was able to provide a redundant environment. After I built the first cluster, other departments noticed the performance and redundancy, and they started building their own clusters. I helped them to design and build their clusters. Now, we have four Ceph clusters running: three in production and one in the test environment.

What is most valuable?

We are using inexpensive internal disks with Ceph data redundancy, without spending extra money on external storage.

What needs improvement?

This product uses a lot of CPU and network bandwidth. It needs some deduplication features and to use delta for rebalancing. 

After adding a new OSD, rebalancing is a huge issue if not properly configured, as it can cause serious performance problems.

For how long have I used the solution?

Less than one year.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Diego Woitasen
Senior Solutions Architect and co-founder at flugel.it
Real User
Saves money and we can scale the storage without limits

Pros and Cons

  • "The community support is very good."
  • "It has helped to save money and scale the storage without limits."
  • "Geo-replication needs improvement. It is a new feature, and not well supported yet."

What is our primary use case?

I use Ceph as OpenStack and Kubernetes storage back-end solution.

How has it helped my organization?

It has helped to save money and scale the storage without limits. 

We did not depend on expensive hardware to start using it and can upgrade it if our demand grows.

What is most valuable?

Its reliability. I have experienced failures and human mistakes, but Ceph was able to recover the data automatically with a special procedure.

What needs improvement?

Geo-replication needs improvement. It is a new feature, and not well supported yet. 

For how long have I used the solution?

Three to five years.

What do I think about the stability of the solution?

No stability issues.

What do I think about the scalability of the solution?

No scalability issues.

How are customer service and technical support?

The community support is very good. I have never used the paid support.

How was the initial setup?

The initial setup is very straightforward. We used the tool ceph-deploy, which is simple to use. The manual procedure is time consuming, but well-documented.

What's my experience with pricing, setup cost, and licensing?

No advice, as we never used the paid support.

Which other solutions did I evaluate?

We evaluated GlusterFS (POCs), but Ceph provides better block device storage.

What other advice do I have?

I would recommend the product.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
Flavio Carvalho
Senior Information Technology Specialist at a tech consulting company with 501-1,000 employees
MSP
We can reuse servers of any type, even legacy, and include them in deployment

Pros and Cons

  • "We have some legacy servers that can be associated with this structure. With Ceph, we can rearrange these machines and reuse our investment."
  • "radosgw and librados provide a simple integration with clone, snapshots, and other functions that aid in data integrity."
  • "In the deployment step, we need to create some config files to add Ceph functions in OpenStack modules (Nova, Cinder, Glance). It would be useful to have a tool that validates the format of the data in those files, before generating a deploy with failures."

What is our primary use case?

We’re using Ceph storage to provide allocation space to instances (VMs) in an OpenStack environment.

How has it helped my organization?

The product spawned a new vision of storage deployment, as well as a strong interest in reusing equipment and increasing ROI. We have some legacy servers that can be associated with this structure. With Ceph, we can rearrange these machines and reuse our investment.

What is most valuable?

  • We can reuse servers of any type and include them in the deployment of the solution.
  • radosgw and librados provide a simple integration with clone, snapshots, and other functions that aid in data integrity.

What needs improvement?

In the deployment step, we need to create some config files to add Ceph functions to OpenStack modules (Nova, Cinder, Glance). It would be useful to have a tool that validates the format of the data in those files before generating a deployment that fails.
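
A rough sketch of the kind of validator the reviewer is asking for: check an INI fragment for the keys the Cinder RBD back-end needs before deploying. The key names follow the OpenStack Cinder RBD driver, but treat the list and the section name as illustrative rather than exhaustive:

```python
# Hypothetical pre-deploy check: does a cinder.conf-style fragment
# define the RBD options Ceph integration needs? Key list is illustrative.
import configparser

REQUIRED = ("volume_driver", "rbd_pool", "rbd_user", "rbd_ceph_conf")

def missing_rbd_keys(text: str, section: str = "ceph") -> list:
    """Return the required RBD keys absent from the given section."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    if section not in cfg:
        return list(REQUIRED)
    return [k for k in REQUIRED if k not in cfg[section]]

sample = """
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
"""
print(missing_rbd_keys(sample))   # ['rbd_ceph_conf']
```

Running such a check before the deploy step would catch the missing key instead of producing a failed deployment.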

For how long have I used the solution?

One to three years.

What do I think about the stability of the solution?

No current issues. Almost all our difficulties were related to implementation. After that, everything ran well.

What do I think about the scalability of the solution?

Most times that I have implemented Ceph, I have used some type of deployment tool, like RDO (Red Hat Director). With these tools, I can make the environment scale in or out without issues. One point to watch is the journal and disk separation in the YAML file.

How are customer service and technical support?

It is possible that you only have support if you partner with a vendor like Red Hat. However, you can find many articles in forums or GitHub.

Which solution did I use previously and why did I switch?

We had no previous solution. My first contact with ephemeral storage was through Ceph.

How was the initial setup?

My first deployment was complex, connecting Ceph with all the OpenStack modules, but that was early on, when I was testing Ceph and doing installations manually with hard-coded configurations.

It is not a complex implementation, but you need to look into all the structural requirements and the OSD layout.

What's my experience with pricing, setup cost, and licensing?

Most of time, you can get Ceph with the OpenStack solution in a subscription as a bundle.

Which other solutions did I evaluate?

No.

What other advice do I have?

I rate it at nine out of 10. It is a product which is constantly undergoing improvements.

Disclosure: My company has a business relationship with this vendor other than being a customer: We are Red Hat global partners.
it_user854061
Enterprise Solutions Architect at a tech services company with 1,001-5,000 employees
Real User
Not mature yet, bugs and errors during install, but opens doors for completely open-source cloud

Pros and Cons

  • "Ceph was chosen to maintain exact performance and capacity characteristics for customer cloud."
  • "Ceph is not a mature product at this time. The guides are misleading and incomplete. You will meet all kinds of bugs and errors trying to install the system for the first time. It requires very experienced personnel to support the system, keep it in working condition, and install all the necessary packages."

What is our primary use case?

We use it as cloud storage, connected to the OpenNebula cloud system for one of our customers. The system includes 56 Supermicro nodes specially configured as a hyperconverged system; the same nodes are used for storage and virtualization.

How has it helped my organization?

It opens the door to a completely open-source cloud. No monthly charges, no revenue-share policy. It just works.

What is most valuable?

When going open-source, there is actually not much of a choice. Ceph was chosen to maintain exact performance and capacity characteristics for customer cloud.

What needs improvement?

Ceph is not a mature product at this time. The guides are misleading and incomplete. You will meet all kinds of bugs and errors trying to install the system for the first time. It requires very experienced personnel to support the system, keep it in working condition, and install all the necessary packages.

For how long have I used the solution?

One to three years.

What do I think about the stability of the solution?

The system needs some polishing to be stable enough for a production environment.

What do I think about the scalability of the solution?

Ceph scales almost without limit.

How are customer service and technical support?

We didn’t use Red Hat services, due to bad experience with them in the past. They usually play email ping pong, while you solve the problems yourself.

Which solution did I use previously and why did I switch?

None. This was a pilot project and it has worked out.

How was the initial setup?

It is complex. Before the system is built, all of your technical staff must research as much about Ceph as they can.

Which other solutions did I evaluate?

We tried Scality. It is the perfect solution, but was out of budget.

What other advice do I have?

Think twice.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user827883
Senior UNIX Systems Engineer with 51-200 employees
User
Most valuable features include replication and compression.

What is our primary use case?

Cloud. 

How has it helped my organization?

  • Speed of storage
  • Flexible

What is most valuable?

  • RADOS
  • Swift
  • S3
  • Replication
  • Compression

What needs improvement?

Please create a failback solution for OpenStack replication and maybe QoS to allow guaranteed IOPS.

For how long have I used the solution?

Less than one year.

What other advice do I have?

Everything is perfect.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user774150
Senior Software Engineer
Real User
A scale-out solution designed to be free of scale limitations common when using proprietary storage solutions

Pros and Cons

  • "Ceph’s ability to adapt to varying types of commodity hardware affords us substantial flexibility and future-proofing."
  • "Routing around slow hardware."

How has it helped my organization?

Ceph has allowed us to deploy block storage for our users in a reliable, performant, and easy-to-manage fashion, without vendor lock-in.

What is most valuable?

By being open source, Ceph is not tied to the whim or fortunes of any one vendor. The community of Ceph code contributors and admins is large and active. Ceph’s ability to adapt to varying types of commodity hardware affords us substantial flexibility and future-proofing.

What needs improvement?

Routing around slow hardware.

What do I think about the stability of the solution?

Ceph is more stable than many proprietary solutions.

What do I think about the scalability of the solution?

Ceph is a scale-out solution designed to be free of scale limitations common when using proprietary storage solutions. Ceph will continue to scale meeting our needs for years to come.

How are customer service and technical support?

Red Hat’s technical support is valuable, but with Ceph, one can often do well with community resources.

Which solution did I use previously and why did I switch?

I have used traditional storage on much smaller scales: ZFS, SVM, VxVM, and NetApp. They do not fit the use case and demands of a growing Cloud infrastructure.

How was the initial setup?

Initial setup is straightforward with automation tools.

What's my experience with pricing, setup cost, and licensing?

Not applicable.

Which other solutions did I evaluate?

Ceph is the de facto standard for mixed-modality cloud storage.

What other advice do I have?

Pick up a copy of this excellent book: Learning Ceph - Second Edition.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user623802
Senior Architect at a tech vendor with 10,001+ employees
Real User
We use this as an OpenStack storage back-end

How has it helped my organization?

We use this as an OpenStack storage back-end of Nova/Glance/Cinder.

What is most valuable?

  • High reliability with commodity hardware
  • There is no cost for software

What needs improvement?

I would like to see better performance and stability when Ceph is in recovery.

What do I think about the stability of the solution?

Latency increases abruptly when conducting recovery. This impacts the upper application.

What do I think about the scalability of the solution?

It is OK to add a node or a disk, but it may impact the latency of read/write of the application which is running.

How are customer service and technical support?

The level of technical support is acceptable.

How was the initial setup?

The initial setup was straightforward without much effort.

What's my experience with pricing, setup cost, and licensing?

People can try the vanilla Ceph, if they are confident with their technical skills.

Which other solutions did I evaluate?

We evaluated VMware vSAN.

What other advice do I have?

It is easy to set up.

Disclosure: My company has a business relationship with this vendor other than being a customer: We are partners with RHEL.
AB
Sr. Systems Engineer at a tech company with 10,001+ employees
Real User
Provides block storage and object storage from the same storage cluster.

Pros and Cons

  • "The ability to provide block storage and object storage from the same storage cluster is very valuable for us."
  • "Ceph does not deal very well with, or takes a long time to recover from, certain kinds of network failures and individual storage node failures."

How has it helped my organization?

Ceph has helped our organization to provide a Software Defined Storage solution in our private cloud.

What is most valuable?

The ability to provide block storage and object storage from the same storage cluster is very valuable for us.

We are using Ceph as back-end storage for our OpenStack cloud. Ceph provides:

  • Block storage for storing the OpenStack images or VM templates
  • Block storage for OpenStack Cinder volume service
  • Block storage for OpenStack Nova VM compute service boot volumes
  • Object storage for the OpenStack Swift service

Without Ceph, we would have ended up with at least two storage systems: one for block storage and another for providing the Swift object store.

The other big advantage is that Ceph is free software. Compared to traditional SAN-based storage, it is very economical.

What needs improvement?

Ceph does not deal very well with, or takes a long time to recover from, certain kinds of network failures and individual storage node failures.

I believe the community that supports Ceph is working on this. They will provide solutions to improve these issues in newer versions, like Jewel, and in the future with technologies like BlueStore and RDMA.

What do I think about the stability of the solution?

Stability in a normal operating environment is satisfactory. Improvements would come from better data-rebalancing algorithms when the storage cluster is expanded. Currently, cluster expansion is a user-impacting process.

What do I think about the scalability of the solution?

We have not noticed any issues with scalability. In fact, when more nodes/disks were added to the cluster, it improved performance due to its nature of being a native object store.

How are customer service and technical support?

We are using the open source version. However, there seem to be many vendors, in addition to RedHat, who sell or provide support for Ceph.

Which solution did I use previously and why did I switch?

We used traditional fiber based SAN storages before we started using Ceph. The main reasons for switching to Ceph were:

  • Ability to provide block as well as object storage
  • Open source system
  • Scalability: Performance actually improves as we scale the cluster bigger

How was the initial setup?

The initial setup required a lot of research and learning to understand Ceph storage's underlying technology. Once we had the right understanding and configurations, it was pretty straightforward.

However, this is not a traditional storage solution. It may not be straightforward for storage administrators, but easier for cloud administrators with good Unix/Linux knowledge.

The key things to consider while deploying Ceph, especially for block storage (also known as RBD) are:

  • Use a higher number of disks to get more IOPS. (Ceph is a copy-on-write storage, so usage is less of a worry than providing the right number of IOPS.)
  • Use SSD journal disks to improve write performance. (In fact, with the price of SSD drives coming down, use all SSD or NVME+SSD configurations - more IOPS makes a better solution.)
  • Use SSD for Ceph MONITOR nodes
  • Use networking speeds of at least 20 Gbits/sec or more since this is a network based storage on all clients as well as Ceph nodes. As you move to full SSD or NVME disks, the networking needs to match up.
  • Select the right CRUSH map and Placement Group numbers based on your storage pool size and node distribution in the data center.
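
The placement-group sizing in the last bullet is commonly estimated with the rule of thumb of roughly 100 PGs per OSD, divided by the pool's replica count and rounded up to a power of two. A sketch of that heuristic (the 100-PGs-per-OSD target is the conventional default, not a hard rule):

```python
# Rule-of-thumb PG count: (OSDs x target PGs per OSD) / replica count,
# rounded up to the next power of two. Conventional heuristic, not a
# substitute for the pool-by-pool guidance in the Ceph docs.
import math

def suggested_pg_count(num_osds: int, pool_size: int,
                       target_pgs_per_osd: int = 100) -> int:
    raw = num_osds * target_pgs_per_osd / pool_size
    return 2 ** math.ceil(math.log2(raw))

print(suggested_pg_count(60, 3))   # 2048
print(suggested_pg_count(10, 3))   # 512
```

Setting PG counts far above or below this range spreads data unevenly or overloads OSDs with peering work, which is why the bullet calls it out as a key deployment decision.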

What's my experience with pricing, setup cost, and licensing?

Pricing/licensing depends on what kind of internal knowledge or expertise exists in your organization about Ceph.

If you don't have the expertise, choose the right partner or vendor based on proven expertise by the vendor in large production environments.

Which other solutions did I evaluate?

We did not evaluate other storage solutions. We spent the time understanding Ceph better to provide a stable solution.

What other advice do I have?

Ceph is open source and there are large organizations running huge Ceph clusters which have published blogs on how they deployed Ceph.

Do your research based on the lessons learned from these users of Ceph to decide on which configuration and architecture to use for Ceph.

As organizations move to Linux container based technologies and container orchestration frameworks (especially Kubernetes), Ceph is still relevant as it provides integration into these future technologies to provide block storage for them as well.

It is ultimately all about IOPS. When a failure occurs, Ceph tries to rebalance data onto the surviving nodes, which can consume a lot of IOPS and affect client I/O. If there are not enough IOPS, or data rebalancing is slow, it can take a long time to rebalance. Some of this can be improved with faster networks and faster drives such as SSDs or flash (which people can implement right now on older versions of Ceph); other improvements will come from how Ceph writes data using BlueStore and replicates or rebalances data between OSD nodes using RDMA (which may become stable for users in newer versions).
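
The rebalancing cost described above can be put in rough numbers. A back-of-envelope, network-bound estimate (the node size, utilization, and network speed below are illustrative assumptions, and real recovery is also throttled by Ceph's recovery settings and disk IOPS):

```python
# Rough lower bound on rebalance time after losing a full node,
# assuming the network is the bottleneck. All inputs are illustrative.
def rebalance_hours(node_tb: float, utilization: float, net_gbps: float) -> float:
    data_bits = node_tb * utilization * 1e12 * 8   # data to re-replicate, in bits
    seconds = data_bits / (net_gbps * 1e9)         # time at full network speed
    return seconds / 3600

# Example: a 48 TB node at 70% utilization on a 20 Gbit/s network.
print(round(rebalance_hours(48, 0.7, 20), 1))   # 3.7
```

Even under this optimistic assumption, recovery takes hours, which is why faster networks and drives shorten the window during which the cluster runs degraded.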

Disclosure: I am a real user, and this review is based on my own experience and opinions.