- Density
- Cost
- Feature set
- Manageability
The low cost and high density of the solution have allowed us to place more compute assets per data center rack and increase our virtual machine count. Our customers rent VMs and compute capacity from us via a traditional IaaS model; the more VMs and the more compute per rack, the better.
I would like to see a broader range of chassis networking options. Cisco was an option at one time and then was dropped; I am not sure whether it is available again.
I have used it for 3+ years.
I have not encountered any stability issues. Our only issues with the solution have revolved around interoperability between the networking firmware and the storage array firmware.
I have not encountered any scalability issues.
Technical support is 10/10.
We were using 1U rack-mount servers and switched for density reasons at the time. Today, however, 1U servers can accommodate larger amounts of memory and 22-core processors, so additional data center deployments have moved back to 1U designs.
Initial setup was straightforward. The chassis is easy to manage, and its intuitive interfaces allowed for rapid deployment.
Blade chassis can be found very cheaply on the used market. If a company already has the M1000e chassis and support under contract, I would advise buying additional chassis on the refurbished and second-hand market. If that is not an option, Dell pricing starts high and ends up low after negotiation; they seem to have a large amount of room to move from the initial quoted price.
We looked at options from Cisco, but the pricing was too high for half the density.
It’s a solid solution and I recommend it. But in today’s always-on environments, with virtual deployments in redundant designs, used hardware is not a bad option. We have quotes on the table for a fully populated chassis with 16 blade servers and 4 MXL switches for around $40,000, which works out to roughly $2,500 per blade, compared to about $9,000 for a single new blade. Even though today’s blades offer higher density and newer processors, 1-2 year-old equipment is still a valid solution for our needs.