We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
"The technical evaluation is very good."
"The most valuable features are the counter features and the NoSQL schema. It also has good scalability. You can scale Cassandra to any finite level."
"The solution's database capabilities are very good."
"The time series data was one of the best features along with auto publishing."
"MemSQL supports the MySQL protocol, and many functions are similar, so the learning curve is very short."
"The interface is not user-friendly."
"Fine-tuning was a bit of a challenge."
"The disk space is lacking. You need to free it up as you are working."
"The solution doesn't have joins between tables so you need other tools for that."
"There should be more pipelines available because I think that if MemSQL can connect to other services, that would be great."
"I would advise users to try the free 128GB version."
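Several excerpts above note that Cassandra has no joins between tables, so applications typically denormalize or join result sets client-side. A minimal sketch of a client-side hash join over two already-fetched result sets (the table names, fields, and rows here are hypothetical, not from any reviewer's setup):

```python
# Client-side hash join of two result sets, as an application would do
# after fetching each table separately from Cassandra.

def hash_join(left_rows, right_rows, key):
    """Join two lists of dicts on a shared key, hash-join style."""
    index = {}
    for row in right_rows:
        index.setdefault(row[key], []).append(row)
    joined = []
    for row in left_rows:
        for match in index.get(row[key], []):
            joined.append({**row, **match})
    return joined

# Rows as if fetched with two separate SELECTs (hypothetical data):
users = [{"user_id": 1, "name": "Ada"}, {"user_id": 2, "name": "Lin"}]
orders = [{"user_id": 1, "total": 30}, {"user_id": 1, "total": 12}]

result = hash_join(users, orders, key="user_id")
# Ada appears once per order; Lin has no orders, so she is dropped
# (inner-join semantics).
```

In practice Cassandra data models avoid this pattern by denormalizing up front, but when a join is unavoidable, this is roughly what the "other tools" end up doing.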
— Morten Calisch, Special Adviser Strategy at a university
MemSQL is The No Limits Database™, powering modern applications and analytical systems with a cloud-native, massively scalable architecture for maximum ingest and query performance with the highest concurrency. MemSQL envisions a world where every business can make decisions in real time and every experience is optimized through data. Global enterprises use the MemSQL distributed database to easily ingest, process, analyze, and act on data in order to thrive in today’s insight-driven economy. MemSQL is optimized to run on any public cloud or on-premises with commodity hardware. To learn more, visit our Product page - https://msql.co/2xUDAqL. To use MemSQL, get started with a free trial - https://msql.co/2NZOAy9.
Headquartered in San Francisco, CA, with offices in Seattle, WA, and Portland, OR, MemSQL has raised $110M from top investors including GV, Accel Partners, and Khosla Ventures, among others. MemSQL is trusted by customers including Uber, Akamai, Dell EMC, Samsung, Comcast, Kellogg, and more.
If you want to work at a company that celebrates diversity, innovation, leadership, and creativity every day, check out our openings on our Careers page - https://msql.co/2RoTZgn.
Cassandra is ranked 3rd in NoSQL Databases with 4 reviews while SingleStore DB is ranked 10th in Relational Databases with 1 review. Cassandra is rated 8.6, while SingleStore DB is rated 10.0. The top reviewer of Cassandra writes "Great time series data feature but it requires third parties to join tables". On the other hand, the top reviewer of SingleStore DB writes "MySQL Big Brother with built-in data pipeline, high concurrency, and blazing fast analytical queries". Cassandra is most compared with InfluxDB, Couchbase, Accumulo, Cloudera Distribution for Hadoop and Scylla, whereas SingleStore DB is most compared with CockroachDB, MySQL, SQL Server, Oracle Database In-Memory and Apache HBase.
We monitor all NoSQL Databases reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.
I haven't used SQream personally. However, if you are only considering GPU-based RDBMSs, please check the following:
https://hackernoon.com/which-gpu-database-is-right-for-me-6ceef6a17505
SQreamDB is a GPU database. It is not suitable for real-time OLTP, of course.
Cassandra is best suited for OLTP use cases, when you need a scalable database (instead of SQL Server or Postgres).
SQream is a GPU database suited for OLAP purposes. It is the best fit for a very large data warehouse with very large queries that need massively parallel execution, since GPUs excel at massively parallel workloads.
SQream is also quite cheap, since you need only one server with a GPU card; the better the GPU card, the more parallel capacity you get. It is only for a very large data warehouse, not for small ones.
Your best option for 40+ TB is Apache Spark, Drill, and the Hadoop stack, in the cloud.
Use the public cloud provider's elastic store (S3, Azure Blob Storage, Google Cloud Storage) and then stand up Apache Spark on a cluster sized to run your queries within 20 minutes. Based on my experience (Azure Blob Storage, Databricks, PySpark), you may need around 500 32 GB nodes to read 40 TB of data.
Costs can be contained by running your own clusters, but Databricks will manage clusters for you.
I would recommend converting your 40 TB data store to the Databricks Delta format after an initial parse.
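The 500-node figure above is an empirical estimate, but the back-of-envelope arithmetic behind it is easy to sketch (the decimal 1000 GB/TB rounding and the data-to-memory interpretation are my assumptions, not figures from the original answer):

```python
# Back-of-envelope sizing for scanning 40 TB across 500 nodes
# with 32 GB of RAM each (figures from the answer above).
data_tb = 40
nodes = 500
node_ram_gb = 32

data_gb = data_tb * 1000                      # 40,000 GB (decimal TB assumed)
aggregate_ram_gb = nodes * node_ram_gb        # 16,000 GB of cluster memory
gb_per_node = data_gb / nodes                 # input handled per node
data_to_memory = data_gb / aggregate_ram_gb   # >1 means some spill to disk

print(gb_per_node, data_to_memory)  # 80.0 2.5
```

Each node handles about 80 GB of input against 32 GB of RAM, a 2.5x data-to-memory ratio, which is workable for a scan-heavy Spark job but explains why fewer or smaller nodes would push run times past the 20-minute target.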