Compare Apache Spark vs. Spark SQL

Apache Spark is ranked 1st in Hadoop with 11 reviews while Spark SQL is ranked 7th in Hadoop with 2 reviews. Apache Spark is rated 8.0, while Spark SQL is rated 7.6. The top reviewer of Apache Spark writes "Good Streaming features enable to enter data and analysis within Spark Stream". On the other hand, the top reviewer of Spark SQL writes "An excellent solution that continues to mature but needs graphing capabilities". Apache Spark is most compared with Spring Boot, Azure Stream Analytics and AWS Lambda, whereas Spark SQL is most compared with Informatica Big Data Parser, Apache Spark and AtScale Adaptive Analytics (A3).
Find out what your peers are saying about Apache, Cloudera, Hortonworks and others in Hadoop. Updated: February 2020.
398,050 professionals have used our research since 2012.
Quotes From Members

We asked business professionals to review the solutions they use. Here are some excerpts of what they said:

Pros
Apache Spark:
- The processing time is very much improved over the data warehouse solution that we were using.
- The main feature that we find valuable is that it is very fast.
- The features we find most valuable are the machine learning, data learning, and Spark Analytics.
- I feel the streaming is its best feature.
- The solution is very stable.
- The most valuable feature of this solution is its capacity for processing large amounts of data.
- I found the solution stable. We haven't had any problems with it.
- The scalability has been the most valuable aspect of the solution.


Spark SQL:
- Overall the solution is excellent.
- The stability was fine. It behaved as expected.


Cons
Apache Spark:
- I would like to see integration with data science platforms to optimize the processing capability for these tasks.
- We use big data manager but we cannot use it as conditional data, so whenever we're trying to fetch the data, it takes a bit of time.
- We've had problems using a Python process to try to access something in a large volume of data. It crashes if somebody gives me the wrong code because it cannot handle a large volume of data.
- When you want to extract data from your HDFS and other sources, it is kind of tricky because you have to connect with those sources.
- The solution needs to optimize shuffling between workers.
- When you first start using this solution, it is common to run into memory errors when you are dealing with large amounts of data.
- It needs a new interface and a better way to get some data. In terms of writing our scripts, some processes could be faster.
- The management tools could use improvement. Some of the debugging tools need some work as well. They need to be more descriptive.


Spark SQL:
- The solution needs to include graphing capabilities. Including financial charts would help improve everything overall.
- In the next release, maybe the visualization of some command-line features could be added.


Ranking

Apache Spark
1st out of 24 in Hadoop
Views: 10,923
Comparisons: 9,163
Reviews: 10
Average Words per Review: 309
Avg. Rating: 8.0

Spark SQL
7th out of 24 in Hadoop
Views: 627
Comparisons: 502
Reviews: 1
Average Words per Review: 220
Avg. Rating: 8.0
Overview

Spark provides programmers with an application programming interface centered on a data structure called the resilient distributed dataset (RDD), a read-only multiset of data items distributed over a cluster of machines that is maintained in a fault-tolerant way. It was developed in response to limitations in the MapReduce cluster computing paradigm, which forces a particular linear dataflow structure on distributed programs: MapReduce programs read input data from disk, map a function across the data, reduce the results of the map, and store the reduction results on disk. Spark's RDDs function as a working set for distributed programs that offers a (deliberately) restricted form of distributed shared memory.
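The contrast above can be sketched in plain Python (this is an illustrative analogy, not Spark code): a MapReduce-style pipeline runs one linear map-then-reduce pass, while an RDD-like in-memory working set lets several computations reuse the same intermediate data without rereading the input.

```python
# Illustrative sketch, not real Spark code: linear MapReduce dataflow
# versus reuse of an in-memory working set (the RDD idea).
from functools import reduce

records = [1, 2, 3, 4, 5]  # stand-in for input data read from disk

# MapReduce-style linear dataflow: map, then reduce; a second
# computation would have to re-read the input and map it again.
mapped = list(map(lambda x: x * x, records))   # "map" phase
total = reduce(lambda a, b: a + b, mapped)     # "reduce" phase

# RDD-style reuse: the mapped working set stays in memory, so further
# computations share it instead of recomputing from the source.
maximum = max(mapped)
count = len(mapped)

print(total, maximum, count)  # 55 25 5
```

In real Spark the reuse is made explicit with `rdd.cache()` or `rdd.persist()`, which keeps the distributed dataset in cluster memory across actions.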

Spark SQL is a Spark module for structured data processing. Unlike the basic Spark RDD API, the interfaces provided by Spark SQL give Spark more information about the structure of both the data and the computation being performed. There are several ways to interact with Spark SQL, including SQL and the Dataset API. When computing a result, the same execution engine is used, independent of which API/language you use to express the computation. This unification means that developers can easily switch back and forth between different APIs based on which provides the most natural way to express a given transformation.
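The unification idea can be modeled in plain Python (an illustrative analogy, not Spark itself — the names `engine`, `run_sql`, `Dataset`, and `where_gt` are all invented for this sketch): two different front-ends, a SQL-like string and a fluent method API, both hand the work to one shared engine and therefore give identical results, mirroring how `spark.sql("SELECT ...")` and DataFrame method calls run through the same execution engine in Spark SQL.

```python
# Illustrative sketch, not real Spark code: two APIs, one engine.
rows = [{"name": "a", "age": 30}, {"name": "b", "age": 20}]

def engine(data, column, threshold):
    """The single shared 'execution engine': keep rows where column > threshold."""
    return [r for r in data if r[column] > threshold]

def run_sql(data, query):
    """SQL-like front-end: parses queries of the toy form 'WHERE <col> > <n>'."""
    _, column, _, value = query.split()
    return engine(data, column, int(value))

class Dataset:
    """Fluent-API front-end over the very same engine."""
    def __init__(self, data):
        self.data = data

    def where_gt(self, column, value):
        return Dataset(engine(self.data, column, value))

# Both expressions of the computation produce identical results,
# because a single engine does the actual work.
via_sql = run_sql(rows, "WHERE age > 25")
via_api = Dataset(rows).where_gt("age", 25).data
print(via_sql == via_api)  # True
```

In actual PySpark the two equivalent forms would be `spark.sql("SELECT * FROM people WHERE age > 25")` and `df.filter(df.age > 25)`.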
Sample Customers
Apache Spark: NASA JPL, UC Berkeley AMPLab, Amazon, eBay, Yahoo!, UC Santa Cruz, TripAdvisor, Taboola, Agile Lab, Art.com, Baidu, Alibaba Taobao, EURECOM, Hitachi Solutions
Spark SQL: Information Not Available
Top Industries
Apache Spark
REVIEWERS
Software R&D Company: 29%
Financial Services Firm: 29%
Non Profit: 14%
Marketing Services Firm: 14%
VISITORS READING REVIEWS
Software R&D Company: 32%
Comms Service Provider: 12%
Media Company: 10%
Financial Services Firm: 9%

Spark SQL
No Data Available
We monitor all Hadoop reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.