Apache Spark vs Spark SQL comparison

Apache Spark: 2,430 views | 1,869 comparisons | 89% willing to recommend
Spark SQL: 1,520 views | 1,008 comparisons | 85% willing to recommend
Comparison Buyer's Guide
Executive Summary

We performed a comparison between Apache Spark and Spark SQL based on real PeerSpot user reviews.

Find out in this report how the two Hadoop solutions compare in terms of features, pricing, service and support, ease of deployment, and ROI.
To learn more, read our detailed Apache Spark vs. Spark SQL Report (Updated: May 2024).
772,649 professionals have used our research since 2012.
Featured Review
Quotes From Members
We asked business professionals to review the solutions they use.
Here are some excerpts of what they said:
Pros
"The solution has been very stable.""Spark helps us reduce startup time for our customers and gives a very high ROI in the medium term.""With Hadoop-related technologies, we can distribute the workload with multiple commodity hardware.""The scalability has been the most valuable aspect of the solution.""The product’s most valuable features are lazy evaluation and workload distribution.""The most valuable feature of Apache Spark is its ease of use.""Now, when we're tackling sentiment analysis using NLP technologies, we deal with unstructured data—customer chats, feedback on promotions or demos, and even media like images, audio, and video files. For processing such data, we rely on PySpark. Beneath the surface, Spark functions as a compute engine with in-memory processing capabilities, enhancing performance through features like broadcasting and caching. It's become a crucial tool, widely adopted by 90% of companies for a decade or more.""The features we find most valuable are the machine learning, data learning, and Spark Analytics."

More Apache Spark Pros →
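One reviewer above describes Spark as an in-memory compute engine that gains performance from broadcasting and caching. A minimal PySpark sketch of those two mechanisms follows; the Parquet paths and the user_id and country columns are illustrative assumptions, not details from the reviews.

    # Minimal sketch of caching and broadcast joins in PySpark.
    # Paths and column names below are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("cache-and-broadcast-sketch").getOrCreate()

    events = spark.read.parquet("/data/events")   # large fact table (hypothetical path)
    users = spark.read.parquet("/data/users")     # small dimension table (hypothetical path)

    # Keep a frequently reused DataFrame in executor memory instead of re-reading it.
    events.cache()

    # Ship the small table to every executor so the join avoids a full shuffle.
    joined = events.join(broadcast(users), on="user_id", how="inner")

    joined.groupBy("country").count().show()

Note that cache() is lazy: the data is only materialized the first time an action such as count() or show() runs against it.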

"The solution is easy to understand if you have basic knowledge of SQL commands.""One of Spark SQL's most beautiful features is running parallel queries to go through enormous data.""Spark SQL's efficiency in managing distributed data and its simplicity in expressing complex operations make it an essential part of our data pipeline.""The team members don't have to learn a new language and can implement complex tasks very easily using only SQL.""I find the Thrift connection valuable.""Overall the solution is excellent.""Certain data sets that are very large are very difficult to process with Pandas and Python libraries. Spark SQL has helped us a lot with that.""Offers a variety of methods to design queries and incorporates the regular SQL syntax within tasks."

More Spark SQL Pros →
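Several of the Spark SQL quotes above stress that team members can express complex work in regular SQL and let Spark run it in parallel. Below is a minimal sketch under that assumption; the orders dataset, its path, and its columns are hypothetical.

    # Minimal sketch: a windowed aggregation written entirely in regular SQL syntax.
    # Dataset path and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-only-sketch").getOrCreate()

    # Expose a Parquet dataset as a temporary view so the rest of the job is pure SQL.
    spark.read.parquet("/data/orders").createOrReplaceTempView("orders")

    top_orders = spark.sql("""
        SELECT customer_id,
               order_id,
               amount,
               RANK() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS rnk
        FROM orders
    """).where("rnk <= 3")   # keep the top three orders per customer

    top_orders.show()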

Cons
"Apart from the restrictions that come with its in-memory implementation. It has been improved significantly up to version 3.0, which is currently in use.""One limitation is that not all machine learning libraries and models support it.""If you have a Spark session in the background, sometimes it's very hard to kill these sessions because of D allocation.""The solution’s integration with other platforms should be improved.""At times during the deployment process, the tool goes down, making it look less robust. To take care of the issues in the deployment process, users need to do manual interventions occasionally.""I would like to see integration with data science platforms to optimize the processing capability for these tasks.""The product could improve the user interface and make it easier for new users.""The migration of data between different versions could be improved."

More Apache Spark Cons →

"I've experienced some incompatibilities when using the Delta Lake format.""Anything to improve the GUI would be helpful.""It takes a bit of time to get used to using this solution versus Pandas as it has a steep learning curve.""It would be useful if Spark SQL integrated with some data visualization tools.""There are many inconsistencies in syntax for the different querying tasks.""In terms of improvement, the only thing that could be enhanced is the stability aspect of Spark SQL.""The solution needs to include graphing capabilities. Including financial charts would help improve everything overall.""Being a new user, I am not able to find out how to partition it correctly. I probably need more information or knowledge. In other database solutions, you can easily optimize all partitions. I haven't found a quicker way to do that in Spark SQL. It would be good if you don't need a partition here, and the system automatically partitions in the best way. They can also provide more educational resources for new users."

More Spark SQL Cons →
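The last Spark SQL reviewer above asks how to partition data correctly. The sketch below shows the two standard mechanisms, repartitioning in memory and partitioning files on disk; the dataset path and the region and year columns are hypothetical.

    # Minimal partitioning sketch; paths and columns are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("partitioning-sketch").getOrCreate()
    sales = spark.read.parquet("/data/sales")

    # Repartition in memory by a column before wide operations such as joins.
    sales = sales.repartition(200, "region")

    # Partition on disk so later queries can prune whole directories.
    (sales.write
          .mode("overwrite")
          .partitionBy("region", "year")
          .parquet("/data/sales_partitioned"))

Spark does not choose partition columns automatically, as the reviewer hoped, although adaptive query execution (spark.sql.adaptive.enabled) does tune the number of shuffle partitions at runtime in recent releases.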

Pricing and Cost Advice
  • "Since we are using the Apache Spark version, not the data bricks version, it is an Apache license version, the support and resolution of the bug are actually late or delayed. The Apache license is free."
  • "Apache Spark is open-source. You have to pay only when you use any bundled product, such as Cloudera."
  • "We are using the free version of the solution."
  • "Apache Spark is not too cheap. You have to pay for hardware and Cloudera licenses. Of course, there is a solution with open source without Cloudera."
  • "Apache Spark is an expensive solution."
  • "Spark is an open-source solution, so there are no licensing costs."
  • "On the cloud model can be expensive as it requires substantial resources for implementation, covering on-premises hardware, memory, and licensing."
  • "It is an open-source solution, it is free of charge."
  • More Apache Spark Pricing and Cost Advice →

  • "The solution is open-sourced and free."
  • "There is no license or subscription for this solution."
  • "The solution is bundled with Palantir Foundry at no extra charge."
  • "The on-premise solution is quite expensive in terms of hardware, setting up the cluster, memory, hardware and resources. It depends on the use case, but in our case with a shared cluster which is quite large, it is quite expensive."
  • "We use the open-source version, so we do not have direct support from Apache."
  • "We don't have to pay for licenses with this solution because we are working in a small market, and we rely on open-source because the budgets of projects are very small."
  • More Spark SQL Pricing and Cost Advice →

    Questions from the Community
    Top Answer: We use Spark to process data from different data sources.
    Top Answer: In data analysis, you need to take real-time data from different data sources, process it in a subsecond, and do the transformation in a subsecond. (A streaming sketch follows these answers.)
    Top Answer: Spark SQL's efficiency in managing distributed data and its simplicity in expressing complex operations make it an essential part of our data pipeline.
    Top Answer: We don't have to pay for licenses with this solution because we are working in a small market, and we rely on open-source because the budgets of projects are very small.
    Top Answer: In terms of improvement, the only thing that could be enhanced is the stability aspect of Spark SQL. There could be additional features that I haven't explored, but the current solution for working…
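    The first two answers above describe pulling real-time data from different sources and transforming it within about a second. A minimal Structured Streaming sketch under those assumptions follows; the Kafka broker address, topic name, and one-second trigger are illustrative, and the Kafka source requires the spark-sql-kafka connector on the classpath.

        # Minimal Structured Streaming sketch; broker, topic, and trigger interval are illustrative.
        from pyspark.sql import SparkSession
        from pyspark.sql.functions import col, window

        spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

        # Read a live stream of events from a (hypothetical) Kafka topic.
        events = (spark.readStream
                       .format("kafka")
                       .option("kafka.bootstrap.servers", "broker:9092")
                       .option("subscribe", "clicks")
                       .load())

        # Sub-second micro-batches: decode the payload and count events per one-second window.
        counts = (events.selectExpr("CAST(value AS STRING) AS value", "timestamp")
                        .groupBy(window(col("timestamp"), "1 second"))
                        .count())

        query = (counts.writeStream
                       .outputMode("update")
                       .format("console")
                       .trigger(processingTime="1 second")
                       .start())
        query.awaitTermination()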
    Ranking
    Apache Spark: 1st out of 22 in Hadoop | Views: 2,430 | Comparisons: 1,869 | Reviews: 26 | Average Words per Review: 444 | Rating: 8.7
    Spark SQL: 4th out of 22 in Hadoop | Views: 1,520 | Comparisons: 1,008 | Reviews: 7 | Average Words per Review: 543 | Rating: 8.3
    Overview

    Spark provides programmers with an application programming interface centered on a data structure called the resilient distributed dataset (RDD), a read-only multiset of data items distributed over a cluster of machines, that is maintained in a fault-tolerant way. It was developed in response to limitations in the MapReduce cluster computing paradigm, which forces a particular linear dataflow structure on distributed programs: MapReduce programs read input data from disk, map a function across the data, reduce the results of the map, and store reduction results on disk. Spark's RDDs function as a working set for distributed programs that offers a (deliberately) restricted form of distributed shared memory.
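    A minimal PySpark sketch of the RDD working set described above; the log-file path is hypothetical. Unlike MapReduce, nothing is forced to disk between the map and reduce steps, and cache() keeps the working set in executor memory for reuse.

        # Minimal RDD sketch: an in-memory word count. The input path is hypothetical.
        from pyspark import SparkContext

        sc = SparkContext(appName="rdd-sketch")

        lines = sc.textFile("/data/logs/app.log")

        # Transformations are lazy; the lineage they record is what makes RDDs fault-tolerant.
        counts = (lines.flatMap(lambda line: line.split())
                       .map(lambda word: (word, 1))
                       .reduceByKey(lambda a, b: a + b))

        counts.cache()            # keep the working set in memory for repeated use
        print(counts.take(5))     # the action triggers the actual computation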

    Spark SQL is a Spark module for structured data processing. Unlike the basic Spark RDD API, the interfaces provided by Spark SQL give Spark more information about the structure of both the data and the computation being performed. There are several ways to interact with Spark SQL, including SQL and the Dataset API. When computing a result, the same execution engine is used, independent of which API/language you are using to express the computation. This unification means that developers can easily switch back and forth between different APIs based on which provides the most natural way to express a given transformation.
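    A minimal sketch of the unification described above: the same query expressed through SQL and through the DataFrame API runs on the same execution engine, which explain() makes visible. The tiny in-memory dataset is illustrative.

        # Minimal sketch: one query, two front ends, one execution engine.
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("unified-engine-sketch").getOrCreate()

        people = spark.createDataFrame(
            [("Alice", 34), ("Bob", 45), ("Carol", 29)], ["name", "age"])
        people.createOrReplaceTempView("people")

        via_sql = spark.sql("SELECT name FROM people WHERE age > 30")    # SQL front end
        via_api = people.filter(people.age > 30).select("name")          # DataFrame front end

        # Both queries go through the same optimizer and produce equivalent physical plans.
        via_sql.explain()
        via_api.explain()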
    Sample Customers
    NASA JPL, UC Berkeley AMPLab, Amazon, eBay, Yahoo!, UC Santa Cruz, TripAdvisor, Taboola, Agile Lab, Art.com, Baidu, Alibaba Taobao, EURECOM, Hitachi Solutions
    UC Berkeley AMPLab, Amazon, Alibaba Taobao, Kenshoo, Hitachi Solutions
    Top Industries
    Apache Spark reviewers: Computer Software Company 33%, Financial Services Firm 12%, University 9%, Marketing Services Firm 6%
    Apache Spark visitors reading reviews: Financial Services Firm 25%, Computer Software Company 13%, Manufacturing Company 7%, Comms Service Provider 5%
    Spark SQL visitors reading reviews: Financial Services Firm 21%, Computer Software Company 15%, University 8%, Construction Company 5%
    Company Size
    Apache Spark reviewers: Small Business 42%, Midsize Enterprise 16%, Large Enterprise 42%
    Apache Spark visitors reading reviews: Small Business 17%, Midsize Enterprise 12%, Large Enterprise 71%
    Spark SQL reviewers: Small Business 36%, Midsize Enterprise 36%, Large Enterprise 29%
    Spark SQL visitors reading reviews: Small Business 14%, Midsize Enterprise 14%, Large Enterprise 73%

    Apache Spark is ranked 1st in Hadoop with 60 reviews while Spark SQL is ranked 4th in Hadoop with 14 reviews. Apache Spark is rated 8.4, while Spark SQL is rated 7.8. The top reviewer of Apache Spark writes "Reliable, able to expand, and handle large amounts of data well". On the other hand, the top reviewer of Spark SQL writes "Offers the flexibility to handle large-scale data processing". Apache Spark is most compared with Spring Boot, AWS Batch, SAP HANA, Cloudera Distribution for Hadoop and Azure Stream Analytics, whereas Spark SQL is most compared with IBM Db2 Big SQL, Netezza Analytics, SAP HANA and HPE Ezmeral Data Fabric. See our Apache Spark vs. Spark SQL report.

    See our list of best Hadoop vendors.

    We monitor all Hadoop reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.