Please share with the community what you think needs improvement with Spark SQL.
What are its weaknesses? What would you like to see changed in a future version?
I would like the ability to process data without the overhead — to use the same API to process terabytes of data and still handle a single gigabyte efficiently.
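For what it's worth, some of that small-data overhead can already be reduced through configuration today. A minimal sketch of a `spark-defaults.conf` tuned for gigabyte-scale workloads (the specific values here are illustrative assumptions, not recommendations from Spark itself):

```properties
# Run locally instead of on a cluster; avoids scheduler and network overhead
spark.master                     local[*]

# Default of 200 shuffle partitions is sized for large clusters;
# a small count suits a single-GB dataset (value is an assumption)
spark.sql.shuffle.partitions     8

# Let adaptive query execution coalesce partitions at runtime
spark.sql.adaptive.enabled       true
```

This doesn't remove the underlying wish — a single API that scales down as gracefully as it scales up — but it narrows the gap for small jobs.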
Anything that improves the GUI would be helpful. We have experienced a number of issues, though none in the production environment.
The service is complex because it combines many different technologies. The solution should also include graphing capabilities; adding financial charts in particular would improve it overall.
In the next release, perhaps visualizations for some command-line features could be added.