What is our primary use case?
Our primary use case for Apache Spark was a retail price prediction project, in which we used retail pricing data to build predictive models. We started by analyzing the prices and preparing a dataset for visualization, then used Tableau to create dashboards and graphical reports that showcased the predictive modeling results.
The entire project ran on Apache Spark.
How has it helped my organization?
Processing time is much improved compared with the data warehouse solution we were using previously.
What is most valuable?
The most valuable features are the storage, memory, and processing engines.
What needs improvement?
I would like to see integration with data science platforms to optimize the processing capability for these tasks.
For how long have I used the solution?
I have been using Apache Spark for the past year.
How are customer service and technical support?
We have not been in contact with technical support.
How was the initial setup?
The initial setup is straightforward and took us around one week. After that, we gathered requirements and created the project flow and design, which took another three to four weeks. In total, the project required between four and five weeks to set up.
What other advice do I have?
I would rate this solution an eight out of ten.