What is our primary use case?
I mainly used it for predictive modeling projects, such as customer-churn and HR-attrition prediction. The data sources are mainly SQL databases or CSV files.
I ran the analyses on a regular laptop with no computational server behind it, which may limit the program's capacity to handle very large databases or files.
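The tool itself is visual, but the churn-prediction workflow described above can be sketched in code for readers who want to see the equivalent steps. This is an illustrative sketch only, using scikit-learn on synthetic data; the column names (`tenure_months`, `monthly_charges`, `churned`) are hypothetical, not from any real client dataset.

```python
# Illustrative churn-prediction sketch (not the reviewed tool's internals):
# train a classifier, then score customers by churn risk so that the
# highest-risk ones can be contacted proactively, as described in the review.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic stand-in for a customer CSV; all column names are hypothetical.
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 72, n),
    "monthly_charges": rng.uniform(20, 120, n),
    "support_tickets": rng.poisson(1.5, n),
})
# Synthetic label: short-tenure, high-charge customers churn more often.
logit = 0.05 * df["tenure_months"] - 0.02 * df["monthly_charges"]
df["churned"] = rng.random(n) < 1 / (1 + np.exp(logit))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="churned"), df["churned"],
    test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Churn-risk score per customer in the hold-out set.
risk = model.predict_proba(X_test)[:, 1]
print(f"Hold-out AUC: {roc_auc_score(y_test, risk):.2f}")
```

In practice, the ranked `risk` scores are the actionable output: sorting customers by this score gives the retention team its proactive call list.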
How has it helped my organization?
The clients I performed the analyses for were all very pleased with the results. For churn prediction, one company proactively started contacting clients at high risk of churning, which drastically reduced churn rates.
For organizations with a small team of data analysts or data scientists, it is an easy way to become familiar with predictive modeling, and it makes it possible to hand projects over to colleagues without extensive documentation.
What is most valuable?
- The very easy-to-use visual interface
- Help functions and clear explanations of the functionalities and the algorithms used
- The data-wrangling and data-manipulation functionalities are certainly sufficient, as are the looping capabilities, which help you automate parts of the analysis
For inexperienced analysts or data scientists, it is an easy tool for taking your first steps in modeling and analytics.
What needs improvement?
The visualization functionalities are weak and cannot compare to, for instance, what is possible in R.
The program is not fit for handling very large files or databases (greater than 1 GB); it gets too slow and tends to crash.
For how long have I used the solution?
Less than one year.
What other advice do I have?
I used it quite intensively for 10 months, long enough to become familiar with it, follow training, use it in several projects, and ask questions on the user forum.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Apr 04 2018