Most Helpful Review
We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
The processing time is very much improved over the data warehouse solution that we were using.
The main feature that we find valuable is that it is very fast.
The features we find most valuable are the machine learning, data learning, and Spark Analytics.
I feel the streaming is its best feature.
The solution is very stable.
The most valuable feature of this solution is its capacity for processing large amounts of data.
I found the solution stable. We haven't had any problems with it.
The scalability has been the most valuable aspect of the solution.
The most valuable feature is that it scans the cloud system and, if there are any security anomalies, it triggers an email.
The most valuable feature of this solution is the API Gateway.
The ability to scale up and down very quickly helps because we can maintain our system performance and business at a low cost.
Provides a good, easy path from when you're using an AWS cluster.
I would like to see integration with data science platforms to optimize the processing capability for these tasks.
We use big data manager but we cannot use it as conditional data so whenever we're trying to fetch the data, it takes a bit of time.
We've had problems using a Python process to access a large volume of data. It crashes if somebody gives it the wrong code, because it cannot handle a large volume of data.
When you want to extract data from your HDFS and other sources then it is kind of tricky because you have to connect with those sources.
The solution needs to optimize shuffling between workers.
When you first start using this solution, it is common to run into memory errors when you are dealing with large amounts of data.
It needs a new interface and a better way to get some data. In terms of writing our scripts, some processes could be faster.
The management tools could use improvement. Some of the debugging tools need some work as well. They need to be more descriptive.
The running time of AWS Lambda is fine. It allows around five minutes, but it would be great if that time could be extended.
The security needs to be improved.
Lambda functions have cold starts that can cause some delay.
I would like to see some better integration with other providers, like Cohesity, Druva, and others. I also think the Lambda interface could be better.
The setup was pretty complex because there were many steps. For me, it was complex because I was somewhat new at it. It could be easier for someone who has done it a bunch of times. I just found that it was a very dense user experience. There's a lot going on during setup.
Spark provides programmers with an application programming interface centered on a data structure called the resilient distributed dataset (RDD), a read-only multiset of data items distributed over a cluster of machines that is maintained in a fault-tolerant way. It was developed in response to limitations in the MapReduce cluster computing paradigm, which forces a particular linear dataflow structure on distributed programs: MapReduce programs read input data from disk, map a function across the data, reduce the results of the map, and store the reduction results on disk. Spark's RDDs function as a working set for distributed programs that offers a (deliberately) restricted form of distributed shared memory.
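The linear map-then-reduce dataflow described above can be sketched in plain Python with a small word-count example. This is illustrative only, not Spark or Hadoop API code, and the input lines are made up for the example:

```python
# Plain-Python sketch of the linear map -> reduce dataflow that
# MapReduce imposes: each stage consumes the previous stage's
# entire output before the next stage begins.
lines = ["spark makes rdds", "rdds are fault tolerant", "spark is fast"]

# "Map" stage: emit a (word, 1) pair for every word in every line.
pairs = [(word, 1) for line in lines for word in line.split()]

# "Reduce" stage: sum the counts for each key (word).
def reduce_by_key(pairs):
    counts = {}
    for key, value in pairs:
        counts[key] = counts.get(key, 0) + value
    return counts

counts = reduce_by_key(pairs)
print(counts["spark"])  # 2
print(counts["rdds"])   # 2
```

In actual Spark, the same pipeline would be written as chained RDD transformations (e.g. `flatMap`, `map`, `reduceByKey`), with the key difference that the intermediate RDDs can be kept in memory across stages instead of being written to disk between each map and reduce.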
AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time you consume - there is no charge when your code is not running. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration. AWS Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging. All you need to do is supply your code in one of the languages that AWS Lambda supports (currently Node.js, Java, C# and Python).
You can use AWS Lambda to run your code in response to events, such as changes to data in an Amazon S3 bucket or an Amazon DynamoDB table; to run your code in response to HTTP requests using Amazon API Gateway; or to invoke your code using API calls made via AWS SDKs. With these capabilities, you can use Lambda to easily build data processing triggers for AWS services like Amazon S3 and Amazon DynamoDB, process streaming data stored in Amazon Kinesis, or create your own back end that operates at AWS scale, performance, and security.
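As a minimal sketch of the S3-trigger pattern described above, here is a Python Lambda handler that extracts the bucket and key from an S3 event notification. The bucket and object names are hypothetical; the `Records`/`s3` fields follow Amazon's S3 event notification format:

```python
import json

def lambda_handler(event, context):
    """Minimal sketch of an AWS Lambda handler for an S3 event.

    Lambda invokes this function with the S3 event notification as
    `event`; the bucket/key fields below follow that format.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # A real function would fetch and process the object here
        # (e.g. via boto3); this sketch just records what arrived.
        processed.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(processed)}

# Local usage with a synthetic S3 event (hypothetical names):
event = {"Records": [{"s3": {"bucket": {"name": "my-bucket"},
                             "object": {"key": "data/input.csv"}}}]}
print(lambda_handler(event, None)["body"])  # ["s3://my-bucket/data/input.csv"]
```

Because the handler is an ordinary function, it can be exercised locally with a synthetic event, as shown, before being deployed behind a real S3 trigger.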
Learn more about Apache Spark
Learn more about AWS Lambda
Top users of Apache Spark: NASA JPL, UC Berkeley AMPLab, Amazon, eBay, Yahoo!, UC Santa Cruz, TripAdvisor, Taboola, Agile Lab, Art.com, Baidu, Alibaba Taobao, EURECOM, Hitachi Solutions.
Top users of AWS Lambda: Netflix.