
Apache Hadoop Overview

Apache Hadoop is the #6 ranked solution in our list of top Data Warehouse tools. It is most often compared to Microsoft Azure Synapse Analytics.

What is Apache Hadoop?
The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing. The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures.
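As a rough illustration of the "simple programming models" mentioned above, the map/shuffle/reduce phases of the classic word-count job can be sketched in plain Python (a single-machine stand-in for a real Hadoop MapReduce job, not Hadoop's actual API):

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit (word, 1) pairs from each input line
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word
    return {word: sum(values) for word, values in groups.items()}

lines = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle(map_phase(lines)))
```

On a cluster, the map and reduce phases run in parallel across many nodes, and the framework handles the shuffle, scheduling, and the failure detection described above.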
Buyer's Guide

Download the Data Warehouse Buyer's Guide including reviews and more. Updated: October 2021

Apache Hadoop Customers
Amazon, Adobe, eBay, Facebook, Google, Hulu, IBM, LinkedIn, Microsoft, Spotify, AOL, Twitter, University of Maryland, Yahoo!, Cornell University Web Lab

Archived Apache Hadoop Reviews (more than two years old)

LD
Data Scientist at a tech vendor with 501-1,000 employees
Real User
Good standard features, but a small local-machine version would be useful

Pros and Cons

  • "What comes with the standard setup is what we mostly use, but Ambari is the most important."
  • "In the next release, I would like to see Hive more responsive for smaller queries and to reduce the latency."

What is our primary use case?

The primary use case of this solution is data engineering and data files.

The deployment model we are using is private, on-premises.

What is most valuable?

We don't use many of the Hadoop features, like Pig or Sqoop, but what I like most is using the Ambari feature. You have to use Ambari; otherwise, it is very difficult to configure.

What comes with the standard setup is what we mostly use, but Ambari is the most important.

What needs improvement?

Hadoop itself is quite complex, especially if you want it running on a single machine, so to get it set up is a big mission.

It seems that Hadoop is on its way out and Spark is the way to go. You can run Spark on a single machine and it's easier to set up.

In the next release, I would like to see Hive more responsive for smaller queries and to reduce the latency. I don't think that this is viable, but if it is possible, I would like lower latency on smaller queries for analysis and analytics.

I would like a smaller version that can be run on a local machine. There are installations that do that, but they are quite difficult, so a smaller version that is easy to install and explore would be an improvement.

For how long have I used the solution?

I have been using this solution for one year.

What do I think about the stability of the solution?

This solution is stable, but sometimes starting up can be quite a mission. With a full, proper setup it's fine, but it's a lot of work to look after, and to start up and shut down.

What do I think about the scalability of the solution?

This solution is scalable, and I can scale it almost indefinitely.

We have approximately two thousand users, half of the users are using it directly and another thousand using the products and systems running on it. Fifty are data engineers, fifteen direct appliances, and the rest are business users.

How are customer service and technical support?

There are several forums on the web, and Google search works fine. There is a lot of information available and it often works.

They also have good support in regards to the implementation.

I am satisfied with the support. Generally, there is good support.

Which solution did I use previously and why did I switch?

We used the more traditional database solutions such as SAP IQ and data marts, but now it's changing more towards data science and Big Data.

We are a smaller infrastructure, so that's how we are set up.

How was the initial setup?

The initial setup is quite complex if you have to set it up yourself. Ambari makes it much easier, but on the cloud or local machines, it's quite a process.

It took at least a day to set it up.

What about the implementation team?

I did not use a vendor. I implemented it myself on the cloud with my local machine.

Which other solutions did I evaluate?

There was an evaluation, but the decision was to implement a data lake with the Hortonworks Data Platform.

What other advice do I have?

It's good for what it is meant to do, a lot of big data, but it's not as good for low-latency applications.

If you have to perform quick queries for analysis or analytics, it can be frustrating.

It can be useful for what it was intended to be used for.

I would rate this solution a seven out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
MB
IT Expert at a comms service provider with 1,001-5,000 employees
Real User
An inexpensive and flexible suite that helps users integrate varied legacy systems

Pros and Cons

  • "The best thing about this solution is that it is very powerful and very cheap."
  • "The upgrade path should be improved because it is not as easy as it should be."

What is our primary use case?

We primarily use this product to integrate legacy systems.

How has it helped my organization?

It helps us work with older products and more easily create solutions. 

What is most valuable?

The most valuable thing about this program for us is that it is very powerful and very cheap. We're using a lot of the program's modules and features because we're using software and hardware that can be difficult to integrate. For example, we're using supersets and a lot of old products from difficult systems. We love having the various options and features that allow us to work with flexibility.

What needs improvement?

We are using HDTM circuit boards, and I worry about the future of this product and compatibility with future releases. It's a concern because, for now, we do not have a clear path to upgrade. Hadoop is now in its third version and we'd like to upgrade to it, but as far as I know, it's not a simple thing.

There are a lot of features in this product that are open-source. If something isn't included with the distribution we are not limited. We can take things from the internet and integrate them. As far as I know, we are using Presto which isn't included in HDP (Hortonworks Data Platform) and it works fine. Not everything has to be included in the release. If something is outside of HDP and it works, that is good enough for me. We have the flexibility to incorporate it ourselves.

For how long have I used the solution?

We have been using the product for about five years.

What do I think about the stability of the solution?

The product is well tested and very stable. We have no problems with the stability of it at all. Really we just install it and forget about fussing with it. We just use the features it offers to be productive.

What do I think about the scalability of the solution?

This is a scalable solution and we like what it does. It is currently serving about 100 users at our organization, and it seems like it can easily handle more.

How are customer service and technical support?

We actually have not used technical support. Everything we needed a solution for we just use Google and it's enough for us. Sometimes we do have issues, but not often. The issues are mainly to do with the terminals because it's a bit complicated to integrate these other systems. We have managed to solve all the problems up till now.

Which solution did I use previously and why did I switch?

We had a very old version of Hadoop which was already installed by another company and we upgraded it. We didn't really switch we just upgraded what was here.

How was the initial setup?

The initial setup wasn't very easy because of the incredible security, but we have managed to get by that. It's sort of simple, in my opinion, once you get past that part. I think, in all, it took about half of a year. But it wasn't a new deployment, it's an upgrade and the bigger challenge was moving the data. We pretty much just supported the existing product and moved to HDP.

What about the implementation team?

We have everything on-premises and we did the deployment and maintenance. 
It took four people. We want to increase usage of Hadoop and we are thinking about it very heavily. We're actually in the process of doing it. At the same time, we are integrating things from other systems to Hadoop.

What other advice do I have?

I would give this product a rating of eight out of ten. It would not be a ten out of ten because of some problems we are having with the upgrade to the newer version. It would have been better for us if these problems were not holding us back. I think eight is good enough.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
Mahalingam Shanmugam
User
Real User
Reduces cost, saves time, and provides insight into our unstructured data

What is our primary use case?

We use this solution for our Enterprise Data Lake.

How has it helped my organization?

Using this solution has reduced the overall TCO. It has also improved data processing time for the machine and provides greater insight into our unstructured data.

What is most valuable?

The most valuable features are the ability to process the machine data at a high speed, and to add structure to our data so that we can generate relevant analytics.

What needs improvement?

We would like to have more dynamics in merging this machine data with other internal data to make more meaning out of it.

For how long have I used the solution?

More than four years.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
Samuel Feinberg
Analytics Platform Manager at a consultancy with 10,001+ employees
Real User
Parallel processing allows us to get jobs done, but the platform needs more direct integration of visualization applications

Pros and Cons

  • "Two valuable features are its scalability and parallel processing. There are jobs that cannot be done unless you have massively parallel processing."
  • "I would like to see more direct integration of visualization applications."

What is our primary use case?

We use it as a data lake for streaming analytical dashboards.

How has it helped my organization?

There is a lot of difference. I think the best case is that we are able to drill down to transactional records and really build a root-cause analysis for various issues that might arise, on demand. Because we're able to process in parallel, we don't have to wait for the big data warehouse engine. We process down what the data is and then build it up to an answer, and we can have an answer in an hour rather than 10 hours.

What is most valuable?

  • Scalability
  • Parallel processing

There are jobs that cannot be done unless you have massively parallel processing; for instance, processing call-detail records for telecom.
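The point about massively parallel processing can be sketched in miniature: partition the records, process each partition independently, then combine the partial results. This is plain Python with threads standing in for cluster nodes, and the call-detail record layout is invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical call-detail records: (caller_id, duration_seconds)
records = [(i, i % 60) for i in range(10_000)]

def process_partition(part):
    # Stand-in for the per-partition work a cluster node would do,
    # e.g. totalling call durations
    return sum(duration for _, duration in part)

# Stripe the records into 4 partitions, as a cluster would shard them
partitions = [records[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_partition, partitions))
total = sum(partials)
```

Each partition could just as well live on a different node; the final aggregation step only ever sees the small partial results, which is what makes jobs like this feasible at telecom scale.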

What needs improvement?

In general, Hadoop has a lot of different component parts to the platform - things like Hive and HBase - and they're all moving somewhat independently and somewhat in parallel. I think as you look to platforms in the cloud or into walled-garden concepts, like Cloudera or Azure, you see that the third party can make sure all the components work together before they are used for business purposes. That reduces a layer of administration, configuration, and technical support.

I would like to see more direct integration of visualization applications.

For how long have I used the solution?

More than five years.

What do I think about the stability of the solution?

In general, stability can be a challenge. It's hard to say what stability means. You're in an environment that's before production-line manufacturing, where none of the parts relate together exactly as they should. So that can create some instability.

To realize the benefit of these kinds of open-source, big-data environments, you want to use as many different tools as you can get. That brings with it all this overhead of making them work together. It's kind of a blessing and a curse, at the same time: There's a tool for everything.

How are customer service and technical support?

Apache is the open-source foundation that Cloudera and Hortonworks contribute code and some work to. I don't know that there is actually support and structure, per se, for Apache.

We have had premium support at various times with various companies. With the three dominant companies I've worked with - Cloudera, Hortonworks, and MapR - there is a premium support package, but that still only covers their base distribution, not necessarily all the add-ons that are on top of it, which is really the big challenge: getting everything to work together.

Which solution did I use previously and why did I switch?

There are the older relational database technologies: Netezza, SQL Server, MySQL, Oracle, Teradata. All have some advantages and some disadvantages. Most notably, they are all significantly more expensive in terms of the capital expense, rather than the operational expense. They are "walled gardens," so to speak, that are curated and have a distinct set of tools that work with them, but not the bleeding-edge ingenuity that comes with an open-source platform.

Data warehousing is 30 years old, at least. Big data, in its current form, has only been around for four or five years.

How was the initial setup?

There are capacities in which I have been responsible for setup, administration, and building the applications on those environments. Each of the components is relatively straightforward. The complexity comes from all the different components.

What other advice do I have?

Implement for defined use cases. Don't expect it to all just work very easily.

I would rate this platform a seven out of 10. On the one hand, it's the only place you can use certain functions, and on the other hand, it's not going to put any of the other ones out of business. It's really more of a complement. There is no fundamental battle between relational databases and Hadoop.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
AM
CEO
Real User
We are able to ingest huge volumes/varieties of data, but it needs a data visualization tool and enhanced Ambari for management

Pros and Cons

  • "Initially, with RDBMS alone, we had a lot of work and few servers running on-premise and on cloud for the PoC and incubation. With the use of Hadoop and ecosystem components and tools, and managing it in Amazon EC2, we have created a Big Data "lab" which helps us to centralize all our work and solutions into a single repository. This has cut down the time in terms of maintenance, development and, especially, data processing challenges."
  • "Since both Apache Hadoop and Amazon EC2 are elastic in nature, we can scale and expand on demand for a specific PoC, and scale down when it's done."
  • "Most valuable features are HDFS and Kafka: Ingestion of huge volumes and variety of unstructured/semi-structured data is feasible, and it helps us to quickly onboard a new Big Data analytics prospect."
  • "Based on our needs, we would like to see a tool for data visualization and enhanced Ambari for management, plus a pre-built IoT hub/model. These would reduce our efforts and the time needed to prove to a customer that this will help them."
  • "General installation/dependency issues were there, but were not a major, complex issue. While migrating data from MySQL to Hive, things are a little challenging, but we were able to get through that with support from forums and a little trial and error."

What is our primary use case?

Big Data analytics, customer incubation. 

We host our Big Data analytics "lab" on Amazon EC2. Customers are new to Big Data analytics so we do proofs of concept for them in this lab. Customers bring historical, structured data, or IoT data, or a blend of both. We ingest data from these sources into the Hadoop environment, build the analytics solution on top, and prove the value and define the roadmap for customers.

How has it helped my organization?

Initially, with RDBMS alone, we had a lot of work and few servers running on-premise and on cloud for the PoC and incubation. With the use of Hadoop and ecosystem components and tools, and managing it in Amazon EC2, we have created a Big Data "lab" which helps us to centralize all our work and solutions into a single repository. This has cut down the time in terms of maintenance, development and, especially, data processing challenges. 

We were using MySQL and PostgreSQL for these engagements, and scaling and processing were not as easy when compared to Hadoop. Also, customers who are embarking on a big journey with semi-structured information prefer to use Hadoop rather than a RDBMS stack. This gives them clarity on the requirements.

In addition, since both Apache Hadoop and Amazon EC2 are elastic in nature, we can scale and expand on demand for a specific PoC, and scale down when it's done.

Flexibility, ease of data processing, and reduced cost and effort are the three key improvements for us.

What is most valuable?

HDFS and Kafka: Ingestion of huge volumes and variety of unstructured/semi-structured data is feasible, and it helps us to quickly onboard a new Big Data analytics prospect.

What needs improvement?

Based on our needs, we would like to see a tool for data visualization and enhanced Ambari for management, plus a pre-built IoT hub/model. These would reduce our efforts and the time needed to prove to a customer that this will help them.

For how long have I used the solution?

Less than one year.

What do I think about the stability of the solution?

We have a three-node cluster running on cloud by default, and it has been stable so far without any stoppages due to Hadoop or other ecosystem components.

What do I think about the scalability of the solution?

Since this is primarily for customer incubation, there is a need to process huge volumes of data, based on the proof of value engagement. During these processes, we scale the number of instances on demand (using Amazon spot instances), use them for a defined period, and scale down when the PoC is done. This gives us good flexibility and we pay only for usage.

How are customer service and technical support?

Since this is mostly community driven, we get a lot of input from the forums and our in-house staff who are skilled in doing the job. So far, most of the issues we have had during setup or scaling have primarily been on the infrastructure side and not on the stack. For most of the problems we get answers from the community forums.

How was the initial setup?

We didn't have any major issues except for knowledge, so we hired the right person who had hands-on experience with this stack, and worked with the cloud provider to get the right mechanism for handling the stack.

General installation/dependency issues were there, but were not a major, complex issue. While migrating data from MySQL to Hive, things are a little challenging, but we were able to get through that with support from forums and a little trial and error. In addition, the old PoCs which were migrated had issues in directly connecting to Hive. We had to build some user functions to handle that.

What's my experience with pricing, setup cost, and licensing?

We normally do not suggest any specific distributions. When it comes to cloud, our suggestion would be to choose different types of instances offered by Amazon cloud, as we are technology partners of Amazon for cost savings. For all our PoCs, we stick to the default distribution.

Which other solutions did I evaluate?

None, as this stack is familiar to us and we were sure it could be used for such engagements without much hassle. Our primary criteria were the ability to migrate our existing RDBMS-based PoC and connectivity via our ETL and visualization tool. On top of that, support for semi-structured data for ELT. All three of these criteria were a fit with this stack.

What other advice do I have?

Our general suggestion to any customer is not to blindly look and compare different options. Rather, list the exact business needs - current and future - and then prepare a matrix to see product capabilities and evaluate costs and other compliance factors for that specific enterprise.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
ITCS user
Software Architect at a tech services company with 10,001+ employees
Consultant
Gives us high throughput and low latency for KPI visualization

Pros and Cons

  • "High throughput and low latency. We start with data mashing on Hive and finally use this for KPI visualization."

    What is our primary use case?

    Data aggregation for KPIs. The sources of data come in all forms so the data is unstructured. We needed high storage and aggregation of data, in the background.

    How has it helped my organization?

    We start with data mashing on Hive and finally use this for KPI visualization. This intermediate step not only mashes data in the form that we want through data Cube slicing, but also helps us save states as snapshots for multiple time frames.

    Without this, we would have had to plan another data source for only this purpose. Moving this step closer to processing worked better than keeping it at visualization. Although we can't completely avoid using data stores/snapshots at visualization, this step proved to be promising for getting data ready for better analytics and insights.
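The cube slicing described above can be illustrated in miniature. This is a hedged Python sketch with invented dimensions (region, month); at Hive scale, the aggregation would be a GROUP BY and the slice a WHERE clause:

```python
from collections import defaultdict

# Hypothetical fact rows: (region, month, kpi_value)
facts = [
    ("EU", "2019-01", 10), ("EU", "2019-02", 12),
    ("US", "2019-01", 20), ("US", "2019-02", 25),
]

def build_cube(rows):
    # Aggregate the measure along both dimensions
    agg = defaultdict(int)
    for region, month, value in rows:
        agg[(region, month)] += value
    return agg

def slice_by_month(agg, month):
    # A "slice" fixes one dimension and keeps the others
    return {region: v for (region, m), v in agg.items() if m == month}

# Saving one slice per time frame gives snapshot-style state for the KPIs
snapshot = slice_by_month(build_cube(facts), "2019-01")
```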

    What is most valuable?

    High throughput and low latency. We start with data mashing on Hive and finally use this for KPI visualization.

    What needs improvement?

    At the beginning, MRs on Hive made me think we should get down to Hadoop MRs to have better control of the data. But later, Hive as a platform upgraded very well. I still think a Spark-type layer on top gives you an edge over having only Hive.

    For how long have I used the solution?

    Less than one year.

    What other advice do I have?

    I rate it an eight out of 10. It's huge, complex, and slow, but it does what it is meant for.

    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    Chitharanjan Billa
    Database/Middleware Consultant (Currently at U.S. Department of Labor) at a tech services company with 51-200 employees
    Consultant
    ​There are no licensing costs involved, hence money is saved on software infrastructure​

    What is our primary use case?

    • Content management solution
    • Unified Data solution
    • Apache Hadoop running on Linux

    What is most valuable?

    • Data ingestion: It has rapid speed, if Apache Accumulo is used.
    • Data security
    • Inexpensive

    What needs improvement?

    It needs better user interface (UI) functionalities.

    For how long have I used the solution?

    Three to five years.

    What's my experience with pricing, setup cost, and licensing?

    There are no licensing costs involved, hence money is saved on the software infrastructure.


    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    RC
    Senior Associate at a financial services firm with 10,001+ employees
    Real User
    Relatively fast when reading data into other platforms but can't handle queries with insufficient memory

    Pros and Cons

    • "As compared to Hive on MapReduce, Impala on MPP returns results of SQL queries in a fairly short amount of time, and is relatively fast when reading data into other platforms like R."
    • "The key shortcoming is its inability to handle queries when there is insufficient memory. This limitation can be bypassed by processing the data in chunks."

    What is most valuable?

    Impala. As compared to Hive on MapReduce, Impala on MPP returns results of SQL queries in a fairly short amount of time, and is relatively fast when reading data into other platforms like R (for further data analysis) or QlikView (for data visualisation).

    How has it helped my organization?

    The quick access to data enabled more frequent data backed decisions.

    What needs improvement?

    The key shortcoming is its inability to handle queries when there is insufficient memory. This limitation can be bypassed by processing the data in chunks.
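Processing in chunks, as suggested, bounds memory by holding only one chunk at a time and keeping a running aggregate. A minimal Python sketch (illustrative only, not the reviewer's actual pipeline):

```python
def read_in_chunks(rows, chunk_size):
    # Yield fixed-size chunks so only one chunk is ever in memory
    chunk = []
    for row in rows:
        chunk.append(row)
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk  # final partial chunk

# Keep a running aggregate instead of materializing the full result set
total = 0
for chunk in read_in_chunks(range(1_000_000), chunk_size=10_000):
    total += sum(chunk)
```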

    For how long have I used the solution?

    Two-plus years.

    What do I think about the stability of the solution?

    Typically instability is experienced due to insufficient memory, either due to a large job being triggered or multiple concurrent small requests.

    What do I think about the scalability of the solution?

    No. This is by default a cluster-based setup and hence scaling is just a matter of adding on new data nodes.

    How are customer service and technical support?

    Not applicable to Cloudera. We have a separate onsite vendor to manage the cluster.

    Which solution did I use previously and why did I switch?

    No. Two years ago this was a new team and hence there were no legacy systems to speak of.

    How was the initial setup?

    Complex. Cloudera stack itself was insufficient. Integration with other tools like R and QlikView was required and in-house programs had to be built to create an automated data pipeline.

    What's my experience with pricing, setup cost, and licensing?

    Not much advice as pricing and licensing is handled at an enterprise level.

    However, do take into consideration that data storage and compute capacity scale differently, and hence purchasing a "boxed" / "all-in-one" solution (software and hardware) might not be the best idea.

    Which other solutions did I evaluate?

    Yes. Oracle Exadata and Teradata.

    What other advice do I have?

    Try open-source Hadoop first but be aware of greater implementation complexity. If open-source Hadoop is "too" complex, then consider a vendor-packaged Hadoop solution like Hortonworks, Cloudera, etc.

    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    it_user693231
    Big Data Engineer at a tech vendor with 5,001-10,000 employees
    Vendor
    HDFS allows you to store large data sets optimally. After switching to big data pipelines, our query performance improved a hundred times.

    What is most valuable?

    HDFS allows you to store large data sets optimally.

    How has it helped my organization?

    After switching to big data pipelines, our query performance improved a hundred times.

    What needs improvement?

    Rolling restarts of data nodes need to be done in a way that can be further optimized. Also, I/O operations can be optimized for more performance.

    For how long have I used the solution?

    I have used Hadoop for over three years.

    What do I think about the stability of the solution?

    Once we had an issue with stability, due to a complete shutdown of a cluster. Bringing up a cluster took a lot of time because of some order that needed to be followed.

    What do I think about the scalability of the solution?

    We have not had scalability issues.

    How are customer service and technical support?

    The community is very supportive and provided prompt replies and suggestions to JIRA tickets.

    Which solution did I use previously and why did I switch?

    We didn’t have a previous solution. It was a move from RDBMS to big data.

    How was the initial setup?

    Initial setup of a few nodes was simple, but as we increased the node count it became complex, as we need to maintain rack topology, etc.

    What's my experience with pricing, setup cost, and licensing?

    It’s free and it is open source.

    What other advice do I have?

    I would suggest using this product. We were able to use this for petabytes of data.

    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    ITCS user
    Infrastructure Engineer at Zirous, Inc.
    Real User
    Top 20
    The Distributed File System stores video, pictures, JSON, XML, and plain text all in the same file system.

    What is most valuable?

    The Distributed File System, which is the base of Hadoop, has been the most valuable feature with its ability to store video, pictures, JSON, XML, and plain text all in the same file system.

    How has it helped my organization?

    We do use the Hadoop platform internally, but mostly it is for R&D purposes. However, many of the recent projects that our IT consulting firm has taken on have deployed Hadoop as a solution to store high-velocity and highly variable data sizes and structures, and be able to process that data together quickly and efficiently.

    What needs improvement?

    Hadoop in and of itself stores data with 3x redundancy, and our organization has come to the conclusion that the default 3x results in too much wasted disk space. The user has the ability to change the data replication standard, but I believe that the Hadoop platform could eventually become more efficient in its redundant data replication. It is an organizational preference and nothing that would impede our organization from using it again, just a small thing I think could be improved.
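The replication trade-off is simple arithmetic: usable capacity is raw capacity divided by the replication factor. A sketch (the capacity numbers are illustrative):

```python
def effective_capacity_tb(raw_tb, replication_factor=3):
    # HDFS defaults to 3 replicas, so usable space is a third of raw disk
    return raw_tb / replication_factor

# 300 TB of raw disk yields only 100 TB of usable storage at 3x,
# but 150 TB if dfs.replication is lowered to 2 (at the cost of durability)
default_usable = effective_capacity_tb(300)
reduced_usable = effective_capacity_tb(300, replication_factor=2)
```

Newer Hadoop 3 releases also offer HDFS erasure coding, which brings the storage overhead well below 3x for suitable data.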

    For how long have I used the solution?

    This version was released in January 2016, but I have been working with the Apache Hadoop platform for a few years now.

    What was my experience with deployment of the solution?

    The only issues we found during deployment were errors originating from between the keyboard and the chair. I have set up roughly 20 Hadoop clusters and almost all of them went off without a hitch, unless I had configured something incorrectly during pre-setup.

    What do I think about the stability of the solution?

    We have not encountered any stability problems with this platform.

    What do I think about the scalability of the solution?

    We have scaled two of the clusters that we have implemented; one in the cloud, one on-premise. Neither ran into any problems, but I can say with certainty that it is much, much easier to scale in a cloud environment than it is on-premise.

    How are customer service and technical support?

    Customer Service:

    Apache Hadoop is open source, so customer service is not really a strong point, but the documentation provided is extremely helpful, more so than that of some of the Hadoop vendors such as MapR, Cloudera, or Hortonworks.

    Technical Support:

    Again, it's open source. There are no dedicated tech support teams that we've come across unless you look to vendors such as Hortonworks, Cloudera, or MapR.

    Which solution did I use previously and why did I switch?

    We started off using Apache Hadoop for our initial Big Data initiative and have stuck with it since.

    How was the initial setup?

    Initial setup was decently straightforward, especially when using Apache Ambari as a provisioning tool. (I highly recommend Ambari.)

    What about the implementation team?

    We are the implementers.

    What's my experience with pricing, setup cost, and licensing?

    It's open source.

    Which other solutions did I evaluate?

    We solely looked at Hadoop.

    What other advice do I have?

    Try, try, and try again. Experiment with MapReduce and YARN. Fine-tune your processes and you will see some insane processing results.
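    For readers who have not experimented with MapReduce yet, the model the reviewer recommends trying boils down to three phases: map, shuffle, reduce. The toy WordCount below mimics those phases in plain Python on a single machine; it is an illustration of the programming model, not the Hadoop API itself.

    ```python
    from collections import defaultdict


    def map_phase(records):
        """Map: emit (word, 1) pairs, like a WordCount mapper."""
        for line in records:
            for word in line.split():
                yield word.lower(), 1


    def shuffle(pairs):
        """Shuffle: group values by key, as the framework does between phases."""
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups


    def reduce_phase(groups):
        """Reduce: sum the counts collected for each word."""
        return {word: sum(counts) for word, counts in groups.items()}


    lines = ["hadoop stores data", "hadoop processes data"]
    counts = reduce_phase(shuffle(map_phase(lines)))
    ```

    On a real cluster, the map and reduce functions run in parallel across the data nodes and YARN schedules the containers; the shuffle is handled by the framework.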

    I would also recommend that you have at least a 12-node cluster: two master nodes, eight compute/data nodes, one Hive node (SQL), and one dedicated Ambari node.

    For the nodes, I would recommend roughly:

    • Master nodes: 4-8 cores, 32-64 GB RAM, 8-10 TB HDD
    • Data nodes: 4-8 cores, 64 GB RAM, 16-20 TB RAID 10 HDD
    • Hive node: around 4 cores, 32-64 GB RAM, 5-6 TB RAID 0 HDD
    • Dedicated Ambari server: 2-4 cores, 8-12 GB RAM, 1-2 TB HDD
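    When sizing a cluster like this, remember that 3x replication divides the raw disk space, and some headroom is needed for intermediate and temporary data. The helper below sketches that arithmetic; the 18 TB per-node figure is the midpoint of the 16-20 TB range above, and the 75% headroom fraction is an illustrative assumption, not a Hadoop default.

    ```python
    def usable_capacity_tb(nodes, raw_tb_per_node, replication=3, headroom=0.75):
        """Rough usable HDFS capacity in TB: raw space scaled down by a
        headroom fraction (reserved for temp/intermediate data, an assumed
        value here) and divided by the replication factor."""
        return nodes * raw_tb_per_node * headroom / replication


    # Eight data nodes at 18 TB raw each, default 3x replication
    cap = usable_capacity_tb(8, 18)  # about 36 TB of usable space
    ```

    The same function shows why the previous reviewer's complaint about 3x replication matters: dropping to 2x on the same hardware raises usable capacity by half.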

    Disclosure: I am a real user, and this review is based on my own experience and opinions.
    Senior Hadoop Engineer with 1,001-5,000 employees
    Vendor
    The heart of Big Data

    What is most valuable?

    • Storage
    • Processing (cost-efficient)

    How has it helped my organization?

    With the increase in data size for the business, this horizontally scalable appliance has answered every business question in terms of storage and processing. The Hadoop ecosystem has not only provided a reliable distributed aggregation system but has also allowed room for analytics, which has resulted in great data insights.

    What needs improvement?

    The Apache team is doing a great job, releasing Hadoop versions faster than we can keep up with, and areas for improvement are addressed as soon as a new version is released by the ASF. Currently, Apache Oozie 4.0.1 has some compatibility issues with Hadoop 2.5.2.

    For how long have I used the solution?

    2.5 years

    What was my experience with deployment of the solution?

    No, we did not encounter any deployment issues at all.

    What do I think about the stability of the solution?

    We did have issues when we initially started with Hadoop 1.x, which didn't have HA, but now we don't have any stability issues.

    What do I think about the scalability of the solution?

    Hadoop is known for its scalability. Yahoo stores approx. 455 PB in their Hadoop cluster.

    How are customer service and technical support?

    Customer Service:

    It depends on the Hadoop distributor. I would rate Hortonworks 9/10.

    Technical Support:

    I would rate Hortonworks 9/10.

    Which solution did I use previously and why did I switch?

    We previously used Netezza. We switched because our business required a highly scalable appliance like Hadoop.

    How was the initial setup?

    It's a bit complex to build out on commodity hardware, but it will ease up as the product matures.

    What about the implementation team?

    We used a vendor team, whom I would rate 9/10.

    What was our ROI?

    Valuable storage and processing at a lower cost than before.

    What's my experience with pricing, setup cost, and licensing?

    Pricing and licensing are among the best and depend on the flavor, but remember it is only worthwhile if you have a very large data set that cannot be handled by a traditional RDBMS.

    Which other solutions did I evaluate?

    Cloud options.

    What other advice do I have?

    First, understand your business requirements; second, evaluate the scalability and capability of your traditional RDBMS; and finally, if you have reached the tip of the iceberg (RDBMS), then yes, you definitely need an island (Hadoop) for your business. Feasibility checks are important and efficient for any business before taking any crucial step. I would also say, “Don't always flow with the stream of a river, because sometimes it will lead you to a waterfall; always research and analyze before you take a ride.”

    Disclosure: I am a real user, and this review is based on my own experience and opinions.