Advice From The Community

Read answers to top Backup and Recovery Software questions. 397,408 professionals have gotten help from our community of experts.
David Thompson
What is the best backup for super-duper (100Gbps) fast read and write with hardware encryption?
Vendor

The backup speed depends on:
- number of concurrent I/O streams
- data type
- network
- read/write speed of the backup repository
- whether data encryption is enabled
- terabytes of front-end data to be backed up

The question is not detailed enough to size a highly scalable, high-throughput environment. To achieve 100Gbps throughput, you have to provide the information listed above.
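
To make that concrete, here is a rough back-of-the-envelope sketch in Python. The 100Gbps target comes from the question; the front-end data size is an assumed figure purely for illustration:

    # Rough backup-window arithmetic (illustrative only; the front-end data
    # size below is an assumption, not sizing advice).
    link_gbps = 100                      # target throughput from the question
    frontend_tb = 500                    # assumed front-end data to protect
    throughput_gb_s = link_gbps / 8      # 100 Gbps ~= 12.5 GB/s
    hours_full = (frontend_tb * 1000) / throughput_gb_s / 3600
    print(f"{link_gbps} Gbps ~= {throughput_gb_s:.1f} GB/s "
          f"(~{throughput_gb_s * 3.6:.0f} TB/hour)")
    print(f"Full backup of {frontend_tb} TB: ~{hours_full:.1f} hours, "
          f"if every other component keeps up")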

For a very large environment, I strongly recommend using either NetBackup or CommVault.

Reseller

There is no such thing as the best "anything", let alone the best backup. Plenty of enterprise solutions on the market can handle the load you mentioned, and it all comes down to your needs.

Hardware encryption may be more secure than software encryption (tougher to hack, but still hackable); however, it opens the door to vendor lock-in, which in certain situations can affect the recoverability of your data.

My advice is to focus on finding a backup solution that helps you guarantee the recoverability of your data in the event of a disaster, rather than on the "best" 100Gbps backup with hardware encryption.

At the end of the day, what's the point of a backup solution that can do everything you mentioned but fails you in the event of a disaster?

If you can give me more environment details, such as what platforms and apps are in use, I may be able to assist. Other than that, my answer is that there is no such thing as the best backup for 100Gbps with hardware encryption.

We live in a world where everything is software-defined, and it's safe to say that's the direction everyone should go.

Real User

We use the smallest Cohesity cluster possible, with three nodes, and have 60Gbps of available bandwidth. I assume that with more nodes you could get to 100Gbps. They have flash and an unbelievable filesystem. Do you have a use case for 12,500 megabytes per second of backup throughput? I'm having trouble envisioning an admin in charge of a source capable of that coming to a forum like this with your exact question!

User

It seems object storage with inline dedupe could fit, but it would need to be sized for the required performance. Backup targets are typically tuned for ingest. Is the data dedupable or compressible? How much data are you looking to back up, and in how much time? How much data do you need to restore, and in how much time?
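
As a rough illustration of why those questions matter, here is a minimal sizing sketch in Python; the data size, window, and reduction ratio are assumptions, not recommendations:

    # Illustrative ingest sizing (all numbers are assumptions for the example).
    data_tb = 200            # assumed data to protect per backup window
    window_hours = 8         # assumed backup window
    reduction_ratio = 4.0    # assumed 4:1 inline dedupe/compression
    logical_mb_s = data_tb * 1_000_000 / (window_hours * 3600)
    physical_mb_s = logical_mb_s / reduction_ratio
    print(f"Logical ingest needed: {logical_mb_s:,.0f} MB/s")
    print(f"Physical write rate at {reduction_ratio:.0f}:1 reduction: "
          f"{physical_mb_s:,.0f} MB/s")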

Real User

Your question doesn't include enough detail to calculate the best scenario for you, because it depends on many factors, such as:
- Backup of what: a physical or virtualized environment?
- Data type.
- Network speed on all devices.
- Storage type: flash or tape.
- What is the read/write speed of your disks/tape, AND the bus/controller speed that the disks are attached to?
- How many files, and how much data, are you backing up?
- Is your backup application capable of running multiple jobs and sending multiple streams of data simultaneously?

Some potential points for improvement might include:
- Upgrading switches and Ethernet adapters to Gigabit Ethernet or greater.
- Investing in higher-performing disk arrays or subsystems to improve read and write speeds.
- Investing in LTO-8 tape drives, and considering a library if you are not already using one, so that you can leverage multiplexing (multiple streams) to tape, as sketched below.
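
As a rough sketch of the tape multiplexing point (the LTO-8 native rate used here is approximate; check your drive's spec sheet, and remember compression changes the math):

    # Rough drive-count estimate for a tape target (illustrative only).
    import math

    target_gbps = 100            # aggregate throughput goal from the question
    lto8_native_mb_s = 360       # approximate native LTO-8 rate; verify against your spec
    target_mb_s = target_gbps * 1000 / 8
    drives_needed = math.ceil(target_mb_s / lto8_native_mb_s)
    print(f"~{target_mb_s:.0f} MB/s aggregate needs roughly {drives_needed} "
          f"LTO-8 drives streaming in parallel (before compression)")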

Real User

To reach that speed of reads and writes, other factors also play a role: for example, network topology, NIC speeds, and how fast the backup client can deliver data.

Aside from that, you'll need larger files to reach that speed, since with smaller files there is always a speed ramp up time.

So there is no straightforward answer.

But what kind of data or machines are you trying to back up? Knowing the OS, database, and types of apps will help to give a definite answer.

Solutions that will always deliver are NetBackup (all apps, OSes, and databases), Backup Exec (Microsoft apps, Windows and Linux, and some databases), and Veeam.

Real User

While we do not sell or offer backup software per se, we do work with a lot of providers such as Commvault, Rubrik, and Veeam, et al. I can say that a majority of our user base, large global companies with 10+ offices, use Rubrik and implement it with the generic S3 output that points to s3.customer_name.rstor.io. Under the RStor pricing model, we do not charge for puts/gets, reads/writes, or ingress/egress fees. And with triple geographic replication as a standard offering, customer data moves fast in all regions over a super-fast network with multiple 100G+ connections LAGed together, transferring 1PB in 22.5 hours from SJC to LHR!

Joshua Roche (Acronis)
Vendor

There are plenty of tools out there at the moment; many include features like data encryption, e-discovery, and instant restore.
For the current use case (a small company with no data center), I would recommend Acronis.
The commercial version of the product even includes a proprietary feature called Active Protection, a ransomware defense tool that is unlike anything else on the market.

Ariel Lindenfeld
There's a lot of vendor hype about enterprise backup and recovery software. What's really important to look for in a solution? Let the community know what you think. Share your opinions now!
Real User

There are several aspects:
1) The frequency with which you need backups of the files, folders, and/or servers in question to run, since this frequency is, in theory, both your nearest and your farthest recovery point.

Example 1: If you define it as every four hours, then in case of a problem you will be able to recover what you backed up four hours ago.

2) The estimated size of what you need to back up vs. the time it takes to back it up.
Example 2: If you are going to back up 300 GB every four hours and the process takes 8 hours (because your data is sent to an MDF/site mirror over an internet link or something similar), then you will not be able to back up every 4 hours; you will have to do it every 8 or 9 hours.

Example 3: If you are going to back up 50 GB every four hours and the process takes 1 hour (because you send your data to an MDF/site mirror over an internet link or something similar), then you will have no problem running the next backup within 4 hours.
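
As a quick illustration of Examples 2 and 3, here is a small Python check; the link speed is an assumption chosen only to roughly reproduce the durations in those examples:

    # Quick feasibility check for a backup schedule (sizes from Examples 2 and 3;
    # the link speed is an assumed value, not a measurement).
    def backup_hours(size_gb, link_mb_s):
        return size_gb * 1000 / link_mb_s / 3600

    link_mb_s = 10.5   # assumed effective throughput to the mirror site
    for size_gb, interval_h in [(300, 4), (50, 4)]:
        hours = backup_hours(size_gb, link_mb_s)
        fits = "fits" if hours <= interval_h else "does NOT fit"
        print(f"{size_gb} GB takes ~{hours:.1f} h -> {fits} in a {interval_h} h window")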

3) The application's ability to schedule (in sequence and/or in parallel) what you need to back up.

Example 4: Suppose some files, folders, and/or servers need to be backed up every 4 hours, others every 12 hours, others every 24 hours, and others maybe every week. In this case you have to estimate the worst-case scenario very carefully, which is when everything you are backing up coincides and slows the process, so that when the following scheduled backups are triggered they actually run without setbacks.

4) The application's flexibility in running incremental or full backups.

Example 5: This is about knowing what the application will do if a backup fails. Does the incremental portion that did not get backed up start again from scratch? Does it leave a restart point, and if so, how reliable is that process? Will it force you to make a FULL backup that takes not 4 hours but 24 hours or more, meaning your whole schedule has to be reworked?

5) While it is true that restoration is the most important part, before that you must make sure you have properly backed up everything that is supposed to be backed up.

In these respects, www.datto.com is what worked best for us.

Real User

The most important aspect is the time for the backup and restore to finish, and of course how easy it is to configure schedules, rules, policies, etc.

Thang Le Toan (Victory Lee) (Robusta Technology & Training)
Real User

1. Data integrity
(e.g., fast recovery capability; scheduling backups around the most recent problem; a high recovery success rate; and the ability to automatically check, or open, the data to be restored, for quick verification of which backup data can be restored successfully).

2. Data availability
(e.g. the ability to successfully back up in the backup window).

3. Integration with the rest of the infrastructure
(e.g., automation, and the ability to run scripts when backing up, restoring, or syncing data).

4. Ease of use
(for example, an interface where the necessary functions are easy to find, with drivers arranged in a process sequence).

5. Confidentiality, data encryption, and data protection.

6. Ability to comply with standards such as the General Data Protection Regulation (GDPR), centralized data management, uniform data control, and the ability to access backed-up data with a token or smart USB key.

User

The most important things are the speed, accuracy, and flexibility of the recovery process.

Real User

It's really "recovery" software, not "backup" software, so it is the recovery features that are paramount.

Recovery considerations:
o What type of data do you need to easily recover:
- Single files from VMware images
- Entire VMware systems
- Needed OS Support: Windows, AIX, Linux
- Exchange: Database, single mailbox, single message
- SharePoint: Database or document or share
o Administration
- You don't want to have to have a dedicated "Backup administrator".
+ Should be a quiet, reliable, background operation
+ Avoid solutions that depend on Windows that has to be maintained and patched
+ Avoid solutions that require expensive skill sets such as UCS/VMWare/Linux/Windows/AIX etc
+ Upgrades should be painless and done by support not customer
- Hoopless recovery
+ Should be so simple the helpdesk could perform recoveries
+ Self serve sounds good and could be a plus but experience has shown me that they call the help desk anyway
- Self-monitoring
+ Space issues, problems, and such should alert
+ Success should be silent
+ Should not have to check on it every day
o Quick recovery
- Recovery operations usually happen with someone waiting and are usually an interruption to your real work.
- You want it to recover while you are watching so you can tell the customer it is done and get back to your real job

Backup Considerations
o System Resources
- Should be kind to network
+ Incremental backups based on changed blocks not changed files
+ Should automatically throttle itself (see the throttling sketch after this list)
- Memory and CPU utilization
+ Backing up should be a background process that does not require much CPU or Memory on the host being backed up
+ Primarily this is for client-based backups
- VMware considerations
+ If using VMware snapshots, consider the CPU and I/O load on ESX servers... you may need more
o Replication
- If the backup host is local, a replica should be maintained at a remote location
- Replications should not monopolize WAN links
- Recovery should be easily accomplished from either location without jumping through hoops

Random thoughts
o It is OK to go with a new company as opposed to an established one
- Generally speaking, a backup/recovery solution has a capital life span of about 3 to 5 years
- Generally speaking, moving to a new backup solution is fairly straight forward
+ Start backing up to the new solution and let the old solution age out
- So, it is OK to look at non-incumbent solutions
o When replacing an "incumbent", vendors will often give deep discounts
o Don't feel like you are stuck if an upgrade path looks like you are having to buy a solution all over again
o Do a real POC. It does not really take that much time or effort... or shouldn't. If it does, it is not the right solution
o At the end of the day, if it successfully and reliably backs up and recovers your data, it is a working solution
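
On the throttling point under "Backup Considerations": a client that caps its own bandwidth is essentially pacing its reads and writes. Here is a minimal sketch, assuming a simple chunked file copy; the cap and chunk size are arbitrary illustration values, and a real backup agent would throttle adaptively:

    # Minimal sketch of client-side throttling: read a file in chunks and sleep
    # just enough to stay under a configured bandwidth cap.
    import time

    def throttled_copy(src_path, dst_path, max_mb_s=50, chunk_mb=4):
        chunk = chunk_mb * 1024 * 1024
        min_seconds_per_chunk = chunk_mb / max_mb_s
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            while True:
                start = time.monotonic()
                data = src.read(chunk)
                if not data:
                    break
                dst.write(data)
                elapsed = time.monotonic() - start
                if elapsed < min_seconds_per_chunk:
                    time.sleep(min_seconds_per_chunk - elapsed)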

Real User

It depends on your operations structure. However, in all cases, what matters most is a solution that can reliably back up your targeted data within your time window and restore that data on a timeline that meets your business needs. If it can't do that, it doesn't matter what it costs, how easy it is to integrate, or how intuitive the UI is.

Morenwyn Siobhan Ellis (Veritas Technologies)
Vendor

There are two questions here, really. One is technical, and the other is political.

So often, over the years, I have found that the political one is the hardest and the one that tends to have more sway. I have seen, so often, that companies will have global standards, and yet someone always seems to find a way to break those standards and do what they want... and this is the basis of the rise of the new data protection companies.

Once upon a time, there were mainframes, and it was easy. Then we had distributed systems, and this is where fragmentation started. I personally had to unify a data protection infrastructure that had 13 different OSes and 5 different data protection products. Just as I did that, the company started a different business unit... and it chose a different data protection product.

Then we got virtualisation, and the teams that ran that environment often operated as a separate unit, and so chose their own backup product... which tended to be a newer product, because those products concentrated on just that one platform. This enabled them to be focused and, arguably, deliver a better solution... for that one platform.

Now we are seeing a plethora of solutions coming up whose focus is cloud providers. Even AWS is getting into the game with a solution, though one that concentrates on its own cloud. This is the new battleground.

Technically, you can choose one solution. That solution must:

1) Guarantee restores.
2) Back up within the required backup window.
3) Cover the traditional enterprise (which matters less and less), virtual/HCI, and cloud.
4) Enable you to put that data wherever you need it so that restores can happen within the desired window.
5) Be low cost to run; that means infrastructure, software, facilities, and people costs, not just software.
6) Be scalable.

Above all of this, though, a company needs the political will to force errant departments and people to bend to the corporate decision. Without that, the corporation will always be fragmented, will never get the best deal it can from whichever vendor it chooses, and will always waste time fighting off encroachments from other vendors.

Real User

There are a ton of great answers below. They highlight all the characteristics of a good backup solution, and those characteristics are important. For me, the ability to restore successfully is the one key characteristic. Imagine a 100% secure, easy-to-use, centralized, deduplicated, inexpensive, fast backup solution that, when you go to restore from it, does not work. Does it matter that it is fast and cheap? Does it matter if it is centralized or deduplicated? Not in my view. The key is the ability to restore; everything else is specific to your needs.


What is Backup and Recovery Software?

Backup and recovery software performs a well-understood role in IT. However, the requirements for backup and recovery tools, as well as their actual implementation and performance, can vary widely. As architectures grow more complex, so too can the demands on backup and recovery packages. IT Central Station members comment on which selection factors are best to consider when purchasing a backup/recovery solution.

Members cite performance as an important selection criterion for backup and recovery software tools. Reviewers explain that they want their backup and restore to be fast and easy to use. Instant recovery is prized. Users want a simple GUI, too. Many members put forth a powerful, simple idea, though, which is that backup success is all that counts – that no number of features can ever compensate for a failure to restore missing data.

Other members express a desire for reads that are nearly instantaneous. People want zero downtime backup. A good backup and restore solution should eliminate latency from long distance replication, making synchronous and asynchronous unimportant as descriptors. The backup system should also ideally ensure that all information is backed up continuously across multiple locations. The rationale for this requirement is the goal of providing fail over to get continuous high availability of operational systems.

The ability to perform backup recoverability tests in a virtual lab or on-demand sandbox is considered valuable, as are backup from storage snapshots, de-duplication and simple integration with all operating systems. Application specific selection criteria include item-level recovery for Active Directory, Exchange, SQL Server and SharePoint. Members prefer software that can recover user-specific data such as a mailbox or a file server.

Backup and recovery has to map to specific architectural styles. For example, instant VM recovery is valued because it is known to help meet recovery time objectives (RTOs). Backup managers expect backup and recovery tools to offer useful and easy reporting.

Backup and recovery policies tend to overlap with data management and disaster recovery, which are separate work streams but often rely on the same tools. To this point, some IT Central Station members prefer software that provides long-term archiving/retention options. For example, certain types of files can never be purged, by policy. Others want their backup tools used for replication for disaster recovery between data centers.
