
Top 8 Backup and Recovery Software Tools

Veeam Backup & Replication, Zerto, Commvault, Rubrik, Cohesity DataProtect, Nakivo, Quorum OnQ, Vembu BDR Suite

  1. Veeam Backup & Replication
    No problems with stability. Very user-friendly and easily recoverable.
  2. Zerto
    It's also much faster than any other migration or disaster recovery platform we have. I work with virtualization, mostly on VMware, and I must admit that Zerto is even better than VMware Site Recovery Manager. Zerto compresses the data and it works much faster.
  3. Commvault
    Commvault gives us a single platform to manage and recover our data. Since we are a research organization, backup is one of the most critical parts of our IT operations and services. Internally, we run it as a managed service, and there is a single console that makes it easy for management to see the performance.
  4. Rubrik
    What I like about Rubrik is that you can back up to the cloud. It is a different kind of backup where you're not archiving to tape. Rubrik is a dedicated physical appliance to which you can back up, and from which you can back up to another device. If you have more than one site, you can set up Rubrik on site A and on site B, back up to the Rubrik on site A, and then replicate that data to the Rubrik on site B.
  5. Cohesity DataProtect
    The solution is stable. Its most valuable feature is its ransomware protection: it has immutable backups.
  6. Nakivo
    The solution is stable; there is no need to retry any sort of jobs in daily operations. The user interface in general is very good.
  7. Quorum OnQ
    Quorum OnQ has taken the guesswork out of backup/recovery and disaster recovery. The most useful feature is the one-click recovery.
  8. Vembu BDR Suite
    The ability to map a drive and restore a separate file is most valuable. The restoration activity is good. You can restore all your data or partial parts of it, and you can restore a specific version of the data. It has a lot of restore options, so you can get exactly the data that you want to restore. This is very important: you must know what you are going to restore, otherwise you may overwrite correct data with other data. You must know which specific files you are restoring, and which version. Partial restore is very important because some files may be newer than the backup and some files may be corrupted; you need to restore some files from the backup, but not all of them.

Advice From The Community

Read answers to top Backup and Recovery Software questions. 552,136 professionals have gotten help from our community of experts.
Ariel Lindenfeld
Hi peers, There's a lot of vendor hype about enterprise backup and recovery software. What's really important to look for in a solution? Let the community know what you think. Share your opinions now!
Tomas-Dalebjörk (CGI)
Real User

There are several things to consider.

The flexibility to fulfill different requirements:

- Recovery Time Objective (RTO): how quickly the business needs data restored, i.e., how long the business can live without the data

- Recovery Point Objective (RPO): how much data the business can afford to lose in different incident scenarios

- Backup Time Objective (BTO): how efficiently the solution protects the data

- Resource utilization: how cost-efficient the solution is with resources (inline/post-process data reduction, progressive incremental forever with or without rebuilding base data)

- Maintenance tasks: data retention management, protecting the solution itself, offline/online upgrades, etc.

- Support from the vendor

- Price of the solution

- License limitations: gentlemen's agreements or hard limits

- The ability to use different retention policies, exclude content, use different storage targets, keep extra copies, etc.

- Security of the solution

Philosophy: Why back up data again if the data has not changed?

The fastest way to protect data is to not back it up (again):

Progressive incremental forever (always incremental)

Philosophy: Why restore all data if you can restore only the data needed?

Instant recovery, or restoring single objects

Integrating the backup process with applications such as PostgreSQL or Oracle, so that archive logs / WAL logs are protected as soon as they are created, improves the RPO. This can be done using SPFS - a filesystem for Spectrum Protect.

Taking application-consistent snapshots stored on Spectrum Protect storage using efficient data transfer (progressive block-level incremental forever) reduces backup time and saves resources on both the backup server and the protected server. This can be done using SPFS - Instant Recovery for Spectrum Protect.

Restoring only what is needed can be performed with native backup software such as Spectrum Protect. Provisioning an application-consistent snapshot to a server and accessing the data while the restore runs in the background can be done using SPIR - Instant Recovery for Spectrum Protect. This lets clients access the data directly, select just the data they need to copy back to the origin, or use it as production data directly.

Raul Garcia
Real User

There are several aspects:
1) The frequency with which you need backups of the files, folders, and/or servers in question to run, since this frequency is, in theory, both your closest and farthest recovery point at the same time.

Example 1: If you define a backup every four hours, then in case of a problem you will be able to recover what you backed up up to four hours ago.

2) The estimated size of what you need to back up vs. the time it takes to back it up.
Example 2: If you are going to back up 300 GB every four hours but the process takes 8 hours (because your information is sent to an MDF site mirror over an internet link or something similar), then you will not be able to back up every 4 hours; you will have to do it every 8 or 9 hours.

Example 3: If you are going to back up 50 GB every four hours and the process takes 1 hour (because you send your information to an MDF site mirror over an internet link or something similar), then you will have no problem making the next backup within 4 hours.
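
To make the arithmetic in Examples 2 and 3 concrete, here is a minimal sketch of a backup-window feasibility check in Python. The link throughput is an illustrative assumption chosen to reproduce the examples, not a measurement from any particular product.

```python
# Hypothetical backup-window feasibility check, following Examples 2 and 3.
# The 85 Mbps link speed is an assumed value, not a vendor figure.

def backup_duration_hours(data_gb: float, link_mbps: float) -> float:
    """Hours needed to move data_gb gigabytes over a link_mbps (megabit/s) link."""
    megabits = data_gb * 8 * 1000          # GB -> megabits
    return megabits / link_mbps / 3600     # seconds -> hours

def schedule_is_feasible(data_gb: float, link_mbps: float, interval_hours: float) -> bool:
    """A schedule only works if one backup finishes before the next one starts."""
    return backup_duration_hours(data_gb, link_mbps) <= interval_hours

print(schedule_is_feasible(300, 85, 4))  # Example 2: ~7.8 h per run -> False
print(schedule_is_feasible(50, 85, 4))   # Example 3: ~1.3 h per run -> True
```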

3) The application's ability to schedule (in sequence and/or in parallel) what you need to back up.

Example 4: Suppose some files, folders, and/or servers need to be backed up every 4 hours, others every 12 hours, others every 24 hours, and others perhaps every week. In this case you have to estimate the worst-case scenario very carefully - the moment when the scheduled backups coincide and slow the whole process - so that when the following scheduled backups are triggered they actually run without setbacks.

4) The flexibility of the application in executing incremental or full backups.

Example 5: Here it is about knowing what the application will do if a backup fails. Does the incremental portion that was not backed up start again from scratch? Does it leave a restart point for the process, and if so, how reliable is it? Will it force you to make a FULL backup that no longer takes 4 hours but 24 hours or more, so that your schedule has to be redone?

5) While it is true that restoration is the most relevant part, prior to this you must make sure that everything that should be backed up is actually well backed up.

In these aspects, www.datto.com is what worked best for us.

Thang Le Toan (Victory Lee) (Robusta Technology & Training)
Real User

1. Data integrity
(e.g., fast recovery, a high restore success rate, and the ability to automatically verify or open the data to be restored, for a quick check that the backup data can actually be restored well).

2. Data availability
(e.g., the ability to complete backups successfully within the backup window).

3. Integration with the rest of the infrastructure
(e.g., automation, and the ability to create scripts when backing up, restoring, or syncing data).

4. Ease of use
(for example, an interface where the necessary functions are easy to find, arranged in a logical process sequence).
5. Confidentiality, data encryption, and data protection.
6. The ability to integrate standards such as the General Data Protection Regulation (GDPR); centralized data management; uniform data control; and access to backed-up data by token or smart USB.

Ivo Dissel
Real User

The most important aspect is the time for the backup and restore to finish, and of course how easy it is to configure schedules, rules, policies, etc.

Cheyenne Harden
Reseller

When deploying backup solutions we look at features that work the way we expect them to. 


Data should be deduplicated to retain quick, efficient backups while actually being able to restore without issue. Restoring databases, mailboxes, and domain controllers is particularly difficult for some well-known vendors. We have observed many instances of potential clients having failed restores with "successful" backups. So, having reliable restores is a must. Test often!


Backups must be flexible to meet customer needs with custom retention times while providing quick restore options.


The UI must be easy to use or mistakes will be made during the configuration of backup jobs.

ChrisKetel
User

RPO and RTO

Raul Garcia
Real User

Exactly. In line with what has been mentioned, I would add the order of priority that I would give them from my experience (and of course subject to your best consideration).

From the last backup of the data you have until the moment you invoke the contingency is your RPO (Recovery Point Objective), and from the moment you invoke the contingency until you restore the data is your RTO (Recovery Time Objective).

It seems to me that the RTO (Recovery Time Objective) is the more important to consider, because it is always the longest part of the process; if we view it as a critical path, the RTO is your critical path.
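
As a minimal illustration of those two definitions, the timeline arithmetic looks like this (the timestamps are invented for the example):

```python
# Hypothetical incident timeline illustrating achieved RPO vs. achieved RTO.
from datetime import datetime

last_backup   = datetime(2021, 11, 1, 4, 0)   # last successful backup
contingency   = datetime(2021, 11, 1, 7, 30)  # incident: contingency is invoked
data_restored = datetime(2021, 11, 1, 11, 0)  # data restored, service resumed

rpo = contingency - last_backup       # data written in this window is lost
rto = data_restored - contingency     # the business waits this long

print(f"achieved RPO: {rpo}")  # 3:30:00 -> up to 3.5 hours of data lost
print(f"achieved RTO: {rto}")  # 3:30:00 -> 3.5 hours of downtime
```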

ChrisKetel
User

The most important thing is the speed, accuracy, and flexibility of the recovery process.

Rony_Sklar
Office 365 has built-in backup functionality, but some people recommend having a third-party backup. Is this necessary, and what solutions do you recommend for this?
TonyKerr
Consultant

In regards to backups for 365, it all depends on backup costs, licensing, and functionality, and on what type of environment you have - say, hybrid to the cloud.

If you are in a large enterprise environment, it may be necessary to change your backup strategy to cover all products and get a cost-effective solution. Microsoft 365 has basic built-in functionality, but not as many features as enterprise products.

DPM
https://docs.microsoft.com/en-us/system-center/dpm/dpm-protection-matrix?view=sc-dpm-2019

Veeam
https://www.ct.co.uk/cloud/secure-backup-for-office-365-as-a-service

AvePoint
https://www.avepoint.com/uk/products/cloud/backup

SolarWinds
https://pages.solarwindsmsp.com/back-up-the-full-office-365-suite-at-an-affordable-price.html

Rupert Laslett (iland Internet Solutions Corp)
Vendor

Due to Microsoft's 'shared responsibility' model, it is absolutely necessary to have a backup of your O365 data, especially if the data is critical to the business. Whether you require a backup to be compliant or are looking for protection against accidental or malicious deletion (insider threats or malware), a long-term archive solution is well worth the price.

There are many providers offering O365 backup solutions today so be sure to check for any hidden fees or potential caps. It's also worth checking to see if the vendor supports backup for SharePoint Online, Teams, and OneDrive as well as Exchange Online.

Some companies provide licenses for you to backup locally, others provide an almost SaaS-like model, incorporating the storage and licensing.
If you don't have local storage available or do not wish to backup locally then you're best off looking at Cloud Service Providers or SaaS providers for O365 Backup. Be sure to understand where your data is held, the level of security and redundancy, and whether or not there is any level of support included in the cost.

You'll also want to be sure you can restore easily, with several different restore options as some vendors have very limited options.
iland cloud, the company I represent, offers a backup of the entire domain within O365 for a per-user price including licensing, unlimited storage, and support, with no extra fees.

Feel free to contact me via LinkedIn if you would like to find out more.
Also happy to answer any questions on other vendors that I have experience with.

Vladan Kojanic
Real User

Surely. Of course, you should first check what kind of contract you have with the cloud provider for using Office 365, and what license and support you have. But I would definitely recommend a corporate backup solution as well if you use business applications and databases, or simply store copies of databases. With a backup solution you have more flexibility and configuration options than what the cloud provider offers you through an Office 365 license.

I would even recommend a cloud backup solution, specifically Commvault Metallic, which is well suited even for clients who do not have on-premises capabilities. Here you also have the option of choosing which components you want the backup solution to cover and, most conveniently, whenever you want to add a new component it is enough to just add a new license for it, without any setup or installation.

MarkPattison
Reseller

The backup functionality built into Microsoft 365 is all about Microsoft losing their systems.

If you want to recover something YOU have accidentally deleted, or want any of the more advanced backup functionality (e.g., the ability to recover a single mailbox from a date in the recent past), then you need third-party backup software.

Martin Mash
Real User

If you don't care about the data stored by Microsoft, then you don't have to back it up. But if you do care about your data, then look into some sort of backup solution for O365/M365. There are many good options out there. Microsoft's responsibility is the infrastructure, but if one of your users does something that they shouldn't have, you could be in for a big headache.

We had a user who, about a month after we had migrated, accidentally deleted their entire inbox. Since we did have a backup solution in place, we were able to recover their inbox back into their mailbox. While our solution was slow in this recovery, the user got all their mail back.

reviewer1243038 (CEO/co-founder at a tech services company with 1-10 employees)
Real User

I would use:

1. Azure Backup solutions - quite cheap for some amounts of data;

2. Another third-party backup solution - it depends on what the whole environment looks like. Many backup software solutions exist: backup with an agent for every computer or session, or, if the environment is virtualized (for example, virtual desktops by Microsoft or VMware Horizon/Workspace One), software that can back up the whole user virtual machine:
-Veeam (OS/app agents and virtual environments),
-PureBackup by Archiware (totally free, only support costs - and those are not required; virtual machines only),
-NetWorker (combined with Data Domain - a very high level of deduplication),
-Agent backups - Symantec/Veritas Backup Exec, Arcserve Backup, or Veeam Agent for Windows (free, but there's no common management console if the number of clients is above 10, I believe). These are not expensive solutions.

There are also some built-in options - for example, if the data storage is a QNAP/Synology device, synchronization software exists 'in device'. It's easy to use, but the device has to be fairly powerful because the synchronization client works in continuous mode. For small offices, this solution is enough.

Albeez
Real User

Use GFI Archiver. The solution helps with backup and addresses long-term retention requirements. It keeps a copy directly when an email is sent or received, and avoids any mail loss due to intentional or accidental deletion of emails by users.
https://manuals.gfi.com/en/mar12admin/content/administrator/topics/o365/o365howitworks.htm
https://www.gfi.com/products-and-solutions/network-security-solutions/gfi-archiver

Nurit Sherman
Hi community,  We all know it's important to conduct a trial or do a proof of concept as part of the buying process.  Do you have any advice for our community about the best way to conduct a trial or PoC?  How would you conduct a trial effectively? Are there any mistakes which should be avoided?
MichaelWeimann (Infrascale)
Real User

I was going to write a lengthy response, but yours is spot on, Gary. I will only add that the front end and back end of every SMART goal is to be Specific and Timely. Document what is important to test and what the criteria for passing are BEFORE you ever take delivery. Then set an expected time for the POC to complete and define what a successful test would be.

The only other thing I would add is if the vendor is not providing technical resources to drive and/or assist during the POC...don't waste your time. But, if you expect the vendor to devote the resources, you can also expect the vendor to hold you to a purchasing decision when/if everything passes with flying colors.

Gary-Cook (Commvault)
Consultant

I am not sure if this question comes from a vendor or customer so the response is somewhat generic. If you are the technical customer or end user, try to be involved in the process start to end. If possible, be the hands on the keyboard. No better way to understand the solution if you are going to be the user of it in the future. If you are the vendor promoting ease of use, there is no better way to sell your product to the technical team.

I have managed a lot of data replication, protection, and archiving POCs. Two requirements always stand out. Success criteria and POC type. As a vendor delivering the POC, you will fail 90% of the time without clearly defining these up front. As a customer, you should have a clear idea about why you are investing your time in POC and what you expect to gain from it.

POCs should not be a training exercise. They are a path to purchase a solution for a budgeted project. If you are just kicking the tires, consider the free or self-paced options provided by many vendors. These include on-line labs and downloadable virtual machines or trial software. These cannot be considered a POC in my book.

Now the two key components for a successful POC.

#1 - Define as a Functional or Performance POC

Decide whether you are running a functional or performance-based POC. If you are the vendor, make sure the customer is aware of the limitation of a functional POC in a limited resource environment. Don't allow a Functional POC to become a Performance POC. Been there. Done that. It's never a success.

Functional testing is easier. There is no requirement for measured performance so sizing the environment is a minor issue. Just has to be "fast enough" to keep your attention. They usually cover base installation, backup target configuration, agent configuration, test backups and restores, reporting, alerting, etc. Data sets are generally small. It can be executed in a limited environment with virtual machines. Sometimes the vendor can supply access to a remote lab environment such as the VMware vSAN lab. Sometimes it can be delivered as a preconfigured VM downloaded from the vendor.

Performance testing is complicated. Speeds and feeds matter. You will not be able to back up your entire live environment, so you have to build a test environment that mimics it as closely as possible if you are looking for GB/sec measurements. Success criteria become golden in performance tests. You will be following the recommended hardware configuration supplied by the vendor.

#2 - Success Criteria

Define clear success criteria and stay with the plan. This will avoid scope creep where testing has no endpoint.

A test plan can be extremely difficult to create from scratch. Take the time because it is key to a fair and complete test. It will make you think about the purpose of the test. Most vendors have boilerplate POC documents. They are a good starting point but they almost always focus on the strength of the product. If you are the customer performing comparison testing, blend them into a single document.

Some or all of the success criteria should meet the "must have" requirements of a published RFP if it exists.

Test criteria should not be too detailed, especially to favor a particular solution UNLESS that is a pass/fail test.

Define a start and end date based on the testing requirements. Testing should be sequenced: test backup of app A, then app B, then OS C. Don't jump back and forth between Oracle and SharePoint, for example. Complete one, deal with any issues, check the boxes, and move on.

DR, Performance, and SLA testing absolutely require detailed planning. Too much to detail in this short response. Imagine a POC where you are faced with "I need to recover my 50 TB Oracle server off-site with an RPO of 5 seconds and an RTO of 5 minutes".

In a large POC, you might have regularly scheduled meetings or conference calls for updates on the progress and to deal with issues.

Include a site survey covering security and the network configuration. Prepare to deal with fixed IPs, firewalls, ports, Active Directory, etc. There is nothing like a backup solution to break a network and bring the testing to a standstill. Make sure you have a clear understanding of the environment. I once had a POC where they were migrating some AD domains that were part of the test infrastructure - unknown to me. Needless to say, we faced constant failures.

Define the hardware and configuration requirements on a per server basis. OS, partition sizes, network, etc. This applies to the backup infrastructure servers and the servers that will be the source of the backup data.

Include all the key contacts with access information to servers.

Make sure you have ALL the required resources (human and compute) available on the start date. For example, you might need help from an Oracle DBA or SME on day 2 to continue the installation.

Define a process to modify the plan. I've seen cases where another department sees the shiny new object and wants to jump into testing their app after the plan was approved and tests begin. Plan to deal with this exception in the testing procedure but not deviate from accomplishing the original success criteria. It should be approved by management.

Define what is considered critical to the success of the test, what is a nice-to-have feature, and optionally, what doesn't matter at all. Be specific. Include application versions if it matters. You might judge the test completion as pass / partial pass / no pass, or as a percentage of how well it meets the criteria. Don't use subjective rankings; add a comments column next to each test for subjective remarks.

If you are comparison testing two or more solutions, make sure you can test "apples to apples" across the POC candidates. All vendors should be tested to the same standard. It can be difficult to compare an appliance to an enterprise software solution. The appliance will win the easy-to-install checkbox but might fail in the ease-of-expansion category because it requires a new, larger box.

Consider the future in a POC, not just how it functions today. For example, you should think about the process to add additional capacity locally or bring on new sites/servers.

NOTE: Content here subject to updates if I think of something new or helpful.

Fred Kovacs
User

I know this is a simple answer but research companies that offer this service and use their free software trial versions to see if you like them or not. Research is the answer.

Dominik-Oppitz
User

1 - Build up a dedicated environment for evaluation. In it, you can control and monitor all aspects (performance impact on primary storage, restore times, etc.) very granularly without jeopardizing your production infrastructure. Hardware vendors are more than willing to help out, as a new backup software solution often goes hand in hand with new hardware.


2 - A man (or woman) with a plan is a man (or woman) who succeeds. Work out an agenda for the evaluation, starting from the business needs (SLOs/SLAs, etc.). Define the necessary processes with the vendor - this is a great test of how supportive they are and will be.


3 - Document the outcome person by person! Everyone looks at a vendor differently, so you need multi-vector information as a foundation for your decision. BTW, this is a great tool to motivate your staff and to push a vendor's price tag to where you need it!


4 - Stick to the plan but be open to expanding. Never go back from the initially defined scenario. It was based on business needs, and these needs do not disappear - but boy do they come up during these evaluations. Keep them tracked and manage them accordingly. Not every input needs to be tested, but it needs to be ticked off and to be addressed.


5 - Whatever solution you look at: form follows function follows usability follows security


6 - Squeeze whatever you can learn out of these scenarios. You never know when you need it again.


7 - Play fair. Vendors invest a lot of their time in these PoCs. So, if they do not fit your needs, tell them. Give them the chance to bring up another solution or to withdraw. But again: never go back from your agenda. Your business defined the needs.

it_user897210 (IBM Spectrum Protect Expert - ISMS owner at a non-tech company with 10,001+ employees)
User

Hello

Resilience is the keyword for any Backup & Restore software, whatever it's named. I see 3 major mistakes when people drive the analysis and testing of their Backup & Restore solution.

1 – Definition: Before even starting, RTO/RPO have to be clearly defined; this will be very helpful in determining your B&R tools and architecture.

- RTO/RPO: do I need all my data backed up or only the critical data, and to what timeframe?

- Size: how much active data are we speaking about? Should I keep all data indefinitely, or should I put strong data policy management in place (in my environment, the standard is 30 days, and anything above has to be justified by a business or legal requirement)?

As well, if I put deduplication in place, what is the impact?

- Availability: should I make my B&R infrastructure highly available? What outage can I consider/live with for my B&R software?

2 – Environment: What is the scope of your Backup & Restore software? Will you be able to use it as one tool, or many spread over your coverage (datacenter, workstations…), and at what scale?

3 – Testing: what is the purpose of performing a 100GB test if I'm covering 100TB? The test must be driven in a "prod-like" situation.

- Sizing of all daily backups

- Restore in production while the backup is running in parallel

o This will confirm my RTO/RPO or will show what gap I have to address.

- My B&R "admin" tasks

o Do I use deduplication? If yes, what is the capability of my hardware to dedup daily backups (10, 20, 100 TB/24h)?

o Do I use Disaster Recovery capability?

o Recycling of tapes (considering a smart environment)

To protect my data in the best manner, the time all those tasks take on my B&R resources must be known.

- Restore of my B&R software itself.

This is a quick heads-up on the hottest topics too often forgotten when driving a study on B&R software.

it_user871440 (Senior Solutions Consultant with 51-200 employees)
User

From my experience, the following aspects should be considered to avoid potential problems:
1. Choose only “known vendors” in the market for the POC, especially if the data to be secured is valuable
2. Check for a single product which can fulfill “all data management” requirements out of the box
3. Conduct a “real life” POC which includes all required scenarios (backup & restore)
4. Don’t forget about the performance of the Backup & Recovery solution, especially the restore speed (RTO)
5. Ask data management specialists early for advice, e.g., rules of thumb / best practices
6. Before deciding on a solution, check the total costs over a longer period (renewal, growth, ...)
7. Avoid vendor lock-in solutions (flexible components, e.g., server, storage, ...)

Raul Garcia
Real User

I agree with the previous recommendations, to which I would add the following 4 aspects:

1) Involve all the providers of the complete solution: A proof of concept does not only involve the vendor of the application. For example, you should also consider communications providers (carriers) if your test backs up servers in geographically distant locations. Even in the same site, it may involve other solutions such as virtualization, and even the database of the main application you want to back up (considering the size of the logs or things of that nature).

2) Prepare your acceptance level and grade the test: Testing and evaluating as you go is very good, but what exactly are we grading, and how do we know that after the test we are comparing apples with apples? For this I recommend that, prior to the proof of concept, you have the script of the steps you want to test ready, along with the range over which you will grade each result, the findings and assumptions, and the target score against which you will measure how far the solutions you tried fall short - even when you only try one. Imagine you are grading 3 criteria, with 80, 95, and 90 as the acceptable scores, and the solution scored 40, 55, and 50. Would you say the proof of concept was successful? In my opinion, the test delivered half of the functionality expected for each requirement, so it could be considered failed (we must find another solution).

3) Time required for the test (your time, not the provider's): Another aspect to consider is how much time the provider is willing to invest with you in using their solution. Sometimes the best tests are those that simulate a natural period of your operation. It may be 24 hours, a full week (7 days), or even a full month with a change from one period to another.
In this case you should negotiate with the supplier the time you require; if they will not invest it, adjust to what the provider proposes, but that becomes part of your grading of the possible results (your requirement vs. the time required to replicate a scenario of the actual operation).

4) Test the recovery along with the backup in the same POC: Lastly, since we are talking about replicating information (an application, a virtual server, etc.), always include in the scope of the proof of concept both backing up the information and returning to normal operation from that backup. Otherwise your proof of concept would be incomplete.

JohannFLEURY
Real User

Hello

With all that has already been said, be sure to test all the different technologies this PoC will be used for, and do not neglect end-user testing. End users are the final step of a good PoC.

Do not rely on the vendor's performance story. It can be far from the reality of your own environment, so have a baseline and a set of performance tests ready to be sure the solution fits your needs, and know its limits. As an example, do not buy a Dell EMC if you need IOs - it is not made for that - but if you need an archiving solution then it becomes a good candidate.

And last, the PoC is there so... test, test, test... and redo: test, test, test, so your teams will be comfortable with it.

Rony_Sklar
Why should businesses prioritize having a disaster recovery solution?  Do you have some real life examples of cases where disaster recovery was not in place, and what the ramifications were to the business? And vice-versa - what are some examples of cases where disaster recovery proved vital and mitigated loss?
JohannFLEURY
Real User

I’ve been working for a big agro-company with multiple sites for our different kinds of production. We put a BCP in place: first we identified, in terms of revenue, which sites were critical and which weren't, and constructed our BCP accordingly. The BCP consisted of defining all the actors and services mandatory to ensure production and delivery of our products (supply chain, ordering, delivery, third parties, and of course the associated IT). We found out that, before putting the BCP in place, some of our factories would have been totally unproductive for more than 3 weeks in case of a major incident.
After identifying all the needed components, we went for some of them from a 3-week outage to 4 hours.

We also had very good commitment from our third-party suppliers during the analysis - and it helped some of them understand their own gaps in the same situation.

So in the end it was a win-win deal, and today we have clear visibility of all the chains needed to keep our business running as much as possible.

And of course, I could be hired to help put such a process in place (no matter whether it is IBM SP or any other tool; that is just a small part of the BCP journey).

Shrijendra Shakya
Real User

I am in the business of disaster recovery and have been providing DRaaS with equipment from one of the renowned vendors. I have come across quite a few cases here in Nepal where ransomware attacks happened and all the data of some reputed corporate houses was encrypted. It had a lot of business impact.

Although they had some traditional backup mechanisms, the backup system copied all the ransomware files too, so we devised new recovery mechanisms and helped the client restore some of their files.

Additionally, we implemented and designed a new system; the client is content and everything is fine.

Zied Chelbi
Real User

A disaster recovery plan (DRP) is a set of "actions to be taken before, during, and after a disaster", and is made to help protect businesses in such an event. Organizations can't always avoid disasters, but having a plan helps to minimize the potential damage and get operations back up and running quickly.

Here is an example of a real case:


>>> A DDoS attack:


In this disaster recovery scenario, imagine that a group of malicious hackers executes a Distributed-Denial-of-Service (DDoS) attack against your company. The DDoS attack focuses on overwhelming your network with illegitimate requests so that legitimate data cannot get through.


As a result, your business can no longer connect to databases that it accesses via the network – which, in today’s age of cloud-native everything, means most databases. It’s rare nowadays to have a database that does not require a working network connection to do its job.


In this scenario, disaster recovery means being able to restore data availability even as the DDoS attack is underway. (Ending the DDoS attack would be helpful, too, but anti-DDoS strategies are beyond the scope of this article; moreover, the reality is that your ability to stop DDoS attacks once they are in progress is often limited.) Having backup copies of your data would be critical in this situation. That’s obvious.


What may be less obvious, however, is the importance of having a plan in place for making the backup data available by bringing new servers online to host it. You could do this by simply keeping backup data servers running all the time, ready to switch into production mode at a moment’s notice. But that would be costly, because it would mean keeping backup servers running at full capacity all the time.


A more efficient approach would be to keep backup data server images on hand, then spin up new virtual servers in the cloud based on those images when you need them. This process would not be instantaneous, but it should not take more than a few minutes, provided that you have the images and data already in place and ready to spin up.
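
As an illustration of that spin-up step, here is a minimal sketch using AWS and boto3. The provider, region, image ID, and instance type are all assumptions for the example; the comment above names no specific platform.

```python
# Hypothetical example: launching a standby data server from a prepared image
# during an incident, instead of keeping it running at full capacity all along.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical image of the backup data server
    InstanceType="m5.large",          # assumed size
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# Wait until the instance is running before repointing applications at it.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
print(f"standby data server {instance_id} is up")
```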



Having no disaster recovery plan is equal to unlimited downtime.


In the Disaster Recovery Preparedness Benchmark Survey, the cost of outages added up to more than $50,000 in losses, on average, with bigger companies citing losses up to $5 million.

It’s these kinds of eye-popping figures that bring companies down without any hope of recovery. It doesn’t matter what size your company is; downtime is clearly the enemy you want to avoid.

RamaswamyK
Real User

I fully agree that at the present stage of cyber and email threats, one always has to be prepared in terms of disaster recovery. This requires an up-to-date backup and recovery system that addresses the needs and requirements prevalent at your sites.

Navin Gadhvi
Real User

A disaster recovery plan describes scenarios for resuming work quickly and reducing interruptions in the aftermath of a disaster. It is an important part of the business continuity plan, and it allows for sufficient IT recovery and the prevention of data loss.

Tim Lenz
Real User

The healthcare industry seems to be the new target for hackers and ransomware.

With our DR plan in place, we were able to recover 80% of the files and 100% of the database data (by having a plan that had been based on best practices). Most of the lost files were due to users not following guidelines - storing files on their personal desktops and laptops instead of on their network drives. The data and files were back up within 24 hours, with the biggest headache being the corrupted files on the single point of failure - domain controllers across several buildings.

Lessons learned: the estimated cost in lost revenue made it easy to show management how much it cost to be down for the four days (by not updating to the latest software and application versions, and by not requiring DR on a separate subnet with different system password protection).

Luckily, it happened on a Thursday and we were back up on Monday morning.

It is not a matter of if we will ever need the DR... it is whether you can afford NOT to have it WHEN you need the DR!

David Thompson
What is the best backup for super-duper (100Gbps) fast read and write with hardware encryption?
Vuong Doan
Real User

The backup speed depends on:
- number of concurrent I/O streams
- data type
- network
- read/write speed of backup repository
- data encryption enable or not
- terabytes of front end data to be backed up

The question is not clear enough to size a highly scalable, high-throughput environment. To achieve 100Gbps throughput, you have to pin down the information listed above.
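
To show why that list matters, here is a minimal sizing sketch; every input is an illustrative assumption, not a vendor figure.

```python
# Hypothetical back-of-the-envelope sizing from the factors listed above.

front_end_tb    = 500    # front-end terabytes to protect (assumed)
backup_window_h = 8      # hours available for the backup (assumed)
dedup_ratio     = 3.0    # effective data reduction (assumed)

bytes_to_move = front_end_tb * 1e12 / dedup_ratio          # after reduction
required_gbps = bytes_to_move * 8 / (backup_window_h * 3600) / 1e9

print(f"required aggregate throughput: {required_gbps:.1f} Gbps")   # ~46.3 Gbps
print(f"streams needed at 5 Gbps each: {required_gbps / 5:.0f}")    # ~9 concurrent
```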

For a very large environment, I strongly recommend using either NetBackup or CommVault.

John Askew
Real User

I would suggest Veeam with the underlying storage being provided by a Pure FlashArray//C.


The FlashArray will provide the throughput you are after (it's all-flash), the encryption (FIPS 140-2 certified, NIST compliant), and data reduction (Veeam's isn't that great), which should bring price parity with spinning disk. It also provides immutability, which you may need, and is a certified solution with Veeam.

The other storage platform worth looking at is VAST Storage, which has roughly the same feature set as the Pure arrays, but uses a scale-out, disaggregated architecture and wins hands down in the throughput race against the Pures.

reviewer1053252 (Technical Presales Consultant/ Engineer at a wholesaler/distributor with 10,001+ employees)
Real User

There is no such thing as the best "anything", let alone the best backup. Plenty of enterprise solutions on the market can handle the load you mentioned, and it all comes down to your needs.

Hardware encryption might be more secure (tougher to hack, but still hackable) than software encryption; however, it opens the door to vendor lock-in, and in certain situations that can affect the recoverability of your data.

My advice is to focus on finding a backup solution that can help you guarantee the recoverability of your data in the event of a disaster, rather than on the best backup at 100Gbps with hardware encryption.

At the end of the day, what's the point of a backup solution that can do all you mentioned but fails you in the event of a disaster?

If you can give me more details about the environment, such as what kinds of platforms and apps are being utilized, I may be able to assist. Other than that, my answer is that there is no such thing as the best backup for 100Gbps with hardware encryption.

We live in a world where everything is software-defined and it's safe to say that that's the way everyone should go.

reviewer1183848 (User at a media company with 51-200 employees)
Real User

We use the smallest Cohesity cluster possible, with three nodes, and have 60Gbps of available bandwidth. I assume that with more nodes you could get to 100Gbps. They have flash and an unbelievable filesystem. Do you have a use case for 12,500 megabytes per second of backup throughput? I'm having trouble envisioning an admin in charge of a source capable of that coming to a forum like this with your exact question!

Mike Zukerman
Real User

I don't think backup appliances with 100Gbps interfaces exist.

This speed is not needed for backups, as the network is hardly ever the bottleneck.

Saravanan Jaganathan
Real User

Nowadays Cisco and other vendors are coming out with 25-Gig and 100-Gig ports. The physical setup of your physical or ESXi hosts (including backup servers) should be planned so they can connect to these switches and get a 100-Gig pipe. Data Domain, HPE StoreOnce, and Quantum DXi offer hardware encryption. Identify the right hardware model that supports the right I/O for your disk backups; this will eliminate your bottleneck once you have the 100-Gig network. On the software side you can go with NetBackup, Veeam, or Commvault; each has its own option to reduce the data flow through client-side deduplication.

Nick Cerrone
User

It seems an object storage with inline dedupe could fit, but it would need to be sized for the performance. Backup targets are typically tuned for ingest. Is the data dedupable or compressible? How much data are you looking to back up, and in how much time? How much data do you need to restore, and in how much time?

MuathAlhwetat
Real User

Your question is not clear enough to calculate the best scenario, because it depends on many factors, such as:
- Backup of what: a physical or a virtualized environment?
- Data type.
- Network speed on all devices.
- Storage type: flash or tape.
- What is the read/write speed of your disks/tape, AND the bus/controller speed that the disk is attached to?
- How many files, and how much data, are you backing up?
- Is your backup application capable of running multiple jobs and sending multiple streams of data simultaneously?

Some potential points for improvement might include:
- Upgrading switches and Ethernet adapters to Gigabit Ethernet or greater.
- Investing in higher-performing disk arrays or subsystems to improve read and write speeds.
- Investing in LTO-8 tape drives, and considering a library if you are not already using one, so that you can leverage multiplexed (multistream) backups to tape.


Backup and Recovery Software Articles

Evgeny Belenky
IT Central Station
Nov 19 2021

Hi community members,

Spotlight #2 is our fresh bi-weekly community digest for you. It covers cybersecurity, IT, and DevOps topics. Check it out and comment below with your feedback!

Trending: What are the pros and cons of internal SOC vs SOC-as-a-Service?

Share your experience with other peers by answering the digest's questions on IT, security, and DevOps, and check out the articles where community members share their knowledge. You're also welcome to check our previous community digest.

Community Team,
IT Central Station (soon to be PeerSpot)

    Matthew Shoffner
    IT Central Station

    Discussions about backup tend to dive straight into the technical aspects of creating safe copies of vital data. They may miss what is arguably a more important issue, which is the purpose of the backup process itself. When looking at explanations of the different types of backup available to IT managers, it’s worth keeping in mind that backup and restore processes ideally serve business objectives - ensuring business continuity, recovering from disasters like cyberattacks, and enabling operational technology to remain in service with as little interruption as possible.

    In reviewing the offerings of the best backup software, we’ve put together options that keep these higher-level objectives in mind, help you create a sound backup strategy, and leverage all the advantages of data backup and recovery. There are four main approaches to backup: full backup, incremental backup, differential backup, and mirror backup. There is no best type of backup; there is only the method that works best for a particular organization’s needs. That said, the fastest backup and restore process is generally the best, all things considered.

    Jump to our Comparison Table for Types of Backup.


    Full backups

    What is full data backup?

    A full backup involves making a copy of an entire digital asset, such as a database or data set. It’s the most basic and rudimentary backup type. Typically, whoever is in charge of backups will conduct a full backup of a file on a periodic basis.

    However, its name notwithstanding, in the case of a system backup, a full backup typically does not replicate every single piece of the system. That is the job of the “Day Zero” backup, which occurs right after a system has been successfully installed and configured. A “Day Zero” backup makes a 100% complete copy of the system for safekeeping, including system files and libraries. These files don’t change very often, so it’s not worth wasting time and resources backing them up as regularly as a full backup process.

    Advantage

    There’s minimal time needed to restore the data. Since everything is backed up at one time and it’s a copy in its entirety, everything can be restored at one time as well.

    Disadvantage

    A full backup can take up a lot of storage space and a lot of time to complete. It stores and recovers the full data set, which can be quite large, as opposed to storing and recovering only the portion of the data set that has changed.


    Incremental backups

    What is an incremental backup?

    An incremental backup copies only the data that has changed since the last backup, whether that was a full or an incremental backup. An incremental backup process is particularly useful when dealing with transactional databases, which are constantly changing. When it’s not practical to make a full backup because of resource constraints (i.e., time or storage) or the pace at which data assets change, an incremental approach is more pragmatic.

    So, imagine that there is a full backup performed on Sundays. (Weekends are a good time to do an operation with such high network and system load.) On Monday, an incremental backup would only replicate anything that changed in the period between Sunday and Monday. If the incremental backup schedule is daily, then the Tuesday backup would copy any data that changed since Monday, and so forth.
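
    To make the selection rule concrete, here is a minimal sketch of incremental file selection in Python. It assumes a simple modification-time check; real products track changes with catalogs, journals, or block-level maps rather than bare file timestamps.

```python
# Minimal sketch: copy only files modified since the previous backup run.
import os
import shutil
import time

def incremental_backup(source_dir: str, backup_dir: str, last_backup_ts: float) -> float:
    """Copy files changed since last_backup_ts; return the new reference time."""
    started = time.time()
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > last_backup_ts:        # changed since last run?
                dst = os.path.join(backup_dir, os.path.relpath(src, source_dir))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)                        # copy data + metadata
    return started  # the next run copies anything modified after this moment

# Sunday: full backup (a reference time of 0.0 copies everything).
# Monday through Saturday: pass the timestamp returned by the previous run.
```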

    Advantage

    Very little storage space is required when executing an incremental backup process since only data that has changed will need to be backed up. It’s also a very fast process given the limited amount of data in comparison to a full backup.

    Disadvantage

    Restoration is the slowest in comparison. All incremental changes, in addition to the full backup, need to be reconciled to accurately restore the data, which takes more time than a full or differential restore. Additionally, if one incremental backup record in the chain is “broken”, it can jeopardize the later incremental backups.


    Differential backups

    What is a differential backup?

    A differential backup is similar to an incremental backup, with one important difference. Every time a differential process is executed, it backs up all data that has been modified or generated since the last full backup and ignores previous differential backup instances. An incremental backup process only backs up data changes since the last incremental backup was run.

    For example, if the full backup was on Sunday, and there is a differential backup done each subsequent day of the week, then Monday’s differential backup will copy data that’s changed since Sunday. The differential backup on Tuesday will copy all the data that’s changed since Sunday, as will the differential backups on Wednesday, Thursday, Friday and Saturday.
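
    For contrast, here is the differential variant of the sketch above (reusing its imports): the reference time is always the last full backup rather than the previous run, so each differential grows until the next full backup resets it.

```python
# Minimal sketch: a differential backup always measures against the last FULL backup.
def differential_backup(source_dir: str, backup_dir: str, last_full_ts: float) -> None:
    """Copy everything modified since the last full backup (not the last run)."""
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > last_full_ts:          # note: full, not previous
                dst = os.path.join(backup_dir, os.path.relpath(src, source_dir))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)
```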

    Advantage

    The recovery process for a differential approach is much faster than for an incremental approach, since only the last full backup plus a single differential backup need to be reconciled, versus the multiple increments in the incremental approach.

    Disadvantage

    Differential backups require more storage space and more time to back up than an incremental approach.


    Mirror backups

    A mirror backup is a backup that makes an exact copy of the source data. The advantage of this approach is that it does not store any old or obsolete data. This advantage can cause a problem, though, if files get deleted by accident. Then, they are permanently lost. The mirror backup approach is often used in highly critical systems, such as financial and stock trading platforms where a nearly instant restore of extremely recent data is needed to meet recovery time objectives (RTOs) and recovery point objectives (RPOs).
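
    As a minimal sketch of that behavior, here is file-level mirroring in Python, assuming the simplest possible approach. The deletion pass is what makes the copy exact, and it is also exactly why files deleted at the source by accident disappear from the mirror as well.

```python
# Minimal sketch: make mirror_dir an exact copy of source_dir, deletions included.
import os
import shutil

def mirror_backup(source_dir: str, mirror_dir: str) -> None:
    # Pass 1: copy new and changed files from the source.
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(mirror_dir, os.path.relpath(src, source_dir))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
                shutil.copy2(src, dst)
    # Pass 2: remove mirror files that no longer exist at the source.
    # This keeps the mirror exact - and permanently drops accidental deletions.
    for root, _dirs, files in os.walk(mirror_dir):
        for name in files:
            dst = os.path.join(root, name)
            src = os.path.join(source_dir, os.path.relpath(dst, mirror_dir))
            if not os.path.exists(src):
                os.remove(dst)
```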


    Backup Types Comparison Table

    Backup type         | Definition                                                                       | Benefits                                                                            | Drawbacks
    --------------------|----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|----------
    Full backup         | A complete copy of the source data.                                              | Comprehensive and easy to restore.                                                  | Takes a lot of time and system resources - cannot be done frequently.
    Incremental backup  | Copies data that has changed since the last backup, either full or incremental.  | Efficient in its use of network and system resources. Can be performed frequently.  | Can be complicated to restore.
    Differential backup | Copies data that has been changed since the last full backup.                    | Relatively fast and easy to restore.                                                | Takes longer and uses more system resources than an incremental backup.
    Mirror backup       | Creates an exact copy of source data.                                            | Enables nearly instant, complete restoration of lost data.                          | Can result in accidental permanent loss of data.

    Hugh
    Freelance Writer – B2B Technology Marketing
    Journal of Cyber Policy

    On Saturday, May 8, 2021, major media outlets reported that Colonial Pipeline, whose fuel pipeline network supplies gasoline, jet fuel, and other petroleum necessities to over 50 million Americans, had suffered a ransomware attack and shut down its pipeline as a precaution. The disruption in supply sent gasoline prices rising over the weekend, with financial markets on edge in anticipation of economic impacts in the coming weeks.

    Colonial, which is one of the largest pipeline operators in the US, has hired Mandiant, a division of FireEye, to investigate the attack. The FBI and the Cybersecurity and Infrastructure Security Agency (CISA) are also investigating the incident to determine the source of the ransom attack. Their goal is to help Colonial understand the nature of the malware that has affected its operations. According to the company, the attack only affected its business systems, not the pipeline management technology itself. However, they shut down the pipeline as a precaution.

    The source of the attack has not been confirmed, but according to government sources, an Eastern European cybercrime gang known as DarkSide is a leading suspect. At this point, it is unclear who is behind DarkSide. In some cases, such criminal gangs operate either with the consent of nation-state actors or even under their direct instruction. By having a criminal group perpetrate an attack on another country, nation-state actors preserve deniability. The vulnerabilities that were exploited by the attackers are unknown at this time.

    Detecting and Preventing Ransomware Attacks

    IT Central Station members would not be surprised by the Colonial attack. Many of them have spent their careers detecting and preventing such events, using anti malware solutions as well as tools for endpoint protection and more. Hasnae A., presales engineer at DataProtect, uses Cisco Umbrella for ransomware protection. As a system integrator, they implemented Umbrella to protect the network of a client in Morocco against ransomware and phishing attacks. Hasnae characterized the solution as “easy to use” and valued its ability to integrate with eBay.

    Network security solutions are just one of many countermeasures that IT and security professionals are deploying to combat the ransomware threat. Email defense, endpoint protection, and secure browsers can also help mitigate ransomware risks. Backup and disaster recovery solutions fit into the ransomware defense mix as well.

    Reducing Ransomware by Protecting Email

    Ransomware malware needs to enter a target’s network in order to encrypt data and hold it for ransom. Email, especially phishing attacks, is one of the most potent vectors of attack. For this reason, security managers often try to stop ransomware as it enters the organization through email. An IT manager at a mid-sized healthcare company, for instance, uses Forcepoint for email filtering. He explained that “the spam filter is very effective. It does a good job of detecting ransomware links in email and then blocking them.”

    Protecting End Users by Securing their Browsers

    Ransomware attackers may deliver their malicious payload through infected websites. An end user might click on a link and accidentally download ransomware onto their device in the process. To reduce this risk, some security teams deploy secure browsers, such as Comodo, on end user devices. Principal enterprise architect Donald B. takes this approach at Aurenav Sweden AB, a business services company. As he put it, “If you open up an application or a web browser, it [Comodo] runs within a container (sandbox). So if there's some malicious code, it will be contained within the sandbox.”

    He further noted that “ransomware prevention and zero-day exploits were a driver for adopting Comodo. From our research lab results working with live ransomware, Comodo has been very effective in preventing infection. We've done a lot of tests with numerous types of live malware, and it works really well.”

    Protecting Endpoints to Stop Ransomware

    The endpoint is a logical place to fight ransomware. After all, if the security team can kill ransomware on the end user’s device, they’ve gone a long way toward winning the battle against the attacker. IT Central Station members discussed their experiences with a variety of endpoint protection solutions that help them with ransomware. Among them is a technical manager at a small tech services company who uses Malwarebytes to prevent ransomware and malware. He also deploys the solution’s endpoint detection and response (EDR) functionality. He related, “This means if the data is attacked, I'll be able to recover my data - that is, roll back the data and go to the pre-attack state.”

    “The most valuable feature is its ability to detect and eradicate ransomware using non-signature-based methods. It is not a traditional EDR,” said the owner of a small software company. He added, “We think of this product as a fishing net that fits into the computer and has all of the capabilities and understanding of what ransomware and malware look like. It reacts to the look of ransomware, as opposed to trying to detect it by using a signature.”

    For Imad T., group CIO at a large construction company, the Carbon Black solution “ensures the probability that any ransomware will be stopped before spreading.” It is an endpoint line of defense against malware and ransomware with scheduled network scans. A senior security consultant for Checkpoint Technologies at a small tech services firm had a similar use case. He remarked, “We had a ransomware attack and the SandBlast agent automatically picked up the ransomware. It automatically deleted the ransomware and restored the encrypted files.”

    Mitigating the Impact of Ransomware with Backup and DR

    As the Colonial attack reveals, even strong defenses can be breached. Ransomware is able to get through and wreak havoc on important systems. Anticipating this potential, some organizations prepare to respond to an attack by restoring lost data through backup. This way, they can ignore the ransom demand. Anti-ransomware processes should be part of a thorough Disaster Recovery (DR) plan. Such an approach has been taken by Sastra Network Solution Inc. Pvt. Ltd. As their CTO, Shrijendra S., noted, they use Quorum OnQ for backup, cloud service, and disaster recovery as a service [DRaaS]. In particular, they have found that Quorum OnQ has a good ransomware protection feature. Deven S., director at a small tech services company, similarly relies on Acronis as a file- and data-backup solution. In his view, Acronis is “easy to use, performs well, and provides built-in ransomware protection.” He described this as “a great advantage.”

    Conclusion

    The attack on Colonial Pipeline is getting attention because it is a piece of critical infrastructure that can affect the general public. However, as security experts know, it is just one of thousands of such attacks that have occurred in the US in the last year. Many more are likely coming. Security teams must be eternally vigilant against increasingly brazen and sophisticated attackers. As the IT Central Station reviews show, many validated solutions are available. The challenge is to deploy them effectively in order to detect and prevent ransomware attacks over the long term.

    Hugh
    Freelance Writer – B2B Technology Marketing
    Journal of Cyber Policy

    OVHcloud, Europe’s largest cloud services provider, suffered a devastating fire on March 10 at its facility in Strasbourg, in Eastern France. The fire destroyed one of four data centers at the site. As Reuters reported, the fire disrupted millions of websites, taking government agency portals offline, along with banks, news sites, shops, and many other sites using the .fr web space. The company advised its clients, such as the French government and Deribit cryptocurrency exchange, to activate Disaster Recovery (DR) plans.

    Yes, even the cloud can catch on fire. When it does, you want to be confident that your digital assets will be safe - and that your business can continue to operate. Business continuity and DR plans and solutions exist exactly for moments like this. If such an incident does affect your digital business, it is definitely not the time to find out that your DR plan was deficient.

    IT Central Station members discuss these challenges in their reviews of backup and recovery and disaster recovery solutions. For example, Pieter S., Information Technology Manager at PAV Telecoms, explained that Acronis Backup “has saved us from certain financial ruin after important servers containing financial data were stolen during a robbery.” He added that “we were able to restore backups to new hardware and carry on with business as usual.”

    A System Administrator at Abdullah Al-Othaim Markets put it this way: “We use Veritas for incremental backups for all the servers. It is an automated process. Disaster can happen to any server and the disk image can be destroyed very easily. It's also easy to fully email the backup image and restore the server.”

    Albert S., Co-Owner at Angels Dtp, similarly noted that “in difficult situations, where something was accidentally erased or there was another kind of error, I can return to the latest backup and recover. The most valuable feature is the fact that [Acronis] backs up my systems transparently in the background, and I'm not even conscious of it. I just get a notice that it's successfully backed up.”

    Other backup managers rely on their solutions to maintain a state of preparedness. Muzammil M., Sr. IT Operations Engineer at AlGosaibi Group, for instance, uses Azure Backup as part of his company's disaster recovery solution. He said, “If there is a problem in the entire building then we can restore our data from over the cloud. Azure Backup is very good for our clients who need to back up data securely and reliably.”

    The time it takes to restore data is critical, according to Syed Q., Technical Services Manager at a small tech services company. “With Veritas, the SLAs [Service Level Agreements] are pretty predictable and you can achieve complete backups easily.” For Syed, what matters is duplication and compression and the safe sorting of information, along with a quick recovery time. It seems as if he’s learned this the hard way, as he remarked that “sometimes with other software, a complete backup doesn't happen.”

    Reviews of backup solutions on IT Central Station also offer suggested best practices for DR and business continuity. A Technical Presales Consultant/Engineer at a large wholesaler recommended that “a person put on a Veeam backup service so that in a disaster recovery scenario you set what has to come back up first, because that is going to be the critical information that has to go back up as quickly as possible. You can put anything on critical servers, but we recommend that you use it for critical data that is going to be restored within a four-hour timeframe.”

    A Group Product Specialist at a distributor with more than 200 employees revealed that Veeam Backup & Replication has “automated backup to the point of almost no involvement needed and our backups can be checked, tested, and verified at any time in accordance with our policies.”

    An IT Network Analyst at a manufacturing company made a comment that sums up the essence of DR effectiveness, saying “[Veritas] saves us a lot of time. We're backing up all of our servers with it. The backup potential of the solution is very good. It's protected us in the past very well and allowed us to get up and running after an attack with minimal loss.”

    We can all hope that no fire will come to our clouds. But, if it does, we’d best be ready. Events like the OVHcloud disaster are rare, but they do occur. DR plans and business continuity solutions have to be set up and tested so business can go on, without the loss of valuable data.

    JC Alexandres: The never-old IT adage ... backup, backup, backup.
    Rory Shelton: Be prepared; an organization's existence could be the cost.
    Vladan Kojanic
    Project Manager - Business Consultant at Comtrade System Integration

    When the pandemic hit, we were forced to quickly adapt and find answers to questions we’d never asked ourselves before: how can we keep in touch with our colleagues when we’re not in the office? And how can we make sure we are still efficient while working from home?

    It quickly became apparent that one seemingly small issue could prove catastrophic: our digital system is designed so that all business documents are located on our servers and our work computers. But this also meant that many colleagues wouldn’t be able to access these documents from home.

    The easy solution would have been to give everyone a VPN connection, but with more than 300 people in my organization, that would have been too expensive. It would also have been too slow, since we would first have needed to explain how to use the VPN and how to connect to our office network.

    In the end, we found a solution that not only solved our problems but actually improved our efficiency by dramatically reducing the number of emails we were sending back and forth.

    A quick response to the crisis

    When it became apparent we would soon be working from home, we did a quick internal analysis to identify the programs that were used the most. From this, we concluded that the most important thing was for our colleagues to have access to their data, which enabled us to repurpose a backup solution we have had in place since 2016.

    In addition to the standard backup functions, the advantage of Commvault Backup & Recovery is that the backup is managed from a single location. Whether you are backing up computers, laptops, databases, or some business applications, you control and adjust the whole process from one place. And that means my colleagues could access and recover documents, folders, etc. without having to ask an admin.

    In my team, we had already used this backup solution in some special situations, so it was relatively easy for us to quickly roll it out to all employees and adapt it to help them work from home.

    Since my colleagues were already familiar with the backup system, this actually proved to be a welcome opportunity to modernise our system and to digitise our processes. We had momentum because everyone knew that these changes were necessary, and, most importantly, this solution reassured them that working from home wouldn’t be an insurmountable problem.

    A new, short manual was compiled to help employees adapt to working from home, with an emphasis on how they can use secure HTTPS connections to access their data and documents located on the servers.

    Now, when they need to share a document with colleagues, they don’t have to send an email with a file, which further burdens the email system, but can simply share a link to the document itself through the backup application. Doing so even enables them to collaborate better, allowing them to edit a document or simply access it without first having to send an email. Just 15 days after implementing this solution, the number of internal emails sent to share documents between colleagues was reduced by almost 70%.

    In a later analysis, we saw that compared to March 2019, the total number of emails sent to and received from external users in March 2020 increased by an incredible 240%. In other words, there was a huge influx of emails at the beginning of the pandemic as citizens and other external stakeholders sought help and guidance from us, the public servants. By using functionality our backup system already had, we reduced the number of internal emails significantly, making the system more stable and reducing friction for our colleagues.

    This system can be applied in any crisis situation; the only requirement is an Internet connection.

    Innovation

    Initially, we were concerned about whether our colleagues would use this new system and whether they would realise that it could do more than serve as the backup system we had used it as up until this point. If the system was rejected by employees, it wouldn’t matter that it could technically do the job.

    In addition, as is the case every time a new IT solution is introduced, we had to make sure the system was secure enough, adding another layer of complexity.

    What we’ve learned from the pandemic is that the various systems used in public institutions can often be used in other, creative ways than they were originally intended.

    And this is the point I will leave you with: this was a crisis where time was in short supply, so we tried to make the most of the tools and software we already had, coming up with a solution by combining several systems.

    Rather than coming up with something completely new, sometimes it is enough to simply look a little deeper at what we already have and make some small changes, adapting them to a new reality or crisis. We were all aware that we had neither the time nor the money to hire external firms for new software solutions, and so we had to think outside the box. And in this way, the pandemic pushed us to innovate our systems for the better. Maybe it can do the same for you?

    Chris Childerhose
    Lead Infrastructure Architect at ThinkON

    Every virtualization and system administrator deals with recovering servers, files, and more, and having a backup solution to help with recovery eases that burden. But how do you know which one is right for you? How would you go about choosing the solution that will help you in your daily tasks?

    When choosing a backup solution, there are many things to consider based on your physical/virtual environment: What hypervisor are you running? What storage is being used? The best way to choose the right solution for the job is through evaluation; the more you evaluate, the easier it will be to pick the right one for you. During an evaluation process, you should consider things such as:

    • Compatibility with your chosen hypervisor
    • Ease of installation and setup
    • Program ease of use and navigation
    • Backup scheduling
    • Reporting – is the reporting sufficient?
    • Popularity within the industry
    • Support for physical and virtual servers
    • And so on…

    There are many criteria you can use in the evaluation stage, and the above examples are just a few. Composing a list before you start looking at software is the recommended approach; this way, you are looking at software that fits most of your criteria prior to the evaluation/PoC stage.
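    As a rough illustration of this criteria-list approach, the sketch below scores two hypothetical candidates against weighted criteria drawn from the list above. The criteria weights and candidate scores are invented for the example, not measurements of any real product; you would fill them in from your own side-by-side testing.

```python
# Hypothetical weighted-scoring sketch for comparing backup solutions.
# All criteria weights and candidate scores are illustrative placeholders.

criteria_weights = {
    "hypervisor_compatibility": 5,
    "ease_of_installation": 3,
    "ease_of_use": 4,
    "backup_scheduling": 4,
    "reporting": 3,
    "physical_and_virtual_support": 5,
}

# Scores from your own PoC testing, 1 (poor) to 5 (excellent).
candidates = {
    "Solution A": {"hypervisor_compatibility": 5, "ease_of_installation": 4,
                   "ease_of_use": 4, "backup_scheduling": 5,
                   "reporting": 3, "physical_and_virtual_support": 4},
    "Solution B": {"hypervisor_compatibility": 4, "ease_of_installation": 5,
                   "ease_of_use": 3, "backup_scheduling": 4,
                   "reporting": 5, "physical_and_virtual_support": 5},
}

def weighted_score(scores: dict) -> int:
    """Sum of score x weight across all criteria."""
    return sum(criteria_weights[name] * s for name, s in scores.items())

# Rank candidates, highest weighted score first.
for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores)}")
```

    A sheet like this doesn’t make the decision for you, but it keeps the comparison anchored to the criteria you committed to before seeing any demos.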

    When you have completed your criteria list and selected vendors for evaluation, be sure to install all of them. Installing all of the products allows you to do a side-by-side comparison of the features you are looking for, like job setup and ease of use. Seeing the products and how they work side by side gives you the best evaluation experience.

    During the comparison stage, look at things like the ability to conduct SAN-based backups versus LAN-based backups – how does each solution compare? Can the solution connect into your SAN fabric, allowing faster backups? If you cannot use SAN backups, how will that affect the overall performance of the environment? After backups complete, is there reporting showing success/failure, length of time, amount of data, etc.? When working with the solution, is navigation for job creation/modification simple? Is the product cumbersome or frustrating when creating backups?

    There are many things to be aware of when comparing products, and answering these questions as you go through the products is a great way to evaluate them.

    Remember that there are many backup solutions out there for evaluation, and choosing the right one can be a difficult decision. Evaluating the ones that appeal most to your organization is the best way to go, and using a methodology for testing them is even better. In the end, you will ensure your success by choosing the right solution for the job! Evaluate, evaluate, evaluate.

    Federico Lucchini: Great article. Also remember how important data is in your company; I would…
    Matthew Shoffner
    IT Central Station

    Effective data backup and recovery doesn’t just happen; it takes the best backup software and a sound plan. Some companies take an ad hoc approach to this critical area of IT operations, but that is not a best practice. It’s wise to develop a thorough data backup strategy and plan. After that, the plan should be tested regularly to ensure that it works. The advantages of data backup and recovery are too great not to have a sound strategy and plan.

    Data backup strategies and plans are evolving, however. Traditional strategies, like tape-based backup to an offsite secondary location, have been transformed over the last couple of decades by cloud and mobility solutions. Older data backup strategies are often inadequate to protect your data, so you need to adopt modern ones. Backup implementation details, such as using incremental vs. differential backup, should also be planned.

    New data backup trends include Cloud-to-Cloud data backup, where data from one cloud is backed up to another cloud. Other options include cloud storage of onsite backup data along with appliance-based backup, which can automate the data backup process. There are also hyper-converged backup products and virtual backup appliances that run on a hypervisor and enable faster deployment and easier configuration. However you do it, the data backup strategy and plan should take recovery time objectives (RTOs) and recovery point objectives (RPOs) into consideration.

    Best Practices for Backup and Recovery

    Backup and recovery processes must balance effort, expense and risks. With that in mind, some important best practices for backup and recovery include the following:

    • Have an offsite backup. The onsite or primary backup could be compromised, so it’s a good idea to have a second, offsite backup as a contingency (e.g. cloud).
    • Treat critical data as a high priority. Sensitive files like financial, personally identifiable information (PII), business contracts, and other crucial information should be prioritized and backed up in ways that are compliant with regulations. Any other data that may have a high business impact in case of a data loss will also fall into this category.
    • Know how you can access critical data. If you suffer data loss, you need to know how you can access critical data and how quickly it will be available. This sets the right expectations for planning for minimal impact on your operations.
    • Test your backups. Make certain that regular backups are completing successfully by enabling full backup verification. You must also train your IT staff on accessing and restoring backups. Periodically conduct tests of data restore processes (a minimal verification sketch follows this list).
    • Have a communication plan. Your plan has to provide for communications between team members and other business stakeholders even if there is a major system outage. If the outage is due to a disaster, it’s essential to plan for emergency communications, e.g., everyone’s cell phone numbers need to be available on paper lists, etc.
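    To make “test your backups” concrete, here is a minimal sketch of one way to verify a restored copy against its source by comparing file checksums. The directory paths are assumptions for the example, not a prescribed layout, and a real test plan would also cover application-level and bare-metal recovery.

```python
# Minimal backup-verification sketch: compare SHA-256 checksums of the
# source files against a restored copy. Paths are illustrative only.
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 64 KB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source_dir: Path, restore_dir: Path) -> list[str]:
    """List files that are missing from, or differ in, the restored copy."""
    problems = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        restored = restore_dir / src.relative_to(source_dir)
        if not restored.is_file():
            problems.append(f"missing: {restored}")
        elif checksum(src) != checksum(restored):
            problems.append(f"mismatch: {restored}")
    return problems

# Hypothetical usage after a test restore:
# issues = verify_restore(Path("/data/finance"), Path("/mnt/restore/finance"))
# print("\n".join(issues) or "restore verified")
```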

    Data Backup Plan and Procedures

    The implementation of a backup plan involves procedures and processes. The good news is that many modern backup systems have high levels of automation. Once programmed in, the procedures more or less run themselves. The challenge is to prepare them properly. One path to success in this regard is to create an automated network backup plan. A network backup transmits data from selected devices over a network to the backup server, wherever that is.

    Key Questions

    • What is being backed up? Is the data sensitive?
    • Where is your backup being stored? Cloud, on-site, offsite, etc.
    • What is the frequency of backups? Hourly, daily, weekly, etc.?
    • Who is responsible for handling the actual work of backups? IT manager, datacenter manager, etc.
    • Who is testing the success of backups? QA resource, IT manager, etc.

    First Steps In Planning

    1. Put together a backup plan budget. Once you have decided on your backup strategy, you can allocate a budget accordingly. Cloud-based solutions tend to be cost-effective and economical. Buying and maintaining hardware is expensive, while Backup-as-a-Service with monthly rental options may turn out to be more affordable.
    2. Choose a platform. There are a wide variety of backup platform options. What works for you will depend on your size, the scope and complexity of your backup needs, and the compliance burden your organization faces, if any. You can also choose a cloud-based service provider for data backup. If you do not want to put sensitive data in the cloud, or government regulations prohibit it, you may opt for offsite backup storage instead.
    3. Select a data backup vendor. Evaluate different vendors, some of which may provide a complete solution comprising hardware, software, and cloud backup. Others offer discrete components of a full backup solution, and you have to put it all together yourself. We’ve put together the best backup software, and here are reviews of the top three providers of comprehensive solutions - Veeam Backup and Replication reviews, Commvault reviews, and Zerto Replication reviews.

    Keys to a Successful Implementation Plan

    1. Establish roles and responsibilities. Set firm procedures, with assigned personnel and accountability for all aspects and scenarios of data backup and restoration. If you work with a service provider, its team should produce a customized recovery plan.
    2. Set a backup schedule. Creating a backup schedule, including backup type and timing, is crucial to ensuring your data can be stored and restored in alignment with your business needs and resources.
    3. Test your backup system. Test your system once it has been fully implemented and set up a regular testing schedule. There are a variety of ways you can test your backup system - core data sets, application recovery, Virtual machine (VM) recovery, physical server recovery, and more. Validate that your backup works, regularly.
    4. Continue to optimize. Revisit your backup schedule and plan periodically, at least once per year. Integrate the review into your IT project development. For example, as new applications and storage projects go into development, their project workflows should include backup and restoration planning.

    Backup Plan Example

    | Digital Asset | Backup Schedule | Primary Backup Type | Secondary Type | Backup Owner | Validated by/Date |
    |---|---|---|---|---|---|
    | ERP system | Daily | Incremental – to cloud | Mirror site | John F. | 1/1/21 |
    | PCs on-site | Weekly | Incremental – to cloud | On-premises storage array | Joe D. | 3/1/21 |
    | PCs – remote | Weekly | Incremental – to cloud | On-premises storage array | Joe D. | 4/1/21 |
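    As a minimal sketch, a plan like the example above can also be kept as structured data so that ownership and validation dates are easy to audit automatically. The fields below simply mirror the example rows and are an assumption for illustration, not a required schema.

```python
# Sketch: the example backup plan as structured data, with a simple
# staleness check on validation dates. Fields mirror the table above.
from datetime import date, timedelta

backup_plan = [
    {"asset": "ERP system", "schedule": "daily",
     "primary": "incremental to cloud", "secondary": "mirror site",
     "owner": "John F.", "validated": date(2021, 1, 1)},
    {"asset": "PCs on-site", "schedule": "weekly",
     "primary": "incremental to cloud",
     "secondary": "on-premises storage array",
     "owner": "Joe D.", "validated": date(2021, 3, 1)},
    {"asset": "PCs remote", "schedule": "weekly",
     "primary": "incremental to cloud",
     "secondary": "on-premises storage array",
     "owner": "Joe D.", "validated": date(2021, 4, 1)},
]

def stale_entries(plan, today, max_age_days=365):
    """Return assets whose last validation is older than max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    return [entry["asset"] for entry in plan if entry["validated"] < cutoff]

# The ERP entry, validated 1/1/21, is flagged as of 2/1/22.
print(stale_entries(backup_plan, today=date(2022, 2, 1)))
```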

    Matthew Shoffner
    IT Central Station

    There are many types of backup used to both guard and recover data in cases where data integrity is compromised. Data can change very fast, slowly, or not at all. Data can be very sensitive or common. Because data can change hourly, daily, or weekly and be of differing importance, a data backup process that aligns with the needs and resources of the business should be selected.

    We’ve reviewed the capabilities of the best backup software to understand the main differences between incremental and differential backup, which are the two most common types. Both are extensions of a full backup process, which as its name implies, is a complete copy of all digital assets.

    Incremental backup

    An incremental backup is a copy of whatever data has changed since the last backup. Thus, if you perform a full backup of your system on Sunday, an incremental backup on Monday will only copy and store any data that has been changed or added since Sunday. An incremental backup on Tuesday will only deal with data that’s changed since the Monday incremental backup, and so on.

    Incremental Backup Process

    An incremental backup process can be very fast, taking less than an hour, and the storage capacity needed is quite small in comparison to a differential or full backup. If the full backup was 50 gigabytes, the incremental changes from one day to the next might be 1 gigabyte. The backup manager will have to decide how often to do incremental backups, whether daily, weekly, or even hourly. That depends on how much change is occurring in the data and how critical it is for continued business operations. If there’s any concern about network capacity or storage, an incremental backup process is often the best choice.

    Recovery time for an incremental backup process can be longer in comparison to a differential process. Since an incremental process creates more backups, each logging changes since the previous backup, it takes time to piece everything back together. So, the benefit of the quick backup time is offset by a longer recovery time.

    Overall, incremental backups are good for organizations that need flexibility and short time periods between backups. Compared to differential backups, an incremental backup copies less information. The “backup window” is shorter. The files are smaller. If your business needs consistently high network performance, you might prefer incremental backups to differential backups, as they typically make lighter demands on the network.

    Differential backup

    A differential backup is comparable to an incremental backup, in that it’s a backup extension of a full backup. However, unlike an incremental backup, the differential backup backs up the data that has changed since the last full backup. Thus, if you do a full backup on Sunday, a differential backup on Monday will back up all the data that’s changed since Sunday. Then, on Tuesday, the differential backup will also back up everything that’s changed since Sunday. The Wednesday and Thursday differential backups will similarly back up everything that’s changed since the full backup on Sunday.

    As you can see, the differential backup process is longer, recording all changes in the data since the last full backup, which makes it more resource-intensive and requires larger storage capacity. If you have limited resources or need a faster backup time, an incremental approach may be a better option.

    The recovery time using a differential process is much faster in comparison to an incremental process. Since it backs up all data changes since the last full backup, there is only one large backup as opposed to the multiple backups of an incremental approach. Recovering data from one large block in addition to the full backup is much faster than recovering and piecing together multiple blocks.

    If your business needs a fast, simple way to restore lost data, a differential backup may be a better option than an incremental backup. You sacrifice some backup speed and storage capacity, but the recovery process will be fast.
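    As a toy model of the trade-off described above (not any vendor's implementation), the sketch below counts the backup sets each strategy must read to restore on a given day, assuming a full backup on day 0 and one incremental or differential backup on each following day.

```python
# Toy model of restore chains. Assumes a full backup on day 0 and one
# incremental or differential backup per day afterwards; illustrative only.

def incremental_chain(day: int) -> list[str]:
    """Restore needs the full backup plus every incremental up to `day`."""
    return ["full(day 0)"] + [f"incr(day {d})" for d in range(1, day + 1)]

def differential_chain(day: int) -> list[str]:
    """Restore needs the full backup plus only the latest differential."""
    chain = ["full(day 0)"]
    if day > 0:
        chain.append(f"diff(day {day})")
    return chain

day = 5
print("incremental restore reads:", incremental_chain(day))    # 6 sets
print("differential restore reads:", differential_chain(day))  # 2 sets
```

    The incremental chain grows with every backup taken since the last full, while the differential chain stays at two sets; that is the whole recovery-time trade-off in miniature.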

    Full backup

    The term “full backup” is actually a little bit misleading. Hypothetically, the full backup involves backing up all elements of a data asset. However, this is not always true. If the asset in question is a single file or data object, then yes, a full backup will be a complete, unique copy of the asset. However, if the digital asset is a system of some kind, like a website, operating system or database server, the full backup typically does not include every single system file, library and supporting configuration file. That is done in what is known as a “day one backup.”

    The “day one backup” truly has everything in it. Because it contains everything, it takes a long time to back up. However, it takes the least amount of time to restore, since it’s only one large file and nothing needs to be pieced together.

    Incremental vs Differential vs Full Backup

    | Type of Backup | Characteristics | Pros | Cons |
    |---|---|---|---|
    | Full backup | Backs up all elements of a data asset (except underlying system files, which are covered by the “day one backup”) | Complete, so it offers a single file to restore | Rapidly becomes out of date and requires a lot of storage |
    | Incremental backup | Backs up all data that has changed since the last backup, e.g., since the last full backup or the last incremental backup | The most up-to-date backup; fastest backup; small backup files with low network and storage requirements | Data recovery can be time-consuming and more challenging |
    | Differential backup | Backs up all data that has changed since the last full backup | Fast data recovery time | Requires longer backup times and more storage capacity |
    Davina Becker
    Content Editor
    IT Central Station

    Businesses spend a lot of time building their proprietary data and information. That information can often hold the key to a competitive advantage in the market. Data loss from threats or disasters can lead to upset customers, lost revenue, and potentially bankruptcy; for example, more than 90 percent of companies without a disaster recovery plan that suffer a major disaster are out of business within one year. There are tremendous benefits of backup software in backing up your data that can save you time and help you retain your competitive advantage when facing a data loss or complication.


    IT Central Station real users of backup and recovery software note the advantages of this type of solution to their IT Departments, management, and end users. Choosing the best backup software can provide you with a great safety net and the most benefits to your business. Benefits include:

    1. Security. This is one of the most important aspects of data backup and recovery. As IT systems grow and integrate with one another, the number of potential threats to the information a company holds increases. Maintaining a backup and recovery solution with strong security is foremost when looking to protect and save data. A Senior Vice President at a medium-sized technology company notes how their backup and recovery software solution, Quorum OnQ, provides them security: “Quorum offers a very high level security environment for secondary data. Once data is in Quorum it is highly secured because the appliance is Linux-based. It does 256-bit AES encryption at rest and in motion.”
    2. Ease of management. Especially when restoring lost data, which can be stressful and time-sensitive, ease of management creates consistency in the processes for backing up data and information. It avoids end users backing up their own devices inconsistently and irregularly, and speedy data restoration helps meet RPOs and RTOs across core applications. Jean Maurice Prosper, Chief Executive Officer at Nettobe Group, discusses how Barracuda Backup’s all-in-one management platform is easy and intuitive. Describing Barracuda, he says, “It is not difficult to configure a backup strategy, and manage our backup and restore. It is very user-friendly and made to ease the job of the administrator. We let the solution do the hard part.”
    3. Reliable replication. Ensuring accurate replication of your data makes it disaster-proof. As Franklin, an Enterprise Network Engineer at a large healthcare company and Zerto user, explains, “It's almost like a tape recorder. You can rewind if you need to, if something bad happens. You can rewind the tape and your production begins where your tape left off.”
    4. Maintain compliance standards. By collecting and preserving critical data through regular backup processes, IT departments can be more nimble when responding to requests from legal or auditors. Keith Alioto, Lead Storage Engineer at a large tech services company, talks about the compliance difference before and after deploying NetApp SnapCenter: “What we found was that we weren't backing up all of our SQL servers. We have now gotten to 100 percent compliance in backing up all of our data, and it's regularly measurable: daily, weekly, and monthly.”
    5. Zero impact on performance. Most of the time users don’t realize that a solution for backups is running in the background. Fewer disruptions to users means more uptime. Benjamin R Roper, a Backup and Recovery Specialist at Parsons notes this benefit about his backup and recovery software, Metallic, “Its performance for both backup and recovery is amazing. It runs very well. I don't even know when it's running and that's true during the backups as well. It completes successfully and there's zero impact on the endpoints.”
    6. Helps management control costs. A good backup and recovery software solution can reduce workforce overhead, leading to cost savings. A QA Engineer at a small tech vendor reflects on the savings seen in their company’s storage using Pure Storage FlashArray, “The speed of deployment has gone from several days to a few minutes, e.g., our database team used to spend 93 days backing up and restoring databases. This product has reduced that time into minutes, simplifying storage for us.”

    Benefits of Backup as a Service (BaaS)

    Backup as a Service (BaaS) connects systems to an outside provider of private, public, or hybrid cloud services, instead of performing backups with a centralized, on-premises solution. Organizations may prefer a BaaS solution when they have outgrown their legacy backup storage and want to avoid a costly upgrade, or when they lack the resources for an on-premises backup. Market trends suggest growing popularity for BaaS solutions, with a CAGR of 24 percent. Benefits of BaaS include:

    • Quick access to data. This allows IT to easily retrieve files and information for end users. It also provides quick restoration of data when operating systems fail. If employees lose files from OSs, then companies want a solution that provides a single place to recover that data. According to Brijesh Parikh, Senior Architect, Cloud Infrastructure at a large tech vendor, Commvault provides that single solution to recover data. He gives the example, “Every once in a while, we receive requests for files or emails that people have lost and those files are in SharePoint or OneDrive. We have the ability to restore it within 30 days directly from the portal. But if it's beyond the 30 days, we use Commvault to restore data and that has worked absolutely fine.”
    • Data accessibility. Whether connecting to your data from local areas or remote locations, you want to be able to access your data at any time. A System Manager at a large construction company using Rubrik discusses how Rubrik Polaris GPS improved their productivity by keeping data accessible, “Since acquiring Rubrik Polaris GPS, we have further increased our productivity by utilizing SLA policies that extend across clouds and multiple on-premises data centers.”
    • Scalability. As companies scale, they look to enterprise software solutions to help them manage their infrastructure. Local backups, by contrast, are costly and difficult to scale up, largely because of their reliance on onsite storage. Calvin Engen, CTO at F12.net, faced this situation as his company’s data centers continued to grow and they needed to scale out their total protected VMs. Before Veeam, “We had backup infrastructure sprawl.” After deploying the solution, “Veeam was able to reduce our overall backup windows while reducing our dedicated backup infrastructure.” Based on Calvin’s review, this saves F12.net approximately $20,000 per year.
    (less)