it_user332616 - PeerSpot reviewer
Director of IT Infrastructure at a financial services firm with 501-1,000 employees
Vendor
It provides us with redundancy and security, which is important because we hold a lot of customer information that must be secure and reliable.

What is most valuable?

  • Redundancy
  • Snap mirroring
  • Home-drive capability, which looks at a user name and gives the correct rights to the folder

How has it helped my organization?

  • Redundancy
  • Security

We hold a lot of information for our customers, so the information has to be secure and reliable.

What needs improvement?

I'm not sure, because every time I’ve gone to them, they’ve said “yes, we can do that.”

What do I think about the stability of the solution?

I sleep well at night because of its redundancy. I hardly even know when it has a bad drive. The Call Home capability automatically sends a message to NetApp when there's a bad drive, and they then send a new one.

What do I think about the scalability of the solution?

It's amazing how scalable it is. As a comparison, we looked at EMC vBlock as well, and if you want to upgrade that, it's a forklift upgrade. With FAS, you just put in new shelves or heads.

How are customer service and support?

They’re extremely technical. Everyone I’ve talked to has been very knowledgeable, and I can’t say anything bad.

How was the initial setup?

It was complex. There's a lot to do, but I had their assistance and went through everything step by step. So while it was complex, it also felt simple.

What other advice do I have?

One thing that burned me is how much overhead it uses, about 30% right off the top. So don't forget the overhead; it's not usable space. That percentage is coming down, though, and it all has to do with deduplication.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
System Administrator - Backup & Storage Specialist at METRO SYSTEMS Romania
Consultant
It provides very good storage High Availability and data protection. The thing we'd like to see the most is the possibility of pairing LAN/SAN ports from different nodes.

Valuable Features

What impressed me the most about these systems are their excellent reliability, ease of administration (both in the GUI and on the command line), and their very good documentation, which is easy to access and understand. It provides very good storage, High Availability, and data protection by employing two separate storage controllers that can take over each other's role as soon as either of them goes down. The technology has been improved even more since the introduction of the clustered Data ONTAP (cDOT) OS.

Improvements to My Organization

NetApp systems are a good choice if you want a versatile unified system that's also capable of delivering performance. Our company has been using NetApp filers both as file sharing solutions (CIFS over LAN) and also as block storage (LUNs) for VMware ESXi hosts.

Since we switched to the newer 2552 models, we now benefit from better data protection and improved storage capacity thanks to the clustered Data ONTAP OS.

Room for Improvement

The thing we'd like to see the most is the possibility of pairing LAN/SAN ports from different nodes. Currently, the systems provide pairing (and thus redundancy) only at the same-node level. Also, it wouldn't hurt to have this sort of cross-functionality when it comes to choosing disks for aggregate structures. Right now, you can't integrate disks from different shelves into the same storage aggregate.

Use of Solution

I've had the chance to work a lot with the NetApp FAS 2552 series and also have some experience with older models such as the 2050, 2040, 3240 and 2240. I think it's a pretty reliable unified storage solution. The FAS 2552 model, especially, offers good performance and excellent reliability. However, my experience with similar storage systems is currently somewhat limited.

My company has been using NetApp for a few years now, over four I think, and I have come into contact with this technology for over a year.

Deployment Issues

When it comes to deployment, we had our share of issues. Some of these can be blamed on the vendor's lack of experience with the new models and ONTAP versions, but sometimes the systems themselves were faulty.

Stability Issues

The most recent issue we had involved a LAN card that couldn't be set to the correct bandwidth setting. As a consequence, the vendor had to replace one node's motherboard.

Scalability Issues

There have been no issues with scaling it, other than during the actual deployment of new devices.

Customer Service and Technical Support

If you buy NetApp systems from third-party vendors, you may be surprised that their technicians aren't exactly up to date with the latest ONTAP versions. NetApp releases new versions (with great improvements) so often that it's hard for some vendors to keep their technical knowledge base current.

However, when it comes to technical support from NetApp directly, they have a very competent team and the reaction time is pretty decent. Perhaps their biggest strength in this area is their public knowledge base, which helps you solve most of the issues you can encounter with configuring and administering on your own.

Initial Setup

All I can say is that if you take your time and study the NetApp documentation, you shouldn't have any issue, provided the initial setup was done properly by the vendor technician.

Implementation Team

Initial setup is usually performed by NetApp or the third-party vendor from whom you purchased the devices. Our experience with third-party vendors isn't the best due to reasons stated above. All other configuration and administration is done in-house.

Pricing, Setup Cost and Licensing

When it comes to software licensing, I think NetApp promotes a very fair system. Basically, you only pay for the features you need (e.g., Cluster Mode, SnapMirror, SnapVault, etc.).

Other Advice

The best advice I can offer is to try to purchase it directly from NetApp in order to have a better chance of a successful initial configuration on the first try. Also, make sure you purchase the system with a General Availability OS version, as Release Candidate ones tend to be buggy.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user332652 - PeerSpot reviewer
Storage Administrator at SRPNet
Vendor
It has the capability to use SAN, so it has a broad spectrum of use. I'd like to see more cohesiveness with a unified manager.

Valuable Features

  • Software features, such as being able to do snapshots and file system optimization
  • High Availability: components fail, so this is a nice feature to have when failing over. There's no downtime, so we don't lose data.

Improvements to My Organization

Good bang for the buck. Also, we use NFS generally, but FAS has the capability to use SAN, so it has a broad spectrum of use.

Room for Improvement

Tough for me to answer because I'm limited in my role, but the one thing I'd like to see most is more cohesiveness with a unified manager. I like the end product, but it's not really all integrated and is convoluted with different managers. I would like a single pane of glass, a single dashboard.

Deployment Issues

We see a lot of bugs in rollouts, and sometimes I think the first GA releases are really late-beta builds. My impression is they could have let it bake a little longer, but it could also be because of some of the environments it deploys in.

Stability Issues

SnapManager v3.3.1 is a little buggy, and NetApp doesn't offer a training course on it. So it could be what I've been taught by other people, or it could in fact be buggy, but likely a little of both. Hopefully they've made improvements in 3.4.

Scalability Issues

7-mode scales very well. I’m even more impressed with where they intend to go with cDOT, but it may be rolled out prematurely.

Customer Service and Technical Support

Tech support is usually pretty good, but occasionally there are some things that occur only on our site that tech support has issues with.

Other Advice

Plan ahead and make sure you right-size it. How much head room do you really need? How many spindles are you going to attach? Are you really going to share workloads or do you want to separate some of those? We don’t segregate our infrastructure, which I don’t like, but all that costs money. But you should make sure that you have failover.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user330081 - PeerSpot reviewer
Principal Computer Engineer with 1,001-5,000 employees
Real User
Since implementation, our performance has definitely increased, but they're upgrading the performance monitoring tool, which is the main thing I think needs improvement.

Valuable Features

I think the most valuable features are the flexibility with volumes, resizing, and performance.

Improvements to My Organization

I think that our performance has definitely increased.

Room for Improvement

I think that they are upgrading the performance monitoring tool, which is the main thing I think needs improvement. Things change from version to version, and you want to see them improve; I think we will continue to see more and more benefits.

Use of Solution

We have been using it since 2013.

Stability Issues

Pretty solid in terms of stability.

Scalability Issues

We haven't really grown it, but I see a roadmap; the only problem there may be cost. It's not an expensive product per se, but budgets are an issue, and people sometimes don't evaluate the cost correctly.

Customer Service and Technical Support

NetApp overall has been really good in terms of technical support.

Initial Setup

Initial setup was hard a year ago, but now we just did another setup and everything was smooth. It’s gotten a lot better in the last year we’ve been using it.

Other Advice

If you are on the fence, it's been a very good product. You don't want to build your own solution; you want to use the appliance for its flexibility. Overall performance has gotten a lot better.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Solutions Architect with 51-200 employees
Vendor
The interesting thing about VVOLs is that not all implementations will be equal. It puts more responsibility on the array by moving storage operations to it that were previously handled by vSphere.

More information on VVOLs is being released every week, and only now that we are getting a chance to play with the full release code are we able to dig into the detail of how it works. Let's start off by exploring the benefits of VVOLs that are likely to make it game-changing technology:

Granular Control of VMs

  • Enable VM granular storage operations on individual virtual disks for the first time, including control of the following capabilities:
    • Auto Grow
    • Compression
    • De-duplication
    • Disk Types: SATA, FCAL, SAS, SSD
    • Flash Accelerated
    • High Availability
    • Maximum Throughput: IOPS & MBs
    • Replication
    • Protocol: NFS, iSCSI, FC, FCoE

Enhanced Efficiency and Performance

  • Off-load VM snapshots, clones and moves to the array
  • Automatically optimise I/O paths for all protocols
  • No VMFS, therefore
    • Virtual disks natively stored on the array
    • Datastore space management is not required
    • Size limits are dictated by the guest and array
    • Zeroing, either on disk creation or use, is not required
    • vSphere UNMAP, when a VM is deleted, is not required
    • Guest UNMAP commands are passed directly to the VVOL
    • Thin-provisioning is managed by the array
  • Minimise LUN and path consumption, NFS mount usage, and LIF count and IP address consumption

Automated Policy Based Management

  • Create a library of reusable storage profiles
  • Match the profiles to storage capabilities
  • Provision VMs using storage profiles
  • Alert when a VM no longer conforms to the profile
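
To make the policy workflow above concrete, here is a minimal Python sketch of profile-to-capability matching. It is purely illustrative: the capability names, the profile structure and the datastore inventory are my own assumptions, not the actual SPBM/VASA data model.

```python
# Illustrative model only -- capability names and structures are hypothetical,
# not the real VASA/SPBM schema.

REQUIRED = {"deduplication": True, "disk_type": "SSD", "max_iops": 5000}

def conforms(profile, capabilities):
    """A datastore conforms if it satisfies every key the profile requires."""
    for key, wanted in profile.items():
        if key == "max_iops":
            # Numeric capability: the datastore must offer at least what is asked for.
            if capabilities.get(key, 0) < wanted:
                return False
        elif capabilities.get(key) != wanted:
            return False
    return True

datastores = {
    "vvol_gold":   {"deduplication": True,  "disk_type": "SSD",  "max_iops": 20000},
    "vvol_bronze": {"deduplication": False, "disk_type": "SATA", "max_iops": 2000},
}

# Provisioning: list only the datastores compatible with the profile.
compatible = [name for name, caps in datastores.items() if conforms(REQUIRED, caps)]
print(compatible)  # ['vvol_gold']

# Later, alert if the VM's current placement no longer conforms to the policy.
if not conforms(REQUIRED, datastores["vvol_bronze"]):
    print("ALERT: VM placement no longer satisfies its storage policy")
```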

To get VVOLs up and running you need cDOT 8.2.3 or above, Virtual Storage Console 6.0 and VASA Provider 6.0 – for more background information see A deeper look into NetApp’s support for VMware Virtual Volumes.

The On-Demand engine

One of the best kept secrets of cDOT 8.3 was the inclusion of the On-Demand engine which consists of the following new commands:

  • Single-File Move on Demand (SFMoD)
  • Single-File Copy/Clone on Demand (SFCoD)
  • Single-File Restore on Demand (SFRoD)

When a command is triggered, data access at the destination begins immediately, while in the background the data is copied or moved from source to destination. The commands cannot be directly invoked; rather, other operations take advantage of them (i.e. VVOLs and LUN moves). So when a VVOL's policy change means it needs to be moved from one volume to another (even across controllers), the On-Demand engine non-disruptively moves data access from the source to the destination instantly. All writes go to the new destination and, while the data is being copied from the source, reads are redirected back to the original volume as required. If a VVOL is migrated elsewhere in the cluster, a rebind operation automatically changes the I/O path to the new closest PE, maintaining optimum performance and reducing complexity and latency.
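
As a way of picturing this behaviour, the toy Python model below mimics the redirect logic just described: access cuts over to the destination immediately, new writes land there, and reads fall back to the source until the background copy catches up. It is a conceptual sketch only and says nothing about how ONTAP implements the On-Demand engine internally.

```python
# Toy model of a non-disruptive move. Purely illustrative -- not ONTAP internals.

class MovingVolume:
    def __init__(self, source_blocks):
        self.source = source_blocks          # old location
        self.destination = {}                # new location, filled lazily
        self.copied = set()                  # blocks already present at the destination

    def write(self, block_id, data):
        # All new writes go straight to the destination.
        self.destination[block_id] = data
        self.copied.add(block_id)

    def read(self, block_id):
        # Serve from the destination if present, otherwise redirect to the source.
        if block_id in self.copied:
            return self.destination[block_id]
        return self.source[block_id]

    def background_copy_step(self, block_id):
        # Background task migrates one block at a time without blocking I/O.
        if block_id not in self.copied:
            self.destination[block_id] = self.source[block_id]
            self.copied.add(block_id)

vol = MovingVolume({0: b"old-A", 1: b"old-B"})
vol.write(0, b"new-A")            # write goes to the destination
print(vol.read(0))                # b'new-A'  (served from the destination)
print(vol.read(1))                # b'old-B'  (redirected back to the source)
vol.background_copy_step(1)
print(vol.read(1))                # b'old-B'  (now served from the destination)
```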

Not all VVOLs implementations will be equal

The interesting thing about VVOLs is that not all implementations will be equal, as it puts more responsibility on the array by moving many storage operations to it that were previously handled by vSphere – you therefore need an array that provides efficient:

  • Thin-provisioning
  • Snapshots
  • Clones
  • Non-disruptive VM mobility

The current snapshot technology in VMFS is, to say the least, very poor. Best practice is to have no more than 2-3 snapshots in a chain (even though the maximum is 32) and to keep no single snapshot for more than 24-72 hours. The reason is simple: storage performance will suffer if you create a snapshot on a VM. So if an array supports VVOLs and we can off-load snapshot and clone creation to the array, then we have surely solved the problem and can keep 100s of snapshots. As always, it is not so simple: if the array uses inefficient CoW snapshots, then you will not gain much over the standard vSphere snapshots. Thin-provisioning is another area where some arrays do it very efficiently, but many suffer a significant performance drop unless thick LUNs are used.
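
The write-path difference is easier to see in code. The following Python sketch contrasts the two schemes in the simplest possible terms; it illustrates the general CoW versus RoW trade-off, not any particular array's implementation.

```python
# Toy comparison of the write path under the two snapshot schemes.
# Illustrative only -- real arrays add metadata, journaling and caching.

def cow_overwrite(active, snapshot_area, block_id, new_data):
    """Copy-on-Write: preserve the old block first, then overwrite in place.
    Every overwrite of a snapshotted block costs an extra read and write."""
    snapshot_area[block_id] = active[block_id]   # 1) copy the old data aside
    active[block_id] = new_data                  # 2) overwrite in place
    # I/O cost per overwrite: 1 read + 2 writes

def row_overwrite(block_map, blocks, block_id, new_data):
    """Redirect-on-Write: write the new data to a free location and repoint.
    The snapshot keeps referencing the old location untouched."""
    new_location = len(blocks)
    blocks.append(new_data)                      # 1) write the new data elsewhere
    block_map[block_id] = new_location           # 2) update the pointer
    # I/O cost per overwrite: 1 write (+ a metadata update)

active, snapshot_area = {0: b"A0"}, {}
cow_overwrite(active, snapshot_area, 0, b"A1")   # old data copied aside first

block_map, blocks = {0: 0}, [b"A0"]
row_overwrite(block_map, blocks, 0, b"A1")       # old block left in place for the snapshot
```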

The nice thing about FAS is that it has excelled at the first three points above for many years and the last point has been introduced with the On-Demand engine in cDOT 8.3 – there are plenty of arrays on the market that will be enabled for VVOLs, but they will not be able to claim efficient support for these features without massive re-engineering work.

Other points of note

It is essential to back up the VASA Provider VM; this can be achieved using the built-in backup capabilities of the array with one of the following options:

  • The backup and recovery features of VSC
  • The built-in scheduled FlexVol snapshot copies

NetApp All-Flash FAS has emerged as the first storage array to successfully complete validation testing with Horizon View 6 with VVols.

The VADP APIs that backup vendors use are fully supported on VVOLs; therefore, backup software using VADP should be unaffected.

For a detailed breakdown of vSphere product and feature interoperability with VVOLs, click here.

Get hands on with VVOLs on FAS

If you would like to gain a detailed understanding of how the technology works, we have created, in conjunction with VMware and NetApp, a series of demo café events – to find out more, click here.

VVOLs is certainly interesting technology, and I am sure what we have today is only the beginning of the journey; it is going to be interesting to see how it develops over the coming years. We know for sure that NetApp will be making improvements to cDOT to enable things like replication to be set at the VVOL level.

What do you think – is VVOLs as game changing as VMware thinks?

Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with NetApp.
PeerSpot user
it_user3396 - PeerSpot reviewer
Team Lead at Tata Consultancy Services
Top 5, Real User

Saluting Mark:>)

Henry

PeerSpot user
Solutions Architect with 51-200 employees
Vendor
We have flash caching, but it would be nice if we could move data between flash, SAS and SATA drives.

As we move into the world of Software-Defined Storage, it "sticks out like a sore thumb" when an array vendor only makes new software releases available on their next-generation hardware. The problem with this is that even if you purchase at the very beginning of a product's life cycle, at best you will get one round of feature enhancements; after that, all software development is focused on the next-generation product. This often even includes support for new drive types – again, they are only supported on the latest generation hardware.

This problem is very evident when it comes to support for VMware Virtual Volumes – any array vendor that will be releasing new hardware next year is unlikely to provide support for Virtual Volumes on their currently shipping product. My view is that the industry cannot continue like this and instead they need to make sure new microcode versions and drive technologies are supported on the current shipping product and at least the previous generation – without this there is a real danger that your new storage array becomes obsolete shortly after purchase.

The good news for NetApp customers is that Clustered Data ONTAP (cDOT) meets my criteria above, so the recently announced version 8.3 will not only work on 2014-generation hardware (2500 and 8000 series), but on previous generations as well.

So what’s 8.3 all about?

Major features

  • MetroCluster support – to enable continuous availability
  • SnapMirror to Tape (SMTape) – simplifies and speeds up backup to tape
  • Virtual Volumes support – enables native storage of VMDKs (requires vSphere 6)

Efficiency enhancements

  • Combined SnapMirror and SnapVault – so that you only need to send the data once, rather than having separate SnapMirror and SnapVault copies
  • SnapMirror and SnapVault Compression – traffic can now be optionally compressed to reduce bandwidth requirements
  • Root Drive and Flash Pool Partitioning – no longer requires Root Vols and Flash Pool drives to be dedicated to a single node, therefore providing better capacity utilisation
  • Flash Pool enhancements – caches overwrites larger than 16K and compressed blocks, increases the usable capacity, and supports much larger pool sizes (up to 4x)
  • Inline zero write detection and elimination – so host disk zeroing activity does not consume I/O or capacity (see the sketch after this list)
  • Significant performance improvements – further multi-core, SSD random read, CIFS, replication and cloning optimisations
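
The inline zero write detection item above is conceptually simple, and the short Python sketch below shows the idea: an incoming all-zero block is recorded as a hole instead of being written. This is only an illustrative model, not ONTAP's actual data path.

```python
# Conceptual sketch of inline zero-write detection: an all-zero block written
# by the host (e.g. guest OS disk zeroing) is recorded as a hole instead of
# being stored, so it consumes neither back-end I/O nor capacity.
# Illustrative only -- not ONTAP's implementation.

BLOCK_SIZE = 4096
ZERO_BLOCK = bytes(BLOCK_SIZE)

def write_block(block_map, storage, block_id, data):
    if data == ZERO_BLOCK:
        block_map[block_id] = None       # record a hole, write nothing
        return
    storage[block_id] = data
    block_map[block_id] = block_id

def read_block(block_map, storage, block_id):
    location = block_map.get(block_id)
    return ZERO_BLOCK if location is None else storage[location]

block_map, storage = {}, {}
write_block(block_map, storage, 7, ZERO_BLOCK)          # zero write: nothing stored
write_block(block_map, storage, 8, b"x" * BLOCK_SIZE)   # real data: stored normally
print(len(storage))                                      # 1 -- the zero block cost nothing
```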

Migration and Administration Tools

  • 7-Mode Transition Tool – now supports SAN as well as NAS
  • Foreign LUN Import (Offline) – to simplify 3rd party (EMC, HDS, HP) SAN data migration
  • LUN migration – whereas previously only an entire volume could be non-disruptively moved around the cluster, this can now also be performed at the LUN level
  • Disaster Recovery fail-over – to a specific point-in-time snapshot copy at the DR site for recovery from mirrored corruption
  • Automated Non-disruptive Upgrade – requires just 3 commands to upgrade an entire cluster

8.3 is a milestone release for NetApp as it marks the end of development for 7-Mode; 8.3 only includes the cDOT build. Overall I think NetApp are finally in a good place with cDOT, and they can now put the 7-Mode platform behind them and focus on innovating.

So what would we like to see in the next version of cDOT?

  • SnapLock (for retention and compliance) – the last remaining major feature to be ported over from 7-Mode
  • Erasure coding – to enable rapid drive rebuilds
  • Sharing of drives across controllers – we are already starting to see this with the new drive and Flash Pools partitioning features
  • Detaching of the drives from the controllers – so that the failure of an HA pair within a cluster does not result in downtime
  • Controller based Flash modules – in place of Root Vol drives
  • Advanced QoS – to enable setting of Service Level Objectives rather than just limits
  • Automated Tiering – we have flash caching, but it would be nice if we could move data between flash, SAS and SATA drives
  • Integrated file archiving – to move older files to secondary storage or the cloud
  • Encryption – provided by the controllers rather than drives
  • MetroCluster granular fail over – so volumes or even Virtual Volumes can be “moved” between sites
  • MetroCluster IP replication – either using FCIP bridges or native IP connectivity
  • MetroCluster Active/Active – so volumes/LUNs can be active on both sides of the cluster

Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with NetApp.
PeerSpot user
Solutions Architect with 51-200 employees
Vendor
Policies are applied to FlexVols but it would be useful if they could also be specified for an individual Virtual Volume.

Virtual Volumes is the flagship feature of vSphere 6.0, as it enables VM-granular storage management, and NetApp FAS running Clustered Data ONTAP 8.3 is one of the first platforms to support the technology.

Today storage administrators have to explain to the VM administrators how to identify which datastores to use for each class of VM, which is typically achieved using a combination of documentation and datastore naming conventions – however, consistency and compliance are difficult to achieve.

Virtual Volumes changes this by enabling the storage administrator to provide vCenter with detailed information on the capabilities of each datastore. VM Storage Policies, whilst they existed in previous versions of vSphere, were not sophisticated enough to query the actual storage for its capabilities; the VMware APIs for Storage Awareness (VASA) Provider 2.0 resolves this problem. Now the VM administrator can create VMs using Virtual Volumes and use the VM Storage Policy wizard to easily determine which datastores are compatible with their needs.

What components are required for Virtual Volumes?

VASA Provider (VP)

The NetApp VP is deployed as an OVA virtual appliance and is managed by the Virtual Storage Console plugged in to the vSphere Web Client. VMs running on Virtual Volumes require that the VP is running in order to create the swap Virtual Volume at power on – the VP should not be running on Virtual Volumes since it would be dependent on itself.

Storage Container (SC)

A SC is a set of FlexVol volumes used for Virtual Volume datastores. All the FlexVols within a SC must be accessed using the same protocol (NFS, iSCSI, or FC) and be owned by the same Storage Virtual Machine (SVM), but they can be hosted on different aggregates and nodes of the NetApp cluster.

Protocol Endpoint (PE)

The IO path to a Virtual Volume is through a PE with the Virtual Volume bound to the PE through a binding call managed by the VP. The VP determines which PE is on the same node as the FlexVol containing the Virtual Volume and binds the Virtual Volume to that PE.

For block protocols, a PE is a small (4MB) LUN, and the VP creates one PE in each FlexVol that is part of a Virtual Volume datastore. The PE is automatically mapped to initiator groups created and managed by the VP.

For NFS, a PE is a mount point to the root of the SVM and is created by the VP for each data LIF of the SVM using the LIF’s IP address. The PE is automatically created when the first Virtual Volume datastore is created on the SVM along with the appropriate export policy rules.
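
To illustrate the binding rule described above (a Virtual Volume is bound to the PE on the same node as its FlexVol), here is a minimal Python sketch. The node, FlexVol and PE names are hypothetical, and this is a conceptual model rather than the VASA Provider's API.

```python
# Minimal model of the binding rule: a Virtual Volume is bound to the Protocol
# Endpoint that lives on the same cluster node as the FlexVol holding it.
# Names and structures are hypothetical, not the VASA Provider API.

flexvols = {                       # FlexVol -> node that owns it
    "vvol_ds_vol1": "node-01",
    "vvol_ds_vol2": "node-02",
}

protocol_endpoints = {             # node -> PE identifier (e.g. a small PE LUN)
    "node-01": "pe-lun-01",
    "node-02": "pe-lun-02",
}

def bind(virtual_volume, flexvol):
    """Return the PE the VVol should be bound to: the one local to its FlexVol."""
    node = flexvols[flexvol]
    pe = protocol_endpoints[node]
    print(f"bind {virtual_volume} (in {flexvol} on {node}) -> {pe}")
    return pe

bind("vm01-data.vmdk", "vvol_ds_vol2")   # -> pe-lun-02
```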

Storage Capability Profile (SCP)

A SCP is a set of capabilities for a volume or set of volumes and may include features such as availability, performance, capacity, space efficiency, replication or protocol.

How could things be improved in the future?

Today De-duplication, Compression, SnapMirror and SnapVault policies are applied to FlexVols – it would be useful if they could also be specified for an individual Virtual Volume, which in turn would enable MetroCluster to non-disruptively “move” an active Virtual Volume from one site to another.

It is great to see that NetApp is ahead of the game with regard to support for Virtual Volumes – it is also nice to see that the 8.3 release can be installed on older versions of hardware allowing FAS customers, who purchased their systems a number of years ago, to take advantage of Virtual Volumes.

Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with NetApp.
PeerSpot user
Solutions Architect with 51-200 employees
Vendor
NetApp do have a "pure" block storage array, but it lacks the advanced data services enabled by WAFL.

For many years, traditional storage array vendors have claimed that their platforms are superior to NetApp FAS for block storage because they do not have the overhead of a Pointer-based Architecture – let's explore this in more detail:

What do we mean by “pure” block storage?

Uses a Fixed Block Architecture whereby data is always read from and written to a fixed location (i.e. each block has its own Logical Block Address) – in reality most block storage arrays provide the option to use pages (ranging from 5 MB to 1 GB) where the LBA is fixed within the page, but the page can be moved to facilitate tiering.
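
A quick illustration of why lookups are so cheap in this model: the physical location of any LBA is pure arithmetic. The block and page sizes in the Python sketch below are just example values within the ranges quoted above.

```python
# Why a Fixed Block Architecture needs no metadata lookup: the physical
# location of any logical block address is computed, not looked up.
# Block and page sizes here are illustrative example values.

BLOCK_SIZE = 512                      # bytes per logical block
PAGE_SIZE = 256 * 1024 * 1024         # 256 MB pages, within the 5 MB - 1 GB range quoted

def locate(lba):
    """Return (page number, byte offset within the page) for a logical block address."""
    byte_address = lba * BLOCK_SIZE
    return byte_address // PAGE_SIZE, byte_address % PAGE_SIZE

print(locate(1_000_000))              # (1, 243564544) -- pure arithmetic, no pointer table consulted
```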

The advantages of this architecture are:

  1. No performance overhead – it is very easy for the storage array to calculate the location of a block and there is no metadata to cache
  2. No capacity overhead – as there is no additional metadata to manage
  3. No fragmentation – blocks always remain together which enables good sequential IO performance on HDDs
  4. Lends itself to tiering – to automatically place data on the most appropriate drive

The disadvantages of this architecture are:

  1. Advanced data services – cannot be supported:
    1. Granular De-duplication, Compression and Thin Provisioning – typically 4K-32K
    2. Low-overhead snapshots – using Redirect-on-Write rather than Copy-on-Write
    3. Hypervisor technologies like Virtual Volumes (VVOLs) – as VMDKs need to be stored as objects/files
  2. Write performance overhead – especially when using parity RAID (i.e. R5 or R6)
  3. Replication performance overhead – when based on snapshots (as snapshots have a significant overhead)
  4. Separate block and NAS – NAS requires a separately managed file system to be laid on top of the block storage

How does NetApp FAS compare?

FAS uses a Pointer-based Architecture called WAFL, utilising 4K blocks which can be located anywhere, so we have to reverse the above lists of advantages and disadvantages. NAS file systems are delivered along with block storage on top of WAFL – block protocols do not sit on top of the NAS protocols; instead they interact directly with WAFL.
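
To picture the difference, here is a toy Python model of a write-anywhere, pointer-based layout: every overwrite goes to a new location and only a pointer changes, which is also why Redirect-on-Write snapshots are so cheap. This is a conceptual sketch, not a description of WAFL's internal structures.

```python
# Toy model of a pointer-based (write-anywhere) layout. Every logical block is
# reached through a pointer map, so an overwrite allocates a new physical block
# and repoints, and a snapshot is just a copy of the pointers.
# Purely illustrative -- not WAFL internals.

class PointerVolume:
    def __init__(self):
        self.physical = []              # append-only pool of blocks
        self.block_map = {}             # logical block -> physical index

    def write(self, logical_block, data):
        self.physical.append(data)                      # write anywhere (new location)
        self.block_map[logical_block] = len(self.physical) - 1

    def snapshot(self):
        return dict(self.block_map)     # near-zero cost: no data is copied

    def read(self, logical_block, block_map=None):
        bm = block_map if block_map is not None else self.block_map
        return self.physical[bm[logical_block]]

vol = PointerVolume()
vol.write(0, b"v1")
snap = vol.snapshot()                   # snapshot taken: only pointers are copied
vol.write(0, b"v2")                     # overwrite goes to a new physical block
print(vol.read(0))                      # b'v2'  (active file system)
print(vol.read(0, snap))                # b'v1'  (snapshot still sees the old data)
```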

The good news is that WAFL has been around since 1993, so it is a very mature and highly optimised technology – retrofitting advanced data services to a "pure" block storage array is not straightforward and requires major re-engineering work.

So which is best?

Well, we can debate this endlessly, and clearly, depending on your use case, one may be a better choice than the other. Five years ago this was a valid debate, but to be honest it is now a moot point, as today all storage platforms have to support the advanced data services listed above and therefore need a Pointer-based rather than a Fixed Block Architecture.

Let’s explore some examples of this:

  • VMware
    • Virtual SAN – version 2 will include the Virsto Pointer-based Architecture to enable RoW snapshots and clones, and moving forward many more of the advanced data services
  • EMC
    • VNX/VNXe – uses an 8K Pointer-based Architecture to provide RoW snapshots, De-duplication, Compression and Thin Provisioning
    • XtremIO – uses an 8K Pointer-based Architecture to provide RoW snapshots, De-duplication, Compression and Thin Provisioning
    • VMAX3 – uses 128K tracks to provide RoW snapshots and Thin Provisioning, and in the future support for VVOLs
  • HDS
    • HNAS – uses a 4K/32K Pointer-based Architecture to provide RoW snapshots, De-duplication and Thin Provisioning
    • VSP G1000 – the new Storage Virtualization Operating System (SVOS) was built with VVOLs in mind

It is also worth pointing out that none of the start-up storage vendors that have come onto the market in the last 5 years have "pure" block storage platforms – it would just not make sense if they did.

What is interesting is that NetApp do have a "pure" block storage array – the E-Series, which provides excellent price/performance, but it lacks the advanced data services enabled by WAFL; also, VVOLs support is not expected for some time.

So for me, "pure" block storage is no longer sustainable, and dismissing products like NetApp FAS because they are not "pure" block no longer makes sense. Moving forward, the issue is not whether your storage platform has a ground-up all-flash design, but whether it has a ground-up Pointer-based Architecture.

“Pure” block storage is dead – long live WAFL and the like.

Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with NetApp.
PeerSpot user
it_user186357 - PeerSpot reviewer
Solutions Architect with 51-200 employees
Vendor

Please post any questions at blog.snsltd.co.uk

Best regards
Mark
