What is most valuable?
- Test cases are not generated on the fly (which means it isn't really fuzzing per se). They are organized in groups and defined according to the type of message and the part of the message being tested. Compared to more random tools, fuzzing sessions can take less time and be more relevant.
- Simple and straightforward GUI.
- Context-sensitive help describing every configuration field along with its CLI equivalent.
- You can define a test sequence and thus test several protocols without any user interaction, either sequentially or in parallel.
- Interoperability feature which enables the user to ensure that the SUT supports the various types of tested protocol messages. If a message type fails the interoperability test, it won't be included in the fuzzing session unless you explicitly choose to include it.
- Instrumentation capabilities (valid cases, ping, custom command) and actions triggered by instrumentation results (e.g., executing a device restart script after a given number of failed instrumentation steps).
- Reproduction of single test cases, either alone or along with the rest of their test case group.
- Network capture during the fuzzing session as well as during reproduction.
- A top-100 list of the test cases that caused the longest delays in the SUT's response. These cases can be reproduced to check that the same test cases cause the unwanted behavior. This is useful for catching poorly processed frames that don't necessarily make the SUT crash.
- Different sets to use depending on the available time and the coverage wanted (Full, Unlimited, Quick Run, Sample, etc.).
- You can create custom test cases by setting a value or a range of values for particular fields of a protocol.
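To make the last point concrete, here is a minimal, hypothetical sketch (not Defensics' actual API) of how custom test cases can be enumerated from values or ranges configured on particular protocol fields; the field names are illustrative:

```python
from itertools import product

# Hypothetical field configuration: a range, an explicit set, and a fixed value.
# These names are illustrative, not taken from any real protocol model.
fields = {
    "length": range(0, 4),      # a range of values for one field
    "flags": [0x00, 0xFF],      # an explicit set of values
    "ttl": [64],                # a single fixed value
}

def generate_cases(fields):
    """Yield one test case per combination of the configured field values."""
    names = list(fields)
    for combo in product(*(fields[n] for n in names)):
        yield dict(zip(names, combo))

cases = list(generate_cases(fields))
# 4 lengths x 2 flag values x 1 ttl = 8 test cases
```

Enumerating the Cartesian product of the configured values is what makes such group-based sessions predictable in size, unlike purely random fuzzing.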
What needs improvement?
- You can't implement proprietary ciphering algorithms, nor can you modify protocol models if you need to test customized public protocols.
- You can't use the program at all without the USB license dongle. Being able to would be useful, for instance, to export results, prepare the wizard, and so on. It can be inconvenient if several teams share the license.
- Time estimation: the estimate is not always within the right order of magnitude.
- To test ARP on the client side, you have to clear the MAC table of the SUT. A feature such as sending ping requests to the SUT from a different virtual IP/MAC address each time, to force the client to send ARP requests, would be great.
- No automatic bug reproduction (as Peach has, for example).
- You can't create a protocol model from scratch using the GUI. You can use the traffic capture fuzzer, import a PCAP file, and generate test cases from it. Known protocols are described according to a Wireshark dissector; proprietary protocols have to be defined manually (by defining a label on a part of the data). It seems that we can go further with the Java SDK, but we didn't have enough time to test it.
- When using the GUI, you can't run fuzzing sessions both sequentially and in parallel at the same time, for instance for testing different protocols on different devices. One possible workaround is to use the CLI of Defensics and to use different configuration folders.
- When you choose the network interface to use, there is an "auto-configuration" box ticked by default. It means that Defensics will try to guess the interface you will use, but it often leads to mistakes.
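The virtual IP/MAC ping feature suggested above could be sketched as follows. This is a hypothetical illustration using raw packet crafting with Python's standard library only; the addresses are examples, and actually transmitting these frames would additionally require a raw socket and administrator privileges:

```python
import struct

def checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum over the given bytes."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    s = (s >> 16) + (s & 0xFFFF)
    s += s >> 16
    return ~s & 0xFFFF

def build_ping(src_mac: str, dst_mac: str, src_ip: str, dst_ip: str) -> bytes:
    """Craft an Ethernet/IPv4/ICMP echo request frame as raw bytes."""
    # ICMP echo request: type 8, code 0, checksum placeholder, id 1, seq 1.
    icmp = struct.pack("!BBHHH", 8, 0, 0, 1, 1)
    icmp = icmp[:2] + struct.pack("!H", checksum(icmp)) + icmp[4:]
    # IPv4 header: version/IHL, TOS, total length, id, frag, TTL, proto=ICMP.
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + len(icmp), 0, 0, 64, 1, 0,
                     bytes(map(int, src_ip.split("."))),
                     bytes(map(int, dst_ip.split("."))))
    ip = ip[:10] + struct.pack("!H", checksum(ip)) + ip[12:]
    # Ethernet header: destination MAC, source MAC, EtherType 0x0800 (IPv4).
    eth = (bytes.fromhex(dst_mac.replace(":", "")) +
           bytes.fromhex(src_mac.replace(":", "")) + b"\x08\x00")
    return eth + ip + icmp

# Rotate through virtual source MAC/IP pairs: when the SUT replies to each
# ping, it must first send an ARP request for the new, unknown source IP.
frames = [
    build_ping(f"02:00:00:00:00:{i:02x}", "ff:ff:ff:ff:ff:ff",
               f"192.0.2.{i}", "192.0.2.200")
    for i in range(1, 6)
]
```

Because each echo request arrives from a source IP the SUT has never seen, replying forces it to issue an ARP request for that address, exercising client-side ARP without having to clear its MAC table.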
For how long have I used the solution?
I've been using it, mainly for evaluation, for two weeks.
What was my experience with deployment of the solution?
What do I think about the stability of the solution?
What do I think about the scalability of the solution?
How are customer service and technical support?
Very responsive and efficient.
Which solution did I use previously and why did I switch?
I didn't use a previous solution.
How was the initial setup?
There was nothing difficult; it was all typical Next > Next > Finish wizards.
Which other solutions did I evaluate?
We also met people from BreakingPoint (Ixia) and µSecurity (Spirent). I also tested an open-source solution, Peach.
Which version of this solution are you currently using?