What is most valuable?
Of the various components that I use in UniDac, I use just about every aspect of them. I cannot pinpoint any single feature, as doing so would diminish the others. They are all very important, they all work without any issues, and because of this they are all valuable.
Probably the most important feature of the UniDac components is that they do not expose the hundreds of extra options that a single component from a competitor's product would include. These verbose options serve little purpose for a development team other than tinkering, confusion, and nightmares when support issues arise. If you really want to go looking for such options, you will find them in UniDac as well, but at least then you know what you are looking for, rather than just tinkering about with settings that less experienced developers would not understand.
How has it helped my organization?
In general, I look for bug fixes in a new release that affect my default database engine. In the years that I have used the solution, I encountered only one issue, when upgrading from version 4 to version 5. It was resolved within 24 hours.
What needs improvement?
Error handling. This has caused me many problems in the past. When an error occurs, the event called on the connection does not seem to behave as documented. If I attempt a retry, or opt not to display an error dialog, the dialog is displayed anyway. In all fairness, I have never reported this. I think it is more important that a unique error code be passed to the error event, identifying a uniform type of error, such as ecDisconnect or ecInvalidField. It is very hard to find out what any of the error codes currently passed actually mean; a list for each database engine would be great. Trying to catch an exception without displaying the UniDAC error message is impossible, no matter how you modify the parameters in the OnError event of the TUniConnection object.
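To illustrate the suggestion, a sketch of the kind of handler I have in mind. The TUniErrorKind enumeration and the ClassifyError function are hypothetical, not part of UniDAC; the EDAError parameter type matches the documented OnError signature, but the exact declaration may differ between versions:

```delphi
type
  // Hypothetical uniform error classification (not in UniDAC today)
  TUniErrorKind = (ekDisconnect, ekInvalidField, ekConstraint, ekUnknown);

procedure TMainForm.UniConnection1Error(Sender: TObject; E: EDAError;
  var Fail: Boolean);
begin
  // ClassifyError is a hypothetical mapping from the engine-specific
  // ErrorCode to a uniform kind, as proposed above.
  case ClassifyError(E.ErrorCode) of
    ekDisconnect:
      begin
        UniConnection1.Connect;  // attempt a single retry
        Fail := False;           // should suppress the error dialog,
                                 // but in my experience it shows anyway
      end;
  else
    Fail := True;                // let other errors propagate normally
  end;
end;
```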
I have already implemented the following things myself. They are suggestions rather than specific requests.
Copy Datasets: Copying one dataset to another currently involves an abundance of redundant options. A facility to copy one dataset to another in a single call would be handy.
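A minimal sketch of what such a single-call facility could look like, using only standard TDataSet methods; this helper is my illustration, not a UniDAC API:

```delphi
// Hypothetical helper: copy all rows of one open dataset into another
// in a single call. Assumes both datasets are open and share field names.
procedure CopyDataset(Source, Dest: TDataSet);
var
  I: Integer;
begin
  Source.First;
  while not Source.Eof do
  begin
    Dest.Append;
    for I := 0 to Source.FieldCount - 1 do
      Dest.FieldByName(Source.Fields[I].FieldName).Value :=
        Source.Fields[I].Value;
    Dest.Post;
    Source.Next;
  end;
end;
```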
Redundancy: I am currently working on this. I have extended the TUniConnection with an additional property called FallbackConnection. If the TUniConnection goes offline, it attempts to connect via the FallbackConnection. If successful, it sets the Connection property of all live UniDatasets in the app to the FallbackConnection and re-opens them if necessary. The extended TUniConnection holds a list of the datasets that were created; each dataset is responsible for registering itself with the connection. This is a highly specific feature, supporting the kind of offline mode found in mission-critical/point-of-sale solutions. I have never seen it implemented before in any DAC, but I think it is a really unique feature with a big impact.
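The steps above can be sketched roughly as follows. This is an outline of my own extension, not UniDAC code; class and method names are illustrative:

```delphi
type
  // Sketch of my FallbackConnection extension (not part of UniDAC)
  TFallbackUniConnection = class(TUniConnection)
  private
    FFallbackConnection: TUniConnection;
    FDatasets: TList;  // datasets that registered themselves
  public
    procedure RegisterDataset(ADataset: TCustomUniDataSet);
    procedure FailOver;
    property FallbackConnection: TUniConnection
      read FFallbackConnection write FFallbackConnection;
  end;

procedure TFallbackUniConnection.RegisterDataset(ADataset: TCustomUniDataSet);
begin
  if FDatasets.IndexOf(ADataset) < 0 then
    FDatasets.Add(ADataset);
end;

procedure TFallbackUniConnection.FailOver;
var
  I: Integer;
  DS: TCustomUniDataSet;
begin
  FFallbackConnection.Connect;
  for I := 0 to FDatasets.Count - 1 do
  begin
    DS := TCustomUniDataSet(FDatasets[I]);
    DS.Connection := FFallbackConnection;  // repoint each dataset
    if not DS.Active then
      DS.Open;                             // re-open if it was dropped
  end;
end;
```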
Dataset to SQL: A ToSql function on a dataset that creates a full SQL text statement, with all parameters (excluding blobs) converted to text and included in the returned string.
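A rough sketch of such a ToSql function, written against a TUniQuery; the function itself is my suggestion, not an existing UniDAC method, and real code would need proper type-aware formatting rather than simple string replacement:

```delphi
// Hypothetical ToSql: expand a parameterised statement into plain SQL
// text with parameter values inlined (blobs excluded, as suggested above).
function ToSql(Query: TUniQuery): string;
var
  I: Integer;
begin
  Result := Query.SQL.Text;
  for I := 0 to Query.Params.Count - 1 do
  begin
    if Query.Params[I].DataType in [ftBlob, ftGraphic, ftOraBlob] then
      Continue;  // skip blob parameters
    Result := StringReplace(Result, ':' + Query.Params[I].Name,
      QuotedStr(Query.Params[I].AsString), [rfReplaceAll, rfIgnoreCase]);
  end;
end;
```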
Extended TUniScript: TMyUniScript allows me to add lines of text to a script using the normal dataset functions: Script.Append, Script.FieldByName('xxx').AsString := 'yyy', Script.AddToScript, and finally Script.Post, then Script.Commit. AddToScript builds the SQL text statement and appends it to the script using the ToSql function described above.
Record Size Calculation: It would be great if UniDac could estimate the size of a particular record from a query or table. This could be used to automatically set the packet fetch/request count based on the size of the Ethernet packets on the local area network. I believe this would increase performance and reduce network traffic when returning larger datasets. I am aware that this would be a feature unique to UniDac, but it would yield a massive performance enhancement. I would suggest setting the packet size on the TUniConnection, which would affect all linked datasets.
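A crude sketch of the idea, sizing the FetchRows property from an estimated record size and an assumed MTU. The estimation here (summing TField.DataSize) is a simplification, and the 1500-byte MTU is an assumption; neither helper is a UniDAC feature:

```delphi
uses Math;

// Rough per-record size: sum of the in-memory field sizes.
function EstimateRecordSize(DS: TDataSet): Integer;
var
  I: Integer;
begin
  Result := 0;
  for I := 0 to DS.FieldCount - 1 do
    Inc(Result, DS.Fields[I].DataSize);
end;

// Fit as many whole records as possible into one network packet.
procedure TuneFetchRows(Query: TUniQuery; MtuBytes: Integer = 1500);
var
  RecSize: Integer;
begin
  RecSize := EstimateRecordSize(Query);
  if RecSize > 0 then
    Query.FetchRows := Max(1, MtuBytes div RecSize);
end;
```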
For how long have I used the solution?
I have used Devart for between five and seven years.
What was my experience with deployment of the solution?
What do I think about the stability of the solution?
We never had any stability issues.
What do I think about the scalability of the solution?
Other than the redundancy issue described above, we have not had scalability issues. It would be really great if we had some data replication APIs available, although I realize that this is far outside the scope of the solution. I do believe that a universal database replication system, with the quality of the other products, would be a very successful product.
How is customer service and technical support?
Customer Service: Excellent, I never had any problems here.

Technical Support:
My experience of support is excellent. However, just to add, I have never really had much call to use technical support, as the product is so simple and reliable to use. The one issue I had was a change in behaviour between major releases, and due to time constraints I did not have time to investigate the documentation before contacting Devart. The answer was prompt: set the global xxx := True. I believe that may have been my only request to technical support over the years.
Which solutions did we use previously?
I mostly use Firebird/SQLite. I tried most of the usual component sets, as well as more targeted component sets for Firebird. They did not offer any advantage over this solution and, other than FireDac, they lacked multi-database support.
How was the initial setup?
The setup could not have been more straightforward. You just need to run the setup. Compiling the source code, however, is a different story. That piece is quite difficult.
What was our ROI?
To be honest, I can hardly call the token fee associated with UniDac a serious investment. I spend €1,000s each year evaluating component sets that I never use, based on one deficiency or another. It is hard to quantify the ROI of something that just works without incident. Considering that I use UniDac in every application that I create, I think it is invaluable.
What's my experience with pricing, setup cost, and licensing?
For what it offers, I think this solution is a must for any Delphi programmer. My decision to use UniDac years back was made after a lot of research. It is a lot simpler to use than FireDac, and it is cheap to purchase. For what it offers a software company, I believe that it is the best choice. I have looked at many similar database component sets, and after many tests I ended up with UniDac. The component sets I choose are not selected lightly: not only are performance, stability, and code quality extremely important, but so is the company behind them. In my view, there is no point building an application, as part of a long-term strategy, on components whose vendor is unstable or unreliable (of which there are many); that would be a real mistake.
Which other solutions did I evaluate?
I have evaluated AnyDac and FireDac, which are really the same thing. I found them cumbersome to use. There are too many options that entice a programmer to set them without knowing what they are playing with. I also looked at various others, such as IB.
What other advice do I have?
Just install it. Experiment with error handling; for me, everything else just works exactly as any programmer would expect.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Jul 05 2017