What is most valuable?
It's been more than a year since I wrote this review, so what has changed?
Well, the first thing to say is that we (that is, a large multi-national financial services company) continue to use SonarQube; indeed it has become mandatory for all projects, new and existing. We have introduced an aggregation portal which takes metrics from SonarQube via its API, along with other sources, to provide a cross-project and somewhat sanitised view for upwards reporting. It's important, we feel, not to try and hide issues, but at the same time not to 'set hares running' by exposing more senior management to metrics 'in the raw'. So instead, we gather all the evidence that we have, and add to that some constructive assessment from the lead solution designers, scrum masters and others, to provide a more balanced and reasoned view. As we all know, there is a whole multitude of metrics baying for our attention, and it is not always obvious which are critical and which are less important (and that is often a factor of timing and priority).
One thing we did do this year is consider other complementary products, particularly in the area of identifying security vulnerabilities in both our own code bases and the open source third-party libraries that are routinely packaged with a released application. The latter category can often account for 80%+ of the actual app, so it's an important area not to neglect. SonarQube does provide some support here in respect of the integration of the OWASP Top 10, but it clearly isn't an area of strength when compared to more dedicated products. We ran an RFP and have now selected two further products that will bolster this aspect considerably.
We have also moved forward with SonarQube in three important ways. First, we have upgraded our implementation to version 5.4 (previously 4.5.x). This was important to many of our teams because some plugins require the later version. The second change is that we have moved our implementation of the SonarQube server into Docker. SonarSource provide an OOTB image on Docker Hub which is a good starting point. We have enhanced it in a couple of ways, to reduce the size and attack surface and also to add our specific config, but it was pretty easy to do, so good job from SonarSource here. The third difference is that we have moved some of our installs to the Professional edition rather than OSS. There were a couple of reasons: one was to access some commercial plugins which come bundled as part of the product, so it made more sense funding-wise; another was to provide better support for a central SQ service. When I said 'some' of our installs, that was deliberate. We don't only provide SQ as a central PaaS, but also allow distributed DevOps teams to spin up their own, as long as they fully understand that operational support becomes their problem too, of course (no free lunch here!). This works well for teams who want to manage more of their delivery pipeline rather than be part of a change control process where other participants might need to be consulted, and perhaps engage in regression testing, when changes are requested.
One significant change in v5.x is the movement of the database update to the server. This has a couple of important consequences. The first is that the build-breaker plugin is no longer useful, since it's harder to synchronise the fact that a build has failed with the update of the analysis outcome visible on the server. We used that plugin a lot, so this was a bit of a PITA. There is a compatible approach that SonarSource have documented, but personally I'm not a great fan because it increases the number of moving parts and thus the opportunity for something else to fail. But with any upgrade there are always 'swings and roundabouts', and on the whole the positives outweigh the negatives (decoupling the client-side analysis from the database update *is*, on the whole, a good thing). SQ v5 also comes with a bunch of new 'runners', now called 'scanners'. We have used the basic one, the Maven one and the MSBuild one, and all work fine. It's another change that you need to consider as part of migration, but not a massive one. Security controls have been enhanced in v5 and it's now easier to apply more granular access controls than in v4. For companies that outsource development work that's likely to be quite important (it is for us).
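For the curious, the documented replacement for the build-breaker plugin works roughly as follows: the scanner writes a `report-task.txt` file containing the URL of the background Compute Engine task, which the build job then polls before checking the quality gate via the Web API. This is a minimal sketch only; the server URL and file path are whatever your own scanner run produces, and error handling is kept deliberately thin.

```python
"""Sketch of the v5.x build-breaker replacement: poll the Compute Engine
task recorded by the scanner, then fail the build if the quality gate is
not green. Paths and URLs come from your own scanner run."""
import json
import time
import urllib.request


def parse_report_task(text):
    """Parse the key=value pairs the scanner writes to report-task.txt."""
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)


def gate_passed(project_status_json):
    """True if the quality gate payload reports an overall status of OK."""
    return json.loads(project_status_json)["projectStatus"]["status"] == "OK"


def check_quality_gate(report_task_path=".scannerwork/report-task.txt"):
    with open(report_task_path) as f:
        task = parse_report_task(f.read())
    # Poll the CE task until the server-side analysis completes.
    while True:
        with urllib.request.urlopen(task["ceTaskUrl"]) as resp:
            ce = json.load(resp)["task"]
        if ce["status"] in ("SUCCESS", "FAILED", "CANCELED"):
            break
        time.sleep(2)
    if ce["status"] != "SUCCESS":
        raise SystemExit("analysis failed on the server")
    url = (task["serverUrl"]
           + "/api/qualitygates/project_status?analysisId=" + ce["analysisId"])
    with urllib.request.urlopen(url) as resp:
        if not gate_passed(resp.read().decode()):
            raise SystemExit("quality gate failed -- breaking the build")


if __name__ == "__main__":
    check_quality_gate()
```

As noted above, it works, but it does add moving parts: the build job now depends on the server being reachable after analysis, which the old plugin avoided.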
Licensing in the 'immutable server' world, whether that's Docker or native cloud, remains unresolved. SonarSource seem a little behind the curve here, but we are talking to them. The key point is that we no longer stand up environments (including CI/CD pipelines) with any intention that they will have a 'shelf life' beyond their immediate use. Creating environments for specific use cases and then tearing them down frequently (often this can be measured in minutes or hours) has become commonplace for us, and has tremendous advantages over previously used 'convergence' approaches using config management tools like Puppet, Chef or Ansible. Many vendors recognise this and have adjusted licensing arrangements; SonarSource aren't quite there yet (but they are willing to talk about it).
Anyway, that's probably enough of an update. I hope you find this, and the previous review, helpful.
Original Review (circa: 2014/15)
Moving to a largely evidence-based assessment is hugely beneficial, especially if you are managing out-sourced resources. It provides a clear definition of what acceptable quality actually means (and what it does not), and supports the decision of when you can stop, because there are no arguments based purely on opinion. That said, metrics only take you so far; you still need smart people who can interpret and see beyond the base facts provided.
The ability to integrate analysis of software engineering metrics directly into the development lifecycle, in much the same way as any other practice such as unit testing. Specifically, developers run SonarQube analysis frequently and don't commit changes to SCM while build-breaker issues remain.
Early warning via CI build pipeline and especially setting up ‘Build Breakers’ based on a team or code base specific quality gate - a set of rules/thresholds that determine the most important measures for a particular code base.
Targeted improvement allows a team to identify specific areas of threat (e.g. technical debt) and then set purposeful goals to improve in those areas (rather than trying to improve everything).
The SonarQube community is very active which often means that finding solutions is a blog post away from other like-minded organisations. Community plugins are a staple for this product and have tremendous breadth and depth.
How has it helped my organization?
It would be utterly impossible to contemplate Continuous Delivery without including a major focus on ensuring affordable software quality. SonarQube plays a key role in this endeavour and provides senior management oversight across multiple project teams and business deliveries. It fits in very well with existing Continuous Integration build pipeline workflows, and as we move towards Continuous Delivery it helps ensure 'no surprises' release management.
It improves our software quality assessment at an affordable cost (licensing, time and effort). Previous attempts failed to win the support of the development community (typically they were overly complex and intrusive, and/or not sufficiently timely), without which any such initiative is doomed to failure.
What needs improvement?
- More granular security
- Simpler integration with JIRA
- It would be nice for a dashboard server to be able to address more than one database (this limitation tends to encourage either lots of small (team/project) servers or one uber server if you want to report across projects).
For how long have I used the solution?
What was my experience with deployment of the solution?
Originally we used Puppet to apply our specific configuration to our SQ install, and this was pretty successful, albeit reasonably complicated. More recently we have moved to using Docker. SonarSource provide a base image on Docker Hub which is easy to extend for your own use case. We updated it to use a smaller-footprint base image (Alpine) to reduce the size and attack surface, and then added our own set of plugins and other config. All straightforward.
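To give a flavour of the approach, a derived image looks something like the following. This is a sketch only, not our actual build: the image tag, plugin URL and file paths are placeholders for whatever your own environment pins.

```dockerfile
# Illustrative only -- tags, plugin versions and paths are placeholders.
# Extends the SonarSource base image from Docker Hub.
FROM sonarqube:alpine

# Bake in the plugins we standardise on, pinned to known-good versions.
# (example.org URL is hypothetical.)
ADD https://example.org/plugins/sonar-java-plugin-3.x.jar \
    /opt/sonarqube/extensions/plugins/

# Apply our site-specific configuration (DB connection, LDAP, etc.).
COPY sonar.properties /opt/sonarqube/conf/sonar.properties
```

Because everything is baked into the image, spinning up a fresh, fully configured server (or tearing one down) is a single `docker run` away, which suits the immutable server pattern mentioned below.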
What do I think about the stability of the solution?
There were initially some questions about performance, and in particular the location of the database (some suggesting that this needed to be physically close to the point of analysis to minimize network latency). However, this is highly dependent on the size of the code base under analysis (and the multiplicity of code bases). In our case we didn't find any problem in running the database, server and analysis process in separate locations (RDS, EC2 and Jenkins respectively). Our largest code base to date is around 500K lines.
How are customer service and technical support?
Average. That said, considerable effort has been made to make the product largely self-supporting, at least from the install and initial config perspective. Responses to queries made directly to SonarSource haven't always been particularly successful, but the community forum is pretty good.
Technical Support:
We haven’t had a need for an official support contract with SonarSource. The open source community around SonarQube is very active and has met all of our needs to-date. That said, SonarSource do publish very helpful materials, documentation, blog posts, webinars etc. which we definitely take advantage of.
Which solution did I use previously and why did I switch?
Yes. We had been using Coverity. However, whilst an excellent product with perhaps more capability, we found that it was more difficult to integrate into the development lifecycle, and take-up was relatively modest. The sophistication of the solution was not well suited to our requirements, in the sense that we are not producing commercial software but creating applications for internal use; the depth of analysis available was not really needed, especially given the much steeper learning curve. Licensing and platform costs were also high. We found SonarQube to be sufficiently powerful at a much more affordable price point.
More recently we have added two products with a specific focus on detecting security vulnerabilities. SQ does offer basic OWASP Top 10 support within the language rule sets, but it's fair to say that this is probably not sufficient to keep your security folks happy. We definitely wanted to add support for scanning the third-party libraries which probably make up 80%+ of our released apps.
How was the initial setup?
Creating instances of each of the major components (server and database) is very straightforward. Of course there are some complexities if you want to operate high availability, failover and so on, but no more so than with any other application server. Given the stage in the lifecycle where SonarQube is used, it is in some ways less critical, so periodic outages can be tolerated. We typically operate an immutable server pattern, so if/when we have server issues we can easily destroy and re-create our environments, or auto-scale them up and down as required. Integration into the CI world is easy (a Jenkins plugin is available, or just use the command-line 'runner') and integration into the developer lifecycle is also easy via plugins for mainstream IDEs (Eclipse, Visual Studio, etc.).
Using Docker simplifies things considerably. At the same time, the clutch of new 'scanners' does mean some extra work if you are migrating from v4.
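For reference, wiring the command-line scanner into a CI job needs little more than a `sonar-project.properties` file at the root of the code base. The property names below are the standard ones; the values are illustrative placeholders, not our real project keys or hostnames.

```properties
# sonar-project.properties -- minimal example, values are placeholders
sonar.projectKey=org.example:my-app
sonar.projectName=My App
sonar.projectVersion=1.0
sonar.sources=src/main
sonar.host.url=http://sonarqube.internal:9000
```

With that in place, the Jenkins job simply invokes `sonar-scanner` (or `mvn sonar:sonar` for Maven builds) after the tests pass.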
What about the implementation team?
In-house. The product is sufficiently simple that setting up the server environment requires only some straightforward DevOps skills (spinning up servers and configuration management), creating Jenkins jobs and installing IDE plugins. This is something that your developers should typically already be familiar with. We didn't need any vendor support beyond the available documentation. Product training was not really necessary, although we did run some awareness/101 sessions in-house, but more to promote why we wanted to go this route than any how-to technical skills.
What's my experience with pricing, setup cost, and licensing?
The only associated costs, if you are following the OSS route, are the platforms on which you will run your server and database, and any commercial plugins that you want to use (we only use a couple of those). There is a need to invest in a robust environment and some recommended practice, but that is no different from any other similar software engineering process. We tend to prefer devolvement of responsibility rather than centralized control. This includes individual teams looking after their own infrastructure, as well as determining their own priorities in terms of continuous improvement (albeit there are some standard measures that apply, for example unit testing, code coverage, technical debt and so on).
For v5.4 we moved one of our installs to use the Professional edition. This made sense for us because we wanted to use some of the commercial plugins that are already bundled as well as formalise support with SonarSource. We still use the OSS version for teams who don't need commercial plugins and want to manage their own SQ environment (see above comments).
Which other solutions did I evaluate?
Yes, and we did so again recently (2016). We had an incumbent Coverity solution which was very expensive and very under-used (too complicated). Since then we have also considered specific security analysis tools as complementary products (e.g. Checkmarx, Veracode, Nexus Lifecycle/Firewall, and a few others), and we have since selected from these.
What other advice do I have?
If you are looking at SonarQube you already realize the importance of software quality and its value proposition. Sometimes you just want to discover the types and severity of issues you have, especially for legacy or inherited code bases (i.e. as a result of a merger). You should definitely follow the best practice of not trying to cover every metric all at the same time, but instead pick out the two or three (at most) that are most critical to you right now (recognizing that this will change over time). Time-based metrics are especially useful to help you understand whether you are getting better or worse, and other well-known strategies (such as the 'boy scout rule') can also help formalise an improvement plan.
Perhaps the single most important consideration is to involve your development community right from the start (don't try and foist a tool, set of skills or a change in process on them, as they will resist). Those guys are the ones who know where all the skeletons are, and their buy-in is absolutely critical, especially if you need to change some existing behaviors. In my experience most software professionals are highly supportive, but you should expect a few negative challengers.
Which version of this solution are you currently using?
OSS + Prof