Today I consider a lot of the things I do to ensure the quality of my JavaScript code to be natural and self-evident. However, these things haven’t always been so natural to me, and they may not be so natural to you either. That’s why I’d like to share some of the things we do here at e-conomic to ensure the quality of our JavaScript code.
Read on below, or check out the presentation I gave on this topic as part of the Copenhagen JS August seminar at e-conomic.
A note on JavaScript: JavaScript has matured a lot over the last few years – or rather, the way we use JavaScript, and the VMs that run it, have matured; JavaScript itself hasn’t actually changed a great deal. We use JavaScript both in the browser and on the server with Node.js.
Here is my list of things to do to raise the quality of your code in JavaScript:
- JSHint. Checks our code using static code analysis and detects a large number of errors, such as undeclared variables etc. If you only do one thing on this list, run JSHint (there’s a small example of what it catches right after this list).
- Unit testing. We use the Jasmine test framework for code in the browser and Mocha on the server. JavaScript is business-critical code – if it fails, users won’t be able to use the website, so it should of course be unit tested (see the sample Jasmine spec after this list).
- Integration testing. We use Mocha on the server to call all endpoints in the API and check that they are working (a sketch of such a test follows this list). For low-level integration testing, we use PhantomJS. This gives you a “headless” browser, i.e. everything works as in a proper browser, except that nothing is rendered on the screen. We use Selenium to run proper browsers through test scenarios.
- Continuous integration. We have a CI server (TeamCity) which runs all tests and JSHint every time someone checks in code. Monitors hanging from the ceiling in our office continually show the build status, so we know immediately if something has failed. Additionally, the CI server runs a nightly build that includes load/performance tests with JMeter, worker jobs performing cleanup etc.
- Code coverage. At the moment we run code coverage reports on an ad hoc basis. A code coverage report shows you which parts of the code have been exercised by your tests. It’s a great way to find out if you’ve forgotten any test scenarios, which is particularly useful if you write a piece of code and only later write tests for it. It’s also good for finding dead code that isn’t used anywhere and can be deleted. I plan to try to make code coverage part of the check-in, so that the build will fail if you check in a file with low code coverage (a rough sketch of such a gate follows this list).
- Systematic code review. We’re not allowed to write code without having it checked by another developer. We use GitHub, which has the huge advantage that you can comment on a commit directly in the code. When you write a comment, the entire developer team is automatically notified by email. This is a great way of getting the whole team to communicate about code standards and best practices, and of making sure these things aren’t just reduced to a couple of best practice documents gathering dust somewhere.
- Cleanup. There is a good understanding in our company that the code base needs ongoing cleanup and maintenance to keep it from becoming bloated. By doing the cleanup continuously during each sprint, we hope to avoid the scenario where everything comes tumbling down as more and more new features are introduced without any cleanup work being done.
- Identical staging and production environments. We use a fully duplicated production environment for acceptance testing and various ad hoc testing that requires a “real” environment. This means that our staging environment has just as much database power in its cluster as the production environment, and we can even adjust the web server capacity. Having an environment like this has been invaluable and has helped us discover things we wouldn’t have noticed otherwise.
- DevOps. The monitors in our office allow us to constantly check the load on the database. We also have alarms in place so we hear bird noises in the office when the code throws an error in production. This allowed me to fix a customer’s issue before the customer had even had time to call support. On the downside, I get a serious fright every time I hear a seagull in my garden at home since the seagull alarm signifies a very serious error which we have so far only encountered in tests… 🙂 We use loggly.com and alertbirds.com to perform this monitoring.
- Log browser JavaScript errors from production. If you don’t make an active effort to log these errors, they will disappear without a trace (a minimal window.onerror example follows this list).
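To give a feel for the first point, here is a small, made-up snippet of the kind of mistake JSHint catches with static analysis. The function and the inline options are purely illustrative, not our actual configuration:

```javascript
/* jshint undef: true */
/* global console */

function calculateTotal(lines) {
    var total = 0;
    for (var i = 0; i < lines.length; i++) {
        total += lines[i].amount;
    }
    // Typo: "totl" was never declared. With "undef" enabled, JSHint
    // reports "'totl' is not defined" instead of letting the browser
    // create an implicit global (or throw) at runtime.
    console.log(totl);
    return total;
}
```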
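For unit testing, a minimal Jasmine spec looks something like the sketch below. The invoiceTotals module is a made-up example, not code from our product:

```javascript
// Jasmine spec for a hypothetical invoiceTotals module.
describe("invoiceTotals", function () {
    it("sums the amounts of all invoice lines", function () {
        var lines = [{ amount: 100 }, { amount: 250 }];
        expect(invoiceTotals.sum(lines)).toEqual(350);
    });

    it("returns 0 for an empty invoice", function () {
        expect(invoiceTotals.sum([])).toEqual(0);
    });
});
```

Mocha specs on the server look almost identical; mostly the assertion style differs.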
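The server-side integration tests that call our API endpoints follow the same pattern. Here is a rough sketch using Mocha together with Node’s built-in http module – the host and path are placeholders, not our real API:

```javascript
var http = require("http");
var assert = require("assert");

describe("GET /api/invoices", function () {
    it("responds with 200", function (done) {
        // Placeholder URL – point this at the environment under test.
        http.get("http://localhost:3000/api/invoices", function (res) {
            assert.equal(res.statusCode, 200);
            res.resume(); // drain the body so the connection is released
            done();
        }).on("error", done);
    });
});
```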
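As for failing the build on low coverage, the idea is simply to read the coverage report in a CI step and exit with a non-zero code when it drops below a threshold. The sketch below assumes a JSON summary in the shape produced by Istanbul’s json-summary reporter; both the file path and the threshold are assumptions you would adapt to your own setup:

```javascript
var fs = require("fs");

var THRESHOLD = 80; // percent – pick whatever limit fits your project

// Assumed location/format: Istanbul's coverage/coverage-summary.json.
var summary = JSON.parse(
    fs.readFileSync("coverage/coverage-summary.json", "utf8")
);
var pct = summary.total.statements.pct;

if (pct < THRESHOLD) {
    console.error("Statement coverage " + pct + "% is below " + THRESHOLD + "%");
    process.exit(1); // non-zero exit code makes the CI build fail
} else {
    console.log("Coverage OK: " + pct + "%");
}
```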
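Finally, logging browser errors from production can start out as simple as a window.onerror handler that ships every uncaught error to a logging endpoint. The /log/js-error URL below is a placeholder for whatever endpoint or logging service you use:

```javascript
// Catch uncaught exceptions in the browser and report them.
window.onerror = function (message, url, line) {
    // Image beacon: works in every browser and needs no libraries.
    var beacon = new Image();
    beacon.src = "/log/js-error" +
        "?msg=" + encodeURIComponent(message) +
        "&url=" + encodeURIComponent(url) +
        "&line=" + encodeURIComponent(line);
    // Return false so the browser's default error handling still runs.
    return false;
};
```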
In addition to all this, we also have a QA department that systematically runs various tests on a scheduled basis. This department handles Selenium and JMeter testing and maintains the CI server.
No guarantees
Even though we do all the things listed above, we have no guarantee against low-quality code. A great deal still depends on having a simple architecture and keeping your code simple and well-structured. Producing good, maintainable, high-quality code still requires skilled developers and plenty of experience.
It’s also important to keep in mind that a lot of the items on the list can be carried too far. You should strive to find a balance where you raise the quality of your code within reasonable constraints. Remember that ensuring the quality of your code isn’t an exact science – it’s about finding a way of doing things that best suits your particular business.
How do you work?
I hope you’ve been inspired by the points I’ve made here. If you do things to ensure code quality that are not on this list, please feel free to share them below.
You can also check out some of the other presentations from the Copenhagen JS seminar at e-conomic.