We all know what code quality is. Mostly. Though we may differ slightly on when a certain piece of code should be called good quality. Some aspects we can measure; others call for aesthetic judgement or simply gut feeling. We can have rules and policies that developers must obey, but I could not find a single document that tells exactly what good quality code looks like.
My own approach to scoring quality revolves around how well a developer can communicate ideas to another developer who needs to read the code. This includes readability, concise and accurate comments, meaningful names, code organization and layout on the surface, and proper componentization with clear interfaces and dependencies throughout the whole hierarchy of the application. Inherently, the code must also be performant and free of flaws (bugs, vulnerabilities). To achieve all this, application code must be self-documenting at its own level.
We at FreeSoft are in the language conversion business: Cobol in, Java out, for example. We’re talking about Cobol that somebody wrote 20–30 years ago. I am not going to score those code pieces; instead, let’s look at what we can say about the Java we create.
The legacy languages we work with are not object oriented, yet we must create object-oriented Java, simply because Java is inherently an OO language. During conversion our tools create classes that represent “meaning” in the legacy app. Typically a Cobol program turns into a Java class with methods… sigh!… when applicable.
Each piece of the legacy code is handled as an object. A variable turns into a LegacyString or LegacyNumber that implements the behavior of the legacy variable. A sequence of statements turns into a sequence of method calls that implement the behavior of the original statements. There is no magic here, really. There is hard development work and a lot of testing in the infrastructure that does this no-magic.
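To make this concrete, here is a minimal sketch of what such a wrapper class might look like. The class name, the `move` method, and the Cobol semantics shown are my own illustration of the idea, not FreeSoft’s actual API:

```java
// Hypothetical sketch of a wrapper that gives a Java field the behavior of a
// fixed-length Cobol alphanumeric variable (PIC X(n)). Invented for illustration.
public class LegacyString {
    private final int length;   // Cobol PIC X(n) fields have a fixed length
    private String value;

    public LegacyString(int length) {
        this.length = length;
        this.value = " ".repeat(length);   // alphanumerics start out as spaces
    }

    // Emulates Cobol MOVE semantics: truncate on the right if the source is
    // too long, pad with trailing spaces if it is too short.
    public void move(String source) {
        if (source.length() >= length) {
            value = source.substring(0, length);
        } else {
            value = source + " ".repeat(length - source.length());
        }
    }

    public String getValue() {
        return value;
    }
}
```

Used like this, the statement `MOVE 'SMITH' TO CUST-NAME` for a `PIC X(10)` field would become `custName.move("SMITH")`, leaving the value padded to ten characters.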
There is quite a wide spectrum, though, of how language conversion can be done. At one extreme, the code looks almost like the legacy code, and Java developers say: huh?! At the other extreme, the code looks nothing like the legacy original, and legacy developers say: huh?! This “huh” translates to a quality score and there you go! What do you – your developers – need exactly? Who will maintain your code: re-trained Cobol developers or legacy-sensitized Java guys?
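The two ends of that spectrum can be sketched with one tiny Cobol fragment, `IF BALANCE > LIMIT MOVE 'Y' TO OVER-FLAG.`, rendered two ways. Both renderings are invented for this sketch; neither is actual converter output:

```java
import java.math.BigDecimal;

// Two hypothetical renderings of the same Cobol logic, marking the ends of
// the conversion spectrum. All names here are invented for illustration.
public class ConversionSpectrum {

    // Extreme 1: legacy-faithful. Working-storage names and a character flag
    // survive; a Cobol developer feels at home, a Java developer says huh?!
    static String wsOverFlag = "N";

    static void p2000CheckBalance(BigDecimal wsBalance, BigDecimal wsLimit) {
        if (wsBalance.compareTo(wsLimit) > 0) {
            wsOverFlag = "Y";   // MOVE 'Y' TO OVER-FLAG
        }
    }

    // Extreme 2: idiomatic Java. A pure boolean method with meaningful names;
    // a Java developer feels at home, a Cobol developer says huh?!
    static boolean isOverLimit(BigDecimal balance, BigDecimal limit) {
        return balance.compareTo(limit) > 0;
    }
}
```

The logic is identical; what differs is which maintainer has to squint.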
With a full rewrite, the code is as Java as it gets, but decades of legacy developers’ knowledge is potentially binned.
Cross-compilers create code that runs in a JVM but hardly – if at all – resembles Java. They may also have limitations on functionality. (No online?)
We can tweak conversion features along this spectrum according to customer requirements, to achieve the level of code quality your people believe best serves the cause. There is no one-size-fits-all conversion. We can also add customer-specific components that make developers lean towards our solution.
Funky names in legacy code? Shorthands and abbreviations in variable names and labels? No problem. Our conversion engine will figure out what Uncle Jack meant 30 years ago… No, it doesn’t.
There are many aspects of readability; naming conventions are one of them. You will get the obscurity back after conversion. Maybe the conversion even makes it worse.
Spaghetti legacy code converts to strings of fusilli when the conversion engine tries to make sense of the code flow. It does not necessarily become more readable, but at least it becomes OO.
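As a hedged illustration of spaghetti-to-fusilli, consider the classic Cobol backward jump `PERFORM-LOOP. ADD 1 TO WS-I. IF WS-I < 10 GO TO PERFORM-LOOP.` A conversion engine can recognize this as a loop and emit structured code, though the result still wears its legacy shape. The names and restructuring below are my own sketch, not actual converter output:

```java
// Hypothetical restructuring of a Cobol GO TO loop into structured Java.
// The flow is no longer spaghetti, but the paragraph-per-method layout and
// working-storage naming still mirror the legacy source.
public class FlowSketch {
    static int wsI = 0;   // 01 WS-I PIC 9(4) VALUE ZERO.

    // PERFORM-LOOP.
    //     ADD 1 TO WS-I.
    //     IF WS-I < 10 GO TO PERFORM-LOOP.
    static void performLoop() {
        do {
            wsI = wsI + 1;      // ADD 1 TO WS-I
        } while (wsI < 10);     // the backward GO TO becomes a loop condition
    }
}
```

Structured, yes; idiomatic Java, not quite. That is the fusilli.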
We are proud to deliver SonarQube-compliant code to the highest standard. But! There is a but, because there is always a but. The converted code reflects the legacy. Too-long names, too-short names, cyclomatic complexity, code repetition, empty code blocks, too-large code blocks and many other issues are ones we inherit.
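A small invented example of what such inherited issues look like in converted code. None of this is actual converter output; it just packs several of the smell categories above into a few lines:

```java
// Hypothetical converted method exhibiting issues inherited from the legacy
// source that a linter like SonarQube would flag. All names are invented.
public class InheritedIssues {

    // "ws9" mirrors a cryptic abbreviated Cobol data name: a too-short,
    // meaningless identifier survives the conversion.
    static int ws9(int a, int b) {
        if (a > b) {
            // empty block: the legacy paragraph here did nothing
        }
        int r = 0;
        r = r + a;   // repeated statements mirroring duplicated legacy MOVEs
        r = r + a;
        return r;
    }
}
```

The converter faithfully preserved the structure; the smells came along for the ride.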
It is not without precedent that we modify our conversion engine to reduce the number of issues SonarQube reports, after consulting the customer about the reasons and about potential modifications to improve quality.