
Improving your CFML code quality

Let's talk about code quality.

We all agree that our code needs to be functional so that it meets business requirements. We should also aim for code that is well written and maintainable for future changes. A lot of elements play into that. A well-thought-out system architecture is an important foundation. Selecting an appropriate framework could be the next step. Finally, you might look at how you format and write your code on a line-by-line basis.

This talk provides an introduction to code quality. We will look at various aspects of the term first. From there we can investigate different ways to perform code analysis, which will help you measure and understand code quality. A range of categories of tools is available, some of which also support CFML.

In the second part of the talk we'll look at the details and usage of CFLint. CFLint is a static code analyser for CFML that is based on the CFParser project.

Improving your CFML code quality

  1. (TOOLS FOR) IMPROVING YOUR CFML CODE QUALITY KAI KOENIG (@AGENTK)
  2. AGENDA ▸ Software and code quality ▸ Metrics and measuring ▸ Tooling and analysis ▸ CFLint
  3. SOFTWARE AND CODE QUALITY https://www.flickr.com/photos/jakecaptive/47697477
  4. THE ART OF CHICKEN SEXING bsmalley @ commons.wikipedia.org
  5. AND NOW…QUALITY
  6. CONDITION OF EXCELLENCE IMPLYING FINE QUALITY AS DISTINCT FROM BAD QUALITY https://www.flickr.com/photos/serdal/14863608800/
  7. SOFTWARE AND CODE QUALITY TYPES OF QUALITY ▸ Quality can be objective or subjective ▸ Subjective quality: dependent on personal experience to recognise excellence. Subjective quality is ‘universally true’ from the observer’s point of view. ▸ Objective quality: measure ‘genius’, quantify and repeat -> feedback loop
  8. SOFTWARE AND CODE QUALITY CAN RECOGNISING QUALITY BE LEARNED? ▸ Chicken sexing seems to be something industry professionals lack objective criteria for ▸ Does chicken sexing as a process of quality determination lead to subjective or objective quality? ▸ What about code and software? ▸ How can we improve in determining objective quality?
  9. “ANY FOOL CAN WRITE CODE THAT A COMPUTER CAN UNDERSTAND. GOOD PROGRAMMERS WRITE CODE THAT HUMANS CAN UNDERSTAND.” Martin Fowler SOFTWARE AND CODE QUALITY
  10. METRICS AND MEASURING http://commadot.com/wtf-per-minute/
  11. STANDARD OF MEASUREMENT https://www.flickr.com/photos/christinawelsh/5569561425/
  12. METRICS AND MEASURING DIFFERENT TYPES OF METRICS ▸ There are various categories to measure software quality in: ▸ Completeness ▸ Performance ▸ Aesthetics ▸ Maintainability and Support ▸ Usability ▸ Architecture
  13. METRICS AND MEASURING COMPLETENESS ▸ Fit for purpose ▸ Code fulfils requirements: use cases, specs etc. ▸ All tests pass ▸ Tests cover all/most of the code execution ▸ Security https://www.flickr.com/photos/chrispiascik/4792101589/
  14. METRICS AND MEASURING PERFORMANCE ▸ Artefact size and efficiency ▸ System resources ▸ Behaviour under load ▸ Capacity limitations https://www.flickr.com/photos/dodgechallenger1/2246952682/
  15. METRICS AND MEASURING AESTHETICS ▸ Readability of code ▸ Matches agreed coding style guides ▸ Organisation of code in a class/module/component etc. https://www.flickr.com/photos/nelljd/25157456300/
  16. METRICS AND MEASURING MAINTAINABILITY / SUPPORT ▸ Future maintenance of the code ▸ Documentation ▸ Stability/Lifespan ▸ Scalability https://www.flickr.com/photos/dugspr/512883136/
  17. METRICS AND MEASURING USABILITY ▸ Positive user experience ▸ Positive reception ▸ UI leveraging best practices ▸ Support for impaired users https://www.flickr.com/photos/baldiri/5734993652/
  18. METRICS AND MEASURING ARCHITECTURE ▸ System complexity ▸ Module cohesion ▸ Module dependency https://www.flickr.com/photos/mini_malist/14416440852/
  19. WHY BOTHER WITH MEASURING QUALITY? https://www.flickr.com/photos/magro-family/4601000979/
  20. “YOU CAN'T CONTROL WHAT YOU CAN'T MEASURE.” Tom DeMarco METRICS AND MEASURING
  21. METRICS AND MEASURING WHY WOULD WE WANT TO MEASURE ELEMENTS OF QUALITY? ▸ It’s impossible to add quality later, so start early to: ▸ identify potential technical debt ▸ find and fix bugs early in the development work ▸ track your test coverage.
  22. METRICS AND MEASURING COST OF FIXING ISSUES ▸ Rule of thumb: The later you find a problem in your software, the more effort, time and money is involved in fixing it. ▸ Note: There has NEVER been any scientific study into what the appropriate ratios are - it’s all anecdotal/made-up numbers… the zones of unscientific fluffiness!
  23. METRICS AND MEASURING HOW CAN WE MEASURE? ▸ Automated vs. manual ▸ Tools vs. humans ▸ Precise numeric values vs. ‘gut feeling’ … but what about those ‘code smells’?
  24. METRICS AND MEASURING WHAT CAN WE MEASURE? ▸ Certain metric categories lend themselves to being taken at design/code/architecture level ▸ Others might have to be dealt with on other levels, e.g. acceptance criteria, ‘fit for purpose’, user happiness, etc.
  25. METRICS AND MEASURING COMPLETENESS ▸ Fit for purpose — Stakeholders/customers/users ▸ Code fulfils requirements: use cases, specs etc. — BDD (to some level) ▸ All tests pass — TDD/BDD/UI tests ▸ Tests cover all/most of the code execution? — Code coverage tools ▸ Security — Code security scanners
  26. METRICS AND MEASURING PERFORMANCE ▸ Artefact size and efficiency — Deployment size ▸ System resources — Load testing/System monitoring ▸ Behaviour under load — Load testing/System monitoring ▸ Capacity limitations — Load testing/System monitoring
  27. METRICS AND MEASURING AESTHETICS ▸ Readability of code — Code style checkers (to some level) & Human review ▸ Matches agreed coding style guides — Code style checkers ▸ Organisation of code in a class/module/component etc. — Architecture checks & Human review
  28. METRICS AND MEASURING MAINTAINABILITY/SUPPORT ▸ Future maintenance of the code — Code style checkers & Human review ▸ Documentation — Documentation tools ▸ Stability/Lifespan — System monitoring ▸ Scalability — System monitoring/Architecture checks
  29. METRICS AND MEASURING USABILITY ▸ Positive user experience — UI/AB tests & Human review ▸ Positive reception — Stakeholders/customers/users ▸ UI leveraging best practices — UI/AB tests & Human review ▸ Support for impaired users — a11y checkers & UI/AB tests & Human review
  30. METRICS AND MEASURING ARCHITECTURE ▸ System complexity — Code style & Architecture checks ▸ Module cohesion — Code style & Architecture checks ▸ Module dependency — Code style & Architecture checks
  31. METRICS AND MEASURING LINES OF CODE ▸ LOC: lines of code ▸ CLOC: commented lines of code ▸ NCLOC: non-commented lines of code ▸ LLOC: logical lines of code ▸ LOC = CLOC + NCLOC ▸ LLOC <= NCLOC
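The relationship between these counts can be sketched with a naive line counter. This is illustrative Python, not a CFML-aware tool: treating `//`-prefixed lines as comments and ignoring blank lines are simplifying assumptions, and the sample snippet is made up.

```python
def count_lines(source: str) -> dict:
    """Naive line-count metrics: lines starting with '//' count as CLOC."""
    lines = [ln for ln in source.splitlines() if ln.strip()]  # skip blanks
    cloc = sum(1 for ln in lines if ln.strip().startswith("//"))
    ncloc = len(lines) - cloc
    # LOC = CLOC + NCLOC holds by construction
    return {"LOC": len(lines), "CLOC": cloc, "NCLOC": ncloc}

sample = """
// fetch the user record
user = userService.get(id);
// render it
writeOutput(user.name);
"""
print(count_lines(sample))  # {'LOC': 4, 'CLOC': 2, 'NCLOC': 2}
```

LLOC would need real parsing (one count per statement, not per line), which is why it is at most NCLOC: several statements can share a line, but a comment line never contributes a statement.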
  32. METRICS AND MEASURING COMPLEXITY ▸ McCabe (cyclomatic) complexity counts the number of decision points in a function: if/else, switch/case, loops, etc. ▸ low: 1-4, normal: 5-7, high: 8-10, very high: 11+ ▸ nPath tracks the number of unique execution paths through a function ▸ values of 150+ are usually considered too high ▸ McCabe is usually a much smaller value than nPath ▸ Halstead metrics feed into the maintainability index metric - quite an involved calculation
  33. METRICS AND MEASURING COMPLEXITY ▸ McCabe complexity is 4 ▸ nPath complexity is 8
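The slide's numbers match the classic shape of three sequential, independent if statements: McCabe adds decision points, nPath multiplies branches. A hypothetical function in Python (the name, parameters and thresholds are invented for illustration):

```python
def classify(age, income, score):
    """Three independent, sequential decision points."""
    labels = []
    if age >= 18:           # decision 1
        labels.append("adult")
    if income > 50_000:     # decision 2
        labels.append("high-income")
    if score > 700:         # decision 3
        labels.append("good-credit")
    return labels

decisions = 3
mccabe = decisions + 1   # 3 decision points + 1 = 4
npath = 2 ** decisions   # 2 * 2 * 2 independent paths = 8
```

Because nPath multiplies where McCabe adds, nPath explodes much faster as functions grow, which is why its "too high" threshold (150+) is reached long before McCabe's.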
  34. METRICS AND MEASURING MORE REFERENCE VALUES (Java) ▸ CYCLO/LOC — low: 0.15, normal: 0.2, high: 0.25, very high: 0.35 ▸ LOC/method — low: 7, normal: 10, high: 13, very high: 20 ▸ NOM/class — low: 4, normal: 7, high: 10, very high: 15
  35. METRICS AND MEASURING MORE REFERENCE VALUES (C++) ▸ CYCLO/LOC — low: 0.2, normal: 0.25, high: 0.30, very high: 0.45 ▸ LOC/method — low: 5, normal: 10, high: 16, very high: 25 ▸ NOM/class — low: 4, normal: 9, high: 15, very high: 23
  36. TOOLING AND ANALYSIS https://www.flickr.com/photos/gasi/374913782
  37. “THE PROBLEM WITH ‘QUICK AND DIRTY’ FIXES IS THAT THE DIRTY STAYS AROUND FOREVER WHILE THE QUICK HAS BEEN FORGOTTEN” Common wisdom among software developers TOOLING AND ANALYSIS
  38. TOOLING AND ANALYSIS TOOLING ▸ Testing: TDD/BDD/Spec tests, UI tests, user tests, load tests ▸ System management & monitoring ▸ Security: Intrusion detection, penetration testing, code scanners ▸ Code and architecture reviews and style checkers
  39. TOOLING AND ANALYSIS CODE ANALYSIS ▸ Static analysis: checks code that is not currently being executed ▸ Linters, syntax checking, style checkers, architecture tools ▸ Dynamic/runtime analysis: checks code while it is being executed ▸ Code coverage, system monitoring ▸ Test tools can fall into either category
  40. TOOLING AND ANALYSIS TOOLS FOR STATIC ANALYSIS ▸ CFLint: Linter, checking code by going through a set of rules ▸ CFML Complexity Metric Tool: McCabe index
  41. TOOLING AND ANALYSIS TOOLS FOR DYNAMIC ANALYSIS ▸ Rancho: Code coverage from Kunal Saini ▸ CF Metrics: Code coverage and statistics
  42. STATIC CODE ANALYSIS FOR CFML
  43. STATIC CODE ANALYSIS FOR CFML A STATIC CODE ANALYSER FOR CFML ▸ Started by Ryan Eberly ▸ Sitting on top of Denny Valiant's CFParser project ▸ Mission statement: ‘Provide a robust, configurable and extendable linter for CFML’ ▸ Currently works with ACF and Lucee, though the main line of support is for ACF ▸ Team of 4-5 regular contributors
  44. STATIC CODE ANALYSIS FOR CFML CFLINT ▸ Written in Java, requires Java 8+ to compile and run ▸ Unit tests can be contributed/executed without Java knowledge ▸ CFLint depends on CFParser to grok the code to analyse ▸ Various tooling/integration through 3rd-party plugins ▸ Source is on GitHub ▸ Built with Gradle, distributed via Maven
  45. DEMO TIME - USING CFLINT
  46. STATIC CODE ANALYSIS FOR CFML LINTING (I) ▸ CFLint traverses the source tree depth first: ▸ Component → Function → Statement → Expression → Identifier ▸ CFLint maintains its own scope during linting: ▸ Current directory/filename ▸ Current component ▸ Current function ▸ Variables that are declared/attached to the scope
  47. STATIC CODE ANALYSIS FOR CFML LINTING (II) ▸ The scope is called the CFLint Context ▸ It is provided to linting plugins ▸ Plugins do the actual work and feed reporting information back to CFLint based on information in the Context and the respective plugin ▸ TL;DR: plugins ~ linting rules
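The context-plus-plugins idea described above can be sketched in miniature. This is NOT CFLint's actual API: the class, field and rule names below are hypothetical Python stand-ins for the pattern of a shared scope object that rule plugins inspect and report into.

```python
class Context:
    """Hypothetical linting scope (not CFLint's real classes)."""
    def __init__(self, filename):
        self.filename = filename
        self.component = None   # component currently being walked
        self.function = None    # function currently being walked
        self.declared = set()   # variables declared in this scope
        self.used = set()       # variables actually read
        self.issues = []        # findings reported by plugins

def unused_variable_plugin(ctx):
    """Toy rule plugin: flags variables declared but never read."""
    for name in sorted(ctx.declared - ctx.used):
        ctx.issues.append(f"{ctx.filename}: '{name}' declared in "
                          f"{ctx.function}() but never used")

# Simulate what a depth-first walk over the AST would have filled in:
ctx = Context("UserService.cfc")
ctx.component, ctx.function = "UserService", "getUser"
ctx.declared = {"user", "tmp"}
ctx.used = {"user"}
unused_variable_plugin(ctx)
print(ctx.issues)
```

The design point the slides make is that the traversal and the rules are decoupled: the walker maintains the context once, and any number of plugins can consume it without re-parsing the code.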
  48. STATIC CODE ANALYSIS FOR CFML CFPARSER ▸ CFParser parses CFML code using two different approaches: ▸ CFML tags: Jericho HTML Parser ▸ CFScript: ANTLR 4 grammar ▸ Output: an AST (abstract syntax tree) of the CFML code ▸ CFLint builds usually rely on a specific CFParser release ▸ CFML expressions, statements and tags are represented in CFLint as Java classes: CFStatement, CFExpression etc.
  49. STATIC CODE ANALYSIS FOR CFML REPORTING ▸ Currently four output formats: ▸ Text-based for human consumption ▸ JSON object ▸ CFLint XML ▸ FindBugs XML
  50. STATIC CODE ANALYSIS FOR CFML TOOLING ▸ Various IDE and CI server integrations ▸ 3rd-party projects: SublimeLinter (Sublime Text 3), ACF Builder extension, AtomLinter (Atom), Visual Studio Code ▸ IntelliJ IDEA coming later this year or early 2018 — from me ▸ Jenkins plugin ▸ TeamCity (via FindBugs XML reporting) ▸ SonarQube ▸ NPM wrapper
  51. STATIC CODE ANALYSIS FOR CFML CONTRIBUTING ▸ Use CFLint with your code and provide feedback ▸ Talk to us and say hello! ▸ Provide test cases in CFML for issues you find ▸ Work on some documentation improvements ▸ Fix small and beginner-friendly CFLint tickets in Java code ▸ Become part of the regular dev team! :-)
  52. STATIC CODE ANALYSIS FOR CFML ROADMAP ▸ 1.0.1 — March 2017; first release after 2 years of betas :) ▸ 1.1 — June 2017; internal release ▸ 1.2.0-3 — August 2017 ▸ Documentation/output work ▸ Internal changes to statistics tracking ▸ 1.3 — In progress; parsing/linting improvements, CommandBox
  53. STATIC CODE ANALYSIS FOR CFML ROADMAP ▸ 2.0 — 2018 ▸ Complete rewrite of output and reporting ▸ Complete rewrite and clean-up of configuration ▸ Performance improvements (parallelising linting) ▸ API for tooling ▸ Code metrics
  54. STATIC CODE ANALYSIS FOR CFML ROADMAP ▸ 3.0 — ??? ▸ Support for rules in CFML ▸ Abstract internal DOM ▸ New rules based on the DOM implementation
  55. FINAL THOUGHTS RESOURCES ▸ CFLint: https://github.com/cflint/CFLint ▸ CFML Complexity Metric Tool: https://github.com/NathanStrutz/CFML-Complexity-Metric-Tool ▸ Rancho: http://kunalsaini.blogspot.co.nz/2012/05/rancho-code-coverage-tool-for.html ▸ CF Metrics: https://github.com/kacperus/cf-metrics
  56. FINAL THOUGHTS GET IN TOUCH Kai Koenig Email: kai@ventego-creative.co.nz Twitter: @AgentK Telegram: @kaikoenig
