
Rules to Better Architecture and Code Review - 39 Rules

For any project that is critical to the business, it's important to do 'Modern Architecture Reviews'. Being an architect is fun: you get to design the system, do ongoing code reviews, and play the badass. It is even more fun when using modern, cool tools.

Follow these steps to achieve a 'Modern Architecture Review'. See how performing an architecture review on a big project no longer needs to be daunting. Read about the first steps, then check to see if you're following SOLID principles, then drill in and find patterns and anti-patterns. Use Visual Studio to help the process with architecture tools, code smell tools, and the great Visual Studio code review tools.

These steps enable you to attend to the code that needs the most attention. Finally, create PBIs to make sure the issues get fixed in the next sprint.

  1. Do you evaluate the processes?

    Often an incorrect process is the main source of problems. Developers should be able to focus on what is important for the project rather than getting stuck on things that cause them to spin their wheels.

    1. Are devs getting bogged down in the UI?
    2. Do you have continuous integration and deployment?
    3. Do you have a Schema Master?
    4. Do you have a DevOps Master?
    5. Do you have a Scrum Master?

    Note: Keep this brief since it is out of scope. If this step is problematic, there are likely other things you need to discuss with the developers about improving their process. For example, are they using Test Driven Development, and are they checking in regularly? All this and more should be saved for the Team & Process Review.

  2. Do you make sure you get latest and compile?

    It's amazing how often you can't simply Clone a repository (aka "Get Latest") and compile it.

    A good developer makes it clear how to get a new project, compile it, and have a smooth "F5" experience.


    Check they have a README or instruction files in their solution as per the rule Do you make instructions at the beginning of a project and improve them gradually?

    Sometimes the experience is more CLI-based

    dotnet run
    Figure: Some consider this rule (Do you get latest and compile) to be more a “git clone” and then “dotnet run”

    Sometimes the experience is more Mac-based

    mac f5 key
    Figure: On a MacBook, if you hold down the Fn key, the touch bar will show F buttons

    macbook vscode run button
    Figure: On a MacBook, VSCode has a run button to launch the debugger (similar to F5)

    macbook visualstudio run button
    Figure: On a MacBook, Visual Studio for Mac is similar to VSCode but less obvious, since it looks more like XCode

  3. Do you review the Solution and Project names?

    The name of your solution and the names of the projects in your solution should be consistent.

    Read the rule: Do you have a consistent .NET solution structure?

    solution structure
    Figure: Good Example - The Solution and Projects are named consistently

  4. There are 2 main parts to any application. The UI which is what the customer can see and provide feedback on, and the underlying code which they really can't know if it is healthy or not.

    Therefore it is important to conduct a 'test please' on the internal code and architecture of the application.

    Ideally conduct a small 'Code + Architecture Review' for every sprint. Assuming a 2 week sprint, schedule a 4 hour (2 architects x 2 hours) review.

    The following are items that are addressed in an architecture/code review:

    Background information/overview of the project

    • Current system
    • Objectives of the system
    • Number of users (internal, external, edit, read only)
    • Current technologies
    • Current environment (SOA, SOE)

    Points for discussion

    • Rich client
    • Web client
    • Smart client (any disconnected users?)
    • Technology choices

      • Persistence layer (e.g. Database)
      • Business layer
      • UI
      • Communications
      • Workflow
      • Integration - external systems
    • Requirements for 'package' software

      • PerformancePoint
      • Reporting Services
      • Accounting packages
      • SharePoint
    • Usage Telemetry
    • Performance Monitoring
    • Data migrations
    • Data reporting
    • User experience
    • Network
    • Responsibilities/players
    • Infrastructure

      • Network
      • Hardware
    • Deployment

      • Staged implementation
      • In parallel
      • Development/Test/Staging/Production
    • Disaster recovery
    • Change control/source control
    • Build Server
    • Performance
    • Scalability
    • Extensibility
    • Design patterns
    • Maintainability
    • Reliability (failover servers?)
    • 'Sellability' i.e. is the solution appropriate for the client?

    Note: If you are using Enterprise Architect, be aware of technical debt:

    • Add a datetime of the last time the diagram was modified so we have an indication of when it is out of date
    • On your diagrams, be aware that some parts are done and some are not.
  5. Do you make awesome documentation?

    There are a few styles of documentation:

    Bad Example – Old School

    old documentation
    Figure: Bad example - The dinosaur’s method of documentation

    The old school way is document first – lots of planning, and lots of heavy documentation created upfront before even a single line of code is written.

    This is the method most familiar to teams who are comfortable with Waterfall and have possibly never heard of Agile. Documentation can normally be characterized by:

    • Heavy, long documents
    • Sequence Diagrams
    • UML

    This is a well-established way to do documentation, but it has several problems:

    • Gets out of date quickly
    • High maintenance overhead
    • Needs a business analyst

    enterprisearchitect1
    Figure: Bad example - Documentation can take the form of Sequence Diagrams

    EnterpriseArchitectUseCases
    Figure: Bad example - Use Case Diagrams are even worse!

    There may be exceptions – some situations benefit from this kind of documentation; for example, it may be necessary to support a business case – although a well-defined spec is a better document to support a business case.

    Tip: Documentation should be as minimal as possible. If your circumstances require this style of documentation, start by limiting it to just enough to cover your first couple of Sprints. And recognize that by going down this path you make a commitment to keeping it up-to-date.

    Good example – The 8 Important Documents

    This style of documentation is used by modern teams who are Agile only.

    In the repository (for developers):

    1. README.md – Explains the overview of the project and provides links to the rest of the documentation. It is important for the README.md to show a high-level architecture diagram that illustrates the overarching solution.

    2. _docs\Instructions-Compile.md – Instructions on how to build and run the project (AKA the F5 experience).

    3. _docs\Instructions-Deployment.md – Explains how to deploy the solution, including any additional processes (e.g. DevOps)

    4. _docs\Business.md – explains the purpose of the application, including the problem, goals, and statement of intent.

    5. _docs\Technologies-and-Architecture.md – Provides a technical overview of the solution.

    6. _docs\Alternative-Solutions-Considered.md – Explains other options that were considered but discounted. For example:

    • We chose to use a code-centric .NET solution over a low code solution because we did not want to be locked into any specific vendor e.g. Dynamics, Outsystems.
    • We chose to use Angular over React because 5/6 developers on the project were more familiar with Angular.
    • We chose to use Azure over on-premises to avoid procurement of costly servers.
    • Note: If you decide after the fact that the chosen solution is wrong, this should be explained. Include what led to the current circumstances and whether there is a planned change.

    7. _docs\Definition-of-Done.md - Ensures that your team maintains a high level of quality with a Definition of Done

    8. _docs\Definition-of-Ready.md – Ensures that all your PBIs are well defined to an agreed standard before adding them to a sprint (see https://rules.ssw.com.au/have-a-definition-of-ready)

    Keeping these documents in the repository means that you ensure that any documentation the developers need to work on or run the code is where they need it - with the code.

    It also means that when a developer makes a change to the code that needs an update to the documentation, the documentation changes can be checked in along with the code in the same commit.

    Exposing documentation through a Wiki (for developers and other stakeholders):

    Documents to be read or edited by the Product Owner (or other members of the Scrum team) should be exposed through a Wiki. The advantage of this approach is that the writing experience in the Wiki is more friendly for non-developers. The Wiki should be sourced from the repo docs\ folder to ensure documentation is kept up-to-date. There are several options for creating a Wiki:

    Azure DevOps wiki options:

    GitHub wiki options:

    Tip: You can publish your documentation from the repo using GitHub Pages

    Tip: All of your documents (in your Wiki and your repository) should be written in Markdown (see https://rules.ssw.com.au/using-markdown-to-store-your-content)

    documentation  level2  bad example gh
    Figure: Bad example - Github project without any documentation or instruction

    azuredevops bad
    Figure: Bad example - Azure DevOps project without any documentation or instruction

    documentation  level2  good example 1 gh
    Figure: OK example - Github project with README instructions on how to compile and run the project (but still has a TODO)

    azuredevops good
    Figure: Good example - Azure DevOps project with README instructions on how to compile and run the project

    documentation  level2  good example 2 gh
    Figure: Good example - Github project with Wiki instructions for product owners, stakeholders, or public consumption (Source: https://github.com/christoment/Northwind/wiki)

    azuredevops wiki good
    Figure: Good example - Azure DevOps project with Wiki instructions for product owners, stakeholders, or public consumption

    Tip: Use your documentation for onboarding developers

    sit dev bad
    Figure: Bad Example - No documentation, go and sit with another developer

    documentation  level2  onboarding pbi 3
    Figure: Good example - Developer onboarding can be self-guided by good documentation, and offers a structure for feedback and improvement if the developer hits any issues

    Tip: Keep your documentation as minimal as possible - automate the F5 experience and deployment process (documents 2 and 3) using PowerShell scripts. Then your documents can just say "run these scripts"

    The rest of the jigsaw

    Scrum Tip: Update your Acceptance Criteria - If you use a policy that requires commits to be linked to PBIs, then you understand that the PBI is now the documentation. If requirements change (based on a conversation with the Product Owner of course) then the PBI should be updated.

    When updating the Acceptance Criteria, strike through the altered Acceptance Criteria, and add the new ones. Get the PO to confirm your understanding.

    E.g. Enter search text, click 'Google', and see the results populate below. [Updated] Enter search text and automatically see the results populate below.

    This should be added to the Definition of Done.

    What's "Technical Debt"?

    During a project, when you add functionality, you have a choice:

    One way is quick but messy - it will make further changes harder in the future (i.e. quick and dirty).

    The other way is cleaner – it will make changes easier to do in the future but will take longer to put in place.

    'Technical Debt' is a metaphor to help us think about this problem. In this metaphor (often mentioned during Scrum software projects), doing things the quick and dirty way gives us a 'technical debt', which will have to be fixed later. Like financial debt, the technical debt incurs interest payments - in the form of the extra effort that we must do in future development.

    We can choose to continue paying the interest, or we can pay the debt in full by redoing the piece of work in a cleaner way.

    The same principle is true with documentation. Using the 'old school' method will leave you with a build-up of documentation that you will need to keep up to date as the project evolves.

    Warning: if you want to follow Scrum and have zero technical debt, then you must throw away all documentation at the end of each sprint. If you do want to keep it, make sure you add it to your definition of done to keep it updated.

  6. Do you have an Architecture Diagram?

    A good architecture diagram (aka a cloud architecture diagram or system architecture diagram) gives a great overview of your project. An architecture diagram lets you see at a glance what the overall structure of the solution is. This is useful for gaining an understanding of how the system fits together, how it flows, and what it does. It also helps to easily show which components can be improved due to updated or better components (or improved architectural guidelines).

    An architecture diagram is useful when:

    • In the initial discussion with a client (see Brendan Richards' quote below)
    • You are onboarding a new developer
    • You have been deep into one aspect of the system and need a refresher on another area
    • You have been off the project for a while
    • Whenever you are discussing requirements that may require structural changes

    The architecture diagram is a technical diagram that demonstrates the technology in use. The purpose of the architecture diagram is to show how a solution has been built and what the technical dependencies are. It is not used for user journeys or business logic.

    image001
    Figure: Bad Example - a screenshot of the Azure resources used helps, but doesn't show data flows or dependencies

    Depending on the complexity of your solution and your comfort/familiarity with the tools, an architecture diagram could take you anywhere from half an hour to a couple of days.

    Usually, the longer an architecture diagram takes you to make, the more important it is for your project.

    • Matt Goldman, Software Architect

    An architecture diagram is part of the 8 important documents you need for your project, see our rule: Do you make awesome documentation?

    Tip 1: Include your most important components

    At a minimum, your architecture diagram should include:

    • Your data repository
    • Your business logic component
    • Your UI

    Your diagram needs to include the relationships between these components, and how they share and process data.

    Tip 2: Don't use a .NET Dependency Graph as an Architecture Diagram

    The .NET dependency diagram is a useful tool, but it drills down into a specific component of the solution (the code) while ignoring the rest of it (the infrastructure). If it adds value to your documentation (i.e., there is a specific reason to include it) you can include the .NET dependency diagram, but don't use it here in place of the architecture diagram.

    See SSW rule: Do you generate the VS Dependency Graph?

    dependency validation 01
    Figure: Bad Example - the .NET dependency diagram shows code dependencies, but not the application's architecture

    Tip 3: Show data dependencies and data flows

    Your architecture diagram should show how the components of your solution fit together. It should also show *how* the components of the architecture depend on each other for functionality, as well as upstream and downstream data dependencies.

    architecture diagram good1
    Figure: OK Example - Shows the technologies and data flows (from the data --> Azure Data Factory --> Azure Databricks --> Power BI). This gives an overview of the whole application in one diagram.

    Tip 4: Put data at the top

    It should be easy to tell at a glance which direction data flows in your diagram: left to right, right to left, top to bottom (recommended). Pick a direction for your data flow, and keep it consistent across all your documentation. Where there are exceptions (for example data going to analytics or to/from partner sources) make these perpendicular to the primary data flow direction.

    sugarlearning architecture diagram
    Figure: Good example - SugarLearning (an Angular + .NET project) - data flows from top to bottom, with exceptions (e.g. Application Insights / Raygun, not part of the main data flow) perpendicular to the primary direction

    Tip 5: Group relevant components

    Group components logically by enclosing them in a box. Components that operate independently can stand alone, and those that work together to deliver a logical function can be grouped together. Also show components that are out of scope, i.e. important for understanding the architecture but not necessarily part of it, e.g. legacy components, partner components, or components that have not been implemented yet.

    Note: For clarity, out-of-scope items, whether one or many, should be in a box.

    rewards architecture diagram
    Figure: Good example - SSW Rewards (Xamarin with Azure Active Directory B2C) - consistent styling is used, e.g. as well as all the icons and typography being consistent, you can see that data is a solid line and auth traffic is a dotted line

    Tip 6: Start with paper...

    Make sure you use the right tools when creating your architecture diagrams. There's nothing wrong with starting out with pen and paper, but your hand-drawn sketch should not be considered your 'done' final architecture diagram. If you want to save paper and increase collaboration, a great alternative is the trusty old whiteboard.

    For me it's all about building a shared understanding between the client and the developers. For most pieces of software architecture I do, work starts by building a rough solution architecture diagram on a whiteboard.

    Putting something on a whiteboard is "low risk" for the participants as it's really easy to wipe and redraw. It allows us to start working together straight away, building a shared understanding of what we're trying to achieve. There is no software or skills required to participate in whiteboard collaboration.

    A key milestone in the early engagement is the first time a client takes the pen and starts using the whiteboard to explain something to me. Early use of the whiteboard is all about immediate communication. Later, the solution design starts to solidify and we can then use the last state of the whiteboard to make our first architecture diagram.

    • Brendan Richards, SSW Solution Architect

    Figure: OK Example - SSW Rewards - start out with a hand-drawn sketch if that's easier for you, but don't consider this your final architecture diagram

    Tip: Microsoft Office Lens is a free mobile app that uses your smartphone camera to capture scan-like images of documents, photographs, business cards, and whiteboards (including searchable handwritten text).

    Figure: Better Example - SSW Rewards - the same sketch but captured with Office Lens. How much clearer and more vibrant is this!

    Tip 7: ...and Finish up with Diagrams.net

    The best tool for creating these diagrams is diagrams.net (previously draw.io). All the examples on this page were created with this tool.

    It is definitely the most popular diagram tool at SSW:

    FaveTool
    Figure: When SSW developers were surveyed, diagrams.net was the clear winner (see green) for building architecture diagrams

    TimePRO Architecture Diagram v2
    Figure: Better Example - TimePro (an Angular + .NET project with Hangfire) - you can create diagrams quickly and easily with diagrams.net that still look very professional. This one is in the style of a technical document.

    Diagrams.net is free, can be used in the browser, or can be downloaded as a desktop app. But the best way to use diagrams.net is to integrate it directly into VS Code.

    thumbnail image003
    Figure: Great Example - Auctions (a Blazor + .NET + Cosmos DB project) - diagrams.net integrated directly into VS Code

    There are multiple extensions available that let you do this, the best one is VS Code | Extensions | Draw.io Integration. This makes it easy to create and edit the architecture diagram right alongside the code, and check-in with the relevant commits.

    architecture 2
    Figure: Good Example - Auctions (a Blazor + .NET + Cosmos DB project) - architecture diagram created within VS Code and checked into the repo in the same commit as the relevant code changes. Blazor UI layer encapsulated in thematic color

    Tip 8: Polish up Diagrams.net

    Maintain standards to keep your diagrams consistent:

    • Diagram heading: Naming convention "Architecture Diagram - [product name]", in font size 43pts
    • Use a standard font: e.g., at SSW we use Helvetica bold
    • Arrowhead sizes: 14pts
    • Bottom left - add location: e.g., DevOps | Wiki or GitHub | Repo | Docs, in font size 22pts
    • Bottom right - add branding and URL, in font size 22pts
    • Add color and icons to make your diagrams engaging and easier to distinguish

    SSW People Architecture Diagram
    Figure: Good Example - SSW People (a Static Site - Gatsby and React with Dynamics 365 and SharePoint Online) - you can just as easily create colorful, engaging diagrams suitable for all of your project stakeholders

    Tip 9: Where to store Diagrams?

    Standardizing where your organisation stores architecture diagrams ensures a consistent experience among developers. Therefore store your architecture diagrams in the repo docs\ folder. Additionally, the README.md (in the root) should have a link and an embedded image of the high-level architecture diagram (from the docs\ folder). Note: If you have a Wiki, for visibility add an architecture diagram page and embed the images from the docs\ folder.

  7. The technologies and design patterns form the architecture, which is usually the stuff that is hard to change.

    A pattern lets you say a few words to a dev and they know exactly what coding pattern you mean.

    ALM is about refining the work processes.

    "We are doing this project using C#"
    Figure: Bad example - you know nothing about how the project will be done

    "Technologies: WebAPI. The DI container is StructureMap. Entity Framework. TypeScript. Angular. Patterns: Repository and Unit of Work (tied to Entity Framework to remove additional abstraction), IoC. ALM: Scrum with 2-week sprints and a Definition of Done including StyleCop to green. ALM: Continuous deployment to staging"
    Figure: Good example - this tells you a lot about the architecture and processes in a few words

    The important ones for most web projects:

    1. Technologies: WebAPI
    2. Patterns: Single responsibility - if it does more than one thing, then split it. E.g. if it checks the weather and gets data out of the database, then split it.
    3. Patterns: Inversion of Control / Dependency Injection - e.g. if your controller needs to get data, then you inject the component that gets the data.
    4. Patterns: Repository / Unit of Work - the repository has standard methods for getting and saving data. The code calling the repository should not know where the data lives. E.g. a User Repository could be saving to Active Directory or CRM and it should not affect any other code. You may or may not choose to have an additional abstraction away from Entity Framework.
    5. ALM: Scrum - kind of a pattern for your process. E.g. a Sprint Review every 2 weeks. Ideally, a senior architect is added for that 1 day every 2 weeks.

    The decisions the team makes regarding these 3 areas should be documented in _Technologies.docx as per https://rules.ssw.com.au/do-you-review-the-documentation.

  8. JavaScript projects (for example using Angular, React, or Vue) can include unnecessary libraries that take up excessive size in the build bundle. This can have a huge impact on the performance of the application.

    JavaScript bundle analyzers are tools that visualize the sizes and dependencies of the libraries used in JavaScript projects. They help you monitor the size of the compiled bundle in order to maintain the optimal performance of the final build.

    Here are a few options for the bundle analysis in JavaScript projects:

    For Angular projects use webpack-bundle-analyzer

    This is a popular tool for Angular projects which analyses a webpack stats JSON file generated by the Angular CLI during the build. To produce the bundle analysis using webpack-bundle-analyzer in Angular projects, follow the instructions in this blog: https://alligator.io/angular/angular-webpack-bundle-analyzer/

    architecture good angular
    Figure: Good example – use webpack-bundle-analyzer for Angular applications

    For React projects, webpack-bundle-analyzer is sadly too hacky to get going

    Unfortunately, the create-react-app from version 3 has removed the “--stats" flag which produces the webpack stats file used by webpack-bundle-analyzer. Hence, webpack-bundle-analyzer can only be used as a plugin in these React projects, as described in the following blog: https://medium.com/@hamidihamza/optimize-react-web-apps-with-webpack-bundle-analyzer-6ecb9f162c76

    Figure: Bad example – webpack-bundle-analyzer is not user friendly for React applications.

    For React projects use source-map-explorer

    This tool uses a bundle's generated source map files to analyse the size and composition of a bundler and render a visualization of the bundle similar to what webpack-bundle-analyzer does. To create a bundle analysis for a React project, follow the instructions from the Create React App documentation: https://create-react-app.dev/docs/analyzing-the-bundle-size/

    architecture good react
    Figure: Good example – use source-map-explorer on React projects

    Screenshots of these diagrams should be included in the project's wiki as per the rule Do you make awesome documentation?

  9. To visualize the structure of all your code you need architecture tools that will analyze your whole solution.

    They show the dependencies between classes and assemblies in your projects. You have 2 choices:

    • Visual Studio's Dependency Graph. This feature is only available in Visual Studio Ultimate. (recommended)
    • If you want architecture tools for Visual Studio but don't have Visual Studio Ultimate, then use the excellent 3rd party solution nDepend. A bonus is that it can also find issues and highlight them in red for easy discovery

    ArchitectureToolsVS11
    Figure: Visual Studio lets you generate a dependency graph for your solution

    DependencyDiagramInVS11
    Figure: The dependency graph in Visual Studio shows you some interesting information about how projects relate to each other

    nDepend has a similar diagram that is a little messier, but the latest version also includes a "Queries + Rules Explorer" which is another code analysis tool.

    nDependDependencyGraph
    Figure: nDepend Dependency Graph. Issues are highlighted in red for easy discovery

    Read more about nDepend: ndepend.com.

  10. Do you generate the VS Dependency Graph?

    Dependency graphs are important because they give you an indication of the coupling between the different components within your application.

    A well architected application (ie. one that correctly follows the Onion Architecture) will be easy to maintain because it is loosely coupled.

    TimePRODependence
    Figure: Bad Example- The Visual Studio Dependency Graph is hard to read

    TimePRODependence good
    Figure: Good Example – The ReSharper Dependency graph groups dependencies based on Solution Folders. By having a Consistent Solution Structure it is easy to see from your Dependency Graph if there is coupling between your UI and your Dependencies

    Further Reading:

  11. Rather than randomly browsing for dodgy code, use Visual Studio's Code Metrics feature to identify "Hot Spots" that require investigation.

    lotto balls
    Figure: The bad way is to randomly browse the code

    VS 11 Code Metrics
    Figure: Run Code Metrics in Visual Studio

    CodeMetrics 3
    Figure: Red dots indicate the code that is hard to maintain. E.g. Save() and LoadCustomer()

    Identifying the problem areas is only the start of the process. From here, you should speak to the developers responsible for this dodgy code. There might be good reasons why they haven't invested time on this.

    codelens start conversation
    Figure: Find out who the devs are by using CodeLens and start a conversation

    Tip: To learn how to use Annotate, see Do you know the benefits of Source Control?

    Suggestion to Microsoft: allow us to visualize the developers responsible for the bad code (currently and historically) using CodeLens.

  12. SRP | The Single Responsibility Principle - A class should have one, and only one, reason to change.

    OCP | The Open Closed Principle - You should be able to extend a class's behavior without modifying it.

    LSP | The Liskov Substitution Principle - Derived classes must be substitutable for their base classes.

    ISP | The Interface Segregation Principle - Make fine-grained interfaces that are client specific.

    DIP | The Dependency Inversion Principle - Depend on abstractions, not on concretions.

    Figure: Your code should be using SOLID principles

    It is assumed knowledge that you know all 5 SOLID principles. If you don't, read about them on Uncle Bob's site above, or watch the SOLID Pluralsight videos by Steve Smith.
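
    For example, here is a minimal sketch of the Open Closed Principle. The IDiscount and Checkout names are hypothetical and purely illustrative:

    using System.Collections.Generic;
    using System.Linq;

    public interface IDiscount
    {
        decimal Apply(decimal total);
    }

    public class ChristmasDiscount : IDiscount
    {
        public decimal Apply(decimal total) => total * 0.9m;
    }

    // A new discount type (e.g. LoyaltyDiscount) can be added without modifying Checkout
    public class Checkout
    {
        public decimal CalculateTotal(decimal total, IEnumerable<IDiscount> discounts)
            => discounts.Aggregate(total, (current, discount) => discount.Apply(current));
    }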

    What order?

    1. Look for Single Responsibility Principle violations. These are the most common and are the source of many other issues. Reducing the size and complexity of your classes and methods will often resolve other problems.
    2. Liskov Substitution and Dependency Inversion are the next most common violations, so keep an eye out for them next
    3. When teams first begin implementing Dependency Injection, it is common for them to generate bloated interfaces that violate the Interface Segregation Principle.

    After you have identified and corrected the most obvious broad principle violations, you can start drilling into the code and looking for localized code breaches. ReSharper from JetBrains or JustCode from Telerik are invaluable tools once you get to this level.

    Once you understand common design principles, look at common design patterns to help you follow them in your projects.

  13. The hot spots identified in your solution often indicate violations of common design principles.

    CodeMetrics 3
    Figure: Check Address.Save() and Customer.LoadCustomer() looking for SOLID refactor opportunities

    The most common problem encountered will be code that violates the Single Responsibility Principle (SRP). Addressing SRP issues will see a reduction in the following 3 metrics:

    1. "Cyclomatic Complexity" which indicates that your methods are complex, then
    2. "High Coupling" indicates that your class/method relies on many other classes, then
    3. "Number of Lines" indicates code structures that are long and unwieldy.

    Let's just look at one example.

    This code does more than one thing, and therefore breaks the Single Responsibility Principle.

    public class PrintServer
    {
        public string CreateJob(PrintJob data) { /* ... */ }
        public int GetStatus(string jobId) { /* ... */ }
        public void Print(string jobId, int startPage, int endPage) { /* ... */ }
        public List<Printer> GetPrinterList() { /* ... */ }
        public bool AddPrinter(Printer printer) { /* ... */ }
        public event EventHandler PrintPreviewPageComputed;
        public event EventHandler PrintPreviewReady;
        // ...
    }

    Figure: Bad example - This class does two distinct jobs. It creates print jobs and manages printers

    public class Printers
    {
        public string CreateJob(PrintJob data) { /* ... */ }
        public int GetStatus(string jobId) { /* ... */ }
        public void Print(string jobId, int startPage, int endPage) { /* ... */ }
    }

    public class PrinterManager
    {
        public List<Printer> GetPrinterList() { /* ... */ }
        public bool AddPrinter(Printer printer) { /* ... */ }
    }

    Figure: Good Example - Each class has a single responsibility

    Additionally, code that has high coupling violates the Dependency Inversion Principle. This makes code difficult to change, but can be resolved by implementing the Inversion of Control and Dependency Injection patterns.


  14. Design patterns are useful for ensuring common design principles are being followed. They help make your code consistent, predictable, and easy to maintain.

    There are a very large number of Design Patterns, but here are a few important ones.

    • IOC | Inversion of Control
      Control of the object coupling is the responsibility of the caller, not the class.
    • DI | Dependency Injection
      Dependencies are "injected" into the dependent object rather than the object depending on concretions.
    • Factory | Factory Pattern
      Object creation is handled by a "factory" that can provide different concretions based on an abstraction.
    • Singleton | Singleton Pattern
      Instantiation of an object is limited to one instance to be shared across the system.
    • Repository | Repository Pattern
      A repository is used to handle the data mapping details of CRUD operations on domain objects.
    • Unit of Work | Unit of Work Pattern
      A way of handling multiple database operations that need to be done as part of a piece of work.
    • MVC | Model View Controller
      An architectural pattern separating domain logic (Controller) from how domain objects (Models) are presented (View).
    • MVP | Model View Presenter
      An architectural pattern deriving from MVC where the View handles UI events instead of the Controller.

    Choose patterns wisely to improve your solution architecture. It is assumed knowledge that you know these design patterns. If you don't, read about them on the sites above or watch the PluralSight videos on Design Patterns.
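
    As a quick, illustrative sketch of one of these (the INotification types below are hypothetical, not from any specific SSW project), the Factory pattern hides which concretion gets created behind an abstraction:

    using System;

    public interface INotification
    {
        void Send(string message);
    }

    public class EmailNotification : INotification
    {
        public void Send(string message) { /* send an email */ }
    }

    public class SmsNotification : INotification
    {
        public void Send(string message) { /* send an SMS */ }
    }

    // Callers depend on INotification; the factory decides which concretion to create
    public static class NotificationFactory
    {
        public static INotification Create(string channel) =>
            channel switch
            {
                "email" => new EmailNotification(),
                "sms" => new SmsNotification(),
                _ => throw new ArgumentOutOfRangeException(nameof(channel))
            };
    }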

  15. Appropriate use of design patterns can ensure your code is maintainable.

    Always code against an interface rather than a concrete implementation. Use dependency injection to control which implementation the interface uses.

    For example, we could implement Inversion of Control by using the Dependency Injection pattern to decrease the coupling of our classes.

    In this code, our controller is tightly coupled to the ExampleService and as a result, there is no way to unit test the controller.

    [This example is from the blog: http://www.devtrends.co.uk/blog/how-not-to-do-dependency-injection-the-static-or-singleton-container]

    public class HomeController
    {
        private readonly IExampleService _service;

        public HomeController()
        {
            _service = new ExampleService();
        }

        public ActionResult Index()
        {
            return View(_service.GetSomething());
        }
    }

    Figure: Bad example - Controller coupled with ExampleService

    public class HomeController
    {
        private readonly IExampleService _service;

        public HomeController()
        {
            _service = Container.Instance.Resolve<IExampleService>();
        }

        public ActionResult Index()
        {
            return View(_service.GetSomething());
        }
    }

    Figure: Bad example - 2nd attempt using an Inversion of Control container but *not* using dependency injection. A dependency now exists on the Container.

    This is bad code because we removed one coupling but added another one (the container).

    public class HomeController
    {
        private readonly IExampleService _service;

        public HomeController(IExampleService service)
        {
            _service = service;
        }

        public ActionResult Index()
        {
            return View(_service.GetSomething());
        }
    }

    Figure: Good example - Code using dependency injection. No static dependencies.

    Even better, use Annotate so you can enlighten the developer.

    Code against interfaces bad
    Figure: Bad Example - Referencing the concrete EF context

    Code against interfaces good
    Figure: Good Example - Programming against the interface

    It is important to know when the use of a pattern is appropriate. Patterns can be useful, but they can also be harmful if used incorrectly.

  16. Do you look for GRASP Patterns?

    GRASP stands for General Responsibility Assignment Software Patterns and describes guidelines for working out what objects are responsible for what areas of the application.

    The fundamentals of GRASP are the building blocks of Object-Oriented design. It is important that responsibilities in your application are assigned predictably and sensibly to achieve maximum extensibility and maintainability.

    GRASP consists of a set of patterns and principles that describe different ways of constructing relationships between classes and objects.

    • Creator - A specific class is responsible for creating instances of specific other classes (e.g. a Factory Pattern)
    • Information Expert - Responsibilities are delegated to the class that holds the information required to handle that responsibility
    • Controller - System events are handled by a single "controller" class that delegates to other objects the work that needs to be done
    • Low Coupling - Classes should have a low dependency on each other, have low impact if changed, and have high potential for reuse
    • High Cohesion - Objects should be created for a single set of focused responsibilities
    • Polymorphism - The variation in behaviour of a type of object is the responsibility of that type's implementation
    • Pure Fabrication - Any class that does not represent a concept in the problem domain
    • Indirection - The responsibility of mediation between two classes is handled by an intermediate object (e.g. a Controller in the MVC pattern)
    • Protected Variations - Variations in the behaviour of other objects are abstracted away from the dependent object by means of an interface and polymorphism

    Tip: Visual Studio's Architecture tools can help you visualise your dependencies. A good structure will show calls flowing in one direction.

    architecture responsibility bad
    Figure: Bad Example - Calls are going in both directions, which hints at a poor architecture

    architecture responsibility good
    Figure: Good Example - Calls are flowing in one direction, hinting at a more sensible arrangement of responsibilities

  17. Code - Can you read code down and across?

    Reading down should show you the what (the intent)

    Reading across should show you the how (F12)

  18. Do you start reading code?

    “Aim for simplicity. I want code to read like poetry”

    • Terje Sandstrom

    Good code

    • Is clear and easy to read
    • Has consistent and meaningful names for everything
    • Has no repeated or redundant code
    • Has neat formatting
    • Explains "why" when you read down, and "how" when you read left to right

    public IEnumerable<Customer> GetSupplierCustomersWithMoreThanZeroOrders(int supplierId)
    {
        var supplier = _repository.Suppliers.SingleOrDefault(s => s.Id == supplierId);

        if (supplier == null)
        {
            return Enumerable.Empty<Customer>();
        }

        var customers = supplier.Customers
            .Where(c => c.Orders > 0);

        return customers;
    }

    Figure: This code explains what it is doing as you read left to right, and why it is doing it when you read top to bottom.

    Tip: Read the book Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin.

    Good code is declarative

    For example, I want to show all the products where the unit price is at least 20, and also how many products are in each category.

    Dictionary<string, ProductGroup> groups = new Dictionary<string, ProductGroup>();
    foreach (var product in products)
    {
        if (product.UnitPrice >= 20)
        {
            if (!groups.ContainsKey(product.CategoryName))
            {
                ProductGroup productGroup = new ProductGroup();
                productGroup.CategoryName = product.CategoryName;
                productGroup.ProductCount = 0;
                groups[product.CategoryName] = productGroup;
            }
            groups[product.CategoryName].ProductCount++;
        }
    }
    var result = new List<ProductGroup>(groups.Values);
    result.Sort(delegate(ProductGroup groupX, ProductGroup groupY)
    {
        return
            groupX.ProductCount > groupY.ProductCount ? -1 :
            groupX.ProductCount < groupY.ProductCount ? 1 :
            0;
    });

    Figure: Bad example - Not using LINQ. The yellow gives it away.

    Tip: ReSharper can automatically convert this code.

    result = products
        .Where(product => product.UnitPrice >= 20)
        .GroupBy(product => product.CategoryName)
        .OrderByDescending(group => group.Count())
        .Select(group => new { CategoryName = group.Key, ProductCount = group.Count() });

    Figure: Good example - Using LINQ

    Tip: For more information on why declarative programming (aka LINQ, SQL, HTML) is great, watch the TechDays 2010 Keynote by Anders Hejlsberg. Anders explains why it's better to have code "tell what, not how".

    Clean front-end code - HTML (This one is questionable as HTML is generally a designer issue)

    Anyone who creates their own HTML pages today should aim to make their markup semantically correct. For more information on semantic markup, see http://www.webdesignfromscratch.com/html-css/semantic-html/.

    For example:

    • <p> is for a paragraph, not for defining a section.
    • <b> is for bolding, not for emphasizing; <strong> and <em> do that.

    Clean Front-End code

    Clean code and consistent coding standards are not just for server-side code.  It is important that you apply your coding standards to your front-end code as well e.g. JavaScript, TypeScript, React, Angular, Vue, CSS, etc.

    You should use a linter and code formatter like Prettier to make development easier and more consistent.

  19. Using a precision mocking framework (such as Moq or NSubstitute) encourages developers to write maintainable, loosely coupled code.

    Mocking frameworks allow you to replace a section of the code you are about to test with an alternative piece of code. For example, this allows you to test a method that performs a calculation and saves to the database, without actually requiring a database.

    There are two types of mocking framework.

    The Monster Mocker (e.g. Microsoft Fakes or TypeMock)

    This type of mocking framework is very powerful and allows replacing code that wasn't designed to be replaced. This is great for testing legacy code, tightly coupled code with lots of static dependencies (like DateTime.Now), and SharePoint.

    monster mocker
    Figure: Bad Example – Our class is tightly coupled to our authentication provider, and as we add each test we are adding *more* dependencies on this provider. This makes our codebase less and less maintainable. If we ever want to change our authentication provider “OAuthWebSecurity”, it will need to be changed in the controller, and every test that calls it

    The Precision Mocker (e.g. Moq)

    This mocking framework takes advantage of well written, loosely coupled code.

    The mocking framework creates substitute items to inject into the code under test.

    precision mocker 1
    Figure: Good Example - An interface describes the methods available on the provider

    precision mocker 2
    Figure: Good Example - The authentication provider is injected into the class under test (preferably via the constructor)

    precision mocker 3
    Figure: Good Example - The code is loosely coupled. The controller is dependent on an interface, which is injected into the controller via its constructor. The unit test can easily create a mock object and substitute it for the dependency. Examples of this type of framework are Moq and NSubstitute
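
    As a rough sketch of what this looks like in practice (the IAuthenticationProvider and AccountController types here are hypothetical, invented for illustration), a precision mock with Moq might be:

    using Moq;
    using Xunit;

    public interface IAuthenticationProvider
    {
        bool ValidateCredentials(string userName, string password);
    }

    public class AccountController
    {
        private readonly IAuthenticationProvider _authProvider;

        // The dependency is injected via the constructor
        public AccountController(IAuthenticationProvider authProvider)
        {
            _authProvider = authProvider;
        }

        public bool Login(string userName, string password)
            => _authProvider.ValidateCredentials(userName, password);
    }

    public class AccountControllerTests
    {
        [Fact]
        public void Login_WithValidCredentials_ReturnsTrue()
        {
            // Arrange - substitute the dependency with a mock
            var authProvider = new Mock<IAuthenticationProvider>();
            authProvider
                .Setup(p => p.ValidateCredentials("alice", "secret"))
                .Returns(true);

            // The mock is injected exactly like the real provider would be
            var controller = new AccountController(authProvider.Object);

            // Act + Assert
            Assert.True(controller.Login("alice", "secret"));
        }
    }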

  20. Look for inline SQL to see whether you can replace it with Linq to Entities.

    speed camera
    Figure: SQL Injection for Speed Cameras :-)

  21. Do you look for opportunities to use Linq?

    Linq is a fantastic addition to .NET which lets you write clear and beautiful declarative code. Linq allows you to focus more on the what and less on the how.

    You should look for opportunities to replace your existing code with Linq.

    For example, replace your foreach loops with Linq.

    var lucrativeCustomers = new List<Customer>();
    foreach (var customer in Customers)
    {
        if (customer.Orders.Count > 0)
        {
            lucrativeCustomers.Add(customer);
        }
    }

    Figure: Bad Example - imperative programming using a foreach loop

    var lucrativeCustomers = Customers.Where(c => c.Orders.Count > 0).ToList();

    Figure: Good Example - declarative programming using Linq

  22. The repository pattern is a great way to handle your data access layer and should be used wherever you have a need to retrieve data and turn it into domain objects.

    The advantages of using a repository pattern are:

    • Abstraction away from the detail of how objects are retrieved and saved
    • Domain objects are ignorant of persistence - persistence is handled completely by the repository
    • Testability of your code without having to hit the database (you can just mock the repository)
    • Reusability of data access code without having to worry about consistency

    Even better, by providing a consistent repository base class, you can get all your CRUD operations while avoiding any plumbing code.

    Tip: Entity Framework provides a great abstraction for data access out of the box. See Jason’s Clean Architecture with ASP.NET Core talk for more information
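
    As an indicative sketch only (the interface and Customer type below are hypothetical), a consistent repository abstraction often looks something like this:

    using System.Collections.Generic;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // A generic repository abstraction - callers don't know (or care) where the data lives
    public interface IRepository<T> where T : class
    {
        T GetById(int id);
        IEnumerable<T> GetAll();
        void Add(T entity);
        void Remove(T entity);
    }

    // Domain-specific repositories extend the base abstraction with richer queries
    public interface ICustomerRepository : IRepository<Customer>
    {
        IEnumerable<Customer> GetCustomersWithOrders();
    }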

  23. Do you look for large strings in code?

    Long hard-coded strings in a codebase can be a sign of poor architecture.

    To make hard-coded strings easier to find, consider highlighting them in your IDE.

    LongStringBadExample
    Figure: Bad Example - The connection string is hard-coded and isn't easy to see in the IDE

    longstringbadexample2
    Figure: Better Example - The connection string is still hard-coded, but at least it's very visible to the developers

    ShortStrings
    Figure: Good Example - The connection string is now stored in configuration and we don't have a long hard-coded string in the code
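
    A minimal sketch of the good approach in .NET (the "Northwind" connection string name and CustomerRepository class are assumptions for illustration):

    using Microsoft.Extensions.Configuration;

    public class CustomerRepository
    {
        private readonly string _connectionString;

        // The connection string lives in configuration (appsettings.json, user secrets,
        // or environment variables), not as a long hard-coded string in the code
        public CustomerRepository(IConfiguration configuration)
        {
            _connectionString = configuration.GetConnectionString("Northwind");
        }
    }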

  24. Do you believe in being verbose in your code (don't compress code and don't use too many ternary operators)? 

    Different developers have different opinions.  It is important your developers work as a team and decide together how verbose their code should be.

    What is your opinion on this?  Contribute to the discussion on Stack Overflow.
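
    To illustrate the trade-off (the discount rules below are made up for this example), compare a compressed nested ternary with the verbose equivalent:

    // Compressed: nested ternary operators are hard to scan
    public static decimal GetDiscountCompressed(decimal total) =>
        total > 1000 ? 0.15m : total > 500 ? 0.10m : total > 100 ? 0.05m : 0m;

    // Verbose: the same logic spelled out - longer, but easier to read, debug, and extend
    public static decimal GetDiscountVerbose(decimal total)
    {
        if (total > 1000)
        {
            return 0.15m;
        }
        if (total > 500)
        {
            return 0.10m;
        }
        if (total > 100)
        {
            return 0.05m;
        }
        return 0m;
    }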

  25. Do you review the code comments?

    Comments can be useful for documenting code but should be used properly. Some developers like seeing lots of code comments and some don't.

    Some tips for including comments in your code are:

    1. Comments aren't always the solution.  Sometimes it's better to refactor the comments into the actual method name. If your method needs a comment to tell a developer what it does, then the method name is probably not very clear.
    2. Comments should never say *what* the code does, they should say *why* the code does it.  Any decent developer can work out what a piece of code does.
    3. Comments can also be useful when code is missing.  For example, why there is no locking code around a thread-unsafe method.

    // returns the Id of the first customer with the matching last name
    public int GetResult(string lastname)
    {
        // get the first matching customer from the repository
        return repository.Customer.First(c => c.LastName.StartsWith(lastname)).Id;
    }

    Figure: Bad Example - The first comment is only valuable because the method is poorly named, while the second describes *what* is happening, not *why*

    public int GetFirstCustomerWithLastName(string lastname)
    {
        // we use StartsWith because the legacy system sometimes padded with spaces
        return repository.Customer.First(c => c.LastName.StartsWith(lastname)).Id;
    }

    Figure: Good Example - The method has been renamed so no comment is required, and the comment explains *why* the code has been written in that way

  26. Do you use the best Code Analysis tools?

    Whenever you are writing code, you should always make sure it conforms to your team's standards. If everyone is following the same set of rules, someone else's code will look more familiar and more like your code - ultimately easier to work with.

    No matter how good a coder you are, you will always miss things from time to time, so it's a really good idea to have a tool that automatically scans your code and reports on what you need to change in order to improve it.

    Visual Studio has a great Code Analysis tool to help you look for problems in your code. Combine this with Jetbrains' ReSharper and your code will be smell free.

    The levels of protection are:

    CricketHelmet
    Figure: You wouldn't play cricket without protective gear and you shouldn't code without protective tools

    Level 1

    Get ReSharper to green on each file you touch. You want the files you work on to be left better than when you started. See Do you follow the boyscout rule?

    Tip: You can run through a file and tidy it very quickly if you know two great keyboard shortcuts:

    • Alt + [Page Down/Page Up] : Next/Previous Resharper Error / Warning
    • Alt + Enter: Smart refactoring suggestions

    48bc81 image001
    Figure: ReSharper will show Orange when it detects that there is code that could be improved

    image002
    Figure: ReSharper will show green when all code is tidy

    Level 2

    Is to use Code Auditor.

    CodeAuditor
    Figure: Code Auditor shows a lot of warnings in this test project

    Note: Document any rules you've turned off.

    Level 3

    Is to use Link Auditor.

    Note: Document any rules you've turned off.

    Level 4

    Is to use StyleCop to check that your code has consistent style and formatting.

    StyleCopInVS2010
    Figure: StyleCop shows a lot of warnings in this test project

    Level 5

    Run Code Analysis (was FxCop) with the default settings or ReSharper with Code Analysis turned on

    CodeAnalysisVS11
    Figure: Run Code Analysis in Visual Studio

    codeanalysis
    Figure: The Code Analysis results indicate there are 17 items that need fixing

    Level 6

    Ratchet up your Code Analysis Rules until you get to 'Microsoft All Rules'

    image003
    Figure: Start with the Minimum Recommended Rules, and then ratchet up.

    Level 7

    Is to document any rules you've turned off.

    All of these tools allow you to disable rules that you're not concerned about. There's nothing wrong with disabling rules you don't want checked, but you should make it clear to developers why those rules were removed.

    Create a GlobalSuppressions.cs file in your project with the rules that have been turned off and why.

    suppressions file
    Figure: The suppressions file tells Code Analysis which rules it should disable for specific code blocks
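
    As an indicative sketch (the rule ID and justification below are made up for illustration), entries in GlobalSuppressions.cs look like this:

    using System.Diagnostics.CodeAnalysis;

    // Each suppression documents which rule is disabled, where, and why
    [assembly: SuppressMessage(
        "Style",
        "IDE0011:Add braces",
        Justification = "Single-line guard clauses are allowed by our team's coding standard",
        Scope = "module")]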

    More Information: Do you make instructions at the beginning of a project and improve them gradually? and https://docs.microsoft.com/en-us/visualstudio/code-quality/in-source-suppression-overview

    Level 8

    The gold standard is to use SonarQube, which gives you the code analysis that the previous levels give you, as well as the ability to analyze technical debt and to see which code changes had the most impact on technical debt

    2016 06 08 12 59 38
    Figure: SonarQube workflow with Visual Studio and Azure DevOps

    2016 06 08 12 59 53
    Figure: SonarQube gives you the changes in code analysis results between each check-in

  27. Do you use the best trace logging library?

    Did you know that writing your own logging infrastructure code wastes time? There are awesome logging abstractions in .NET Core and .NET 5+ that you should use instead!

    These abstractions allow you to:

    • Create log entries in a predictable and familiar fashion - you use the same patterns for logging in a Background Service as you would in a Blazor WASM app (just some slightly different bootstrapping 😉)
    • Use Dependency Injection; your code doesn't take a dependency on a particular framework (as they are abstractions)
    • Filter output based on severity (Verbose/Debug/Info/Warning/Error) - so you can dial it up or down without changing code
    • Have different logs for different components of your application (e.g. a Customer Log and an Order Log)
    • Multiple logging sinks - where the logs are written to e.g. log file, database, table storage, or Application Insights
    • Supports log message templates allowing logging providers to implement semantic or structured logging
    • Can be used with a range of 3rd party logging providers

    Read more at Logging in .NET Core and ASP.NET Core

    trace logging bad
    Figure: Bad Example - Using Debug or Trace for logging, or writing hard coded mechanisms for logging does not allow you to configure logging at runtime

    trace logging bad 2
    Figure: Bad Example - Rolling your own logging components means they lack functionality and have not been tested as thoroughly for quality or performance as established libraries like log4net

    _logger.LogInformation("Getting item {Id} at {RequestTime}", id, DateTime.Now);

    Figure: Good Example - Using templates allows persisting structured log data (DateTime is a complex object)
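
    A minimal sketch of how this typically looks with the built-in abstractions (the OrdersController class and message below are illustrative only):

    using System;
    using Microsoft.Extensions.Logging;

    public class OrdersController
    {
        private readonly ILogger<OrdersController> _logger;

        // The logger is injected - the class depends only on the abstraction
        public OrdersController(ILogger<OrdersController> logger)
        {
            _logger = logger;
        }

        public void PlaceOrder(int orderId)
        {
            // Message templates with named placeholders enable structured logging
            _logger.LogInformation("Placing order {OrderId} at {PlacedAt}", orderId, DateTime.UtcNow);
        }
    }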

    seq2
    Figure: Good Example - Seq provides a powerful UI for searching and viewing your structured logs

  28. Do you look for Code Coverage?

    Code Coverage shows how much of your code is covered by tests and can be a useful tool for showing how effective your unit testing strategy is. However, it should be looked at with caution.

    • You should focus on *quality* not *quantity* of tests.
    • You should write tests for fragile code first and not waste time testing trivial methods
    • Remember the 80-20 rule - a very high-test coverage is a noble goal but there are diminishing returns.
    • If you're modifying code, write the test first, then change the code, then run the test to make sure it passes (AKA red-green-refactor).
    • You should run your tests regularly (see Do you follow a Test Driven Process?). Ideally, they'll be part of your build (see Do you know the minimum builds to create for your project?)

    CodeCoverage2010
    Figure: Code Coverage metrics in Visual Studio. This solution has a very high code coverage percentage (around 80% on average)

    Tip: Do you use Live Unit Testing to see code coverage?

  29. Do you use the Kent Beck philosophy?

    Kent Beck is the man credited with "rediscovering" the Test Driven Development methodology.  It's a great way to ensure your code works as expected and it will allow you to catch errors that occur down the track.

    Based on Kent Beck's principles, you should:

    1. Write code as it spews out of your brain
    2. Do lots of small refactoring rather than big architectural rewrites
    3. If you are going to change code, add a test first (AKA red-green-refactor)

    Tip: Read Michael Feather’s book, "Working Effectively with Legacy Code" for some insights into effective unit testing.

    Tip: Don't focus on the percentage of code coverage, focus on whether tests will touch the lines of code you care about.
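
    To illustrate point 3 (add a test first, then change the code), a tiny hypothetical xUnit test might look like the sketch below - the ShippingCalculator class and its free-shipping rule are made up for this example:

    using Xunit;

    public class ShippingCalculator
    {
        public decimal GetShippingCost(decimal orderTotal)
            => orderTotal >= 100m ? 0m : 9.95m;
    }

    public class ShippingCalculatorTests
    {
        [Fact]
        public void Orders_Of_100_Dollars_Or_More_Ship_Free()
        {
            // Red: write this test before touching ShippingCalculator, watch it fail,
            // then make the change (green), then tidy the code (refactor)
            var calculator = new ShippingCalculator();

            var cost = calculator.GetShippingCost(orderTotal: 150m);

            Assert.Equal(0m, cost);
        }
    }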

  30. You can waste days evaluating IOC containers. The top ones are quite similar. There is not much in this, but the best ones are StructureMap and Autofac. At SSW we use Autofac on most projects.

    Other excellent DI containers are Ninject and Castle Windsor. They have weaknesses; some are listed below.

    Dependency Injection is an essential ingredient to having maintainable solutions. IOC containers make doing dependency injection easier.

    When selecting a Dependency Injection container it is worth considering a number of factors such as:

    • Ease of use
    • Configurability: Fluent API and/or XML Configuration
    • Performance (Unless you have a very high traffic application the difference should be minimal)
    • NuGet Support (only Ninject is doing a poor job of this) - see image

    The top tools all contain comparable functionality. In practice which one you use makes little difference, especially when you consider that your container choice should not leak into your domain model.

    Important: Unless a specific shortfall is discovered with the container your team uses, you should continue to use the same container across all of your projects, become an expert with it and invest time on building features rather than learning new container implementations.

    dic bad
    Figure: Bad Example - Ninject was a top container but is no longer developed as actively as Autofac and StructureMap. Both Autofac and StructureMap have active communities and contributors that ensure they stay up to date with the latest changes in .NET

    dic good
    Figure: Good Example - Autofac has a great combination of performance and features and is actively developed

    Note: Autofac's support for child lifetime containers may be significant for some: http://nblumhardt.com/2011/01/an-autofac-lifetime-primer

    StructureMap also supports a kind of child container: http://codebetter.com/jeremymiller/2010/02/10/nested-containers-in-structuremap-2-6-1/

    Autofac web

    Figure: Good Example - The web/MVC integration package for Autofac is developed by the same core Autofac team. Some containers (such as StructureMap) require third-party integration layers
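
    As a rough sketch of what day-to-day Autofac usage looks like (ContainerBuilder, RegisterType, RegisterAssemblyTypes and Build are real Autofac APIs; IMusicService and MusicService are hypothetical types used only for this example):

    using Autofac;

    public interface IMusicService { }
    public class MusicService : IMusicService { }

    public static class CompositionRoot
    {
        public static IContainer Build()
        {
            var builder = new ContainerBuilder();

            // Register a single type against its interface
            builder.RegisterType<MusicService>()
                   .As<IMusicService>()
                   .InstancePerLifetimeScope();

            // Or register by convention so new services don't need manual wiring
            builder.RegisterAssemblyTypes(typeof(MusicService).Assembly)
                   .Where(t => t.Name.EndsWith("Service"))
                   .AsImplementedInterfaces();

            return builder.Build();
        }
    }

    Resolution then happens from a lifetime scope, e.g. using (var scope = container.BeginLifetimeScope()) { var service = scope.Resolve<IMusicService>(); }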


  31. We already know what the best IOC container is, but how does ASP.NET Core's default dependency injection compare?

    ASP.NET Core includes default dependency injection for new Web Apps in the Startup.cs file. This is adequate for simple projects, but not designed to compete with the features of alternative containers (like Autofac's convention-based registration).

    "The default services container provided by ASP.NET Core provides a minimal feature set and is not intended to replace other containers."

    Steve Smith, (ASP.NET Dependency Injection)

    You can quickly flag this issue and others like it by using SSW Code Auditor.

    Here is an example of rewiring the default code to Autofac with SSW's Music Store app:

    SSW DependencyInjection Example Default Bad
    Figure: Bad Example - The default dependency injection for ASP.NET Core

    SSW DependencyInjection Example Default Good
    Figure: Good Example - The bad example rewired to use Autofac. Red boxes outline the modified code
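
    For reference, here is a minimal sketch of the equivalent rewiring in a .NET 6+ app with the minimal hosting model. It assumes the Autofac.Extensions.DependencyInjection package is installed and implicit usings are enabled; the "Service" naming convention is just an example.

    using Autofac;
    using Autofac.Extensions.DependencyInjection;

    var builder = WebApplication.CreateBuilder(args);

    // Swap the default container for Autofac
    builder.Host.UseServiceProviderFactory(new AutofacServiceProviderFactory());

    // Use Autofac's richer registration API, e.g. convention-based registration
    builder.Host.ConfigureContainer<ContainerBuilder>(containerBuilder =>
    {
        containerBuilder.RegisterAssemblyTypes(typeof(Program).Assembly)
                        .Where(t => t.Name.EndsWith("Service"))
                        .AsImplementedInterfaces();
    });

    // Framework services are still registered on IServiceCollection as usual
    builder.Services.AddControllers();

    var app = builder.Build();
    app.MapControllers();
    app.Run();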

  32. Using subdomains over directories has 2 main benefits:

    1. It is easier to host different sections of your website on different platforms
    2. It is easier to host different sections of your website in different geographic locations

    • myservice.com/ssw/
    • myservice.com/northwind/

    Figure: Bad Example - Virtual directories used to distinguish organizations

    • ssw.myservice.com/
    • northwind.myservice.com/

    Figure: Good Example - Subdomains used to distinguish organizations

  33. Don't waste time evaluating which middle tier .NET libraries to use. Most of the commonly used libraries are very similar in functionality. By sticking to a library, you can also increase your expertise in it, reducing development time in the future.

    Great products include:

  34. Do you use the best Web UI libraries?

    Don't waste time evaluating which Web UI libraries to use. Most of the commonly used libraries are very similar in functionality. The recommended library is Bootstrap.

    It's the most popular framework available today, which means more people involved in the project, more tutorials and articles from the community, more real-world examples/websites, more third-party extensions, and better integration with other web development products.

    bootstrap
    Figure: Leader among frameworks today, Bootstrap toolkit is recommended to build successful websites

    The 3 things a developer needs to know to get up and running quickly with ASP.NET MVC

    Bootstrap & ASP.NET MVC - Intro / Quickstart

    Other useful frameworks

    Now that you have saved a lot of UI development time by using Bootstrap, you can play around with other useful frameworks.

    • KendoUI for enhanced HTML and jQuery controls
    • SignalR for real-time web functionality
  35. A common practice we see when developers start to use IOC containers is that the IOC container becomes a central service and configuration repository that all the components across the project become dependent upon.

    Using an IOC container in this manner can bring advantages such as centralised configuration and dependency lifecycle and scope management. If implemented correctly, however, your classes can benefit from the above without any direct dependency on the IOC container itself.

    IOC badexample

    Figure: Bad Example - The dependency is manually fetched from the IOC container. This class now has a hard dependency on your IOC container

    IOC GoodExample

    Figure: Good example - The dependency is enforced via a constructor parameter. The class does not need to know anything about the IOC container being used and can potentially be reused in different contexts and with different IOC containers.
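
    In code, the difference looks roughly like this (the names are hypothetical, and the static Container class stands in for whatever service-locator style access the bad example uses):

    using System;

    public interface IEmailSender
    {
        void Send(string to, string subject, string body);
    }

    // Stand-in for a static, service-locator style container (the anti-pattern)
    public static class Container
    {
        public static T Resolve<T>() => throw new NotImplementedException();
    }

    // Bad - the class fetches its own dependency, so it is hard-coupled to the container
    public class InvoiceServiceWithServiceLocator
    {
        private readonly IEmailSender _emailSender = Container.Resolve<IEmailSender>();
    }

    // Good - the dependency is declared on the constructor; any container (or a unit test)
    // can supply it, and the class knows nothing about how it was created
    public class InvoiceService
    {
        private readonly IEmailSender _emailSender;

        public InvoiceService(IEmailSender emailSender)
        {
            _emailSender = emailSender;
        }
    }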

    For more information and insight on IOC usage, read the following: http://www.devtrends.co.uk/blog/how-not-to-do-dependency-injection-the-static-or-singleton-container

  36. What is Technical Debt?

    Technical Debt is when you take a shortcut to get a feature in, so you can get some feedback.

    Code that is hard to understand after reading it multiple times or a single method that spans multiple screens is also considered to be Technical Debt.

    Systems need to have features added to them to continually remain useful (or competitive).

    As new features are added to the system, often more Technical Debt will be introduced.

    Example: A developer takes a shortcut to get some early feedback on a new feature

    • $100 - full feature
    • $20 - feature with shortcuts (no tests, dirty code, whatever it takes)
    • $80 - IOU via PBI in the backlog e.g. [FeatureName] – Tech Debt - Planned

    waf tech debt backlog northwind
    Figure: Good example - Tech Debt is very visible to the Product Owner

    What are the consequences of Technical Debt?

    • Fewer features over time for the customers
    • More molasses (developer friction) for the developers

    The 2 types of Technical Debt

    1. Planned Technical Debt

    Sometimes you do want to quickly implement a new feature to get it out and receive some feedback.

    PBI: [FeatureName] – Tech Debt - Planned

    Note: Martin Fowler calls this "Deliberate Technical Debt".

    2. Discovered Technical Debt

    During a code review, you or the team notice something as part of the system that is clearly Technical Debt. This code is hindering the ability to add new features or is hard to read/understand.

    PBI: [FeatureName] – Tech Debt - Discovered

    Note: Martin Fowler calls this "Inadvertent Technical Debt".

    How to repay Technical Debt

    Just like a business that receives pre-payment from customers, a software team should be reviewing the size of their liabilities (the list of PBIs with Technical Debt).

    At the Sprint Planning:

    1. Show the Product Owner the list of outstanding Technical Debt PBIs
    2. The Product Owner should make sure that the developers review the Technical Debt list and pick at least 1 PBI to pay back during the upcoming Sprint

    Screenshots

    techdebt github
    Figure: Screenshot of code with tech debt comment and link to GitHub issue
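
    As a hypothetical example of such a comment (the constant, the feature and the issue URL are placeholders, not taken from a real project):

    // Tech Debt - Planned: exchange rate is hard-coded to get early feedback on the pricing page.
    // Replace with the currency service. PBI: https://github.com/{org}/{repo}/issues/{id}
    private const decimal UsdToAudExchangeRate = 1.5m;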

    techdebt backlog
    Figure: Screenshot of tech debt on backlog

    techdebt architecture
    Figure: SugarLearning architecture diagram

  37. Do you use the Well-Architected Framework?

    The Well-Architected Framework is a set of best practices which form a repeatable process for designing solution architecture, to help identify potential issues and optimize workloads.

    waf diagram revised
    Figure: The Well-Architected Framework includes the five pillars of architectural excellence. Surrounding the Well-Architected Framework are six supporting elements

    The 5 Pillars

    • Reliability
    • Security
    • Cost Optimization
    • Operational Excellence
    • Performance Efficiency

    Trade-offs

    There are trade-offs to be made between these pillars. E.g. improving reliability by adding Azure regions and backup points will increase the cost.

    Why use it?

    Thinking about architecting workloads can be hard – you need to think about many different issues and trade-offs, with varying contexts between them. WAF gives you a consistent process for approaching this to make sure nothing gets missed and all the variables are considered.

    Just like Agile, this is intended to be applied for continuous improvement throughout development and not just an initial step when starting a new project. It is less about architecting the perfect workload and more about maintaining a well-architected state and an understanding of optimizations that could be implemented.

    What to do next?

    Assess your workload against the 5 Pillars of WAF with the Microsoft Azure Well-Architected Review and add any recommendations from the assessment results to your backlog.

    waf assessment
    Figure: Some recommendations will be checked, others go to the backlog so the Product Owner can prioritize

    waf reliability results 2
    Figure: Recommended actions results show things to be improved

    waf tech debt backlog northwind
    Figure: Good example - WAF is very visible to the Product Owner on the backlog

  38. Microservices - Do you break down your apps?

    There are two common types of application architecture:

    • Monoliths (aka N-Tier applications)
    • Microservices

    Monoliths have their place. They are easy to get going and often make a lot of sense when you are starting out. However, sometimes your app may grow in size and become difficult to maintain. Then you might want to consider Microservices...

    Microservices let you break down your app into little pieces to make them more manageable, replaceable and maintainable. You can also scale out different parts of your app at a granular level.

    .NET 6 and Azure have heaps of great tools for developing simple APIs and worker services in a Microservices pattern.

    Watch the below video from 35:40 - 46:50

    The tools of the trade

    • .NET Worker Services make it easier to implement dependency injection, configuration and other syntactic sugar using the same patterns you are familiar with in other types of .NET applications
    • Azure Container Apps give you a way to host different little subsections of the application
    • Azure Functions gives you a great way to build applications in small, modular, scalable and easy-to-manage chunks. It provides triggers other than HTTP to handle other common microservice patterns
    • Minimal APIs give you a way to write APIs in just a few short lines of code (see the sketch below)
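
    For example, a complete (if trivial) Minimal API service is only a handful of lines. This sketch assumes a standard .NET 6+ web project with implicit usings; the /orders route and the anonymous response are purely illustrative.

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // One small, focused endpoint - easy to containerize, deploy and scale on its own
    app.MapGet("/orders/{id:int}", (int id) => Results.Ok(new { Id = id, Status = "Shipped" }));

    app.Run();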

    What's the point?

    • Cost - Provides separation of scalability, keep the hot parts of your app hot and the cold parts of your app cold to achieve maximum pricing efficiency
    • Maintainability - Keep code more manageable by making it bite sized chunks
    • Simplify code - Write minimal APIs
    • Deployment - Standardize deployment with containers
    • Testing - Easier to find problems since they are isolated to a specific part of the app
    • Cognitive Complexity - Devs can focus on one aspect of the app at a time
    • Data - You can use the best way of storing data for each service
    • Language - You can use the best language for each service

    What's the downside?

    • Upfront Cost - More upfront work is required
    • Cognitive Complexity - While individual apps are simpler, the architecture of the app can become more complex
    • Health Check - It's harder to know if all parts are alive
    • Domain boundaries - You need to define the separation of concerns between different services. Avoid adding dependencies between services because you can create a domino of failures...a house of cards.
    • Performance normally suffers as calls are made between services
    • Without adequate testing it's harder to maintain
    • Using multiple languages and datastores can be both more expensive to host and require more developers

    What new techniques are required?

    • Contract Testing - To mitigate the risk of changes in one service breaking another, comprehensive tests that check the behaviour of services are required
  39. Tech Debt - Do you avoid 'clever' code?

    What is Clever Code?

    Clever Code comes in several forms. The desired form is solving a complex problem in a simple way. The Clever Code that causes tech debt is code written to use language features to look 'smart', while making it difficult for developers to read.

    var totalMoved = sortedChannels.Where((channel, i) => channel.Position != i).Count();

    Bad example - Smart code that could be even simpler! (Although not bad by any means)

    var totalMoved = 0;
    
    for (var i = 0; i < sortedChannels.Count; i++)
    {
        var channel = sortedChannels[i];
    
        if (channel.Position != i)
        {
            totalMoved++;
        }
    }

    Good example - Simple code; while it has more lines, it is easier to read!

    When is Simple Code Bad?

    Let's take a moment to digest this more generic example:

    [ProducesResponseType(StatusCodes.Status404NotFound)]
    [ProducesResponseType(StatusCodes.Status204NoContent)]
    [HttpDelete("{id:guid}")]
    public async Task<IActionResult> Delete(Guid id)
    {
        var model = await _context.Styles.FirstOrDefaultAsync(x => x.Id == id);
    
        if (model is null)
        {
            return NotFound();
        }
    
        _context.Styles.Remove(model);
        await _context.SaveChangesAsync();
    
        return NoContent();
    }

    At first glance, this is pretty simple - almost what you would find in an intro to EF Core & ASP.NET Core Web API in the Microsoft documentation!

    As code scales, sometimes we need to write more 'clever' code to abstract away concerns (like data access or application logic).

    So depending on the context, this is both good & bad code at the same time!
