SSW Foursquare

Rules to Better Architecture and Code Review - 51 Rules

For any project that is critical to the business, it’s important to do ‘Modern Architecture Reviews’. Being an architect is fun: you get to design the system, do ongoing code reviews, and play the badass. It is even more fun when using modern, cool tools.

Follow these steps to achieve a 'Modern Architecture Review' and see how performing an architecture review on a big project no longer needs to be daunting. Read about the first steps, then check to see if you’re following SOLID principles, then drill in and find patterns and anti-patterns. Use Visual Studio to help the process with architecture tools, code smell tools, and the great Visual Studio Code Review tools.

These steps enable you to attend to the code that needs the most attention. Finally, create PBIs to make sure they get fixed in the next Sprint.

Want to drive business value with IT Transformation? Check SSW's Strategic Architecture consulting page.

  1. Do you evaluate the processes?

    Often an incorrect process is the main source of problems. Developers should be able to focus on what is important for the project rather than getting stuck on things that cause them to spin their wheels.

    1. Are devs getting bogged down in the UI?
    2. Do you have continuous integration and deployment?
    3. Do you have a Schema Master?
    4. Do you have a DevOps Master?
    5. Do you have a Scrum Master?

    Note: Keep this brief since it is out of scope. If this step is problematic, there are likely other things you may need to discuss with the developers about improving their process. For example, are they using Test Driven Development? Are they checking in regularly? All this and more should be saved for the Team & Process Review.

  2. Project setup - Do you know the importance of the initial developer experience?

    It's amazing how often you can't simply Clone a repository (aka "Get Latest") and compile it.

    A good developer makes it clear how to get a new project, compile it and run it.

    f5 key
    Figure: In Visual Studio the project setup experience starts with the F5 key!

    Sometimes the experience is more CLI based

    dotnet run
    Figure: Some consider this rule (Do you get latest and compile) to be more a “git clone” and then “dotnet run”

    Sometimes the experience is more Mac-based

    mac f5 key
    Figure: On a MacBook, if you hold down the Fn key, the touch bar will show F buttons

    macbook vscode run button
    Figure: On a MacBook, VSCode has a run button to launch the debugger (similar to F5)

    macbook visualstudio run button
    Figure: On a MacBook, Visual Studio for Mac is similar to VSCode but less obvious, since it looks more like XCode

    Project Setup - What do you need to know?

    There are tonnes of important things to get right with project setup, including:

  3. Do you review the Solution and Project names?

    The name of your solution and the names of the projects in your solution should be consistent.

    Read the rule: Do you have a consistent .NET solution structure?

    solution structure
    Figure: Good Example - The Solution and Projects are named consistently

  4. Do you conduct an architecture review every few Sprints?

    There are 2 main parts to any application: the UI, which the customer can see and provide feedback on, and the underlying code, whose health they really can't assess themselves.

    Therefore it is important to conduct a 'test please' on the internal code and architecture of the application.

    Ideally conduct a small 'Code + Architecture Review' for every Sprint. Assuming a 2 week Sprint, schedule a 4 hour (2 architects x 2 hours) review.

    The following are items that are addressed in an architecture/code review:

    Background information/overview of the project

    • Current system
    • Objectives of the system
    • Number of users (internal, external, edit, read only)
    • Current technologies
    • Current environment (SOA, SOE)

    Points for discussion

    • Rich client
    • Web client
    • Smart client (any disconnected users?)
    • Technology choices

      • Persistence layer (e.g. Database)
      • Business layer
      • UI
      • Communications
      • Workflow
      • Integration - external systems
    • Requirements for 'package' software

      • PerformancePoint
      • Reporting Services
      • Accounting packages
      • SharePoint
    • Usage Telemetry
    • Performance Monitoring
    • Data migrations
    • Data reporting
    • User experience
    • Network
    • Responsibilities/players
    • Infrastructure

      • Network
      • Hardware
    • Deployment

      • Staged implementation
      • In parallel
      • Development/Test/Staging/Production
    • Disaster recovery
    • Change control/source control
    • Build Server
    • Performance
    • Scalability
    • Extensibility
    • Design patterns
    • Maintainability
    • Reliability (failover servers?)
    • 'Sellability' i.e. is the solution appropriate for the client?

    ::: info
    Note: If you are using Enterprise Architect, be aware of technical debt:

    • Add a datetime of the last time the diagram was modified so we have an indication of when it is out of date
    • On your diagrams, be aware that some parts are done and some are not.
    :::
  5. Do you make awesome documentation?

    There are a few styles of documentation:

    ❌ Bad example – Old School

    old documentation
    Figure: Bad example - The dinosaur’s method of documentation

    The old school way is document first – lots of planning and lots of heavy documentation created upfront before even a single line of code is written.

    This is the method most familiar to teams who are comfortable with Waterfall and have possibly never heard of Agile. Documentation can normally be characterized by:

    • Heavy, long documents
    • Sequence Diagrams
    • UML

    This is a well-established way to do documentation, but it has several problems:

    • Gets out of date quickly
    • High maintenance overhead
    • Needs a business analyst

    Figure: Bad example - Documentation can take the form of Sequence Diagrams

    Figure: Bad example - Use Case Diagrams are even worse!

    There may be exceptions – some situations benefit from this kind of documentation; for example, it may be necessary to support a business case – although a well-defined spec is a better document to support a business case.

    Tip: Documentation should be as minimal as possible. If your circumstances require this style of documentation, start by limiting it to just enough to cover your first couple of Sprints. And recognize that by going down this path you make a commitment to keeping it up-to-date.

    ✅ Good example – The 8 Important Documents

    This style of documentation is used by modern teams who are Agile only.

    In the repository (for developers):

    1. – Gives an overview of the project and provides links to the rest of the documentation. It is important for the to show a high-level architecture diagram that illustrates the overarching solution.

    2. _docs\ – Instructions on how to build and run the project (aka the F5 experience).

    3. _docs\ – Explains how to deploy the solution, including any additional processes (e.g. DevOps)

    4. _docs\ – Explains the purpose of the application, including the problem, goals and statement of intent.

    5. _docs\ – Provides a technical overview of the solution.

    6. _docs\ – Explains other options that were discounted. For example:

    • We chose to use a code-centric .NET solution over a low code solution because we did not want to be locked into any specific vendor e.g. Dynamics, Outsystems.
    • We chose to use Angular over React because 5/6 developers on the project were more familiar with Angular.
    • We chose to use Azure over on-premises to avoid procurement of costly servers.
    • Note: If you decide after the fact that the chosen solution is wrong, this should be explained. Include what led to the current circumstances and if there is a planned change.

    7. _docs\ - Ensures that your team maintains a high level of quality with a Definition of Done

    8. _docs\ – Ensures that all your PBIs are well defined to an agreed standard before adding them to a Sprint (see have-a-definition-of-ready)

    Keeping these documents in the repository means that you ensure that any documentation the developers need to work on or run the code is where they need it - with the code.

    It also means that when a developer makes a change to the code that needs an update to the documentation, the documentation changes can be checked in along with the code in the same commit.

    Exposing documentation through a Wiki (for developers and other stakeholders):

    Documents to be read or edited by the Product Owner (or other members of the Scrum team) should be exposed through a Wiki. The advantage of this approach is that the writing experience in the Wiki is more friendly for non-developers. The Wiki should be sourced from the repo docs\ folder to ensure documentation is kept up-to-date. There are several options for creating a Wiki:

    Azure DevOps wiki options:

    GitHub wiki options:

    Tip: You can publish your documentation from the repo using GitHub Pages

    Tip: All of your documents (in your Wiki and your repository) should be written in Markdown.

    documentation  level2  bad example gh
    Figure: Bad example - Github project without any documentation or instructions

    azuredevops bad
    Figure: Bad example - Azure DevOps project without any documentation or instructions

    documentation  level2  good example 1 gh
    Figure: OK example - Github project with README instructions on how to compile and run the project (but still has a TODO)

    azuredevops good
    Figure: Good example - Azure DevOps project with README instructions on how to compile and run the project

    documentation  level2  good example 2 gh
    Figure: Good example - Github project with Wiki instructions for Product Owners, stakeholders, or public consumption (Source:

    azuredevops wiki good
    Figure: Good example - Azure DevOps project with Wiki instructions for Product Owners, stakeholders, or public consumption

    Tip: Use your documentation for onboarding developers

    sit dev bad
    Figure: Bad example - No documentation, go and sit with another developer

    documentation  level2  onboarding pbi 3
    Figure: Good example - Developer onboarding can be self-guided by good documentation, and offers a structure for feedback and improvement if the developer hits any issues

    Tip: Keep your documentation as minimal as possible - automate the F5 experience and deployment process (documents 2 and 3) using PowerShell scripts. Then your documents can just say "run these scripts"

    The rest of the jigsaw

    Update your Acceptance Criteria - If you use a policy that requires commits to be linked to PBIs, then you understand that the PBI is now the documentation. If requirements change (based on a conversation with the Product Owner of course) then the PBI should be updated.

    When updating the Acceptance Criteria, strike through the altered Acceptance Criteria, and add the new ones. Get the PO to confirm your understanding.


    ~~Enter search text, click ‘Google’, and see the results populate below.~~
    [Updated] Enter search text and automatically see the results populate below.

    This should be added to the Definition of Done.

    Technical Debt

    What's "Technical Debt"?


    During a project, when you add functionality, you have a choice:

    • One way is quick but messy - it will make further changes harder in the future (i.e. quick and dirty).
    • The other way is cleaner – it will make changes easier to do in the future but will take longer to put in place.

    'Technical Debt' is a metaphor to help us think about this problem. In this metaphor (often mentioned during Scrum software projects), doing things the quick and dirty way gives us a 'technical debt', which will have to be fixed later. Like financial debt, the technical debt incurs interest payments - in the form of the extra effort that we must do in future development.

    We can choose to continue paying the interest, or we can pay the debt in full by redoing the piece of work in a cleaner way. Learn about the importance of paying back technical debt.

    The same principle is true with documentation. Using the 'old school' method will leave you with a build-up of documentation that you will need to keep up to date as the project evolves.

    Warning: If you want to follow Scrum and have zero technical debt, then you must throw away all documentation at the end of each Sprint. If you do want to keep it, make sure you add it to your definition of done to keep it updated.

  6. Do you have an Architecture Diagram?

    A good architecture diagram (aka a cloud architecture diagram or system architecture diagram) gives a great overview of your project. An architecture diagram lets you see at a glance what the overall structure of the solution is. This is useful for gaining an understanding of how the system fits together, how it flows, and what it does. It also helps to easily show which components can be improved due to updated or better components (or improved architectural guidelines).

    An architecture diagram is useful when:

    • You are in the initial discussion with a client (see Brendan Richards' quote below)
    • You are onboarding a new developer
    • You have been deep into one aspect of the system and need a refresher on another area
    • You have been off the project for a while
    • Whenever you are discussing requirements that may require structural changes

    The architecture diagram is a technical diagram that demonstrates the technology in use. The purpose of the architecture diagram is to show how a solution has been built and what the technical dependencies are. It is not used for user journeys or business logic.

    bad azure resource screenshot
    Figure: Bad example - A screenshot of the Azure resources used helps, but doesn't show data flows or dependencies

    Depending on the complexity of your solution and your comfort/familiarity with the tools, an architecture diagram could take you anywhere from half an hour to a couple of days.

    Usually, the longer an architecture diagram takes you to make, the more important it is for your project.

    • Matt Goldman, Software Architect

    An architecture diagram is part of the 8 crucial documents you need for your project, see our rule: Do you make awesome documentation?

    Tip #1: Include your most important components

    At a minimum, your architecture diagram should include:

    • Your data repository
    • Your business logic component
    • Your UI

    Your diagram needs to include the relationships between these components, and how they share and process data.

    Tip #2: Don't use a .NET Dependency Graph as an Architecture Diagram

    The .NET dependency diagram is a useful tool, but it drills down into a specific component of the solution (the code) while ignoring the rest of it (the infrastructure). If it adds value to your documentation (i.e., there is a specific reason to include it) you can include the .NET dependency diagram, but don't use it here in place of the architecture diagram.

    See SSW rule: Do you generate the VS Dependency Graph?

    dependency validation 01
    Figure: Bad example - The .NET dependency diagram shows code dependencies, but not the application's architecture

    Tip #3: Show data dependencies and data flows

    Your architecture diagram should show how the components of your solution fit together. It should also show how the components of the architecture depend on each other for functionality, as well as upstream and downstream data dependencies.

    architecture diagram good1
    Figure: OK example - Shows the technologies and data flows (from the data --> Azure Data Factory --> Azure Databricks --> Power BI). This gives an overview of the whole application in one diagram.

    Tip #4: Put data at the top

    Pick a direction for your data flow, and keep it consistent across all your documentation. Where there are exceptions (for example data going to analytics or to/from partner sources) make these perpendicular to the primary data flow direction.

    It should be easy to tell at a glance which direction data flows in your diagram - top to bottom is recommended.

    sugarlearning architecture diagram
    Figure: Good example - SugarLearning (an Angular + .NET project) - data flows from top to bottom, with exceptions (e.g. Application Insights / Raygun, not part of the main data flow) perpendicular to the primary direction

    Tip #5: Group relevant components

    Group components logically by enclosing them in a box. Components that operate independently can stand alone, and those that work together to deliver a logical function can be grouped together. Also show components that are out of scope, i.e. important for understanding the architecture but not necessarily part of it, e.g. legacy components, partner components, or components that have not been implemented yet.

    Note: For clarity, out-of-scope items, whether one or many, should be in a box.

    rewards architecture diagram
    Figure: Good example - SSW Rewards (Xamarin with Azure Active Directory B2C) - consistent styling is used. E.g. as well as all the icons and typography being consistent, you can see that data is a solid line and auth traffic is a dotted line

    Tip #6: Start with paper

    Make sure you use the right tools when creating your architecture diagrams. There's nothing wrong with starting out with pen and paper, but your hand-drawn sketch should not be considered your 'done' final architecture diagram. If you want to save paper, and increase collaboration, a great alternative is the trusty old whiteboard.

    For me it's all about building a shared understanding between the client and the developers. For most pieces of software architecture I do, work starts by building a rough solution architecture diagram on a whiteboard.

    Putting something on a whiteboard is "low risk" for the participants as it's really easy to wipe and redraw. It allows us to start working together straight away, building a shared understanding of what we're trying to achieve. There is no software or skills required to participate in whiteboard collaboration.

    A key milestone in the early engagement is the first time a client takes the pen and starts using the whiteboard to explain something to me. Early use of the whiteboard is all about immediate communication. Later, the solution design starts to solidify and we can then use the last state of the whiteboard to make our first architecture diagram.

    • Brendan Richards, SSW Solution Architect

    Figure: SSW Rewards - start out with a hand-drawn sketch if that's easier for you, but don't consider this your final architecture diagram

    Tip: Microsoft Office Lens is a free mobile app that uses your smartphone camera to capture scan-like images of documents, photographs, business cards, and whiteboards (including searchable handwritten text).

    Figure: Better example - SSW Rewards - the same sketch but captured with Office Lens. How much clearer and more vibrant is this!

    Tip #7: ...and Finish up with

    The best tool for creating these diagrams is (previously ). All the examples on this page were created with this tool.

    It is definitely the most popular diagram tool at SSW:

    Figure: When SSW developers were surveyed, was the clear winner (see green) for building architecture diagrams

    TimePRO Architecture Diagram v2
    Figure: Better example - TimePro (an Angular + .NET project with Hangfire) - you can create diagrams quickly and easily that still look very professional. This one is in the style of a technical document.

    The tool is free, can be used in the browser, or can be downloaded as a desktop app. But the best way to use it is to integrate it directly into VS Code.

    thumbnail image003
    Figure: Great example - Auctions (a Blazor + .NET + Cosmos DB project) - integrated directly into VS Code

    There are multiple extensions available that let you do this; the best one is VS Code | Extensions | Integration. This makes it easy to create and edit the architecture diagram right alongside the code, and check it in with the relevant commits.

    architecture 2
    Figure: Good example - Auctions (a Blazor + .NET + Cosmos DB project) - architecture diagram created within VS Code and checked into the repo in the same commit as the relevant code changes. Blazor UI layer encapsulated in thematic color

    Tip #8: Polish up

    Maintain standards to keep your diagrams consistent:

    • Title - Naming Convention. E.g. Architecture Diagram - {{product name}}
    • Title - Standard font size. E.g. 43pts
    • Standard font. E.g. Helvetica bold
    • Standard arrowhead sizes. E.g. 14pts
    • Doc details - at the bottom left, add file location. E.g. DevOps | Wiki or GitHub | Repo | Docs, in font size 22pts
    • Doc details - at the bottom right, add branding and URL E.g. {{logo image}} -, in font size 22pts
    • Add color and icons to make your diagrams engaging and easier to distinguish

    SSW People Architecture Diagram
    Figure: Good example - SSW People (a Static Site - Gatsby and React with Dynamics 365 and SharePoint Online) - you can just as easily create colorful, engaging diagrams suitable for all of your project stakeholders

    Tip #9: Where to store Diagrams?

    Standardizing where your organisation stores architecture diagrams ensures a consistent experience among developers. Therefore store your architecture diagrams in the repo docs\ folder. Additionally, the \ (in the root) should have a link and an embedded image of the high-level architecture diagram (from the docs\* folder).

    Note: If you have a Wiki, for visibility add an architecture diagram page and embed the images from the docs\* folder.

    Tip #10: Use Azure Architecture Center

    Azure Architecture Center is the best tool to help you figure out the pieces you need for an architecture diagram - see SSW.Rules | Do you use Azure Architecture Center

    Alternatives to

    Miro

    Miro is an online tool designed primarily for whiteboard-style collaboration. It is very easy to use and optimised for this purpose. As a diagramming tool, it is lacking in features compared to, but it can be used to create simple diagrams.

    Note: The paid version of Miro gives you Azure Architecture Diagram templates - see

    miro arch diagram
    Figure: An Azure Architecture Diagram created in Miro


    If you really want to geek out and use markdown, you can try Mermaid to build simple diagrams.
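
    For instance, a minimal Mermaid data-flow sketch for a typical web app might look like the following (the component names here are illustrative only, not from any real project):

```mermaid
graph TD
  UI[Web UI] --> API[Web API]
  API --> DB[(Database)]
  API -.-> Logs[Telemetry]
```

    Like the diagrams above, it keeps data flowing top to bottom, with telemetry perpendicular to the main flow (dotted line).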

  7. Do you document the technologies, design patterns and ALM processes?

    The technologies and design patterns form the architecture, which is usually the stuff that is hard to change.

    A pattern lets you say a few words to a dev, and they know exactly what coding pattern you mean.

    ALM is about refining the work processes.

    ::: greybox
    We are doing this project using C#
    :::
    ::: bad
    Bad example - you know nothing about how the project will be done
    :::

    ::: greybox

    • Technologies: WebAPI. The DI container is Structure Map. Entity Framework. Typescript. Angular
    • Patterns: Repository and Unit of Work (tied to Entity Framework to remove additional abstraction), IOC
    • ALM: Scrum with 2-week Sprints and a Definition of Done including StyleCop to green
    • ALM: Continuous deployment to staging
    :::
    ::: good
    Good example - this tells you a lot about the architecture and processes in a few words
    :::

    The important ones for most web projects:

    1. Technologies: WebAPI
    2. Patterns: Single responsibility - if it does more than one thing, then split it. E.g. If it checks the weather and gets data out of the database, then split it.
    3. Patterns: Inversion of control / dependency injection. E.g. If your controller needs to get data, then you inject the component that gets the data.
    4. Patterns: Repository/Unit of Work - a repository has standard methods for getting and saving data. The code calling the repository should not know where the data lives. E.g. A User Repository could be saving to Active Directory or CRM and it should not affect any other code. You may or may not choose to have an additional abstraction away from Entity Framework.
    5. ALM: Scrum - kind of a pattern for your process. E.g. Sprint Review every 2 weeks. Ideally, a senior architect should be added for that 1 day each 2 weeks.
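
    The repository and dependency injection patterns described above can be sketched together in a few lines (an illustrative TypeScript sketch only - the `User` type, `InMemoryUserRepository`, and `UserController` are hypothetical names, not from any real project):

```typescript
// A hypothetical User type for illustration
interface User { id: number; name: string; }

// The repository abstraction: callers should not know where the data lives
interface UserRepository {
  getById(id: number): User | undefined;
  save(user: User): void;
}

// One possible implementation; it could equally be backed by
// Active Directory or CRM without affecting any calling code
class InMemoryUserRepository implements UserRepository {
  private users = new Map<number, User>();
  getById(id: number) { return this.users.get(id); }
  save(user: User) { this.users.set(user.id, user); }
}

// Inversion of control: the controller is handed its dependency,
// instead of constructing a concrete repository itself
class UserController {
  constructor(private repo: UserRepository) {}
  rename(id: number, name: string): void {
    const user = this.repo.getById(id);
    if (user) this.repo.save({ ...user, name });
  }
}

const repo = new InMemoryUserRepository();
repo.save({ id: 1, name: "Ada" });
new UserController(repo).rename(1, "Grace");
console.log(repo.getById(1)?.name); // "Grace"
```

    Because `UserController` depends only on the `UserRepository` abstraction, swapping the storage mechanism means swapping one registration, not touching the controller.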

    The decisions the team makes regarding these 3 areas, should be documented as per Do you make awesome documentation?

  8. Do you look at the architecture of JavaScript projects?

    JavaScript projects (for example using Angular, React, or Vue) can include unnecessary libraries that add excessive size to the build bundle. This can have a huge impact on the performance of the application.

    JavaScript bundle analyzers are tools that visualize the sizes and dependencies of the libraries used in JavaScript projects. They help you monitor the size of the compiled bundle in order to maintain the optimal performance of the final build.

    Here are a few options for the bundle analysis in JavaScript projects:

    For Angular projects use webpack-bundle-analyzer

    This is a popular tool for Angular projects which analyses a webpack stats JSON file generated by the Angular CLI during the build. To produce the bundle analysis using webpack-bundle-analyzer in Angular projects, follow the instructions in this blog.

    architecture good angular
    Figure: Good example – use webpack-bundle-analyzer for Angular applications

    For React projects, sadly, webpack-bundle-analyzer is too hacky to get going

    Unfortunately, create-react-app from version 3 has removed the "--stats" flag which produces the webpack stats file used by webpack-bundle-analyzer. Hence, webpack-bundle-analyzer can only be used as a plugin in these React projects, as described in the following blog: Optimize your React application with webpack-bundle-analyzer

    architecture bad react
    Figure: Bad example – webpack-bundle-analyzer is not user friendly for React applications

    For React projects use source-map-explorer

    This tool uses a bundle's generated source map files to analyse the size and composition of a bundle and render a visualization similar to what webpack-bundle-analyzer produces. To create a bundle analysis for a React project, follow the instructions from the Create React App documentation:

    architecture good react
    Figure: Good example – use source-map-explorer on React projects

    Screenshots of these diagrams should be included in the project's wiki as per the rule Do you make awesome documentation?

  9. Do you look at the architecture of .NET projects?

    To visualize the structure of all your code you need architecture tools that will analyze your whole solution.

    They show the dependencies between classes and assemblies in your projects. You have 2 choices:

    • Visual Studio's Dependency Graph (recommended). This feature is only available in Visual Studio Ultimate.
    • If you want architecture tools for Visual Studio, but don't have Visual Studio Ultimate, then try the excellent 3rd party solution nDepend. A bonus is that it can also find issues and highlight them in red for easy discovery.

    Figure: Visual Studio lets you generate a dependency graph for your solution

    Figure: The dependency graph in Visual Studio shows you some interesting information about how projects relate to each other

    nDepend has a similar diagram that is a little messier, but the latest version also includes a "Queries + Rules Explorer" which is another code analysis tool.

    Figure: nDepend Dependency Graph. Issues are highlighted in red for easy discovery

    Read more about nDepend:

  10. Do you generate the VS Dependency Graph?

    Dependency graphs are important because they give you an indication of the coupling between the different components within your application.

    A well-architected application (i.e. one that correctly follows the Onion Architecture) will be easy to maintain because it is loosely coupled.

    Figure: Bad Example- The Visual Studio Dependency Graph is hard to read

    TimePRODependence good
    Figure: Good Example – The ReSharper Dependency graph groups dependencies based on Solution Folders. By having a Consistent Solution Structure it is easy to see from your Dependency Graph if there is coupling between your UI and your Dependencies

    Further Reading:

  11. Do you know how to laser in on the smelliest code?

    Rather than randomly browsing for dodgy code, use Visual Studio's Code Metrics feature to identify "Hot Spots" that require investigation.

    lotto balls
    Figure: The bad way is to randomly browse the code

    VS 11 Code Metrics
    Figure: Run Code Metrics in Visual Studio

    CodeMetrics 3 1710232021935
    Figure: Red dots indicate the code that is hard to maintain. E.g. Save() and LoadCustomer()

    Identifying the problem areas is only the start of the process. From here, you should speak to the developers responsible for this dodgy code. There might be good reasons why they haven't invested time on this.

    codelens start conversation
    Figure: Find out who the devs are by using CodeLens and start a conversation

    Tip: Learn the benefits of Source Control.

    Suggestion to Microsoft: Allow us to visualize the developers responsible for the bad code (currently and historically) using CodeLens.

  12. Do you know the common Design Principles? (Part 1)

    **SRP | The Single Responsibility Principle** - A class should have one, and only one, reason to change.

    **OCP | The Open Closed Principle** - You should be able to extend a class's behavior without modifying it.

    **LSP | The Liskov Substitution Principle** - Derived classes must be substitutable for their base classes.

    **ISP | The Interface Segregation Principle** - Make fine-grained interfaces that are client specific.

    **DIP | The Dependency Inversion Principle** - Depend on abstractions, not on concretions.

    Figure: Your code should be using SOLID principles

    It is assumed knowledge that you know all 5 SOLID principles. If you don't, read about them on Uncle Bob's site above, or watch the SOLID Pluralsight videos by Steve Smith.
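
    As a quick refresher, the Open Closed Principle can be sketched in a few lines (an illustrative TypeScript sketch; the shape classes are made-up examples, not from any project):

```typescript
// OCP: new behavior is added by writing a new class,
// not by modifying existing ones
interface Shape { area(): number; }

class Rectangle implements Shape {
  constructor(private w: number, private h: number) {}
  area() { return this.w * this.h; }
}

class Circle implements Shape {
  constructor(private r: number) {}
  area() { return Math.PI * this.r * this.r; }
}

// totalArea is closed for modification: it never changes
// when a new kind of Shape is introduced
function totalArea(shapes: Shape[]): number {
  return shapes.reduce((sum, s) => sum + s.area(), 0);
}

console.log(totalArea([new Rectangle(2, 3), new Circle(1)])); // 6 + Math.PI
```

    Adding a `Triangle` later means adding one class that implements `Shape`; no existing code is touched.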

    What order?

    1. Look for Single Responsibility Principle violations. These are the most common and are the source of many other issues. Reducing the size and complexity of your classes and methods will often resolve other problems.
    2. Liskov Substitution and Dependency Inversion are the next most common violations, so keep an eye out for them next
    3. When teams first begin implementing Dependency Injection, it is common for them to generate bloated interfaces that violate the Interface Segregation Principle.

    After you have identified and corrected the most obvious broad principle violations, you can start drilling into the code and looking for localized code breaches. ReSharper from JetBrains or JustCode from Telerik are invaluable tools once you get to this level.

    Once you understand common design principles, look at common design patterns to help you follow them in your projects.

  13. Do you know the common Design Principles? (Part 2 - Example)

    The hot spots identified in your solution often indicate violations of common design principles.

    CodeMetrics 3
    Figure: Check Address.Save() and Customer.LoadCustomer() looking for SOLID refactor opportunities

    The most common problem encountered will be code that violates the Single Responsibility Principle (SRP). Addressing SRP issues will see a reduction in the following 3 metrics:

    1. "Cyclomatic Complexity" - indicates that your methods are complex
    2. "High Coupling" - indicates that your class/method relies on many other classes
    3. "Number of Lines" - indicates code structures that are long and unwieldy

    Let's just look at one example.

    This code does more than one thing, and therefore breaks the Single Responsibility Principle.

    public class PrintServer
    {
        public string CreateJob(PrintJob data) { /* ... */ }
        public int GetStatus(string jobId) { /* ... */ }
        public void Print(string jobId, int startPage, int endPage) { /* ... */ }
        public List<Printer> GetPrinterList() { /* ... */ }
        public bool AddPrinter(Printer printer) { /* ... */ }
        public event EventHandler PrintPreviewPageComputed;
        public event EventHandler PrintPreviewReady;
        // ...
    }

    Figure: Bad example - This class does two distinct jobs. It creates print jobs and manages printers

    public class PrintJobManager
    {
        public string CreateJob(PrintJob data) { /* ... */ }
        public int GetStatus(string jobId) { /* ... */ }
        public void Print(string jobId, int startPage, int endPage) { /* ... */ }
    }

    public class PrinterManager
    {
        public List<Printer> GetPrinterList() { /* ... */ }
        public bool AddPrinter(Printer printer) { /* ... */ }
    }

    Figure: Good Example - Each class has a single responsibility

    Additionally, code that has high coupling violates the Dependency Inversion Principle. This makes code difficult to change, but can be resolved by implementing the Inversion of Control *and* Dependency Injection patterns.


  14. Do you know the common Design Patterns?

    Design patterns are useful for ensuring common design principles are being followed.  They help make your code consistent, predictable, and easy to maintain.

    Video: 10 Design Patterns Explained in 10 Minutes (10 min)

    Important design patterns

    • IOC | Inversion of Control
      In this pattern, control over the instantiation and management of objects is inverted, meaning that these responsibilities are handed over to an external framework like a DI container instead of being managed by the classes themselves. This separation enhances flexibility and decouples all the classes in the system.
    • DI | Dependency Injection
      DI is a form of IoC where dependencies are provided to objects rather than created by them, one instance of the dependency can be used by many. This pattern also reduces dependency coupling between components since the instantiation is handled externally, making the system easier to manage and test.
    • Factory | Factory Pattern
      It is a creational pattern that deals with the problem of creating objects without specifying the exact class of object that will be created. This is done by defining an interface or abstract class for creating an object, which subclasses decide how to implement. This pattern helps in managing and maintaining code by encapsulating how any object is created.
    • Singleton | Singleton Pattern
      This ensures that a class has only one instance and provides a global point of access to it. This pattern is used to control access to resources that are shared throughout an application, like a configuration file or connection to a database. This ensures that only a single shared instance of a class is consumed by the application.
    • Repository | Repository Pattern
      A repository abstracts the data layer, providing a collection-like interface for accessing domain objects. It centralizes common data access functionalities and promotes a more organized data access architecture. By isolating the data layer, this pattern ensures that changes to the database access code are minimized when changes to the business logic or database specifics occur.
    • Unit of Work | Unit of Work Pattern
      It is a way to keep track of everything you do during a transaction that can affect the database. When it's time to commit the transaction, it figures out everything that needs to be done to alter the database as a result of your work. This pattern is crucial for maintaining the consistency of data within the boundaries of a transaction.
    • MVC | Model View Controller
      It is an architectural pattern that separates an application into three main logical components: the model, the view, and the controller. Each of these components handles different aspects of the application's data, user interface, and control logic, respectively. This separation helps manage complexity in large applications.
    • MVP | Model View Presenter This pattern is a simpler version of MVC designed for modern applications where the user interface (the view) just displays information and responds to user inputs. In MVP, a middle-man called the presenter handles all the decision-making behind the scenes. It takes care of updating the view and reacting to user actions, making the view very simple and straightforward. This setup makes it easier to test the user interface because the view itself doesn't contain any complex logic—it just shows what the presenter tells it to.
    • Mediator | Mediator Pattern The mediator pattern uses a central object to handle communication between other objects in a system, promoting separation of concerns. This means each object doesn’t need to know about the details of how others operate, making the system easier to maintain and extend.
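    To make one of these concrete: a thread-safe Singleton in C# is commonly written with `Lazy<T>`. The `AppConfig` class below is a made-up example:

```csharp
using System;

// Thread-safe Singleton using Lazy<T> - the instance is created
// only on first access, and Lazy<T> handles the locking
public sealed class AppConfig
{
    private static readonly Lazy<AppConfig> _instance =
        new Lazy<AppConfig>(() => new AppConfig());

    public static AppConfig Instance => _instance.Value;

    // Private constructor stops other code from creating instances
    private AppConfig() { }
}
```

    Every consumer that reads `AppConfig.Instance` receives the same shared object.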

    design patterns
    Figure: Developers use design patterns to build quality solutions

    Other design patterns

    • Decorator | Decorator Pattern The decorator pattern allows behavior to be added to individual objects, either statically or dynamically, without affecting the behavior of other objects from the same class. This pattern is useful for adding new features to objects without changing their structure, making it easier to extend the functionality of an object.
    • Command | Command Pattern The command pattern encapsulates a request as an object, allowing you to parameterize clients with queues, requests, and operations. This pattern helps in decoupling the sender and receiver of a request, making it easier to implement undo and redo functionalities.
    • Strategy | Strategy Pattern The strategy pattern defines a family of algorithms, encapsulates each one, and makes them interchangeable. This pattern allows the algorithm to vary independently from the client that uses it, making it easier to switch between different algorithms at runtime.
    • Specification | Specification Pattern The specification pattern is used to define business rules that can be combined to form complex rules. This pattern helps in separating the logic for checking business rules from the domain model, making it easier to maintain and reuse the rules.
    • Prototype | Prototype Pattern The prototype pattern is used to create new objects by copying a model instance. This method helps avoid the complexity of using subclasses and reduces the performance cost associated with creating new objects using the standard method (e.g., with the 'new' keyword), especially when it's too costly for the application.
    • Builder | Builder Pattern The builder pattern is a flexible design pattern used to construct complex objects. It separates the process of building an object from the object's representation, making it easier to create different representations of an object using the same construction process.
    • Facade | Facade Pattern The facade pattern simplifies interaction with complex subsystems by providing a single, straightforward interface. This makes the subsystem easier to use and maintain by hiding its complexities.
    • Proxy | Proxy Pattern The proxy pattern uses a placeholder or proxy object to control access to another object. It acts like a representative for the original object, managing interactions and access to it. This helps in controlling how and when the actual object is accessed.
    • Iterator | Iterator Pattern The iterator pattern lets you go through elements in a collection one by one without revealing how the collection is structured. It provides a standard way to loop through different types of collections, making it easier to access their elements.
    • Observer | Observer Pattern The observer pattern defines a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically. This pattern is useful for building loosely coupled systems where objects can communicate with each other without knowing each other's details.
    • State | State Pattern The state pattern allows an object to change its behavior when its internal state changes. It encapsulates the behavior of an object into separate classes, making it easier to add new states and transitions without changing the object's code.

    By leveraging these design patterns, developers can solve complex problems more efficiently and ensure that their applications are robust, scalable, and easy to maintain. It is assumed knowledge that you know these design patterns. If you don't, read about them on the sites above or watch the PluralSight videos on Design Patterns.

    It is important to know when the use of a pattern is appropriate. Patterns can be useful, but they can also be harmful if used incorrectly.

  15. Do you use the Dependency Injection design pattern?

    Appropriate use of design patterns can ensure your code is maintainable and easy to read.

    Dependency Injection

    We should implement Inversion of Control by using the Dependency Injection pattern to decrease the direct coupling of our classes. Separating the creation of objects or service instances from their usage allows for more flexibility and testability.

    Let's look at an example. In this code, our controller is tightly coupled to the ExampleService and, as a result, there is no way to unit test the controller.


    public class HomeController
    {
        private readonly IExampleService _service;

        public HomeController()
        {
            _service = new ExampleService();
        }

        public ActionResult Index()
        {
            return View(_service.GetSomething());
        }
    }

    ❌ Figure: Bad example - Controller coupled with ExampleService.

    public class HomeController
    {
        private readonly IExampleService _service;

        public HomeController()
        {
            _service = Container.Instance.Resolve<IExampleService>();
        }

        public ActionResult Index()
        {
            return View(_service.GetSomething());
        }
    }

    ❌ Figure: Bad example - 2nd attempt using an Inversion of Control container but *not* using dependency injection. A dependency now exists on the Container.

    This is bad code because we removed one coupling but added another one (the container).

    public class HomeController
    {
        private readonly IExampleService _service;

        public HomeController(IExampleService service)
        {
            _service = service;
        }

        public ActionResult Index()
        {
            return View(_service.GetSomething());
        }
    }

    ✅ Figure: Good example - code showing using dependency injection. No static dependencies.
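    The remaining question is where the concrete ExampleService comes from. A minimal sketch, assuming ASP.NET Core's built-in container, registers it once at startup:

```csharp
// Program.cs - register the implementation against the interface once;
// the container then injects it into any constructor that asks for IExampleService
builder.Services.AddScoped<IExampleService, ExampleService>();
```

    The controller never references the container or the concrete type - it only declares what it needs in its constructor.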

  16. Do you code against interfaces?

    Appropriate use of design patterns can ensure your code is maintainable and easy to read.

    Code against Interfaces

    Always code against an interface rather than a concrete implementation. Use dependency injection to control which implementation the interface uses. By doing this you create a contract for that service which the rest of the codebase has to adhere to.

    By creating an interface for each service and programming against the interface, you can easily swap out the implementation of the service without changing the code that uses the service.

    It is important to also control the scope of the injection. For example, in an ASP.NET Core 8 application you have the option to register the concrete implementation in the DI container as a singleton, scoped, or transient service. Each has a different lifetime in the application and should be chosen according to the requirement.
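    A sketch of the three lifetimes, assuming ASP.NET Core's built-in container (the service names here are illustrative):

```csharp
// Singleton - one instance for the whole application lifetime
builder.Services.AddSingleton<IClock, SystemClock>();

// Scoped - one instance per HTTP request
builder.Services.AddScoped<IOrderService, OrderService>();

// Transient - a new instance every time one is resolved
builder.Services.AddTransient<IEmailBuilder, EmailBuilder>();
```

    As a rule of thumb: stateless services can be singletons, anything holding per-request state (like a DbContext) should be scoped, and lightweight stateful helpers can be transient.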

    Code against interfaces   bad ❌ Figure: Bad Example - Referencing the concrete EF context

    This is bad code because the controller is directly dependent on the implementation of the EF context. This also increases the effort required for unit testing.

    Code against interfaces   good ✅ Figure: Good Example - Programming against the interface

    This is good because now you can test the controller and the services independently. Also the controller only talks to the service through the functionality exposed by the interface, enforcing encapsulation.

  17. Do you look for GRASP Patterns?

    GRASP stands for General Responsibility Assignment Software Patterns and describes guidelines for working out what objects are responsible for what areas of the application.

    The fundamentals of GRASP are the building blocks of Object-Oriented design. It is important that responsibilities in your application are assigned predictably and sensibly to achieve maximum extensibility and maintainability.

    GRASP consists of a set of patterns and principles that describe different ways of constructing relationships between classes and objects.

    • **Creator** - A specific class is responsible for creating instances of specific other classes (e.g. a Factory Pattern)
    • **Information Expert** - Responsibilities are delegated to the class that holds the information required to handle that responsibility
    • **Controller** - System events are handled by a single "controller" class that delegates to other objects the work that needs to be done
    • **Low Coupling** - Classes should have a low dependency on each other, have low impact if changed, and have high potential for reuse
    • **High Cohesion** - Objects should be created for a single set of focused responsibilities
    • **Polymorphism** - The variation in behaviour of a type of object is the responsibility of that type's implementation
    • **Pure Fabrication** - Any class that does not represent a concept in the problem domain
    • **Indirection** - The responsibility of mediation between two classes is handled by an intermediate object (e.g. a Controller in the MVC pattern)
    • **Protected Variations** - Variations in the behaviour of other objects are abstracted away from the dependent object by means of an interface and polymorphism

    Tip: Visual Studio's Architecture tools can help you visualise your dependencies. A good structure will show calls flowing in one direction.

    architecture responsibility bad
    Figure: Bad Example - Calls are going in both directions, which hints at a poor architecture

    architecture responsibility good
    Figure: Good Example - Calls are flowing in one direction, hinting at a more sensible arrangement of responsibilities
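    As a small illustration of the Information Expert principle, the class that holds the data should be the one that answers questions about it. The `Order` and `OrderLine` types below are hypothetical:

```csharp
using System.Collections.Generic;
using System.Linq;

public record OrderLine(decimal Price, int Quantity);

// Information Expert: Order owns the line items,
// so Order is responsible for calculating the total
public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public void AddLine(OrderLine line) => _lines.Add(line);

    public decimal Total => _lines.Sum(l => l.Price * l.Quantity);
}
```

    Callers never reach into the line items to sum them up themselves - the responsibility stays with the class that has the information.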

  18. Code - Can you read code down and across?

    Reading down should show you the what (all the intent)

    Reading across should show you the how (F12)

  19. Do you start reading code?

    Great code isn't just about making computers do stuff; it's about making sure humans can understand and work with it too. Good code is like a well-written story - it's clear, easy to read, and everything has a name that makes sense. There's no unnecessary stuff thrown in, and it's all neatly organized. It's not just a set of instructions; it's a roadmap that explains not only "how" things work but also "why" they work that way when you read through it.

    “Aim for simplicity. I want the code to read like poetry”
    - Terje Sandstrom

    Good code characteristics

    • Is clear and easy to read
    • Has consistent and meaningful names for everything
    • Has no repeated or redundant code
    • Has neat formatting
    • Explains "why" when you read down, and "how" when you read left to right
    public IEnumerable<Customer> GetSupplierCustomersWithMoreThanZeroOrders(int supplierId)
    {
        var supplier = repository.Suppliers.SingleOrDefault(s => s.Id == supplierId);
        if (supplier == null)
        {
            return Enumerable.Empty<Customer>();
        }

        var customers = supplier.Customers
            .Where(c => c.Orders.Count > 0);

        return customers;
    }
    Figure: This code explains what it is doing as you read left to right, and why it is doing it when you read top to bottom

    Tip: Read the book Clean Code: A Handbook of Agile Software Craftsmanship by Robert. C. Martin.

    Good code is declarative

    For example, say we want to count how many products in each category have a unit price of at least 20.

    Dictionary<string, ProductGroup> groups = new Dictionary<string, ProductGroup>();
    foreach (var product in products)
    {
        if (product.UnitPrice >= 20)
        {
            if (!groups.ContainsKey(product.CategoryName))
            {
                ProductGroup productGroup = new ProductGroup();
                productGroup.CategoryName = product.CategoryName;
                productGroup.ProductCount = 0;
                groups[product.CategoryName] = productGroup;
            }
            groups[product.CategoryName].ProductCount++;
        }
    }

    var result = new List<ProductGroup>(groups.Values);
    result.Sort(delegate(ProductGroup groupX, ProductGroup groupY) {
        return
            groupX.ProductCount > groupY.ProductCount ? -1 :
            groupX.ProductCount < groupY.ProductCount ? 1 : 0;
    });

    Figure: Bad example - Not using LINQ

    Tip: Resharper can automatically convert this code.

    var result = products
        .Where(product => product.UnitPrice >= 20)
        .GroupBy(product => product.CategoryName)
        .OrderByDescending(group => group.Count())
        .Select(group => new { CategoryName = group.Key, ProductCount = group.Count() });

    Figure: Good example - using LINQ

    Tip: For more information on why declarative programming (aka LINQ, SQL, HTML) is great, watch the TechDays 2010 Keynote by Anders Hejlsberg. Anders explains why it's better to have code "tell what, not how".

    Clean HTML

    Anyone who creates their own HTML pages today should aim to make their markup semantically correct. For more information on semantic markup, see HTML Semantic Elements.

    For example, <p> is for a paragraph, not for defining a section.

    Clean Front-End code

    Clean code and consistent coding standards are not just for server-side code. It is important that you apply your coding standards to your front-end code as well e.g. JavaScript, TypeScript, React, Angular, Vue, CSS, etc.

    You should use a linter and code formatter like Prettier to make development easier and more consistent.

  20. Do you use a ‘Precision Mocker’ like NSubstitute instead of a ‘Monster Mocker’ like Microsoft Fakes?

    Using a precision mocking framework (such as NSubstitute) encourages developers to write maintainable, loosely coupled code.

    Mocking frameworks allow you to replace a section of the code you are about to test with an alternative piece of code. For example, this allows you to test a method that performs a calculation and saves to the database, without actually requiring a database.

    There are two types of mocking framework.

    The Monster Mocker (e.g. Microsoft Fakes or TypeMock)

    This type of mocking framework is very powerful and allows replacing code that wasn't designed to be replaced. This is great for testing legacy code, tightly coupled code with lots of static dependencies (like DateTime.Now), and SharePoint.

    monster mocker
    Figure: Bad Example – Our class is tightly coupled to our authentication provider, and as we add each test we are adding *more* dependencies on this provider. This makes our codebase less and less maintainable. If we ever want to change our authentication provider “OAuthWebSecurity”, it will need to be changed in the controller, and every test that calls it

    The Precision Mocker (e.g. NSubstitute)

    This mocking framework takes advantage of well written, loosely coupled code.

    The mocking framework creates substitute items to inject into the code under test.

    nsubstitute 1
    Figure: Good Example - An interface describes the methods available on the provider

    nsubstitute 2
    Figure: Good Example - The Product Repository is injected into the ProductService class (via constructor injection)

    nsubstitute 3
    Figure: Good Example - The code is loosely coupled. The ProductService is dependent on an interface of the Product Repository, which is injected into the ProductService via its constructor. The unit test can easily create a mock object of the Product Repository and substitute it for the dependency. NSubstitute is one of the most popular mocking libraries.
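    A typical test with this setup looks something like the following sketch. It assumes the NSubstitute package, and the `IProductRepository`/`ProductService`/`Product` types are illustrative stand-ins for the ones in the figures:

```csharp
using System.Collections.Generic;
using NSubstitute;

// Create a substitute for the dependency - no database needed
var repository = Substitute.For<IProductRepository>();
repository.GetAll().Returns(new List<Product> { new Product { Name = "Widget" } });

// Inject the substitute via the constructor, as in the figures above
var service = new ProductService(repository);

// The service can now be exercised without any real data access
var products = service.GetProducts();
```

    Because the dependency is an interface supplied through the constructor, the test swaps it out with one line instead of shimming framework internals.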

  21. Do you have opportunities to convert inline SQL to Linq to Entities?

    Look for inline SQL to see whether you can replace it with Linq to Entities.
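    For instance, a sketch of the conversion (the EF context, connection, and `lastName` variable are illustrative):

```csharp
// Bad: inline SQL built by string concatenation - hard to maintain
// and open to SQL injection
var command = new SqlCommand(
    "SELECT * FROM Customers WHERE LastName = '" + lastName + "'", connection);

// Good: the equivalent Linq to Entities query - the provider
// generates parameterized SQL for you
var customers = context.Customers
    .Where(c => c.LastName == lastName)
    .ToList();
```

    The Linq version is strongly typed, refactor-safe, and parameterized by default.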

    speed camera
    Figure: SQL Injection for Speed Cameras :-)

  22. Do you look for opportunities to use Linq?

    Linq is a fantastic addition to .NET which lets you write clear and beautiful declarative code. Linq allows you to focus more on the what and less on the how.

    You should look for opportunities to replace your existing code with Linq.

    For example, replace your foreach loops with Linq.

    var lucrativeCustomers = new List<Customer>();
    foreach (var customer in Customers)
    {
        if (customer.Orders.Count > 0)
        {
            lucrativeCustomers.Add(customer);
        }
    }

    Figure: Bad Example - imperative programming using a foreach loop

    var lucrativeCustomers = Customers.Where(c => c.Orders.Count > 0).ToList();

    Figure: Good Example - declarative programming using Linq

  23. Do you use the repository pattern for data access?

    The repository pattern is a great way to handle your data access layer and should be used wherever you have a need to retrieve data and turn it into domain objects.

    The advantages of using a repository pattern are:

    • Abstraction away from the detail of how objects are retrieved and saved
    • Domain objects are ignorant of persistence - persistence is handled completely by the repository
    • Testability of your code without having to hit the database (you can just mock the repository)
    • Reusability of data access code without having to worry about consistency

    Even better, by providing a consistent repository base class, you can get all your CRUD operations while avoiding any plumbing code.

    Tip: Entity Framework provides a great abstraction for data access out of the box. See Jason’s Clean Architecture with ASP.NET Core talk for more information
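    A minimal sketch of such a base abstraction - the interface shape and the in-memory implementation below are illustrative, not a prescribed API:

```csharp
using System.Collections.Generic;
using System.Linq;

// A generic repository interface gives consistent CRUD operations
// without duplicating plumbing code in every repository
public interface IRepository<T> where T : class
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    void Add(T entity);
    void Remove(T entity);
}

// Illustrative in-memory implementation - handy for unit tests;
// in production this would typically wrap a DbContext
public class InMemoryRepository<T> : IRepository<T> where T : class
{
    private readonly Dictionary<int, T> _items = new Dictionary<int, T>();
    private int _nextId = 1;

    public T GetById(int id) => _items[id];
    public IEnumerable<T> GetAll() => _items.Values;
    public void Add(T entity) => _items[_nextId++] = entity;

    public void Remove(T entity)
    {
        var key = _items.First(kvp => ReferenceEquals(kvp.Value, entity)).Key;
        _items.Remove(key);
    }
}
```

    Domain code depends only on `IRepository<T>`, so swapping the persistence mechanism (or mocking it in tests) requires no changes to the callers.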

  24. Do you look for large strings in code?

    Long hard-coded strings in a codebase can be a sign of poor architecture.

    To make hard-coded strings easier to find, consider highlighting them in your IDE.

    Figure: Bad Example - The connection string is hard-coded and isn't easy to see in the IDE

    longstringbadexample2
    Figure: Better Example - The connection string is still hard-coded, but at least it's very visible to the developers

    Figure: Good Example - The connection string is now stored in configuration and we don't have a long hard-coded string in the code

  25. Do you decide on the level of the verboseness? E.g. ternary operators

    Do you believe in being verbose in your code (don't compress code and don't use too many ternary operators)?

    Different developers have different opinions.  It is important your developers work as a team and decide together how verbose their code should be.

    What is your opinion on this?  Contribute to the discussion on Stack Overflow.
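    For example, the same logic can be written compactly with a ternary operator or spelled out with if/else. Both are defensible, which is why the team should pick one style together (the `Wording` class is a made-up example):

```csharp
public static class Wording
{
    public static string Pluralize(int count)
    {
        // Compact: a single ternary expression
        return count == 1 ? "item" : "items";
    }

    public static string PluralizeVerbose(int count)
    {
        // Verbose: the same decision spelled out with if/else
        if (count == 1)
        {
            return "item";
        }

        return "items";
    }
}
```

    A single ternary is usually fine; nested or chained ternaries are where most teams draw the line.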

  26. Do you review the code comments?

    Comments can be useful for documenting code but should be used properly. Some developers like seeing lots of code comments and some don't.

    Some tips for including comments in your code are:

    1. Comments aren't always the solution.  Sometimes it's better to refactor the comments into the actual method name. If your method needs a comment to tell a developer what it does, then the method name is probably not very clear.
    2. Comments should never say *what* the code does; they should say *why* the code does it.  Any decent developer can work out what a piece of code does.
    3. Comments can also be useful when code is missing.  For example, why there is no locking code around a thread-unsafe method.

    // returns the Id of the first customer with the matching last name
    public int GetResult(string lastname)
    {
        // get the first matching customer from the repository
        return repository.Customer.First(c => c.LastName.StartsWith(lastname)).Id;
    }

    Figure: Bad Example - The first comment is only valuable because the method is poorly named, while the second describes *what* is happening, not *why*

    public int GetFirstCustomerIdByLastName(string lastname)
    {
        // we use StartsWith because the legacy system sometimes padded with spaces
        return repository.Customer.First(c => c.LastName.StartsWith(lastname)).Id;
    }

    Figure: Good Example - The method has been renamed so no comment is required, and the comment explains *why* the code has been written in that way

  27. Do you use the best Code Analysis tools?

    Whenever you are writing code, you should always make sure it conforms to your team's standards. If everyone is following the same set of rules; someone else’s code will look more familiar and more like your code - ultimately easier to work with.

    No matter how good a coder you are, you will always miss things from time to time, so it's a really good idea to have a tool that automatically scans your code and reports on what you need to change in order to improve it.

    Visual Studio has a great Code Analysis tool to help you look for problems in your code. Combine this with JetBrains' ReSharper and your code will be smell free.

    Figure: You wouldn't play cricket without protective gear and you shouldn't code without protective tools

    The levels of protection are:

    Level 1

    Get ReSharper to green on each file you touch. You want the files you work on to be left better than when you started. See Do you follow the boyscout rule?

    You can run through a file and tidy it very quickly if you know 2 great keyboard shortcuts:

    • Alt + [Page Down/Page Up] : Next/Previous Resharper Error / Warning
    • Alt + Enter: Smart refactoring suggestions

    48bc81 image001
    Figure: ReSharper will show Orange when it detects that there is code that could be improved

    ReSharper green
    Figure: ReSharper will show green when all code is tidy

    Level 2

    Use SSW CodeAuditor.

    Figure: CodeAuditor shows a lot of warnings in this test project

    Note: Document any rules you've turned off.

    Level 3

    Use SSW LinkAuditor.

    Note: Document any rules you've turned off.

    Level 4

    Use StyleCop to check that your code has consistent style and formatting.

    Figure: StyleCop shows a lot of warnings in this test project

    Level 5

    Run Code Analysis (was FxCop) with the default settings or ReSharper with Code Analysis turned on.

    Figure: Run Code Analysis in Visual Studio

    Figure: The Code Analysis results indicate there are 17 items that need fixing

    Level 6

    Ratchet up your Code Analysis Rules until you get to 'Microsoft All Rules'.

    Figure: Start with the Minimum Recommended Rules, and then ratchet up.

    Level 7

    Document any rules you've turned off.

    All of these rules allow you to disable rules that you're not concerned about. There's nothing wrong with disabling rules you don't want checked, but you should make it clear to developers why those rules were removed.

    Create a GlobalSuppressions.cs file in your project with the rules that have been turned off and why.

    suppressions file
    Figure: The suppressions file tells Code Analysis which rules it should disable for specific code blocks

    More Information: Do you make instructions at the beginning of a project and improve them gradually?

    Level 8

    The gold standard is to use SonarQube, which gives you the code analysis of the previous levels as well as the ability to analyze technical debt and to see which code changes had the most impact on technical debt.

    2016 06 08 12 59 38
    Figure: SonarQube workflow with Visual Studio and Azure DevOps

    2016 06 08 12 59 53
    Figure: SonarQube gives you the changes in code analysis results between each check-in

  28. Do you use the best trace logging library?

    Did you know that writing your own logging infrastructure code wastes time? There are awesome logging abstractions in .NET Core and .NET 5+ that you should use instead!

    These abstractions allow you to:

    • Create log entries in a predictable and familiar fashion - you use the same patterns for logging in a Background Service as you would in a Blazor WASM app (just some slightly different bootstrapping 😉)
    • Use Dependency Injection; your code doesn't take a dependency on a particular framework (as they are abstractions)
    • Filter output based on severity (Verbose/Debug/Info/Warning/Error) - so you can dial it up or down without changing code
    • Have different logs for different components of your application (e.g. a Customer Log and an Order Log)
    • Multiple logging sinks - where the logs are written to e.g. log file, database, table storage, or Application Insights
    • Supports log message templates allowing logging providers to implement semantic or structured logging
    • Can be used with a range of 3rd party logging providers

    Read more at Logging in .NET Core and ASP.NET Core
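    Using the abstractions typically looks like this sketch - `ILogger<T>` comes from `Microsoft.Extensions.Logging` and is injected rather than constructed (the `OrdersController` class is illustrative):

```csharp
using Microsoft.Extensions.Logging;

public class OrdersController
{
    private readonly ILogger<OrdersController> _logger;

    // The logger is injected - no dependency on a concrete logging framework
    public OrdersController(ILogger<OrdersController> logger)
    {
        _logger = logger;
    }

    public void GetOrder(int id)
    {
        // Message template, not string interpolation - the values are
        // captured as structured data by providers that support it
        _logger.LogInformation("Getting order {OrderId}", id);
    }
}
```

    Because the class depends only on the abstraction, the sink (console, Seq, Application Insights, etc.) is chosen at startup without touching this code.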

    trace logging bad
    Figure: Bad example - Using Debug or Trace for logging, or writing hard coded mechanisms for logging does not allow you to configure logging at runtime

    trace logging bad 2
    Figure: Bad example - Rolling your own logging components means they lack functionality, and have not been tested as thoroughly for quality or performance as established libraries like log4net

    _logger.LogInformation("Getting item {Id} at {RequestTime}", id, DateTime.Now);

    Figure: Good example - Using message templates allows persisting structured log data (DateTime is a complex object)

    Figure: Good example - Seq provides a powerful UI for searching and viewing your structured logs

  29. Do you look for Code Coverage?

    Code Coverage shows how much of your code is covered by tests and can be a useful tool for showing how effective your unit testing strategy is. However, it should be looked at with caution.

    • You should focus more on the quality and less on the quantity of tests
    • You should write tests for fragile code first and not waste time testing trivial methods
    • Remember the 80-20 rule - very high test coverage is a noble goal, but there are diminishing returns
    • If you're modifying code, write the test first, then change the code, then run the test to make sure it passes (aka red-green-refactor).
      Tip: This is made very easy by the "Live Unit Testing" feature in Visual Studio - see Do you use Live Unit Testing to see code coverage?
    • You should run your tests regularly (see Do you follow a Test Driven Process?) and, ideally, the tests will be part of your deployment pipeline

    Figure: Code Coverage metrics in Visual Studio. This solution has high code coverage (around 80% on average)

    Tip: To make sure your unit test coverage never decreases, make use of tools such as SonarQube and GitHub action checks to gate your deployment pipelines on non-decreasing test coverage.

  30. Do you use the Kent Beck philosophy?

    Kent Beck is the man credited with "rediscovering" the Test Driven Development methodology. It's a great way to ensure your code works as expected and it will allow you to catch errors that occur down the track.

    Based on Kent Beck's principles, you should:

    1. Write code as it spews out of your brain
    2. Do lots of small refactoring rather than big architectural rewrites
    3. If you are going to change code, add a test first (aka red-green-refactor)

    Tip: Read Michael Feather’s book, "Working Effectively with Legacy Code" for some insights into effective unit testing.

    Tip: Don't focus on the percentage of code coverage, focus on whether tests will touch the lines of code you care about.

  31. Do you know what a container is?

    The main benefits of containers are that they can reduce your running and maintenance costs, and you'll hear no more "it worked on my machine"...

    A container is like a lightweight virtual machine that does not have its own operating system installed. It already contains all the libraries that your web application needs.

    Video: What you need to know about Containers (in under 3 minutes!)

    Related keywords: Docker, AKS, Kubernetes, Container Orchestration.

  32. Do you know the best dependency injection container?

    IoC (Inversion of Control) and Dependency Injection

    IoC is a design pattern that shifts the responsibility of managing object dependencies from the individual classes to a centralized container or framework. This decoupling enhances flexibility and scalability by allowing the framework to handle object creation and wiring.

    Dependency Injection is a technique for achieving Inversion of Control (IoC). It involves depending on an interface that is passed in as a parameter, allowing the caller to determine which implementation of the interface will be used.
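    For example, here is a minimal sketch of constructor injection (the `IEmailSender` and `OrderService` names are hypothetical):

    ```csharp
    public interface IEmailSender
    {
        void Send(string to, string body);
    }

    public class SmtpEmailSender : IEmailSender
    {
        public void Send(string to, string body) { /* send via SMTP */ }
    }

    public class OrderService
    {
        private readonly IEmailSender _emailSender;

        // The dependency arrives via the constructor - OrderService never decides
        // which implementation it gets; the caller (or the container) does
        public OrderService(IEmailSender emailSender) => _emailSender = emailSender;

        public void PlaceOrder(string customerEmail)
        {
            // ...save the order...
            _emailSender.Send(customerEmail, "Thanks for your order!");
        }
    }
    ```

    In production the container supplies `SmtpEmailSender`; in tests you can pass a fake implementation instead.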

    IoC containers

    IoC containers are powerful tools that apply the IoC principle and automatically handle dependency resolution and object instantiation. They act as central repositories for services and take care of managing the lifespan of objects. At SSW we recommend using .NET built-in Dependency Injection as default. Read more on Dependency injection in ASP.NET Core.

    However, in larger applications, manually registering dependencies can become cumbersome and easy to forget. In those cases, we recommend using Scrutor. While it isn't a DI container itself, it works on top of the .NET built-in Dependency Injection capabilities and adds assembly scanning to automatically register discovered types.
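    For example, here is a sketch of Scrutor's assembly scanning (requires the Scrutor NuGet package; the `IRepository` marker interface and repository classes are hypothetical):

    ```csharp
    using Microsoft.Extensions.DependencyInjection;

    var services = new ServiceCollection();

    // One registration covers every repository in the assembly -
    // no more forgetting to register a newly added class
    services.Scan(scan => scan
        .FromAssemblyOf<IRepository>()
        .AddClasses(classes => classes.AssignableTo<IRepository>())
        .AsImplementedInterfaces()
        .WithScopedLifetime());

    public interface IRepository { }
    public class CustomerRepository : IRepository { }
    public class OrderRepository : IRepository { }
    ```

    Adding a new `ProductRepository` later would be picked up by the same scan with no extra registration code.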

    .NET IoC containers:

    When selecting a Dependency Injection container it is worth considering a number of factors such as:

    • Ease of use
    • Configurability: Fluent API and/or XML Configuration
    • Performance (Unless you have a very high traffic application the difference should be minimal)
    • NuGet Support

    The top tools all contain comparable functionality. In practice which one you use makes little difference, especially when you consider that your container choice should not leak into your domain model.

    Important: Unless a specific shortfall is discovered with the container your team uses, you should continue to use the same container across all of your projects, become an expert with it and invest time on building features rather than learning new container implementations.

    di container bad
    Figure: Bad Example - Ninject and StructureMap were top containers but are no longer actively developed. Together with Autofac, they do not support the latest version of .NET

    Examples of using IoC container

    public class Program
    {
        private static void Main()
        {
            IContainer container = IoC.Initialize();
            new BadgeTaskJob(container).Run();
        }
    }

    Bad example - Using the StructureMap IoC container but passing the container around instead of injecting dependencies

    var builder = Host.CreateApplicationBuilder(args);
    builder.Services.AddTransient<BadgeTaskJob>();

    using IHost host = builder.Build();
    using var scope = host.Services.CreateScope();
    scope.ServiceProvider.GetRequiredService<BadgeTaskJob>().Run();

    Good example - Using .NET built-in Dependency Injection for a console app

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddSingleton<ITelemetryInitializer, AppInsightsTelemetryInitializer>();
    var app = builder.Build();

    Good example - Using ASP.NET Core built-in Dependency Injection for a web app

  33. Do you know what to do about ASP.NET Core default dependency injection?

    We already know what the best IoC container is, but how does ASP.NET Core's default dependency injection compare?

    ASP.NET Core includes default dependency injection for new Web Apps in the Startup.cs file. This is adequate for simple projects, but it is not designed to compete with the features of alternative containers (like Autofac's convention-based registration).

    "The default services container provided by ASP.NET Core provides a minimal feature set and is not intended to replace other containers."

    Steve Smith, (ASP.NET Dependency Injection)

    You can quickly flag this and other issues by using the SSW Code Auditor.

    Here is an example of rewiring the default code to AutoFac with the SSW's Music Store app:

    SSW DependencyInjection Example Default Bad
    Figure: Bad Example - The default dependency injection for ASP.NET Core

    SSW DependencyInjection Example Default Good
    Figure: Good Example - The bad example rewired to utilize AutoFac. Red boxes outline the modified code

  34. Do you use subdomains instead of virtual directories?

    Using subdomains instead of virtual directories has 2 main benefits:

    1. It is easier to host different sections of your website on different platforms
    2. It is easier to host different sections of your website in different geographic locations

    ::: greybox

    • /

    :::
    ::: bad
    Figure: Bad Example - Virtual directories used to distinguish organizations
    :::

    ::: greybox

    •

    :::
    ::: good
    Figure: Good Example - Subdomains used to distinguish organizations
    :::
  35. Do you use the best middle tier .NET libraries?

    Don't waste time evaluating which middle tier .NET libraries to use. Most of the commonly used libraries are very similar in functionality. By sticking to a library, you can also increase your expertise in it, reducing development time in the future.

    Great products include:

  36. Do you use the best Web UI libraries?

    Don't waste time evaluating which Web UI libraries to use. Most of the commonly used libraries are very similar in functionality. The recommended library is Bootstrap.

    It's the most popular available framework today, which means more people involved in the project, more tutorials and articles from the community, more real-world examples/websites, more third-party extensions, and better integration with other web development products

    Figure: Leader among frameworks today, Bootstrap toolkit is recommended to build successful websites

    The 3 things a developer needs to know to get up and running quickly with ASP.NET MVC

    Bootstrap & ASP.NET MVC - Intro / Quickstart

    Other useful frameworks

    Now that you saved a lot of UI development time by using Bootstrap, you can play around with other useful frameworks.

    • KendoUI for enhanced HTML and jQuery controls
    • SignalR for real-time web functionality
  37. Do you use your IoC container to inject dependencies - and not as a singleton container?

    A common practice we see when developers start to use IOC containers is that the IOC container becomes a central service and configuration repository that all the components across the project become dependent upon.

    Using an IoC container in this manner can bring advantages such as centralised configuration and dependency lifecycle and scope management. However, if implemented correctly, your classes can benefit from all of the above without any direct dependency on the IoC container itself.

    IOC badexample

    Figure: Bad Example - The dependency is manually fetched from the IoC container. This class now has a hard dependency on your IoC container

    IOC GoodExample

    Figure: Good example - The dependency is enforced via a constructor parameter. The class does not need to know anything about the IoC container being used and can potentially be reused in different contexts and with different IoC containers.
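    The two figures above can be sketched in code; `Container`, `IEmailer` and the service names here are hypothetical stand-ins for your container's API:

    ```csharp
    using System;

    public interface IEmailer
    {
        void Send(int invoiceId);
    }

    // Hypothetical static service locator (stand-in for a real IoC container)
    public static class Container
    {
        public static T Resolve<T>() => throw new NotImplementedException();
    }

    // ❌ Bad: the class fetches its dependency from the container itself
    public class InvoiceServiceBad
    {
        public void Send(int invoiceId)
        {
            var emailer = Container.Resolve<IEmailer>(); // hard dependency on the container
            emailer.Send(invoiceId);
        }
    }

    // ✅ Good: the dependency is enforced via a constructor parameter
    public class InvoiceServiceGood
    {
        private readonly IEmailer _emailer;

        public InvoiceServiceGood(IEmailer emailer) => _emailer = emailer;

        public void Send(int invoiceId) => _emailer.Send(invoiceId);
    }
    ```

    `InvoiceServiceGood` can be tested with a fake `IEmailer` and reused with any container, whereas `InvoiceServiceBad` cannot be used without the container at all.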

    For more information and insight on IOC usage, read the following:

  38. Do you know the importance of paying back Technical Debt?

    What is Technical Debt?

    Technical Debt is when you defer work that needs doing in your code. And, just like when you defer a payment and accrue financial debt, Technical Debt must be repaid, and it accumulates interest (in the form of reduced velocity) while it remains unpaid.

    Technical Debt can occur for all kinds of reasons, for example:

    • When you take a shortcut or implement a hack to get a feature out quickly. Sometimes this is because, as a team (including the Product Owner), you've made a conscious decision to take this shortcut because, for example, you need a cut-down version of the feature urgently, or in other cases because of an open bug in a library you depend on.
    • Code that is hard to understand after reading it multiple times or a single method that spans multiple screens is also considered to be Technical Debt.

    Systems need to have features added to them to remain useful (or competitive). As new features are added, more Technical Debt will often be introduced. Even without new features, any system will accumulate Technical Debt as it ages.

    IMPORTANT: When you become aware of Technical Debt in a product, you must add it to the backlog. Whether you have discovered the Technical Debt or added it intentionally, either way the discussion and decision must be recorded in a PBI. This allows the team to factor paying it back into their Sprint planning.

    Example: A developer takes a shortcut to get some early feedback on a new feature

    • $100 - full feature
    • $20 - feature with shortcuts (no tests, dirty code, whatever it takes)
    • $80 - IOU via PBI in the backlog e.g. [FeatureName] – Technical Debt - Planned

    waf tech debt backlog northwind 1710232021944
    Figure: Good example - Technical Debt is very visible to the Product Owner

    What are the consequences of Technical Debt?

    • Fewer features over time for the customers
    • More molasses (developer friction) for the developers

    The 3 types of Technical Debt

    1. Planned Technical Debt

    Sometimes you want to quickly implement a new feature to get it out and receive some feedback.

    PBI: [FeatureName] – Technical Debt - Planned

    Note: Martin Fowler calls this "Deliberate Technical Debt".

    2. Discovered Technical Debt

    During a code review, you or the team notice something as part of the system that is clearly Technical Debt. This code is hindering the ability to add new features or is hard to read/understand.

    PBI: [FeatureName] – Technical Debt - Discovered

    Note: Martin Fowler calls this "Inadvertent Technical Debt".

    3. Unavoidable Technical Debt

    Every system will accumulate Technical Debt over time. For example, if you built an API with ASP.NET Core 2.0 (which is now out of support), you have Technical Debt because that version is no longer supported. This kind of Technical Debt can not only negatively impact the productivity of the team, but it can also introduce a security risk. Another example is that the architecture you selected may have been right based on the original spec, but as requirements change or new requirements emerge, this may no longer be the case. The team can choose to refactor now, or accept the Technical Debt and continue to deliver features on the current architecture.

    PBI: [FeatureName] - Technical Debt - Unavoidable

    Note: Martin Fowler would also classify this as "Inadvertent Technical Debt".

    How to repay Technical Debt

    Just like a business that receives pre-payment from customers, a software team should regularly review the size of their liabilities (the list of PBIs with Technical Debt).

    At the Sprint Planning:

    1. Show the Product Owner the list of outstanding Technical Debt PBIs
    2. The Product Owner should make sure that the developers review the list of Technical Debt and pick at least 1 PBI to pay back during the upcoming Sprint


    techdebt github
    Figure: Screenshot of code with Technical Debt comment and link to GitHub issue

    techdebt backlog
    Figure: Screenshot of Technical Debt on backlog

    techdebt architecture
    Figure: SugarLearning architecture diagram

  39. Do you practice 'Just Enough Refactoring' when adding new features?

    In the world of software development, adding new features to large projects can be a complex task, fraught with complications. Doing it the right way requires a strategic approach that balances new development, technical debt, and overall project health. To maintain this balance, follow the principle of "Just Enough Refactoring."

    Imagine that, in a large project, you add a feature without taking the time to clean up any of the existing, related technical debt (the "crap"). While you may feel like you're moving quickly at first, this approach can lead to significant issues down the line. As the technical debt accrues, the complexity and cost of changes increase, and the stability of your project suffers.

    tech debt
    Figure: Bad example - Unchecked Technical Debt

    Conversely, in a large project, you decide to add a feature and take on the herculean task of trying to fix all the existing technical debt at once. While this might feel like a responsible approach, it's akin to "boiling the ocean" - a task so huge that it's likely to stall progress and overwhelm your team. This approach can lead to significant project delays and could also introduce new bugs as you touch parts of the system that aren't directly related to the new feature.

    boil the ocean
    Figure: Bad example - Boiling the Ocean

    The recommended approach involves a balance. When adding a feature to a large project, address the technical debt that directly surrounds or is impacted by that feature. By doing so, you reduce the overall technical debt incrementally, without stalling progress. This approach ensures that the parts of the codebase most in need of attention receive it when they're being changed, which increases the stability and maintainability of your project over time.

    Good example - Just Enough Refactoring

    A tool like Dotnet-Affected can help in identifying the dependencies and impact of changes. This can be particularly useful for large projects or monorepos, helping you understand what's affected by your changes and where you should focus your refactoring efforts. Just remember to refactor "just enough" – tackle the technical debt that directly impacts your new feature, but avoid trying to fix everything at once.

  40. Do you use the Well-Architected Framework?

    The Well-Architected Framework is a set of best practices which form a repeatable process for designing solution architecture, to help identify potential issues and optimize workloads.

    waf diagram revised
    Figure: The Well-Architected Framework includes the five pillars of architectural excellence. Surrounding the Well-Architected Framework are six supporting elements

    The 5 Pillars


    There are trade-offs to be made between these pillars. E.g. improving reliability by adding Azure regions and backup points will increase the cost.

    Why use it?

    Thinking about architecting workloads can be hard – you need to think about many different issues and trade-offs, with varying contexts between them. WAF gives you a consistent process for approaching this to make sure nothing gets missed and all the variables are considered.

    Just like Agile, this is intended to be applied for continuous improvement throughout development and not just an initial step when starting a new project. It is less about architecting the perfect workload and more about maintaining a well-architected state and an understanding of optimizations that could be implemented.

    What to do next?

    Assess your workload against the 5 Pillars of WAF with the Microsoft Azure Well-Architected Review and add any recommendations from the assessment results to your backlog.

    waf assessment
    Figure: Some recommendations will be checked, others go to the backlog so the Product Owner can prioritize

    waf reliability results 2
    Figure: Recommended actions results show things to be improved

    waf tech debt backlog northwind
    Figure: Good example - WAF is very visible to the Product Owner on the backlog

  41. Do you know the different ways to modernize your application?

    The need to add new features and functionality to legacy systems is always present - and over time it can become ever harder to do. Additionally, these systems are often built using old tools and SDKs, hosted on outdated platforms, or have increasingly complex (or obsolete) architectures - all of which make it harder to maintain the systems in the first place.

    At some point, it becomes necessary to take the existing legacy system and update it to a more modern architecture using newer tools and technologies.

    Without focusing on the specific technology - there are 3 main approaches to modernizing applications:

    1. Big Bang (aka Rewrite)
    2. Evolutionary
    3. Strangler fig pattern

    Figure: Migrate Your Legacy ASP.NET Projects to ASP.NET Core Incrementally with YARP (36:45)

    Big Bang (aka Rewrite)

    Lock the developers into a room and shove some pizza under the door... and don't let them out until the application is modernized! Whilst this is a bit of a joke, it is a valid approach to modernizing applications. The idea is to take the existing application and rewrite it from scratch using the latest technologies and tools. This is a valid approach if you have a small application that is not complex and you have the time and budget to do this.

    ✅ Pros

    • Easy to plan
    • It all gets done in one hit

    ❌ Cons

    • It is still a big task, and you have to re-test the entire application to ensure that the changes have not broken anything.
    • Once live, there is no quick rollback strategy if something goes wrong.
    • BAU work must stop whilst this is happening.
    • This is not a realistic approach for most enterprise applications.

    big bang
    Figure: OK example - big bang migration


    Evolutionary

    The idea is to take the existing application and incrementally update it to a more modern architecture. This is a better approach if you have a large, complex application and you have the time and budget to do this.

    ✅ Pros

    • BAU development can continue on the old application.
    • You can choose the speed of the evolution - you can do it quickly or slowly, reinspecting the application at each step.

    ❌ Cons

    • Can feel like yak shaving - you can end up spending a lot of time on the migration and not actually modernizing the application.
    • It is still a big task, and you generally have to re-test large parts of the application to ensure that the changes have not broken anything.
    • There is no quick rollback strategy if something goes wrong.
    • At some point you'll hit a point where there needs to be a Big Bang change to get it over the line - this is not a realistic approach for most applications.

    Figure: OK example - evolutionary migration (fitting a square peg in a round hole)

    Strangler Fig pattern

    The Strangler Fig pattern was first described by Martin Fowler in 2004. The strangler fig is a type of tree that grows around other trees and slowly kills them by strangling them. This is exactly what this pattern does.

    strangler fig
    Figure: an actual strangler fig strangling a tree

    The idea is to create a "new" application (with a modern architecture) that acts as a facade to the existing application - then port features bit by bit to the new/modern architecture. Once slices of functionality have been ported and are ready - re-point the facade to execute the new code. You can trigger this through feature flags and this also allows you to rollback to the old code if something goes wrong.

    Looking to incrementally update an ASP.NET application? Read about using YARP and Incremental ASP.NET to ASP.NET Core Migration

    It's a language & platform agnostic approach that can be applied to any application.

    Microsoft have a great article on the strangler fig pattern - Strangler Fig Application.

    It works for AWS too - Strangler Fig Application.

    ✅ Pros

    • You can roll-back (re-point) to the old application if something goes wrong.
    • You can test the new application in isolation.
    • You can test the new application in parallel with the old application and confirm that the new application is working as expected.
    • BAU development can continue on the old application.

    ❌ Cons

    • Not suitable for all applications - especially when you cannot intercept calls to the back-end system being replaced
    • Not worthwhile for smaller/non-complex systems

    strangler fig pattern
    Figure: Good example - strangler fig pattern in action during a migration

    Tip: This pattern can be used when migrating websites to a new architecture. You can place Azure Front Door in front of the existing website. Once a page (or route) on the new website is ready, you can re-point that page in Front Door to the new website.

    Customer success story

    Campion was able to move from a monolithic application to microservices whilst still continually deploying code to production by leveraging the Strangler Fig pattern. Watch at 28:00

    Figure: Education in the Cloud – Campion's Digital Journey with Alexander Candy-Levy (1:07:46)

  42. Microservices - Do you break down your apps?

    There are two common types of application architecture:

    • Monoliths (aka N-Tier applications)
    • Microservices

    Monoliths have their place. They are easy to get going and often make a lot of sense when you are starting out. However, sometimes your app may grow in size and become difficult to maintain. Then you might want to consider Microservices...

    Microservices let you break down your app into little pieces to make them more manageable, replaceable and maintainable. You can also scale out different parts of your app at a granular level.

    .NET 6 and Azure have heaps of great tools for developing simple APIs and worker services in a Microservices pattern.

    Watch the below video from 35:40 - 46:50

    The tools of the trade

    • .NET Worker Services make it easier to implement dependency injection, configuration and other syntactic sugar using the same patterns you are familiar with in other types of .NET applications
    • Azure Container Apps give you a way to host different little subsections of the application
    • Azure Functions gives you a great way to build applications in small, modular, scalable and easy to manage chunks. It provides triggers other than http to handle other common microservice patterns
    • Minimal APIs give you a way to write APIs in just a few short lines of code
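    As an illustration of the last point, a Minimal API microservice can be a complete `Program.cs` in a handful of lines (a sketch; the `/orders` endpoint and its payload are hypothetical):

    ```csharp
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // Each microservice exposes only its own small slice of the domain
    app.MapGet("/orders/{id:int}", (int id) => Results.Ok(new { Id = id, Status = "Shipped" }));

    app.Run();
    ```

    This compiles as-is with the ASP.NET Core web SDK - no controllers, startup classes or extra ceremony required.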

    What's the point?

    • Cost - Provides separation of scalability, keep the hot parts of your app hot and the cold parts of your app cold to achieve maximum pricing efficiency
    • Maintainability - Keep code more manageable by making it bite sized chunks
    • Simplify code - Write minimal APIs
    • Deployment - Standardize deployment with containers
    • Testing - Easier to find problems since they are isolated to a specific part of the app
    • Cognitive Complexity - Devs can focus on one aspect of the app at a time
    • Data - You can use the best way of storing data for each service
    • Language - You can use the best language for each service

    What's the downside?

    • Upfront Cost - More upfront work is required
    • Cognitive Complexity - While individual apps are simpler, the architecture of the app can become more complex
    • Health Check - It's harder to know if all parts are alive
    • Domain boundaries - You need to define the separation of concerns between different services. Avoid adding dependencies between services because you can create a domino of failures...a house of cards.
    • Performance normally suffers as calls are made between services
    • Without adequate testing it's harder to maintain
    • Using multiple languages and datastores can be both more expensive to host and require more developers

    What new techniques are required

    • Contract Testing - To mitigate the risk of changes in one service breaking another, comprehensive tests that check the behaviour of services is required
  43. Tech Debt - Do you avoid 'clever' code?

    What is Clever Code?

    Clever Code comes in several forms. The desired form is solving a complex problem in a simple way. The Clever Code that causes tech debt is code written to use language features to look 'smart', while making it difficult for other developers to read.

    var totalMoved = sortedChannels.Where((channel, i) => channel.Position != i).Count();

    Bad example - Smart code that could be even simpler! (Although it's not bad by any means)

    var totalMoved = 0;
    for (var i = 0; i < sortedChannels.Count; i++)
    {
        var channel = sortedChannels[i];
        if (channel.Position != i)
        {
            totalMoved++;
        }
    }
    Good example - Simple code, while it has more lines, it is easier to read!

    When is Simple Code Bad?

    Let's take a moment to digest this more generic example:

    public async Task<IActionResult> Delete(Guid id)
    {
        var model = await _context.Styles.FirstOrDefaultAsync(x => x.Id == id);

        if (model is null)
        {
            return NotFound();
        }

        _context.Styles.Remove(model);
        await _context.SaveChangesAsync();

        return NoContent();
    }

    At first glance, this is pretty simple - almost what you would find on an intro to EF Core & ASP.NET Core Web API on the Microsoft documentation!

    As code scales, sometimes we need to write more 'clever' code, to abstract away concerns (like data access, or application logic).

    So depending on the context, this is both good & bad code at the same time!

  44. Do you use Co-Creation Patterns?

    These days Pull Requests are the de facto standard for getting code reviewed. Once a developer has finished their change, they will typically submit a Pull Request and move on to their next task. This allows an asynchronous process to take place, which may seem like a good idea but often leads to inefficiencies.

    Video: Pair Programming best-practices (11 min)

    ❌ Problem - Inefficient Code Reviews

    Inefficient code reviews can be caused by:

    • Requesting feedback too late
    • Receiving feedback too slow
    • Creating large Pull Requests
    • Excessive context switching
    • Too much work in progress
    • Unclear feedback

    co creation 1
    Figure: Bad example - Vicious cycle of being blocked and picking up yet another task

    co creation 2
    Figure: Bad example - Inefficiencies caused by asynchronous code reviews

    Source: From Async Code Reviews to Co-Creation Patterns

    ✅ How to Make Code Reviews More Efficient

    • Author - Do over the shoulder reviews
    • Author - Ask for feedback early before the PR, if you are uncertain that you're on the correct path
    • Limit work in progress

      • Author - Make sure your Pull Requests are merged, before starting a new task
      • Reviewer - Prioritize Pull Requests before starting a new task
    • Author - Create small Pull Requests

      • This requires a smaller block of time to review which makes it easier for the reviewer to find the time
      • Less risk - reduces the chance of an incorrect approach being taken
      • Get quality feedback - small blocks of code are easier to digest
      • Create a great Pull Request to make it easier for the reviewer to understand your changes.
    • Reviewer - When reviewing asynchronously

    The Ultimate Solution - Co-Creation Patterns

    Small Pull Requests have many benefits as outlined above. However, each Pull Request comes with an overhead and making Pull Requests too small can introduce unnecessary waste and negatively affect the throughput of code. In order to not lose throughput with small PRs, reviewers need to react faster. That leads us to synchronous, continuous code reviews and co-creation patterns.

    So, with the async way of working, we’re forced to make a trade-off between losing quality (big PRs) and losing throughput (small PRs).

    We can avoid this by using co-creation patterns. As a general rule, Pull Requests with less than 20 lines of code, and larger changes with a degree of complexity/risk, make good candidates for co-creation.

    The idea is that you do small PRs but also limit WIP. If you create several small PRs quickly and are waiting for code reviews, you can become blocked very quickly. By co-creating, the small PRs get reviewed & merged instantly, which avoids getting blocked and enables you to smash out loads of small PRs! 💪

    Daniel Mackay - SSW Solution Architect


    Co-creation patterns can take some different forms:

    1. Pair-programming: Two developers starting, reviewing and finishing a change together
    2. Mob-programming: Working in a small group that collectively has all the skills required

    For the patterns above, the similarities are that there is a driver and a navigator.

    Driver - Implements the solution/solves the problem at a granular level. They're the one at the PC writing the code.

    Navigator(s) - Observes and understands what the driver is implementing at a high level, and inquires where needed to help/direct the driver.

    Note: It is not the role of the navigator to micromanage the driver.

    These roles should swap often to keep a high level of focus and give everyone an equal chance to participate as a driver and navigator.

    When using co-creation patterns, ensure you use co-authored commits to give credit to all the developers involved.

    Advantages of co-creation

    Co-creation allows us to have both quality and throughput by providing the following advantages:

    1. More context when reviewing
    2. Higher quality
    3. Faster communication
    4. Faster course correction
    5. Less delay - no waiting
    6. Eliminates context switching - working on a change together reduces WIP which further increases throughput
    7. Emotions are removed - instead of having an 'author' and 'critic', the code is created together.

    How to get started with Pair Programming

    Here's a quick guide to getting started. Note that these are just guidelines - your team, task and experience will dictate exactly how you achieve your goals and increase your code quality.

    1. Select a Collaborative Task: Pick a Product Backlog Item (PBI) that you and a colleague can jointly work on.
    2. Set Up a Shared Workspace: Arrange a comfortable space with one computer and two chairs.
    3. Assign Initial Roles: Decide who will start as the 'driver' (writing the code) and who will be the 'navigator' (reviewing the code).
    4. Maintain Open Communication: Keep an ongoing dialogue to discuss ideas and approaches.
    5. Regularly Swap Roles: Switch between the driver and navigator roles periodically to maintain engagement and balance in the partnership.
  45. Do you use Co-Authored Commits?

    When using co-creation patterns such as pair-programming or mob-programming, you should make sure that all the developers get attribution. When done correctly, co-authored commits stand out as a testament to teamwork and shared responsibility, reflecting the collaborative efforts and diverse expertise contributed to each change.

    Figure: GitHub - Co-Authored Commit

    There are several different ways to create co-authored commits, depending on the tools you are using.


    Visual Studio Live Share

    If you use Visual Studio Live Share to collaborate, it will co-author the git commits with everyone in the share session.

    Visual Studio Code

    In Visual Studio Code, the Git Mob extension can be used to co-author commits.


    Rider

    Rider has a great UI that makes creating co-authored commits easy. It provides IntelliSense for the co-authored commit trailer and will suggest the names of the people who have access to the git repository.

    Figure: Rider - Co-Authored Commits

    GitHub Desktop

    GitHub Desktop supports co-authored commits out of the box.

    Figure: GitHub Desktop - Co-Authored Commits

    Git CLI

    When writing the commit message, leave 2 blank lines after the message body, then give each co-author their own line with a Co-authored-by: commit trailer:

    $ git commit -m "Refactor usability tests.
    >
    >
    Co-authored-by: NAME <NAME@EXAMPLE.COM>
    Co-authored-by: ANOTHER-NAME <ANOTHER-NAME@EXAMPLE.COM>"
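
    If you want to verify the trailer end-to-end, here is a throwaway demo (names and emails are hypothetical) that creates a temp repo, makes a co-authored commit, and inspects the resulting message:

    ```shell
    # Throwaway demo: make a co-authored commit in a temp repo and inspect it
    set -e
    tmp=$(mktemp -d)
    cd "$tmp"
    git init -q
    git config user.name "Bob Northwind"            # hypothetical author
    git config user.email "bob@northwind.example"
    echo "test" > file.txt
    git add file.txt
    # Note the 2 blank lines between the message body and the trailer block
    git commit -q -m "Refactor usability tests.


    Co-authored-by: Alice Northwind <alice@northwind.example>"
    git log -1 --format=%B   # the trailer appears at the end of the message
    ```

    GitHub, Azure DevOps, and GitLab all parse this trailer and show every co-author's avatar on the commit.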
  46. Project setup - Do you containerize your dev environment?

    Developers love the feeling of getting a new project going for the first time. Unfortunately, the process of making things work is often a painful experience. Every developer has a different setup on their PC so it is common for a project to require slightly different steps.

    The old proverb is "Works on my machine!"

    Luckily, there is a way to make all development environments 100% consistent.

    Video: Dev Containers from Microsoft (was Remote Containers) with Piers Sinclair (5 min)

    Dev Containers let you define all the tools needed for a project in a programmatic manner. That gives 3 key benefits:

    ✅ Consistent isolated environments

    ✅ Pre-installed tools with correct settings

    ✅ Quicker setup (hours becomes minutes)

    How do I set it up?

    Microsoft has a great tutorial and documentation on how to get it running.

    How does it work?

    Dev Containers are set up with a few files in the repo – typically a .devcontainer folder containing a devcontainer.json and, optionally, a Dockerfile or docker-compose.yml.

    These files define an image to use, tools to install and settings to configure.

    Once those files are configured, you can simply run a command to get it running.
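
    As an illustration, a minimal .devcontainer/devcontainer.json might look like the sketch below (the image, extension ID, and post-create command are examples only – use whatever your project needs):

    ```json
    {
      "name": "my-project",
      "image": "mcr.microsoft.com/devcontainers/dotnet:8.0",
      "customizations": {
        "vscode": {
          "extensions": ["ms-dotnettools.csdevkit"]
        }
      },
      "postCreateCommand": "dotnet restore"
    }
    ```

    With this file in place, VS Code's "Reopen in Container" command builds the environment automatically.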

    Where to run it - locally or in the cloud?

    There are 2 places that Dev Containers can be run:

    1. Locally – using Docker and an editor such as VS Code
    2. In the cloud – e.g. GitHub Codespaces

    Locally works awesome if you have a powerful PC. However, sometimes you might need to give an environment to people who don't have a powerful PC, or you might want people to develop on an iPad. In that case it's time to take advantage of the cloud.

    ⚠️ Warning - Supported Tools

    The following tools are not supported yet

    Figure: Bad example - Before using Dev Containers you would be missing a lot of pre-requisites!

    Figure: Good example - After using Dev Containers you would be as happy as Larry!

    If you have a reason for not doing all of this, you should at least containerize your SQL Server environment.

  47. Git - Do you know how to avoid large PRs?

    Nobody wants to review a PR with 100s of files, nor is it nice to have a lot of work committed (perhaps even sitting in a PR) but not approved. This is merge debt. Part of getting work complete is getting the PR approved by another developer, so it doesn't sit around for a long time.

    The default option is to use a feature branch strategy, which is awesome for small chunks of work. However, if you know that your work involves 2+ devs working on different tasks and they cannot work in isolation, then it can be tricky. For example, if you need to migrate from Gatsby to Next.js, or if you have to upgrade from Xamarin to .NET MAUI, a feature branch strategy might not be appropriate. In that case, choose from the options below:

    Option 1 - Create a new repository

    This is a good option if you are concerned with legacy technical decisions impacting the future application.

    ✅ Pros

    • Clean snapshot of legacy application
    • No risk of the legacy application being used or referenced
    • Isolated backlog – keep issues about the new project separated

    ❌ Cons

    • Loss of history in one mono repo
    • Isolated backlog – cannot see old feature or bug requests
    • Have to migrate important issues later

    Figure: Good example – Developing the new application in a new repo

    Option 2 – Create a new folder and keep merging using small PRs

    In this scenario, you will replace the old project (folder) at the end, in a final rubber stamp PR.

    This is a good option if you are not worried about the legacy application influencing the new application.

    ✅ Pros

    • Keeps history of previous iteration in one repo
    • Sharing the same backlog – can see both old and new PBIs
    • Can clearly see the difference after the final PR

    ❌ Cons

    • Potential to reference or use old code/models
    • Must implement bug fixes for the new + the old project
    • Easier to be influenced by past legacy decisions
    • Sharing the same backlog – old and new PBIs are mixed together

    Figure: Good example – Developing your new application in a new folder

  48. Pull Request - Do you do over the shoulder reviews?

    An "over-the-shoulder" review is one of the best ways to avoid merge debt. When a pull request (PR) is ready to be reviewed, get someone with you either in person or on a call, and go through the PR together. This not only allows you to demo the content of the PR, but also lets you talk it through with the reviewer, taking feedback as you go.

    When you have finished coding, don't just create a PR and throw it over the fence. Part of finishing a PR is getting it approved.

    The best way to get it approved is via an "over the shoulder" review

    • Adam Cogan

    Drafting PRs

    A good way to avoid someone merging your PR before you have done an over the shoulder review is to keep the Pull Request in draft mode until you are ready for it to be reviewed for merging.

    Note: You should always avoid merge debt; it's your responsibility to follow up on your PRs and get them merged as soon as possible. For more info, see Do you avoid Merge Debt?

  49. Do you use Architectural Decision Records (ADRs)?

    Architectural Decision Records (ADRs) are lightweight documents used to record important decisions in your project. They do not necessarily have to relate to architecture; they can capture any important decision made by the team.

    What are the dangers of not documenting important decisions?

    1. Lack of transparency and communication
    2. Loss of intellectual property
    3. Loss of historical context
    4. Risk of repeating mistakes
    5. Difficulty in auditing and governance

    What are the advantages of using ADRs?

    1. Providing documentation and historical context
    2. Collaboration and communication
    3. Informed Decision making
    4. Decision re-evaluation
    5. Avoiding blind acceptance or reversal

    The act of documenting an important decision forces developers to think more objectively about it. If the decision is likely to cause contention, it may be quicker to document it via an ADR and get feedback than to implement the change and let the reviewer try to infer your reasoning.

    Additionally, documenting decision 'deciders' ensures that we have a 2nd pair of eyes across the decision, just like we do with the checked by rule, test please rule, and pull-requests.

    ADRs can also help with knowledge sharing across teams, as other Solution Architects will have access to a succinct explanation of the problem and the decided solution.

    Another benefit is that future developers joining the project now have access to the historical context as to why certain decisions were made.

    Where should ADRs be stored?

    They should be stored wherever the technical documentation for your project lives. Storing them in Git alongside your code works well; alternatively, use your team's wiki or similar.

    What can I use to create and manage ADRs?

    There are several tools available to help create and manage ADRs; one of the best is Log4Brains, which helps you create and view ADRs.

    This can be installed by running:

    npm install -g log4brains

    You can then initialize your git repo by running:

    log4brains init

    Which will guide you through a simple setup process.

    To create a new ADR, run:

    log4brains adr new

    Lastly, to preview your ADRs, run:

    log4brains preview

    ADR Examples

    Figure: Example ADR from SSW.CleanArchitecture

    You can see more examples of ADRs with log4brains in action on our SSW.CleanArchitecture template.
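
    An ADR itself is just a short markdown file. Here is a minimal sketch (the decision, names, and date are hypothetical; the headings follow the common MADR-style layout that Log4Brains generates):

    ```md
    # Use MassTransit for inter-module messaging

    - Status: accepted
    - Deciders: Bob Northwind, Alice Northwind
    - Date: 2024-01-15

    ## Context and Problem Statement

    We need a messaging abstraction so modules can communicate without
    taking a hard dependency on one transport.

    ## Considered Options

    - MassTransit
    - Raw Azure Service Bus SDK

    ## Decision Outcome

    Chosen option: "MassTransit", because it lets us swap transports later.
    ```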

  50. Do you use prefixes to improve code review communication?

    In the world of code reviews, ambiguity can lead to confusion, misunderstandings, and wasted time. Prefixes on code review comments act like a map for navigating complex terrain, ensuring that everyone involved in the review process remains on the same page. This small change enhances collaboration, clarity, and code quality, and helps reviews get completed quickly and efficiently.

    When conducting code reviews in a collaborative environment, it is essential to maintain effective communication. Utilizing prefixes in comments can significantly enhance the code review process. Prefixes help convey the intent and impact of a comment, making it easier for the author and other team members to understand how to address it.

    Let's have a look at the following example from Bob Northwind:


    This code could be better optimized

    Figure: Without a prefix, this comment's intent is vague. It's not evident whether it's a suggestion, a question, or an issue.


    suggestion: This code could be better optimized.

    It is not critical but there are a few minor improvements that can be applied to increase performance.

    Figure: The prefix "suggestion" indicates that the comment is a suggestion for improvement

    Adding a prefix like "suggestion" clarifies the intent of the comment, making it actionable. The context provided helps the author see the potential impact of the suggested change.


    issue: We must address this security vulnerability before merging.

    There is a potential for SQL Injection and this vulnerability could lead to a critical security breach if not fixed.

    Figure: The prefix "issue" and following context clearly define the comment's importance and category.

    In this example, the prefix "issue" conveys the urgency of the matter, helping the team prioritize and act accordingly.

    Using prefixes also helps categorize comments for tracking and reporting purposes, providing valuable insights into the development process.

    By adhering to a consistent format such as:

    {{ prefix }}: {{ subject }}

    {{ discussion }}

    You can create a standardized system that makes comments more parseable by machines, which can lead to valuable metrics and reports in the future.


    The following list is what we suggest and use at SSW:

    • issue (blocking) – Issues point out specific problems with the code. These can affect users or happen behind the scenes.
    • todo (blocking) – TODOs are essential changes. If they're complex, create a new task or PBI for them.
    • question (blocking) – Questions are suitable when you're uncertain about something's relevance. Ask for clarification or investigation for a quick resolution.
    • praise (non-blocking) – Praises highlight something positive. It's good to include at least one in each review. Avoid false praise as it can be harmful. Look for something genuinely praiseworthy.
    • suggestion (non-blocking) – Suggestions offer ways to make the code better. Be clear about what you're suggesting and why it's an improvement.
    • thought (non-blocking) – Thoughts are ideas that came up during the review. They don't block progress, but they can lead to more focused initiatives and learning opportunities.
    • nitpick (non-blocking) – Nitpicks are minor, preference-based suggestions.

    By incorporating prefixes like these, you can enhance the clarity and effectiveness of your code review comments.

    Remember, effective code review communication can save hours of back-and-forth and misunderstandings, leading to better code quality and a more efficient development process.

  51. Do you use MassTransit to build reliable distributed applications?

    When building distributed applications, messaging is a common pattern to use. Often we take a hard dependency on a specific messaging technology, such as Azure Service Bus or RabbitMQ, which can make it difficult to change messaging technologies later. Good architecture is about making decisions that keep things easy to change in the future. This is where MassTransit comes in.

    MassTransit is a popular open-source .NET library that makes it easy to build distributed applications using messaging without tying you to one specific messaging technology.

    .NET Messaging Libraries

    There are several .NET messaging libraries that all abstract the underlying transport. These include:

    There are also the service bus specific libraries:

    Advantages of using MassTransit

    ✅ Open-source and free to use

    ✅ Enables swapping of messaging transports by providing a common abstraction layer

    ✅ Supports multiple messaging concepts:

    • Point-to-Point
    • Publish/Subscribe
    • Request/Response

    ✅ Supports multiple messaging transports:

    • In-Memory
    • RabbitMQ
    • Azure Service Bus
    • Amazon SQS
    • ActiveMQ
    • Kafka
    • gRPC
    • SQL/DB

    ✅ Supports complex messaging patterns such as Sagas


    Scenario 1 - Modular Monolith

    A Modular Monolith architecture requires all modules to be running in a single process. MassTransit can be used to facilitate in-memory communication between modules in the same process.

    This allows us to send events between modules and also request data from other modules.

    Scenario 2 - Azure Hosted Microservices

    When building microservices in Azure, it's common to use Azure Service Bus as the messaging transport. With minimal changes, MassTransit can be used to send messages to and from Azure Service Bus instead of using the In-Memory transport.

    Scenario 3 - Locally Developing Microservices

    When developing microservices locally, it's common to use containers for each service. However, some cloud-based messaging services (e.g. Azure Service Bus) cannot be run in a container locally. In this scenario, we can easily switch from the Azure Service Bus transport to a containerized RabbitMQ transport.
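
    As a sketch of how the transport swap looks in code, using MassTransit's standard registration API (the consumer and configuration keys here are hypothetical):

    ```csharp
    // Program.cs - registering MassTransit with a swappable transport
    builder.Services.AddMassTransit(x =>
    {
        x.AddConsumer<OrderSubmittedConsumer>(); // hypothetical consumer

        if (builder.Environment.IsDevelopment())
        {
            // Local dev: containerized RabbitMQ
            x.UsingRabbitMq((context, cfg) =>
            {
                cfg.Host("localhost", "/");
                cfg.ConfigureEndpoints(context);
            });
        }
        else
        {
            // Azure: Service Bus transport
            x.UsingAzureServiceBus((context, cfg) =>
            {
                cfg.Host(builder.Configuration["ServiceBus:ConnectionString"]);
                cfg.ConfigureEndpoints(context);
            });
        }
    });
    ```

    Because consumers and message contracts are written against MassTransit's abstractions, only this registration block changes between environments.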

    Demo Code

    If you're interested in seeing MassTransit in action, check out
