SSW Foursquare

Rules to Better .NET Projects - 63 Rules

Want to build a .NET Application? Check SSW's Web Application / API consulting page.

  1. .NET - Do you set multiple startup projects?

    It's common for .NET solutions to have multiple projects, for example an API and a UI. Did you know Microsoft Visual Studio and JetBrains Rider allow you to start as many projects as you want with a single click?

    ❌ Split Terminals

    You can run each project in a separate terminal using dotnet run, but this quickly becomes hard to manage as the number of projects grows.

    split terminals
    Figure: Multiple Terminals

    ❌ Manually Launching in IDE

    You could also manually select and launch each project in your IDE, but this will result in a lot of clicking and waiting. It can also be error-prone, as you may forget to launch a project.

    manual launch
    Figure: Manually selecting and launching each project

    ✅ Setting Multiple Startup Projects

    You can set multiple startup projects in Visual Studio and Rider; this allows you to launch all your projects with a single click.

    Launch Multiple Projects in Visual Studio
    Launch Multiple Projects in JetBrains Rider

    Note: If you change the launch profile, Visual Studio will not save your configuration, and you will have to follow the above steps again.

    Note: Rider will save the launch profile you just created, so you can switch between launch profiles without losing your configuration.
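    Note for teams on recent Visual Studio versions: VS 2022 (17.11 and later) can persist a multi-project launch profile to a *.slnLaunch file next to the solution, which can be committed so the whole team shares the configuration. The JSON below is only a sketch; the property names and project paths are assumptions to verify against your VS version:

```json
[
  {
    "Name": "API + UI",
    "Projects": [
      { "Path": "src\\Api\\Api.csproj", "Action": "Start" },
      { "Path": "src\\Web\\Web.csproj", "Action": "Start" }
    ]
  }
]
```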

  2. Do you have a consistent .NET solution structure?

    When developing software, we implement a dependency injection centric architecture.

    dependency injection structure
    Figure: A Dependency Injection based architecture gives us great maintainability

    solution structure
    Figure: Good Example - The Solution and Projects are named consistently and the Solution Folders organize the projects so that they follow the Onion Architecture

    Dependencies and the application core are clearly separated as per the Onion Architecture.

    In the above example you can clearly see:

    Common Library projects are named [Company].[AssemblyName].
    E.g. BCE.Logging is a shared project between all solutions at company BCE.

    Other projects are named [Company].[Solution Name].[AssemblyName].
    E.g. BCE.Sparrow.Business is the Business layer assembly for company ‘BCE’, solution ‘Sparrow’.

    We have separated the unit tests, one for each project, for several reasons:

    • It provides a clear separation of concerns and allows each component to be individually tested
    • The different libraries can be used on other projects with confidence as there are a set of tests around them
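    Putting the naming rules together, such a solution might be laid out as follows (the project names beyond BCE.Logging and BCE.Sparrow.Business are hypothetical, purely to illustrate the convention):

```
BCE.Sparrow.sln
├── BCE.Logging                  (shared [Company].[AssemblyName] library)
├── BCE.Sparrow.Business         (application core)
├── BCE.Sparrow.Data             (outer layer - hypothetical)
├── BCE.Sparrow.WebUI            (outer layer - hypothetical)
├── BCE.Logging.Tests
├── BCE.Sparrow.Business.Tests
└── BCE.Sparrow.Data.Tests
```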
  3. Do you use Solution Folders to Neatly Structure your Solution?

    All the DLL references and files needed to create a setup.exe should be included in your solution. However, just including them as solution items is not enough; they will look very disordered (especially when you have a lot of solution items). And from the screenshot below, you might be wondering what the _Instructions.docx is used for...

    bad solution

    Bad example - An unstructured solution folder

    An ideal way is to create "sub-solution folders" for the solution items; the common ones are "References" and "Setup". This will make your solution items look neat and in order. Look at the screenshot below; now it makes sense: we know that the _Instructions.docx contains the instructions of what to do when creating a setup.exe.

    good solution

    Good example - A well structured solution folder has 2 folders - "References" and "Setup"

    We have a program called SSW Code Auditor to check for this rule.
  4. Do you keep the Imports list in your Project Properties clean?

    When programming in a .NET environment, it is good practice to remove the default imports that aren't used frequently in your code.

    This is because IntelliSense lists become harder to use and navigate with too many imports. For example, in VB.NET, Microsoft.VisualBasic is a good item to keep in the imports list, because it is used in most areas of your application.

    To remove all the default imports, open the Project Properties page and select Common Properties - Imports.

    ImportsVB
    Figure: Using aliases with the Imports Statement

    The Import statement makes it easier to access methods of classes by eliminating the need to explicitly type the fully qualified reference names. Aliases let you assign a friendlier name to just one part of a namespace.

    For example, the carriage return-line feed sequence that causes a single piece of text to be displayed on multiple lines is part of the ControlChars class in the Microsoft.VisualBasic namespace. To use this constant in a program without an alias, you would need to type the following code:

    MsgBox("Some text" & Microsoft.VisualBasic.ControlChars.CrLf & "Some more text")

    Imports statements must always be the first lines immediately following any Option statements in a module. The following code fragment shows how to import and assign an alias to the Microsoft.VisualBasic.ControlChars class:

    Imports CtrlChrs = Microsoft.VisualBasic.ControlChars

    Future references to this namespace can be considerably shorter:

    MsgBox("Some text" & CtrlChrs.CrLf & "Some more text")

    If an Imports statement does not include an alias name, elements defined within the imported namespace can be used in the module without qualification. If the alias name is specified, it must be used as a qualifier for names contained within that namespace.

  5. Do you use the designer for all visual elements?

    The designer should be used for all GUI design. Controls will be dragged and dropped onto the form and all properties should be set in the designer, e.g.

    • Labels, TextBoxes and other visual elements
    • ErrorProviders
    • DataSets (to allow data binding in the designer)

    Things that do not belong in the designer:

    • Connections
    • Commands
    • DataAdapters

    However, Connection, Command and DataAdapter objects should not be dragged onto forms, as they belong in the business tier. Strongly typed DataSet objects should be in the designer, as they are simply passed to the business layer. Avoid writing code for properties that can be set in the designer.

    Figure: Bad example - Connection and Command objects in the Designer

    Figure: Good example - Only visual elements in the designer

  6. Do you refer to images the correct way in ASP.NET?

    There are many ways to reference images in ASP.NET. There are 2 different situations commonly encountered by developers when working with images:

    • Scenario #1: Images that are part of the content of a specific page. E.g. A picture used only on 1 page
    • Scenario #2: Images on user controls which are shared across different pages in a site. E.g. A shared logo used across the site (commonly in user controls, or master pages)

    Each of these situations requires a different referencing method.

    Option #1: Root-Relative Paths

    Often developers reference all images by using a root-relative path (prefixing the path with a slash, which refers to the root of the site), as shown below.

    <img src="/Images/spacer.jpg" />

    Bad example - Referencing images with root-relative paths

    This has the advantage that <img> tags can easily be copied between pages; however, it should not be used in either situation, because it requires that the website have its own IIS site and be placed at the root (not just an application), or that the entire site be in a subfolder on the production web server. For example, the following combinations of URLs are possible with this approach:

    Staging Server URL    Production Server URL
    bee:81                www.northwind.com.au
    bee/northwind         www.northwind.com.au/northwind

    As shown above, this approach makes the URLs on the staging server hard to remember, or increases the length of URLs on the production web server.

    Option #2: Relative Paths

    Images that are part of the content of a page should be referenced using relative paths.

    <img src="../Images/spacer.jpg" />

    Good example - Referencing images with relative paths

    However, this approach is not possible with images on user controls, because the relative paths will map to the wrong location if the user control is in a different folder to the page.

    Option #3: Application-Relative Paths

    In order to simplify URLs, ASP.NET introduced a new feature, application relative paths. By placing a tilde (~) in front of a path, a URL can refer to the root of a site, not just the root of the web server. However, this only works on Server Controls (controls with a runat="server" attribute).

    To use this feature, you need to either use ASP.NET Server controls or HTML Server controls, as shown below.

    <asp:Image ID="spacerImage" ImageUrl="~/Images/spacer.gif" Runat="server" />
    <img id="spacerImage" src="~/Images/spacer.gif" runat="server">

    Good example - Application-relative paths with an ASP.NET Server control

    Using an HTML Server control creates less overhead than an ASP.NET Server control, but the control does not dynamically adapt its rendering to the user's browser, or provide such a rich set of server-side features.

    Note: A variation on this approach involves calling the Page.ResolveUrl method with inline code to place the correct path in a non-server tag.

    <img src='<%# Page.ResolveUrl("~/Images/spacer.gif") %>'>

    Bad example - Page.ResolveUrl method with a non-server tag

    This approach is not recommended, because the data binding will create overhead and affect caching of the page. The inline code is also ugly and does not get compiled, making it easy to accidentally introduce syntax errors.

  7. Do you use Microsoft.VisualBasic.dll for Visual Basic.NET projects?

    The Microsoft.VisualBasic library is provided to ease the implementation of the VB.NET language itself. For VB.NET, it provides some methods familiar to VB developers and can be seen as a helper library. It is a core part of the .NET redistribution and maps common VB syntax to framework equivalents; without it, some of the code may seem foreign to VB programmers.

    Microsoft.VisualBasic    .NET Framework
    CInt, CStr               Convert.ToInt32(...), .ToString()
    vbCrLf                   Environment.NewLine, or "\r\n"
    MsgBox                   MessageBox.Show(...)
  8. Do you avoid Microsoft.VisualBasic.Compatibility.dll for Visual Basic.NET projects?

    This is where you should focus your efforts on eliminating whatever VB6 baggage your programs or developer habits may carry forward into VB.NET. There are better framework options for performing the same functions provided by the compatibility library. You should heed this warning from the VS.NET help file: "Caution: It is not recommended that you use the VisualBasic.Compatibility namespace for new development in Visual Basic .NET. This namespace may not be supported in future versions of Visual Basic. Use equivalent functions or objects from other .NET namespaces instead."

    Avoid:

    • InputBox
    • ControlArray
    • ADO support in Microsoft.VisualBasic.Compatibility.Data
    • Environment functions
    • Font conversions
  9. Do you publish your components to Source Safe?

    As we do more and more .NET projects, we discover that we are re-doing a lot of things we've done in other projects. How do I get a value from the config file? How do I write it back? How do I handle all my uncaught exceptions globally, and what do I do with them?

    Corresponding with Microsoft's release of their application blocks, we've also started to build components and share them across projects.

    Sharing a binary file with SourceSafe isn't a breeze; it can be a bit daunting at first. Here are the steps you need to take.

    As the component developer, there are five steps:

    1. In Visual Studio .NET, switch to the release build
      build release
      Build Release
      Figure: Switch to release configuration
    2. In your project properties, make sure the release configuration goes to the bin\Release folder. While you are here, also make sure XML docs are generated. Use the same name as your dll but change the extension to .xml (e.g. for SSW.Framework.Configuration.dll, add SSW.Framework.Configuration.xml)
      build projectproperty small
      Build Project Property
      Figure: Project properties

      Note: The examples here assume C#. Visual Basic, by default, does not have \bin\Release and \bin\Debug, which means that the debug and release builds will overwrite each other unless the default settings are changed to match C# (recommended). VB does not support XML comments either; please wait for the next release of Visual Studio (Whidbey).
      changetocsharp
      Change to C#
      Figure: Force change to match C#
    3. If this is the first time, include/check-in the release directory into your SourceSafe
      build include
      Build Include
      Figure: Include the bin\Release directory into source safe
    4. Make sure everything is checked in properly. When you build new versions, switch to Release mode and check out the release dlls, overwrite them, and when you check them back in they will be the new dlls shared by other applications.
    5. If the component is part of a set of components located in one solution, with dependencies between them, you need to check out ALL the bin\Release folders for all projects in that solution and do a build, then check them all in. This ensures dependencies between these components don't conflict with projects that reference the component set. In other words, a set of components such as SSW.Framework.WindowsUI.xxx increments versions AS A WHOLE; a change to one component in the set causes the whole set to re-establish internal references with each other.
  10. Do you use the SharePoint portal in VSTS 2012?

    You should use the SharePoint portal in VSTS 2012 because it provides dashboards to monitor your projects, as well as quick access to a lot of reports. You can also create and edit work items via the portal.

    VS2012 SharePointPortal
    Figure: SharePoint portal in VSTS 2012

  11. Do you keep your Assembly Version Consistent?

    VersionConsistent1
    Figure: Keep these two versions consistent

    If you are not using the GAC, it is important to keep AssemblyVersion, AssemblyFileVersion and AssemblyInformationalVersion the same; otherwise it can lead to support and maintenance nightmares. By default these version values are defined in the AssemblyInfo file. In the examples below, AssemblyVersion is the version of the assembly, AssemblyFileVersion is the actual version displayed in file properties, and AssemblyInformationalVersion is the product version shown to users.

    [assembly: AssemblyVersion("2.0.0.*")]
    [assembly: AssemblyFileVersion("2.0.0.*")]
    [assembly: AssemblyInformationalVersion("2.0.0.*")]

    Bad example - AssemblyFileVersion and AssemblyInformationalVersion don't support the asterisk (*) character

    If you use an asterisk in the AssemblyVersion, the version will be generated as described in the MSDN documentation.

    If you use an asterisk in the AssemblyFileVersion, you will see a warning, and the asterisk will be replaced with zeroes. If you use an asterisk in the AssemblyInformationalVersion, the asterisk will be stored, as this version property is stored as a string.

    AssemblyFileVersion Warning
    Figure: Warning when you use an asterisk in the AssemblyFileVersion

    [assembly: AssemblyVersion("2.0.*")]
    [assembly: AssemblyFileVersion("2.0.1.1")]
    [assembly: AssemblyInformationalVersion("2.0")]

    Good example - MSBuild will automatically set the Assembly version on build (when not using the GAC)

    Having MSBuild or Visual Studio automatically set the AssemblyVersion on build can be useful if you don't have a build server configured.
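    For reference, when a trailing asterisk is used in AssemblyVersion (e.g. "2.0.*"), the build and revision numbers are derived from the current date and time, as described in the MSDN documentation: build = days since 1 January 2000, revision = seconds since local midnight divided by 2. A small Python sketch, for illustration only (.NET does this for you at build time):

```python
from datetime import datetime

def auto_build_revision(now: datetime) -> tuple[int, int]:
    """Reproduce the build/revision numbers generated for a trailing
    asterisk in AssemblyVersion, per the documented scheme:
    build = days since 2000-01-01 (local time),
    revision = seconds since local midnight, divided by 2."""
    build = (now - datetime(2000, 1, 1)).days
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    revision = int((now - midnight).total_seconds()) // 2
    return build, revision

# e.g. a build on 2 Jan 2000 at 00:00:02 local time
print(auto_build_revision(datetime(2000, 1, 2, 0, 0, 2)))  # (1, 1)
```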

    If you are using the GAC, you should adopt a single AssemblyVersion and AssemblyInformationalVersion, and update the AssemblyFileVersion with each build.

    [assembly: AssemblyVersion("2.0.0.0")]
    [assembly: AssemblyFileVersion("2.0.0.1")]
    [assembly: AssemblyInformationalVersion("My Product 2015 Professional")]

    Good example - The best way for Assembly versioning (when using the GAC)

    If you're working with SharePoint farm solutions (2007, 2010, or 2013), in most circumstances the assemblies in your SharePoint WSPs will be deployed to the GAC. For this reason development is much easier if you don't change your AssemblyVersion, and increment your AssemblyFileVersion instead.

    The AssemblyInformationalVersion stores the product name as marketed to consumers. For example for Microsoft Office, this would be "Microsoft Office 2013", while the AssemblyVersion would be 15.0.0.0, and the AssemblyFileVersion is incremented as patches and updates are released.

    Note: It would be good if Microsoft changed the default behaviour of AssemblyInformationalVersionAttribute to default to the AssemblyVersion. See Mike's suggestion for improving the version number (in comments).

  12. Do you use configuration management application block?

    How do you get a setting from a configuration file? What do you do when you want to get a setting from the registry, or a database? Everyone faces these problems, and most people come up with their own solution. We used to have a few different standards, but when Microsoft released the Configuration Application Block, we found that extending it and using it in all our projects saves us a lot of time! Use a local configuration file for machine and/or user specific settings (such as a connection string), and use a database for any shared values such as Tax Rates.

    See how we configured this reset default settings functionality with the Configuration Block in the .NET Toolkit

  13. Do you have a resetdefault() function in your configuration management application block?

    In almost every application we have a user settings file to store the state of the application. We want to be able to reset the settings if anything goes wrong.

    See how we configured this reset default settings functionality with the Configuration Block in the .NET Toolkit

  14. Do you version your .xml files?

    It is good to store program settings in an .xml file, but developers rarely worry about future schema changes and how they will inform the user that a file has an old schema.

    What is wrong with this?

    <?xml version="1.0" standalone="yes"?>
    <NewDataSet>
      <xs:schema id="NewDataSet" xmlns=""
         xmlns:xs="http://www.w3.org/2001/XMLSchema"
         xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
    <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:Locale="en-AU">
        <xs:complexType>
          <xs:choice maxOccurs="unbounded">
           <xs:element name="Table1">
           <xs:complexType>
           <xs:sequence>
           <xs:element name="DateUpdated" type="xs:dateTime" minOccurs="0" />
           <xs:element name="NewDatabase" type="xs:boolean" minOccurs="0" />
           <xs:element name="ConnectionString" type="xs:string" minOccurs="0" />
           <xs:element name="SQLFilePath" type="xs:string" minOccurs="0" />
           <xs:element name="TimeOut" type="xs:int" minOccurs="0" />
           <xs:element name="TurnOnMSDE" type="xs:boolean" minOccurs="0" />
           <xs:element name="KeepXMLRecords" type="xs:boolean" minOccurs="0" />
           <xs:element name="UserMode" type="xs:boolean" minOccurs="0" />
           <xs:element name="ReconcileScriptsMode" type="xs:boolean" minOccurs="0" />
       <xs:element name="FolderPath" type="xs:string" minOccurs="0" />
           <xs:element name="SelectedFile" type="xs:string" minOccurs="0" />
           <xs:element name="UpdateVersionTable" type="xs:boolean" minOccurs="0" />
           </xs:sequence>
           </xs:complexType>
           </xs:element>
          </xs:choice>
         </xs:complexType>
         </xs:element>
        </xs:schema>
    
        <Table1>
          <DateUpdated>2004-05-17T10:04:06.9438192+10:00</DateUpdated>
          <NewDatabase>true</NewDatabase>
          <ConnectionString>Provider=SQLOLEDB.1;Integrated Security=SSPI;
          Persist Security Info=False;
          Data Source=(local);Initial Catalog=master</ConnectionString>
          <SQLFilePath>ver0001.sql</SQLFilePath>
          <TimeOut>5</TimeOut>
          <TurnOnMSDE>false</TurnOnMSDE>
          <KeepXMLRecords>false</KeepXMLRecords>
          <UserMode>true</UserMode>
          <ReconcileScriptsMode>true</ReconcileScriptsMode>
          <FolderPath>C:\Program Files\SSW SQL Deploy\Samples\DatabaseSQLScripts\
          </FolderPath>
          <SelectedFile />
          <UpdateVersionTable>true</UpdateVersionTable>
        </Table1>
    </NewDataSet>

    Bad example - XML file without version control

    <?xml version="1.0" standalone="yes"?>
    <NewDataSet>
      <xs:schema id="NewDataSet" xmlns=""
         xmlns:xs="http://www.w3.org/2001/XMLSchema"
         xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
    <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:Locale="en-AU">
        <xs:complexType>
          <xs:choice maxOccurs="unbounded">
           <xs:element name="Table1">
           <xs:complexType>
           <xs:sequence>
           <xs:element name="Version" type="xs:string" minOccurs="0" />
           <xs:element name="DateUpdated" type="xs:dateTime" minOccurs="0" />
           <xs:element name="NewDatabase" type="xs:boolean" minOccurs="0" />
           <xs:element name="ConnectionString" type="xs:string" minOccurs="0" />
           <xs:element name="SQLFilePath" type="xs:string" minOccurs="0" />
           <xs:element name="TimeOut" type="xs:int" minOccurs="0" />
           <xs:element name="TurnOnMSDE" type="xs:boolean" minOccurs="0" />
           <xs:element name="KeepXMLRecords" type="xs:boolean" minOccurs="0" />
           <xs:element name="UserMode" type="xs:boolean" minOccurs="0" />
           <xs:element name="ReconcileScriptsMode" type="xs:boolean" minOccurs="0" />
       <xs:element name="FolderPath" type="xs:string" minOccurs="0" />
           <xs:element name="SelectedFile" type="xs:string" minOccurs="0" />
           <xs:element name="UpdateVersionTable" type="xs:boolean" minOccurs="0" />
           </xs:sequence>
           </xs:complexType>
           </xs:element>
          </xs:choice>
         </xs:complexType>
         </xs:element>
        </xs:schema>
    
       <Table1>
          <Version>1.2</Version>
          <DateUpdated>2004-05-17T10:04:06.9438192+10:00</DateUpdated>
          <NewDatabase>true</NewDatabase>
          <ConnectionString>Provider=SQLOLEDB.1;Integrated Security=SSPI;
          Persist Security Info=False;
          Data Source=(local);Initial Catalog=master</ConnectionString>
          <SQLFilePath>ver0001.sql</SQLFilePath>
          <TimeOut>5</TimeOut>
          <TurnOnMSDE>false</TurnOnMSDE>
          <KeepXMLRecords>false</KeepXMLRecords>
          <UserMode>true</UserMode>
          <ReconcileScriptsMode>true</ReconcileScriptsMode>
          <FolderPath>C:\Program Files\SSW SQL Deploy\Samples\DatabaseSQLScripts\
          </FolderPath>
          <SelectedFile />
          <UpdateVersionTable>true</UpdateVersionTable>
        </Table1>
    </NewDataSet>

    Good example - XML file with version control

    The version tag identifies what version the file is. This version should be hard-coded into the application. Every time you change the format of the file, you increment this number.

    The code below shows how this would be implemented in your project.

    Public Function IsXMLFileValid() As Boolean
    
      Dim fileVersion As String = "not specified"
      Dim dsSettings As New DataSet
      Dim IsMalformed As Boolean = False
      ' Is the file malformed altogether (possibly including the version)?
    
      Try
        dsSettings.ReadXml(mXMLFileInfo.FullName, XmlReadMode.ReadSchema)
      Catch ex As Exception
        IsMalformed = True
      End Try
    
      If (Not IsMalformed) Then
    Dim Asm As Assembly = Assembly.GetExecutingAssembly()
    Dim strm As Stream = Asm.GetManifestResourceStream(Asm.GetName().Name _
     + "." + "XMLFileSchema.xsd")
        Dim sReader As New StreamReader(strm)
        Dim dsXMLSchema As New DataSet
        dsXMLSchema.ReadXmlSchema(sReader)
    
    If dsSettings.Tables(0).Columns.Contains("Version") Then
      fileVersion = dsSettings.Tables(0).Rows(0)("Version").ToString
    End If
    
        If fileVersion = "" Then
          fileVersion = "not specified"
        End If
    
        If fileVersion = Global.XMLFileVersion AndAlso
            Not dsSettings.GetXmlSchema() = dsXMLSchema.GetXmlSchema() Then
          Return False
        End If
    
      End If
    
      If IsMalformed OrElse fileVersion <> Global.XMLFileVersion Then
    
        If mshouldConvertFile Then
          ' Convert the file
          ConvertToCurrentVersion(IsMalformed)
        Else
          Throw New XMLFileVersionException(fileVersion, Global.XMLFileVersion )
        End If
    
      End If
    
      Return True
    
    End Function

    Figure: Code to illustrate how to check if the XML file is valid

    Note: To allow backward compatibility, you should give the user an option to convert old XML files into the new version structure.

  15. Do you use TreeView control instead of XML control?

    Both controls can represent hierarchical XML data and support Extensible Stylesheet Language (XSL) templates, which can be used to transform an XML file into the correct format and structure. However, the TreeView can apply styles more easily, and provides special properties that simplify the customization of the appearance of elements based on their current state.

    <asp:Xml ID="Xml1" runat="server" DocumentSource="~/Web.xml"
    TransformSource="~/Style.xsl"></asp:Xml>

    Figure: Bad example - Using the Xml control to render an XML document with XSL Transformations

    <asp:TreeView ID="TreeView1" runat="server" DataSourceID="siteMapDataSource"
    ImageSet="Faq" SkipLinkText ="">
    <ParentNodeStyle Font-Bold="False" />
    <HoverNodeStyle Font-Underline="True" ForeColor="Purple" />
    <SelectedNodeStyle Font-Underline="True" HorizontalPadding="0px"
    VerticalPadding="0px" />
    <NodeStyle Font-Names="Tahoma" Font-Size="8pt" ForeColor="DarkBlue"
    HorizontalPadding="5px" NodeSpacing="0px" VerticalPadding="0px" />
    </asp:TreeView>
    <asp:SiteMapDataSource ID="siteMapDataSource"  runat="server" />

    Figure: Good example - Using a TreeView to represent hierarchical XML data

  16. Are your customizable and non-customizable settings in different files?

    There are three types of settings files that we may need to use in .NET:

    1. App.Config/Web.Config is the default .NET settings file, including any settings for the Microsoft Application Blocks (e.g. the Exception Management Block and the Configuration Management Block). These are for settings that don't change from within the application. In addition, the System.Configuration classes don't allow writing to this file.
    2. ToolsOptions.Config (an SSW standard) is the file to hold the user's own settings, which users can change in Tools - Options.

      Eg. ConnectionString, EmailTo, EmailCC

      Note: We read and write to this using the Microsoft Configuration Application Block. If we didn't use this Block, we would store it as a plain XML file and read and write to it using System.XML classes. The idea is that if something does go wrong when you are writing to this file, at least the App.Config would not be affected. Also, this separates our settings (which are few) from the App.Config (which usually has a lot of stuff that we really don't want a user to stuff around with).

    3. UserSession.Config (an SSW standard). These are for additional setting files that the user cannot change.

      e.g. FormLocation, LastReportSelected

      Note: This file is overwritable (say, during a re-installation), and it will not affect the user if the file is deleted.
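    As a sketch, a ToolsOptions.Config holding the user-changeable settings above might look like the following. The element names beyond ConnectionString, EmailTo and EmailCC are illustrative only; there is no fixed schema:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- User-changeable settings, edited via Tools - Options -->
<ToolsOptions>
  <ConnectionString>Integrated Security=SSPI;Data Source=(local);Initial Catalog=Northwind</ConnectionString>
  <EmailTo>support@northwind.com.au</EmailTo>
  <EmailCC>manager@northwind.com.au</EmailCC>
</ToolsOptions>
```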

  17. Do you secure your web services using WCF over WSE3 and SSL?

    Windows Communication Foundation (WCF) extends the .NET Framework to enable building secure, reliable and interoperable web services.

    WCF has demonstrated interoperability using Web Services Security (WSS), including the UsernameToken over SSL, UsernameToken for X509 Certificate, and X509 Mutual Certificate profiles.

    WSE is outdated and has been replaced by WCF, which provides its own set of attributes that can be plugged into any Web Service application.

    1. Security

      Message layer security has several policies that can suit any environment, including:

      1. Windows Token
      2. UserName Token
      3. Kerberos Token
      4. X.509 Certificate Token

      It is recommended to implement UserName Token using the standard login screen that prompts for a Username and a Password, which then gets passed into the SOAP header (at message level) for authorization.
      
      This requires SSL which provides a secure tunnel from client to server.
      
      However, message layer security does not provide authentication lockout, so it does not stop a determined hacker from trying username/password combinations forever. Custom policies set up at the application level can prevent brute-force attacks.
    2. Performance

      WCF (codenamed Indigo) is smart enough to negotiate the most performant serialization and transport protocol that both sides of the web service conversation can accommodate, so it will give the best performance, all things being equal. You can configure the web service's SSL session simply in the web.config file.

      After configuring an SSL certificate (in the LocalMachine store of the server), the following lines are required in the web.config:

    <configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
    <system.serviceModel>
        <services>
            <service
                type="WCFService"
                name="WCFService"
                behaviorConfiguration="ServiceBehaviour"
            >
                <endpoint
                    contract="IWCFService"
                    binding="wsHttpBinding"
                    bindingConfiguration="WSHttpBinding_IWCFServiceBinding"
                />
            </service>
        </services>
    
        <bindings>
            <wsHttpBinding>
                <binding name="WSHttpBinding_IWCFServiceBinding">
                    <security mode="Message">
                        <message clientCredentialType="UserName" />
                    </security>
                 </binding>
            </wsHttpBinding>
        </bindings>
    
        <behaviors>
            <behavior name="ServiceBehaviour" returnUnknownExceptionsAsFaults="true">
                <serviceCredentials>
                    <serviceCertificate
                        findValue="CN=SSW"
                        storeLocation="LocalMachine"
                        storeName="My"
                        x509FindType="FindBySubjectDistinguishedName"
                    />
                </serviceCredentials>
            </behavior>
        </behaviors>
    </system.serviceModel>
    </configuration>

    Figure: Setting the SSL to Web Service for Message Layer Security

  18. Do you let the adapter handle the connection for you?

    Did you know that if you are using DataSets throughout your application (not data readers), you don't need any code to open or close connections? The data adapter opens and closes the connection for you when you call Fill.

    Some say it is better to be explicit. However, the bottom line is that less code means fewer bugs.

    try
    {
        cnn.Open();
        adapter.Fill(dataset);
    }
    catch (SqlException ex)
    {
        MessageBox.Show(ex.Message);
    }
    finally
    {
        //I'm in the finally block so that I always get called even if the fill fails.
        cnn.Close();
    }

    Bad code: The connection code is not needed

    try
    {
        adapter.Fill(dataset);
    }
    catch (SqlException ex)
    {
        MessageBox.Show(ex.Message);
    }

    Good code: Letting the adapter worry about the connection
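
    Why the second example works: SqlDataAdapter.Fill checks the connection state first - if the connection is closed, it opens it, runs the query, then closes it again, leaving the connection in the state it found it. A minimal sketch (assuming a local Northwind database exists):

    ```csharp
    using System.Data;
    using System.Data.SqlClient;

    // Sketch only - assumes a local Northwind database
    SqlConnection cnn = new SqlConnection(
        "Server=(local);Database=Northwind;Integrated Security=SSPI;");
    SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Customers", cnn);

    DataSet dataset = new DataSet();
    adapter.Fill(dataset); // Fill opens the closed connection, fills, then closes it again

    // cnn.State is still ConnectionState.Closed here - the adapter restored it
    ```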

    Note: A common comment for this rule is... "Please tell users to explicitly open and close the connection - even when the .NET Framework can do it for them"

    The developers who prefer the first (more explicit) code example give the following reasons:

    • Explicit behaviour is always better for code maintainability. Explicit code is more understandable than implicit code. Don't make your fellow developers look up the fact that data adapters automatically maintain the state of your connection for them.
    • Consistency (or a lack of) - not all Framework classes are documented to behave like this. For example, the IDBCommand.ExecuteNonQuery() will throw an exception if the connection isn't open (it might be an interface method, but interface exceptions are documented as a strong guideline for all implementers to follow). The SqlCommand help doesn't mention anything further about this fact, but considering it's an inherited class, it would be fair to expect it to behave the same way. A number of the other methods don't make mention of connection state, making it difficult to know which basket to put your eggs into...
    • Developer Awareness - it's healthy for the developer to be aware that they have a resource that needs to be handled properly. If they learn that they don't need to open and close connections here, then when they move on to other resource types where this isn't the case, many errors may be produced. For example, when using file resources, the developer is likely to need to pass an open stream and must remember to close any such streams properly before leaving the function.
    • Efficiency (sort of) - Code will often populate more than one object at a time. If you open the connection once, execute multiple fills or commands, and then close it, the intent of the developer is clearer. If we leave it to the framework, the connection will likely be opened and closed multiple times. Opening a connection from the pool is really cheap, so the explicit version is only an itty-bitty bit more efficient, but it demonstrates the developer's intention more clearly.

    Bottom line - It is a controversial one. People who agree with the rule include:

    • Ken Getz
    • Paul Sheriff
    • Bill Vaughan
    • George Doubinski

    People who don't:

    • Chris Kinsman
    • Richard Campbell
    • Paul Reynolds

    See Microsoft's online guide to improving ADO.NET performance for their opinion and other tips.

    One final note: This argument is a waste of time... With code generators developing most of the data access layer of the application, the errors, if any, will be long gone, and the developer is presented with a higher level of abstraction that allows them to concentrate on more important things rather than mucking around with connections. Particularly considering that, when we start using the Provider model from Whidbey, it won't even be clear whether you're talking to SQL Server or to an XML file.

  19. Do you use one class per file?

    Each class definition should live in its own file. This ensures it's easy to locate class definitions outside the Visual Studio IDE (e.g. in SourceSafe or Windows Explorer).

    The only exception is a set of classes that collectively forms one atomic unit of reuse - these may live in one file.

    class MyClass
    {
        // ...
    }
    
    class MyClassAEventArgs
    {
        // ...
    }
    
    class MyClassBEventArgs
    {
        // ...
    }
    
    class MyClassAException
    {
        // ...
    }
    
    class MyClassBException
    {
        // ...
    }

    Bad example - 1 project, 1 file (every class in the project crammed into a single file)

  20. Do you put all images in the \images folder?

    Instead of having images scattered all around the solution, we put them all in the same folder.

    Bad example - Images under Product root folder.

    Good example - Images under \Images folder.

    We have a program called SSW Code Auditor to check for this rule.
  21. Do you keep the \images folder for images only?

    We want to keep a clear and simple file structure in our solution. Never put any files other than image files in the \images folder.

    Bad example - HTML file in \Images Folder.

    Good example - Images only, clean \Images folder.

    We have a program called SSW Code Auditor to check for this rule.
  22. Do you put your setup files in a \setup folder?

    All setup files should be stored in a \setup folder under your project's root directory.

    Good example - All the Wise setup files in the \setup folder.

    We have a program called SSW Code Auditor to check for this rule.
  23. Do you deploy your applications correctly?

    Many applications end up working perfectly on the developer's machine. However, once the application is deployed into a setup package and released to the public, it can suddenly give the user the most horrible experience of their life. There are plenty of issues that developers don't take into consideration. Amongst the many, 3 stand above the rest if the application isn't tested thoroughly:

    1. The SQL Server Database or the Server machine cannot be accessed by the user, and so developer settings are completely useless to the user.
    2. The user doesn't install the application in the default location. (i.e. instead of C:\Program Files\ApplicationName, the user could install it on D:\Temp\ApplicationName)
    3. The developer has assumed that certain application dependencies are installed on the user's machine. (i.e. MDAC; IIS; a particular version of MS Access; or SQL Server runtime components like sqldmo.dll)

    To prevent issues from arising and having to re-deploy continuously which would only result in embarrassing yourself and the company, there are certain procedures to follow to make sure you give the user a smooth experience when installing your application.

    1. Have scripts that can get the path that the user has installed the application to

    Wise has a Dialog that prompts the user for the installation directory:

    INSTALLDIR
    Figure: Wise Prompts the user for the installation directory and sets the path to a property in wise called "INSTALLDIR"

    An embedded script must be used if the path is needed inside the application (e.g. .reg files that set registry keys).

    The .reg file includes the following hardcoded lines:

    [HKEY_CLASSES_ROOT\SSWNetToolkit\shell\open\command]
    @="\"C:\\Program Files\\SSW NetToolKit\\WindowsUI\\bin\\SSW.NetToolkit.exe\" /select \"%1\""

    This should be replaced with the following lines:

    [HKEY_CLASSES_ROOT\SSWNetToolkit\shell\open\command]
    @="\"REPLACE_ME\" /select \"%1\""
    
    Dim oFSO, oFile, sFile
    
    Set oFSO = CreateObject("Scripting.FileSystemObject")
    
    sFile = Property("INSTALLDIR") & "WindowsUI\PartA\UrlAcccess.reg"
    
    Set oFile = oFSO.OpenTextFile(sFile)
    
    regStream = oFile.ReadAll()
    
    oFile.Close
    
    appPath = Replace(Property("INSTALLDIR") & "WindowsUI\bin\SSW.NetToolkit.exe", "\", "\\")
    
    regStream = Replace(regStream, "REPLACE_ME", appPath)
    
    Set oFile = oFSO.OpenTextFile(sFile, 2)
    
    oFile.Write regStream
    oFile.Close

    Figure: The "REPLACE_ME" string is replaced with the value of the INSTALLDIR property in the .reg file

    2. After setting up the Wise file and running the build script, the application must first be tested on the developer's own machine. Many developers forget to test the application outside the development environment and don't bother to install it using the installation package they have just created. Doing this will allow them to fix, for example, pathnames of images that were set relative to the running process instead of relative to the actual executable.
    this.pictureReportSample.Image = Image.FromFile(@"Reports\Images\Blank.jpg");

    Bad code - FromFile() (as well as Process.Start()) resolves relative paths against the current directory of the running process. This could be the directory of a shortcut rather than the directory of the .exe itself, so an exception will be thrown if the image cannot be found when running from the shortcut

    string appFilePath = System.Reflection.Assembly.GetExecutingAssembly().Location;
    
    string appPath = Path.GetDirectoryName(appFilePath);
    
    this.pictureReportSample.Image = Image.FromFile(appPath + @"\Reports\Images\Blank.jpg");

    Good code - GetExecutingAssembly().Location will get the pathname of the actual executable and no exception will be thrown

    This exception would never have been found if the developer didn't bother to test the actual installation package on his own machine.

    3. Having tested on the developer's machine, the application must be tested on a virtual machine in a pure environment, without dependencies installed in the GAC, the registry or anywhere else. Users may have MS Access 2000 installed, and the application may behave differently on an older version of MS Access even though it works perfectly on MS Access 2003. The most appropriate way of handling this is to use programs like VMware or MS Virtual PC. This will help the developer test the application on all possible environments to ensure that it caters for all users, minimizing assumptions as much as possible.
  24. Do you distribute a product in Release mode?

    We like to have debugging information in our application so that we can see line numbers in the stack trace. However, we won't release our product in Debug mode - for example, code inside "#if DEBUG" blocks should not be compiled into the release version.

    If we want line numbers, we simply need debugging information. You can change an option in the project settings so that it is generated even in a Release build.

    #if DEBUG
        MessageBox.Show("Application started");
    #endif

    Figure: Code that should only run in Debug mode, we certainly don't want this in the release version.

    Figure: Set "Generate Debugging Information" to True on the project properties page (VS 2003)

    Figure: Set "Debug Info" to "pdb-only" on the Advanced Build Settings page (VS 2005)
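
    In later, SDK-style projects the same setting lives in the .csproj file. A sketch (the Condition value assumes the standard "Release" configuration name):

    ```xml
    <!-- Release builds keep line numbers via a .pdb, while #if DEBUG code stays out -->
    <PropertyGroup Condition="'$(Configuration)' == 'Release'">
      <DebugType>pdbonly</DebugType>
      <DebugSymbols>true</DebugSymbols>
    </PropertyGroup>
    ```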

  25. Do you use more meaningful names than Hungarian short form?

    Hungarian notation was used in VB6. In .NET there are over 35,000 classes, so we can't refer to them all with three-letter short forms. We suggest developers use full, meaningful names as in the example below. As a result, the code will be much easier to read and follow.

    DateTime dt = DateTime.Now;
    DataSet ds = new DataSet();
    
    // "dt" is already taken - and could be confused with the DateTime above
    DataTable dt2 = ds.Tables[0];

    Bad code - Without meaningful name

    DateTime currentDateTime = DateTime.Now;
    DataSet employmentDataSet = new DataSet();
    
    DataTable contactDetailsDataTable = employmentDataSet.Tables[0];

    Good code - Meaningful name

    More information on .NET Object Naming Standard.

  26. Do you know how to rename files that are under SourceSafe control?

    Whenever we rename a file in Visual Studio .NET, the file becomes a new file in SourceSafe. If the file has been checked out, the status of the old file will remain as checked-out in SourceSafe.

    Step by step, to rename a file that is under SourceSafe control:

    1. Save and close the file in Visual Studio .NET, and check in the file if it is checked-out.
    2. Open Visual SourceSafe Explorer and rename the file.
    3. Rename it in Visual Studio .NET, click "Continue with change" to the 2 pop-up messages:

    RenameVSS1 small
    Figure: Warning message of renaming files under source control.

    RenameVSS2 small
    Figure: You are seeing this as the new file name already exists in SourceSafe, just click "Continue with change".

    Visual Studio .NET should then find the file under source control, and it will come up with a lock icon.

  27. Do you profile your code when optimising performance?

    Imagine that you have just had a User Acceptance Test (UAT), and your app has been reported as being "painfully slow" or "so slow as to be unusable". Now, as a coder, where do you start to improve the performance? More importantly, do you know how much your massive changes have improved performance - if at all?

    We recommend that you should always use a code profiling tool to measure performance gains whilst optimising your application. Otherwise, you are just flying blind and making subjective, unmeasured decisions. Instead, use a tool such as JetBrains dotTrace profiler. These will guide you as to how to best optimise any code that is lagging behind the pack. You can run this on both ASP.NET and Windows Forms Applications. The optimisation process is as follows:

    1. Profile the application with JetBrains dotTrace, using the "Hot Spot" tab to identify the slowest areas of your application

    JetBrainsProfilerHotSpots
    Figure: Identify which parts of your code take the longest (Hot Spots)

    2. Some parts of the application will be out of your control, e.g. .NET System classes. Identify the slowest parts of code that you can actually modify from the Hot Spot listing
    3. Determine the cause of the poor performance and optimise (e.g. improve the WHERE clause or the number of columns returned, reduce the number of loops, or use a StringBuilder instead of string concatenation)
    4. Re-run the profile to confirm that performance has improved
    5. Repeat from Step 1 until the application is optimised
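
    For example, the StringBuilder optimisation mentioned above - string concatenation in a loop copies the whole string on every iteration, while StringBuilder appends into a growable buffer. A minimal sketch:

    ```csharp
    using System.Text;

    // Bad: each += copies the whole string so far - O(n^2) for n appends
    string s = "";
    for (int i = 0; i < 10000; i++)
        s += i.ToString();

    // Good: StringBuilder appends into a growable buffer - O(n) overall
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 10000; i++)
        sb.Append(i);
    string result = sb.ToString();
    ```

    A hot spot like the concatenation loop is exactly what the dotTrace Hot Spot listing surfaces.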
  28. Do you add SSW Code Auditor, NUnit and Microsoft FxCop project files to your solution?

    SSW Code Auditor, NUnit and Microsoft FxCop are tools that keep your code "healthy". That is why they should be easily accessible in every solution, so that they can be run with a double-click.

    To add an SSW Code Auditor file to your solution:

    1. Start up SSW Code Auditor
    2. Add a new job
    3. Add the solution file to be scanned
    4. Select the rules to be run
    5. Configure email (not required)
    6. Select File > Save As (into the solution's folder as "codeauditor.SSWCodeAuditor")
    7. Open your solution in Visual Studio
    8. Right-click and add an existing file
    9. Select the SSW Code Auditor project file
    10. Right-click the newly added file and select "Open With"
    11. Point it to the SSW Code Auditor executable

    See "Do you run SSW Code Auditor?" and "Do you check your code with Code Auditor before check-in?"

    To add a Microsoft FxCop file to your solution:

    1. Start up Microsoft FxCop
    2. Create a new project
    3. Right-click the project and select Add Target
    4. Select the assembly (DLL/EXE) for the project
    5. Select File > Save Project As (into the solution's folder as "fxcop.FxCop")
    6. Open your solution in Visual Studio
    7. Right-click and add an existing file
    8. Select the Microsoft FxCop project file
    9. Right-click the newly added file and select "Open With"
    10. Point it to the Microsoft FxCop executable

    To add an NUnit file to your solution:

    1. Start up NUnit
    2. Create a new project by selecting File > New Project, and save it to your solution directory as "nunit.NUnit"
    3. From the Project menu, select Add Assembly
    4. Select the assembly (DLL/EXE) that contains the unit tests
    5. Select File > Save Project
    6. Open your solution in Visual Studio
    7. Right-click and add an existing file
    8. Select the NUnit project file
    9. Right-click the newly added file and select "Open With"
    10. Point it to the NUnit executable

    Now you can simply double click these project files to run the corresponding applications.

    We have a program called SSW Code Auditor that implements this rule.
  29. Do you know what files not to put into VSS?

    The following files should NOT be included in SourceSafe, as they are user-specific files:

    • *.scc;*.vspscc - SourceSafe files
    • *.pdb - Debug files
    • *.user - User settings for the Visual Studio .NET IDE
  30. Do you use resource files for storing your static scripts?

    Resource files provide a structured and centralized approach to storing and retrieving static scripts, eliminating the need for scattered code snippets and enhancing your development workflow.

    StringBuilder sb = new StringBuilder();
    sb.AppendLine(@"<script type=""text/javascript"">");
    sb.AppendLine(@"function deleteOwnerRow(rowId)");
    sb.AppendLine(@"{");
    sb.AppendLine(string.Format(@"{0}.Delete({0}.GetRowFromClientId(rowId));", OwnersGrid.ClientID));
    sb.AppendLine(@"}");
    sb.AppendLine(@"</script>");

    Bad example - Hard to read, the string is surrounded by rubbish + inefficient because you have an object and 6 strings

    string.Format(@"
    <script type=""text/javascript"">
        function deleteOwnerRow(rowId)
        {{
            {0}.Delete({0}.GetRowFromClientId(rowId));
        }}
    </script>
    ", OwnersGrid.ClientID);

    Good example - Slightly easier to read, but it is 1 code statement across 10 lines (note that the literal braces must be doubled so string.Format doesn't treat them as placeholders)

    string scriptTemplate = Resources.Scripts.DeleteJavascript;
    string script = string.Format(scriptTemplate, OwnersGrid.ClientID);

    <script type="text/javascript">
         function deleteOwnerRow(rowId)
         {{
             {0}.Delete({0}.GetRowFromClientId(rowId));
         }}
    </script>

    Figure: The code in the first box, and the string in the resource file in the 2nd box. This is the easiest to read + you can localize it, e.g. if you need to localize an alert in the JavaScript
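
    One gotcha with any string.Format approach: "{" and "}" are placeholder delimiters, so literal braces in the template (such as a JavaScript function body) must be doubled, or string.Format throws a FormatException. A minimal sketch:

    ```csharp
    // Literal braces in a string.Format template must be written as {{ and }}
    string template = "function f() {{ alert({0}); }}";
    string script = string.Format(template, 42);
    // script is now: function f() { alert(42); }
    ```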

    CreateResource small
    Figure: Add a resource file to your project in VS2005

    ReadResource small
    Figure: Read a value from the newly added resource file

  31. Do you know how DateTime serialization changed between .NET 1.0/1.1 and .NET 2.0?

    In v1.0 and v1.1 of the .NET Framework, when serializing DateTime values with the XmlSerializer, the local time zone of the machine was always appended. When deserializing on the receiving machine, DateTime values were then automatically adjusted by the time zone offset relative to the sender's time zone.

    See below example:

    DataSet returnedResult = webserviceObj.GetByDateCreatedAndEmpID(DateTime.Now, "JZ");

    Figure: Front-end code in .NET v1.1 (front-end time zone: GMT+8)

    [WebMethod]
    public DataSet GetByDateCreatedAndEmpID(DateTime DateCreated, String EmpID)
    {
         EmpTimeDayDataSet ds = new EmpTimeDayDataSet();
         m_EmpTimeDayAdapter.FillByDateCreatedAndEmpID(ds, DateCreated.Date, EmpID);
         return ds;
    }

    Figure: Web service method (web service server time zone: GMT+10)

    When the front end calls this web method with the current local time (14/01/2006 11:00:00 PM GMT+8) for the 'DateCreated' parameter, it expects a result for the date 14/01/2006, but the service end returns data for 15/01/2006, because 14/01/2006 11:00:00 PM (GMT+8) is adjusted to 15/01/2006 01:00:00 AM at the web service server (GMT+10).

    In v1.1/v1.0 you have no way to control this serializing/deserializing behaviour of DateTime. In v2.0, with the new DateTimeKind notion, you can work around the above example.

    DateTime unspecifiedTime = DateTime.SpecifyKind(DateTime.Now, DateTimeKind.Unspecified);
    DataSet returnedResult = webserviceObj.GetByDateCreatedAndEmpID(unspecifiedTime, "JZ");

    Figure: Front-end code in .NET v2.0 (front-end time zone: GMT+8)

    In this way, the server end will always receive a DateTime value of 14/01/2006 11:00:00 PM with no GMT offset, and return what the front end expects.
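
    The behaviour can be seen without a web service. A minimal sketch of the DateTimeKind workaround:

    ```csharp
    using System;

    DateTime local = DateTime.Now;  // Kind == DateTimeKind.Local
    DateTime unspecified = DateTime.SpecifyKind(local, DateTimeKind.Unspecified);

    // The XmlSerializer appends the machine's UTC offset when serializing a
    // Local DateTime (e.g. "2006-01-14T23:00:00+08:00"), and the receiver
    // adjusts the value to its own time zone. An Unspecified DateTime is
    // written without an offset, so the receiver leaves the value alone.
    Console.WriteLine(local.Kind);        // Local
    Console.WriteLine(unspecified.Kind);  // Unspecified
    ```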

  32. Do you know how to use Connection Strings?

    There are 2 types of connection strings. The first contains only address-type information without authorization secrets. These can use all of the simpler methods of storing configuration, as none of this data is secret.

    Option 1 - Connection Strings without passwords or keys

    When deploying an Azure-hosted application, we can use Azure Managed Identities to avoid having to include a password or key inside our connection string. This means we really just need to keep the address or URL of the service in our application configuration. Because our application has a Managed Identity, it can be treated in the same way as a user's Azure AD identity, and specific roles can be assigned to grant the application access to the required services.

    This is the preferred method wherever possible, because it eliminates the need for any secrets to be stored. The other advantage is that for many services the level of access control available using Managed Identities is much more granular, making it much easier to follow the Principle of Least Privilege.
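
    For example, with Microsoft.Data.SqlClient (2.1 or later) an Azure SQL connection string can authenticate with the Managed Identity and so carries no password. A sketch, with placeholder server and database names:

    ```
    Server=tcp:my-server.database.windows.net,1433;Database=Northwind;Authentication=Active Directory Managed Identity;
    ```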

    Option 2 - Connection Strings with passwords or keys

    If you have to use some sort of secret or key to log in to the service being referenced, then some thought needs to be given to how those secrets can be secured. Take a look at "Do you store your secrets securely?" to learn how to keep your secrets secure.

    Example - Integrating Azure Key Vault into your ASP.NET Core application

    In .NET 5 we can use Azure Key Vault to securely store our connection strings away from prying eyes.

    Azure Key Vault is great for keeping your secrets secret because you can control access to the vault via Access Policies. Access policies let you add users and applications with customized permissions. Make sure you enable the System assigned identity for your App Service - this is required for adding it to Key Vault via Access Policies.

    You can integrate Key Vault directly into your ASP.NET Core application configuration. This allows you to access Key Vault secrets via IConfiguration.

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder
                    .UseStartup<Startup>()
                    .ConfigureAppConfiguration((context, config) =>
                    {
                        // To run the "Production" app locally, modify your launchSettings.json file
                        // -> set the ASPNETCORE_ENVIRONMENT value to "Production"
                        if (context.HostingEnvironment.IsProduction())
                        {
                            IConfigurationRoot builtConfig = config.Build();

                            // ATTENTION:
                            // If running the app from your local dev machine (not in an Azure App Service),
                            // -> use the AzureCliCredential provider.
                            // -> This means you have to log in locally via `az login` before running the app.
                            //
                            // If running the app from an Azure App Service,
                            // -> use the DefaultAzureCredential provider.
                            TokenCredential cred = context.HostingEnvironment.IsAzureAppService()
                                ? new DefaultAzureCredential(false)
                                : new AzureCliCredential();

                            var keyvaultUri = new Uri($"https://{builtConfig["KeyVaultName"]}.vault.azure.net/");
                            var secretClient = new SecretClient(keyvaultUri, cred);
                            config.AddAzureKeyVault(secretClient, new KeyVaultSecretManager());
                        }
                    });
            });

    Good example - For a complete example, refer to this sample application

    Tip: You can detect if your application is running on your local machine or on an Azure AppService by looking for the WEBSITE_SITE_NAME environment variable. If null or empty, then you are NOT running on an Azure AppService.

    public static class IWebHostEnvironmentExtensions
    {
        public static bool IsAzureAppService(this IWebHostEnvironment env)
        {
            // WEBSITE_SITE_NAME is only set by the Azure App Service runtime
            var websiteName = Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME");
            return !string.IsNullOrEmpty(websiteName);
        }
    }

    Setting up your Key Vault correctly

    In order to access the secrets in Key Vault, you (as User) or an Application must have been granted permission via a Key Vault Access Policy.

    Applications require at least the LIST and GET permissions, otherwise the Key Vault integration will fail to retrieve secrets.

    access policies
    Figure: Key Vault Access Policies - Setting permissions for Applications and/or Users

    Azure Key Vault and App Services can easily trust each other by making use of System assigned Managed Identities. Azure takes care of all the complicated logic behind the scenes for these two services to communicate with each other - reducing the complexity for application developers.

    So, make sure that your Azure App Service has the System assigned identity enabled.

    Once enabled, you can create a Key Vault Access policy to give your App Service permission to retrieve secrets from the Key Vault.

    identity
    Figure: Enabling the System assigned identity for your App Service - this is required for adding it to Key Vault via Access Policies

    Adding secrets into Key Vault is easy.

    1. Create a new secret by clicking on the Generate/Import button
    2. Provide the name
    3. Provide the secret value
    4. Click Create

    add a secret
    Figure: Creating the SqlConnectionString secret in Key Vault.

    secrets
    Figure: SqlConnectionString stored in Key Vault

    Note: The ApplicationSecrets section is indicated by "ApplicationSecrets--" instead of "ApplicationSecrets:".
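
    In code, the secret then surfaces through IConfiguration with the usual ":" separator. A sketch, using the secret names from the figures above:

    ```csharp
    // Key Vault secret name:          ApplicationSecrets--SqlConnectionString
    // Surfaces in IConfiguration as:  ApplicationSecrets:SqlConnectionString
    string connectionString = configuration["ApplicationSecrets:SqlConnectionString"];
    ```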

    As a result of storing secrets in Key Vault, your Azure App Service configuration (app settings) will be nice and clean. You should not see any fields that contain passwords or keys. Only basic configuration values.

    configuration
    Figure: Your WebApp Configuration - No passwords or secrets, just a name of the Key vault that it needs to access

    Video: Watch SSW's William Liebenberg explain Connection Strings and Key Vault in more detail (8 min)

    History of Connection Strings

    In .NET 1.1 we used to store our connection string in a configuration file like this:

    <configuration>
         <appSettings>
              <add key="ConnectionString" value ="integrated security=true;
               data source=(local);initial catalog=Northwind"/>
         </appSettings>
    </configuration>

    ...and access this connection string in code like this:

    SqlConnection sqlConn = 
    new SqlConnection(System.Configuration.ConfigurationSettings.
    AppSettings["ConnectionString"]);

    Historical example - Old ASP.NET 1.1 way, untyped and prone to error

    In .NET 2.0 we used strongly typed settings classes:

    Step 1: Setup your settings in your common project. E.g. Northwind.Common

    ConnStringNET2 Settings
    Figure: Settings in Project Properties

    Step 2: Open up the generated App.config under your common project. E.g. Northwind.Common/App.config

    Step 3: Copy the content into your entry applications app.config. E.g. Northwind.WindowsUI/App.config The new setting has been updated to app.config automatically in .NET 2.0

    <configuration>
          <connectionStrings>
             <add name="Common.Properties.Settings.NorthwindConnectionString"
                  connectionString="Data Source=(local);Initial Catalog=Northwind;
                  Integrated Security=True"
                  providerName="System.Data.SqlClient" />
            </connectionStrings>
     </configuration>

    ...then you can access the connection string like this in C#:

    SqlConnection sqlConn =
     new SqlConnection(Common.Properties.Settings.Default.NorthwindConnectionString);

    Historical example - Access our connection string by strongly typed generated settings class...this is no longer the best way to do it

  33. Do you avoid using duplicate connection string in web.config?

    Since there are many ways to use connection strings in .NET 2.0, it is likely that we end up with duplicate connection strings in web.config. Define the connection string once - in the connectionStrings section - and remove any copies from appSettings.

    <connectionStrings>
        <add
            name="ConnectionString"
            connectionString="Server=(local);
            Database=NorthWind;"
        />
    </connectionStrings>
    
    <appSettings>
        <add key="ConnectionString" value="Server=(local);Database=NorthWind;"/>
    </appSettings>

    Bad example - Using duplicate connection string in web.config

  34. Do you use Windows Integrated Authentication connection string in web.config?

    Both SQL Server authentication (standard security) and Windows NT authentication (integrated security) are SQL Server authentication methods that are used to access a SQL Server database from Active Server Pages (ASP).

    We recommend you use Windows NT authentication by default. Because Windows security services operate by default with the Microsoft Active Directory directory service, it is a derivative best practice to authenticate users against Active Directory. Although you could use other types of identity stores in certain scenarios - for example Active Directory Application Mode (ADAM) or Microsoft SQL Server - these are not recommended in general because they offer less flexibility in how you can perform user authentication.

    If you can't use Windows NT authentication, then add a comment confirming the reason.

    <connectionStrings>
       <add name="ConnectionString" connectionString="Server=(local);
        Database=NorthWind;uid=sa;pwd=sa;" />
    </connectionStrings>

    Figure: Bad example - Not using a Windows Integrated Authentication connection string, and no comment explaining why

    <connectionStrings>
        <add name="ConnectionString" connectionString="Server=(local);
         Database=NorthWind;Integrated Security=SSPI;" />
    </connectionStrings>

    Figure: Good example - Using a Windows Integrated Authentication connection string by default

    <connectionStrings>
        <add name="ConnectionString" connectionString="Server=(local);
         Database=NorthWind;uid=sa;pwd=sa;" />
        <!--It can't use the Windows Integrated because they are using Novell -->
    </connectionStrings>

    Figure: Good example - Not using a Windows Integrated Authentication connection string, with a comment explaining why

  35. Do you store your secrets securely?

    Most systems will have variables that need to be stored securely; OpenId shared secret keys, connection strings, and API tokens to name a few.

    These secrets must not be stored in source control. It is insecure and means they are sitting out in the open, wherever code has been downloaded, for anyone to see.

    There are many options for managing secrets in a secure way:

    Bad Practices

    Store production passwords in source control

    Pros:

    • Minimal change to existing process
    • Simple and easy to understand

    Cons:

    • Passwords are readable by anyone who has either source code or access to source control
    • Difficult to manage production and non-production config settings
    • Developers can read and access the production password

    BadSettings

    Figure: Bad practice - Overall rating: 1/10

    Store production passwords in source control protected with the ASP.NET IIS Registration Tool

    Pros:

    • Minimal change to existing process – no need for DPAPI or a dedicated Release Management (RM) tool
    • Simple and easy to understand

    Cons:

    • Need to manually give the app pool identity ability to read the default RSA key container
    • Difficult to manage production and non-production config settings
    • Developers can easily decrypt and access the production password
    • Manual transmission of the password from the key store to the encrypted config file

    Figure: Bad practice - Overall rating: 2/10

    Use Windows Identity instead of username / password

    Pros:

    • Minimal change to existing process – no need for DPAPI or a dedicated RM tool
    • Simple and easy to understand

    Cons:

    • Difficult to manage production and non-production config settings
    • Not generally applicable to all secured resources
    • Can hit firewall snags with Kerberos and AD ports
    • Vulnerable to DOS attacks related to password lockout policies
    • Has key-person reliance on network admin

    Figure: Bad practice - Overall rating: 4/10

    Use External Configuration Files

    Pros:

    • Simple to understand and implement

    Cons:

    • Makes setting up projects the first time very hard
    • Easy to accidentally check the external config file into source control
    • Still need DPAPI to protect the external config file
    • No clear way to manage the DevOps process for external config files

    Figure: Bad practice - Overall rating: 1/10

    Good Practices

    Use Octopus/ VSTS RM secret management, with passwords sourced from KeePass

    Pros:

    • Scalable and secure
    • General industry best practice - great for most organizations below large-corporate scale

    Cons:

    • Password reset process is still manual
    • DPAPI still needed

    Figure: Good practice - Overall rating: 8/10

    Use Enterprise Secret Management Tool – Keeper, 1Password, LastPass, Hashicorp Vault, etc

    Pros:

    • Enterprise grade – supports cryptographically strong passwords, auditing of secret access and dynamic secrets
    • Supports hierarchy of secrets
    • API interface for interfacing with other tools
    • Password transmission can be done without a human in the chain

    Cons:

    • More complex to install and administer
    • DPAPI still needed for config files at rest

    Figure: Good practice - Overall rating: 8/10

    Use .NET User Secrets

    Pros:

    • Simple secret management for development environments
    • Keeps secrets out of version control

    Cons:

    • Not suitable for production environments

    Figure: Good Practice - Overall rating 8/10
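As a sketch of how User Secrets work in practice (the key name below is hypothetical), you initialize them from the CLI and then read them through the normal configuration API:

```csharp
// From the project directory (CLI commands shown as comments):
//   dotnet user-secrets init
//   dotnet user-secrets set "GitHub:PatToken" "<token>"
//
// In a minimal ASP.NET Core app, user secrets are loaded into
// IConfiguration automatically when the environment is Development:
var builder = WebApplication.CreateBuilder(args);

// "GitHub:PatToken" is a hypothetical key used for illustration
string? pat = builder.Configuration["GitHub:PatToken"];
```

The secrets live in a secrets.json file under your user profile, outside the repository, so nothing sensitive ends up in source control.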

    Use Azure Key Vault

    See the SSW Rewards mobile app repository for how SSW is using this in a production application: https://github.com/SSWConsulting/SSW.Rewards

    Pros:

    • Enterprise grade
    • Uses industry standard best encryption
    • Dynamically cycles secrets
    • Access granted based on Azure AD permissions - no need to 'securely' share passwords with colleagues
    • Can be used to inject secrets in your CI/CD pipelines for non-cloud solutions
    • Can be used by on-premise applications (more configuration - see Use Application ID and X.509 certificate for non-Azure-hosted apps)

    Cons:

    • Tightly integrated into Azure so if you are running on another provider or on premises, this may be a concern. Authentication into Key Vault now needs to be secured.

    Figure: Good Practice - Overall rating 9/10
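A minimal sketch of wiring Key Vault into ASP.NET Core configuration, assuming the Azure.Identity and Azure.Extensions.AspNetCore.Configuration.Secrets packages (the vault URI and secret name are placeholders):

```csharp
using Azure.Identity;

var builder = WebApplication.CreateBuilder(args);

// Load Key Vault secrets into IConfiguration. DefaultAzureCredential works
// locally (Visual Studio / Azure CLI sign-in) and in Azure (Managed Identity)
// without storing any password in the app.
builder.Configuration.AddAzureKeyVault(
    new Uri("https://my-vault.vault.azure.net/"), // placeholder vault URI
    new DefaultAzureCredential());

// A secret named "ConnectionStrings--Northwind" in the vault maps to the
// standard "ConnectionStrings:Northwind" configuration key:
string? conn = builder.Configuration.GetConnectionString("Northwind");
```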

    Avoid using secrets with Azure Managed Identities

    The easiest way to manage secrets is not to have them in the first place. Azure Managed Identities allows you to assign an Azure AD identity to your application and then allow it to use its identity to log in to other services. This avoids the need for any secrets to be stored.

    Pros:

    • Best solution for cloud (Azure) solutions
    • Enterprise grade
    • Access granted based on Azure AD permissions - no need to 'securely' share passwords with colleagues
    • Roles can be granted to your application your CI/CD pipelines at the time your services are deployed

    Cons:

    • Only works where Azure AD RBAC is available. NB. There are still some Azure services that don't yet support this. Most do though.

    GoodSettings

    Figure: Good Practice - Overall rating 10/10
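For example, here is a sketch of accessing Azure Storage with no secret at all, assuming the Azure.Identity and Azure.Storage.Blobs packages (the storage account name is a placeholder):

```csharp
using Azure.Identity;
using Azure.Storage.Blobs;

// No connection string or account key anywhere. The app authenticates with
// its own identity: a Managed Identity when running in Azure, or your own
// developer sign-in when running locally.
var blobService = new BlobServiceClient(
    new Uri("https://mystorageaccount.blob.core.windows.net"), // placeholder account
    new DefaultAzureCredential());
```

The app's identity is then granted a role such as "Storage Blob Data Contributor" via Azure RBAC, typically by the CI/CD pipeline at deployment time.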

    Resources

    The following resources show some concrete examples on how to apply the principles described:

  36. Do you share your developer secrets securely?

    Most systems will have variables that need to be stored securely; OpenId shared secret keys, connection strings, and API tokens to name a few.

    These secrets must not be stored in source control. It is not secure and means they are sitting out in the open, wherever code has been downloaded, for anyone to see.

    Do you store your secrets securely? shows different ways to store your secrets securely. When you use .NET User Secrets, you can store your secrets in a JSON file on your local machine. This is great for development, but how do you share those secrets securely with other developers in your organization?

    You may be asking: what counts as a secret in a development environment? A developer secret is any value that would be considered sensitive.

    An encryption key or a SQL connection string to a developer's local machine/container is a good example of something that is not usually sensitive in a development environment. A GitHub PAT or an Azure Storage SAS token, on the other hand, is sensitive because it grants access to company-owned resources outside the local development machine.

    Bad Examples

    Do not store secrets in appsettings.Development.json

    The appsettings.Development.json file is meant for storing development settings. It is not meant for storing secrets. This is a bad practice because it means that the secrets are stored in source control, which is not secure.

    development json

    Figure: Bad practice - Overall rating: 1/10

    Sharing secrets via email/Microsoft Teams

    Sending secrets over Microsoft Teams is a bad idea: the messages can end up in logs, and they are also stored in the chat history. Developers can delete the messages once they are copied out, but this extra admin adds friction to the process and is often forgotten.

    Note: Sending secrets via email is even worse. It leaves more traces that are harder to remove, making it probably the least secure way of transferring secrets.

    using microsoft teams for secrets

    Figure: Bad practice - Overall rating: 3/10

    Good Practices

    Even once you are using .NET User Secrets for development, you will still need to share those secrets with other developers on the project.

    user secrets
    Figure: User Secrets are stored outside the development folder

    As a way of giving a heads up to other developers on the project, you can add a step in your _docs\Instructions-Compile.md file (Do you make awesome documentation?) to inform developers to get a copy of the user secrets. You can also add a placeholder to the appsettings.Development.json file to remind developers to add the secrets.

    development json with placeholder
    Figure: Good Example - Remind developers where the secrets are for this project

    Use 1ty.me to share secrets securely

    Using a site like 1ty.me allows you to share secrets securely with other developers on the project.

    Pros:

    • Simple to share secrets
    • Free

    Cons:

    • Requires a developer to have a copy of the secrets.json file already
    • Developers need to remember to add placeholders for developer specific secrets before sharing
    • Access Control - Although the link is single use, there's no absolute guarantee that the person opening the link is authorized to do so

    1ty me

    Figure: Good Practice - Overall rating 8/10

    Use Azure Key Vault

    Azure Key Vault is a great way to store secrets securely. It is great for production environments, although for development purposes it means you would have to be online at all times.

    Pros:

    • Enterprise grade
    • Uses industry standard best encryption
    • Dynamically cycles secrets
    • Access Control - Access granted based on Azure AD permissions - no need to 'securely' share passwords with colleagues

    Cons:

    • Not able to configure developer specific secrets
    • No offline access
    • Tightly integrated into Azure so if you are running on another provider or on premises, this may be a concern
    • Authentication into Key Vault requires Azure service authentication, which isn't supported in every IDE

    Figure: Good Practice - Overall rating 8/10

    Use an Enterprise Secret Management tool

    Enterprise Secret Management tools are great for storing secrets for various systems across the whole organization, including developer secrets.

    Pros:

    • Developers don't need to call other developers to get secrets
    • Placeholders can be placed in the stored secrets
    • Access Control - Only developers who are authorized to access the secrets can do so

    Cons:

    • More complex to install and administer
    • Paid Service

    developer secrets in keeper

    Figure: Good Practice - Overall rating 10/10

    Tip: You can store the full secrets.json file contents in the enterprise secrets management tool.

    Most enterprise secret management tools can expose secrets via an API. With this, you could also store the UserSecretsId in a field and create a script that writes the secrets into the correct secrets.json file on your development machine.
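A rough sketch of the file-writing half of such a script, assuming the JSON payload has already been fetched from the tool's API (the UserSecretsId value and key name are placeholders):

```csharp
using System;
using System.IO;

// Windows stores secrets.json at:
//   %APPDATA%\Microsoft\UserSecrets\<UserSecretsId>\secrets.json
// (on Linux/macOS it is ~/.microsoft/usersecrets/<UserSecretsId>/secrets.json)
string userSecretsId = "00000000-placeholder-id";                  // stored in a field in the tool
string json = "{ \"GitHub:PatToken\": \"value-from-the-vault\" }"; // fetched via the tool's API

string dir = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
    "Microsoft", "UserSecrets", userSecretsId);

Directory.CreateDirectory(dir);
File.WriteAllText(Path.Combine(dir, "secrets.json"), json);
```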

  37. Do you highlight strings in your code editor?

    It is good practice to highlight string variables and constants in the Visual Studio code editor to make them stand out. Strings can then be found easily, especially in long source files.

    Default string appearance

    HighlightString good small
    Highlighted string appearance

    Tools | Options form of Visual Studio

  38. Do you use PowerShell to run batch files in Visual Studio?

    Windows Command Processor (cmd.exe) cannot run batch files (.bat) in Visual Studio because it does not take the files as arguments. One way to run batch files in Visual Studio is to use PowerShell.

    BadBatch small
    Bad example - Using Windows Command Processor (cmd.exe) for running batch files.

    goodbatch small
    Good example - Using PowerShell for running batch files

  39. Project setup - Do you make project setup as easy as possible?

    Developers understand the importance of the F5 experience. Sadly, lots of projects are missing key details that are needed to make setup easy.

    Let's look at the ways to optimize the experience. There are 4 levels of experience that can be delivered to new developers on a project:

    Level 1: Step by step documentation

    This is the most important milestone to reach because it contains the bare minimum to inform developers about how to run a project.

    The rule on awesome documentation teaches us all the documents needed for a project and how to structure them.

    The README.md and Instructions-Compile.md are the core documents that are essential for devs to get running on a project.

    ProjectDocumentationBadExample
    Bad example - A project without instructions

    ProjectDocumentationGoodExample
    Good example - A project with instructions

    ::: greybox
    Tip: In addition to pre-requisites, make sure to mention what isn't supported and any other problems that might come up.

    E.g. Problems to check for:

    • Windows 8 not supported
    • Latest backup of the database
    • npm version
    :::

    Tip: Don't forget about the database; your developers need to know how to work with it

    EFCoreMigrations
    Figure: Don't forget about the database!

    Level 2: Less documentation using a PowerShell script

    A perfect solution would need no static documentation. Perfect code would be so self-explanatory that it did not need comments. The same rule applies with instructions on how to get the solution compiling. A PowerShell script is the first step towards this nirvana.

    Note: You should be able to get latest and compile within 1 minute. Also, a developer machine should not have to be on the domain (to support external consultants)

    All manual workstation setup steps should be scripted with PowerShell, as per the below example:

    PS C:\Code\Northwind> .\Setup-Environment.ps1

    Problem: Azure environment variable run state directory is not configured _CSRUN_STATE_DIRECTORY.

    Problem: Azure Storage Service is not running. Launch the development fabric by starting the solution.

    WARNING: Abandoning remainder of script due to critical failures.

    To try and automatically resolve the problems found, re-run the script with a -Fix flag.

    Figure: Good example - A PowerShell script removes human error and identifies problems in the devs environment so they can be fixed

    Level 3: Less maintenance using Docker containerization

    docker logo
    Figure: Docker Logo

    PowerShell scripts are cool, but they can be difficult to maintain and they cannot account for all the differences within each developer's environment. This problem is exacerbated when a developer comes back to a project after a long time away.

    Docker can solve this problem and make the experience even better for your developers. Docker containerization helps to standardize development environments. By using Docker containers, developers won't need to worry about the technologies and versions installed on their devices. Everything will be set up for them at the click of a button.

    Learn more: Project setup - Do you use Docker to containerize your SQL Server environment?

    Level 4: More standardization using dev containers

    Dev containers take the whole idea of docker containerization to another level. By setting up a repo to have the right configuration, the dev team can be certain that every developer is going to get the exact same experience.

    Learn more: Project setup - Do you containerize your dev environment?

    HappyDevs
    Figure: Good example - After using dev containers you would be as happy as Larry!

  40. Do you always prefix SQL stored procedure names with the owner in ADO.NET code?

    Stored procedure names in code should always be prefixed with the owner (usually dbo). This is because if the owner is not specified, SQL Server will look for a procedure with that name for the currently logged on user first, creating a performance hit.

    SqlCommand sqlcmd = new SqlCommand();
    sqlcmd.CommandText = "proc_InsertCustomer";
    sqlcmd.CommandType = CommandType.StoredProcedure;
    sqlcmd.Connection = sqlcon;

    Bad example

    SqlCommand sqlcmd = new SqlCommand();
    sqlcmd.CommandText = "dbo.proc_InsertCustomer";
    sqlcmd.CommandType = CommandType.StoredProcedure;
    sqlcmd.Connection = sqlcon;

    Good example

    We have a program called SSW Code Auditor to check for this rule.

  41. Do you always make file paths @-quoted?

    In C#, backslashes in strings are special characters used to produce "escape sequences", for example \r\n creates a line break inside the string. This means that if you want to put a backslash in a string you must escape it by inserting two backslashes for every one, e.g. to represent C:\Temp\MyFile.txt you would use "C:\\Temp\\MyFile.txt". This makes file paths hard to read, and you can't copy and paste them out of the application.

    By inserting an @ character in front of the string, e.g. @"C:\Temp\MyFile.txt" , you can turn off escape sequences, making it behave like VB.NET. File paths should always be stored like this in strings.
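A quick illustration of the two forms:

```csharp
// Both variables contain the same path: C:\Temp\MyFile.txt
string escaped  = "C:\\Temp\\MyFile.txt"; // every backslash must be doubled
string verbatim = @"C:\Temp\MyFile.txt";  // @-quoted: reads exactly like the path
```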

    We have a program called SSW Code Auditor to check for this rule.
  42. Do you always use Option Explicit?

    Option Explicit should always be used in VB.NET.

    This will turn many of your potential runtime errors into compile time errors, thus saving you from potential time bombs!

    We have a program called SSW Code Auditor to check for this rule.
  43. Do you use Asynchronous method and CallBack when invoke web method?

    Web services and web method invocation become more and more popular as distributed systems are widely deployed. However, a normal synchronous call can be a disaster when applied to a web method, because transmitting data over the Internet may cause your program to hang for a couple of minutes.

    private static string LoadContentFromWeb(string strUri)
    {
        ...

        WebResponse response = request.GetResponse();

        ...
    }

    Figure: Bad example - Invoking a web method the normal way (this will hang your UI thread)

    The correct way to invoke a web method is to send the request asynchronously and read the response in a delegated callback method, as the code below shows:

    public static void GetOnlineVersionAsync(string strUri)
    {
        try
        {
            ...

            IAsyncResult r = request.BeginGetResponse(new AsyncCallback(ResCallBack), request);
        }
        catch (WebException ex)
        {
            Console.WriteLine(ex.ToString());
        }
    }

    private static void ResCallBack(IAsyncResult ar)
    {
        try
        {
            string content = string.Empty;

            WebRequest req = (WebRequest)ar.AsyncState;
            WebResponse response = req.EndGetResponse(ar);

            ...

            RaiseOnProductUpdateResult(content);
        }
        catch (WebException ex)
        {
            Console.WriteLine(ex.ToString());
            RaiseOnProductUpdateResult(string.Empty);
        }
    }

    Figure: Good example - Invoke web method by using asynchronous method and CallBack (UI thread will be free once the request has been sent)

    When working with Web Service, asynchronous methods will be automatically generated by your web services proxy.

    Figure: Automatically generated asynchronous methods
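Note: In modern .NET the same result is achieved far more simply with HttpClient and async/await. A minimal sketch (method and field names here are illustrative):

```csharp
private static readonly HttpClient Client = new HttpClient();

public static async Task<string> LoadContentFromWebAsync(string uri)
{
    try
    {
        // await frees the calling (UI) thread while the request is in flight
        return await Client.GetStringAsync(uri);
    }
    catch (HttpRequestException ex)
    {
        Console.WriteLine(ex.ToString());
        return string.Empty;
    }
}
```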

  44. Do You Create Different App.Config for Different Environment?

    Every application has different settings depending on the environment it is running in, e.g. production, testing or development. It is much easier and more efficient if an app.config is provided for each environment type, so the developer can just copy and paste the required app.config.

    AppConfigBad

    Figure: Bad Example - Only 1 App.config provided

    App config

    Figure: Good Example - Several App.configs are provided

  45. Do you make your projects regenerated easily?

    If your projects are generated by code generators (CodeSmith, RAD Software NextGeneration, etc.), you should make sure they can be regenerated easily.

    Code generators can be used to generate whole Windows and Web interfaces, as well as data access layers and frameworks for business layers, making them an excellent time saver. However, getting the code generators to generate your projects for the first time takes a lot of time and involves lots of configuration.

    In order to make it easier to do the generation next time, we recommend putting the command-line operations into a file called "_Regenerate.bat". Next time, just run the batch file and everything is regenerated in a blink.

    cs D:\DataDavidBian\Personal\New12345\NetTiers.csp

    Figure: An example CodeSmith command line for Northwind. A "_Regenerate.bat" file like this must exist in your projects (along with any other necessary resources).

    RegenerateBat
    Figure: Good - Have _Regenerate.bat in the solution

  46. Do you use comments not exclusion files when you ignore a Code Analysis rule?

    When running code analysis you may need to ignore some rules that aren't relevant to your application. Visual Studio has a handy way of doing this.

    code analysis bad example
    Figure: Bad example - Suppressing the Code Analysis rule without any justification

    code analysis good example
    Figure: Good example - Suppressing the rule in code with a Justification comment

    public partial class Account
    {
        [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Usage", "CA2214:DoNotCallOverridableMethodsInConstructors", Justification="Gold Plating")]
        public Account()
        {
            this.Centres = new HashSet<Centre>();
            this.AccountUsers = new HashSet<AccountUser>();
            this.Campaigns = new HashSet<Campaign>();
        }
    }

    Figure: Good example - The SuppressMessage attribute includes a Justification explaining why the rule is ignored

  47. C#/VB.NET Configuration - Do you know not to use debug compilation in production applications?

    Debug compilation considerably increases memory footprint since debug symbols are required to be loaded.

    It will also hurt performance, because the optional debug and trace statements are included in the output IL.

    In debug mode the compiler emits debug symbols for all variables and compiles the code as is. In release mode some optimizations are included:

    • unused variables do not get compiled at all
    • some loop variables are taken out of the loop by the compiler if they are proven to be invariants
    • code inside #if DEBUG directives is not included, etc.

    The rest is up to the JIT.

    As per: C# debug vs release performance.

    debug bad
    Figure: Bad Example

    debug good
    Figure: Good Example

    We have a program called SSW Code Auditor to check for this rule.

  48. Do you know BAK files must not exist?

    Finding a file with a BAK extension is a sure sign that your folders need a tidy-up.

    bak bad
    Figure: Bad example

    bak good
    Figure: Good example

    We have a program called SSW Code Auditor to check for this rule.

  49. Do you know zz'ed files must not exist in Source Control?

    Keeping your projects tidy says good things about the team's maturity. Therefore any files and folders that are prefixed with zz must be deleted from the project.

    zzed bad
    Figure: Bad example - Zz'ed files should not exist in Source Control

    zzed good
    Figure: Good example - No zz'ed files in Source Control

  50. Do you create your own Process Template to fit into your environment?

    The built-in Process Templates in TFS will not always fit into your environment, so you can fix it by creating your own.

    SSWAgile Baseline 1
    Figure: Good example - The "Baseline work (hours)" field was added to keep the original estimate

    SSWAgile Additional
    Figure: Good example - "Additional Task" was added to track scope creep

    SSWAgile URL
    Figure: Good example - The "URL" field has been added to allow reverse view from the web page

    SSWAgile RichText
    Figure: Good example - Rich text has been enabled in the "Description" field to allow users to enter better text for the requirement

    Note: The URL field is used in the SSW Smashing Barrier.

  51. Do you know the right methodology to choose (new project in VS 2012)?

    When you decide to use TFS 2012, you have the option to choose from different methodologies (aka. Process Templates).

    Choosing the right template to fit into your environment is very important.

    VSTS2010ProcessTemplates
    Figure: Built-in Process Templates in Visual Studio 2012 with TFS 2012

    It is recommended to use the top option, the Scrum one. If you think the built-in template is not going to fulfil your needs, customize it and create your own.

    If you want help customising your own Process Template, call a TFS guru at SSW on +61 2 9953 3000.

  52. Do you always say "Option Strict On"?

    One of the most annoying aspects of the Visual Basic development environment is Microsoft's decision to allow late binding. Because Option Strict is Off by default, many type-casting errors are not caught until runtime. You can make VB work the same as other MS languages (which always do strict type-checking at design time) by modifying the project templates.

    So, always set Option Strict On right from the beginning of the development.

    Before you do this, you should first back up the entire VBWizards directory. If you make a mistake, then the templates will not load in the VS environment. You need to be able to restore the default templates if your updates cause problems.

    To configure each template to default Option Strict to On rather than Off, load each .vbproj template with VB source code into an editor like Notepad and then change the XML that defines the template. For example, to do this for the Windows Application template, load the file: Windows Application\Templates\1033\WindowsApplication.vbproj
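The edit itself amounts to setting a couple of MSBuild properties in the template's main property group, along these lines (a sketch; surrounding XML omitted):

```xml
<PropertyGroup>
  <OptionExplicit>On</OptionExplicit>
  <OptionStrict>On</OptionStrict>
</PropertyGroup>
```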

    Technically, you do not have to add the Option Explicit directive, because this is the default for VB; but I like to do it for consistency. Next, you must save the file and close Notepad. Now, if you load a new Windows Application project in the VS environment and examine Project Properties, you will see that Option Strict has been turned on by default.

    Figure: Bad Example – Option Strict is Off

    Figure: Good Example – Option Strict is On

    In order for this setting to take effect for all project types, you must update each of the corresponding .vbproj templates. After making the changes on your system, you will need to deploy the new templates to each of your developers' machines in order for their new projects to derive from the updated templates.

    However, sometimes we don't do this because it is too much work. In some scenarios, such as wrappers around COM code or the Outlook object model, fixing all the type-checking errors takes a lot of work; it is sometimes necessary to use the Object type for parameters or variables when dealing with COM components.

  53. Do you keep your nuget packages small?

    When creating NuGet packages, it is better to create a few small packages instead of one monolithic package that combines several relatively independent features.

    When deciding to package your reusable code and publish it to NuGet, it is sometimes worth splitting it into a few smaller packages. This improves the maintainability and transparency of your packages, and makes them much easier to consume and contribute to.

    Let's assume you have created a set of libraries that add extra functionality to web applications. Some library classes work with both ASP.NET MVC and ASP.NET WebForms projects, some are specific to ASP.NET MVC and some are related to security. Each library may also have external dependencies on other NuGet packages. One way to package your libraries would be to create a single YourCompany.WebExtensions package and publish it to NuGet. Sounds like a great idea, but it has a number of issues. If someone only wants to use some MVC-specific classes from your package, they would still have to add the whole package, which pulls in external dependencies they will never use.

    A better approach would be to split your libraries into 3 separate packages: YourCompany.WebExtensions.Core, YourCompany.WebExtensions.MVC and YourCompany.WebExtensions.Security. YourCompany.WebExtensions.Core will only contain core libraries that can be used in both ASP.NET WebForms and MVC. YourCompany.WebExtensions.MVC will contain only MVC-specific code and will have a dependency on the Core package. YourCompany.WebExtensions.Security will only contain classes that are related to security. This gives consumers a choice as well as better transparency of the features you are offering. It also improves maintainability, as one team can work on one package while you are working on another; patches and enhancements can be introduced much more easily.

    package2 1710232021935
    Figure: Bad Example - One big library with lots of features, where most of them are obsolete with a release of ASP.NET MVC 5

    package
    Figure: Good Example - Lots of smaller self-contained packages, each with a single purpose

  54. Do you know how to track down permission problems?

    You need Process Monitor to track down permission problems.

    E.g. Problem

    Say the IIS server can't write to a directory, even after you have given permissions to the app pool account.

    Solution

    1. Install and run process monitor
    2. Apply filter
    3. Rejoice

    process monitor filter
    Figure: Apply filter to only show "ACCESS DENIED" results

    event properties
    Figure: And here we have the offending account

  55. Do you know the best criteria for evaluating 3rd party software?

    At SSW we evaluate and use a lot of 3rd party libraries. Before considering a library for further evaluation we ask the following questions:

    • Is it open source?
    • Is the licence LGPL, Apache, or MIT? Comparison of licences
    • Is there a quick start guide?
    • Is there a FAQ?
    • Is there an API guide?
    • Is it easy to install? Is there a NuGet package?
    • Is there an active user community?

    If the answer is yes to all of these questions then the library is definitely worth further evaluation.

  56. Do you know the best sample applications?

    Before starting a software project and evaluating a new technology, it is important to know what the best practices are. The easiest way to get up and running is by looking at a sample application. Below is a list of sample applications that we’ve curated and given our seal of approval.

    Northwind Schema

    northwind schema

    SQL Server

    SQL Server and Azure SQL Database

    .NET Core

    • SSW Clean Architecture Solution Template An example REST API built with .NET 7 following the principles of Clean Architecture.
    • SSW Northwind Traders A reference application built using Clean Architecture, Angular 8, EF Core 7, ASP.NET Core 7, Duende Identity Server 6.
    • eShopOnWeb Sample ASP.NET Core 6.0 reference application, powered by Microsoft, demonstrating a layered application architecture with monolithic deployment model. Download the eBook PDF from docs folder.
    • eShopOnContainers Cross-platform .NET sample microservices and container based application that runs on Linux Windows and macOS. Powered by .NET 7, Docker Containers and Azure Kubernetes Services. Supports Visual Studio, VS for Mac and CLI based environments with Docker CLI, dotnet CLI, VS Code or any other code editor.
    • ContosoUniversity This application takes the traditional Contoso University sample applications (of which there have been many) and tries to adapt them to how our "normal" ASP.NET applications are built.

    Blazor

    UI - Angular

    • Tour of Heroes Default Angular sample app as part of the documentation
    • ngrx Example App Example application utilizing @ngrx libraries, showcasing common patterns and best practices

    UI - React

  57. Do you reference "most" .dlls by Project?

    When you obtain a 3rd party .dll (in-house or external), you sometimes get the code too. So should you:

    • reference the Project (aka including the source) or
    • reference the assembly?

    When you face a bug, there are 2 types of emails you can send:

    1. "Dan, I get this error calling your Registration.dll?" or
    2. "Dan, I get this error calling your Registration.dll and I have investigated it. As per our conversation, I have changed this xxx to this xxx."

    The 2nd option is preferable. The simple rule is:

    • If there are no bugs then reference the assembly, and
    • If there are bugs in the project (or any project it references [See note below]) then reference the project.

    Since most applications have bugs, most of the time you should use the second option.

    If it is a well tested component and it is not changing constantly, then use the first option.

    1. Add the project to solution (if it is not in the solution).
      Add existing project
      Figure: Add existing project
    2. Select the "References" folder of the project you want to add references to, right click and select "Add Reference...".
      Add reference
      Figure: Add reference
    3. Select the projects to add as references and click OK.
      Select projects to reference
      Figure: Select the projects to add as references

    Note: We have run into a situation where we reference a stable project A, and an unstable project B. Project A references project B. Each time project B is built, project A needs to be rebuilt.

    Now, if we reference stable Project A by dll and unstable Project B by project according to this standard, then we might face referencing issues: Project A will look for another version of Project B (the one it was built against, rather than the current build), which will cause Project A to fail.

    To overcome this issue, we then reference by project rather than by assembly, even though Project A is a stable project. This will mitigate any referencing errors.
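
    In modern SDK-style projects the same choice shows up directly in the .csproj file. As a sketch (the project and dll paths here are hypothetical), the two options look like this:

    <!-- Option 1: reference the project (source) - preferred while the component still has bugs -->
    <ItemGroup>
      <ProjectReference Include="..\SSW.Registration\SSW.Registration.csproj" />
    </ItemGroup>

    <!-- Option 2: reference the assembly - only for well-tested, stable components -->
    <ItemGroup>
      <Reference Include="SSW.Registration">
        <HintPath>..\References\SSW.Registration.dll</HintPath>
      </Reference>
    </ItemGroup>

    With a ProjectReference, the referenced project is rebuilt with your solution, so you can step into its code and fix bugs; with an assembly Reference, you always get exactly the dll at the HintPath.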

  58. Do you reference "very calm/stable" .dlls by Assembly?

    If we lived in a happy world with no bugs, I would recommend this approach of using shared components from SourceSafe. As per the prior rule, you can see we like to reference "most" .dlls by project. However, if you do choose to reference a .dll without the source, the important thing is that if the .dll gets updated by another developer, there is *nothing* for the other developers to do: they get the latest version the next time they build. Therefore you need to follow this:

    As the component user, there are seven steps, but you only need to do them once:

    1. First, we need to get the folder and add it to our project. In SourceSafe, right-click your project and create a subfolder using the Create Project menu (yes, it is a very silly name).
      use createvssfolder
      Figure: Create 'folder' in Visual SourceSafe and name it 'References'
      use referencesfolder
      Figure: 'References' folder
    2. Share the dll from the directory, so if I want SSW.Framework.Configuration, I go to $/ssw/SSWFramework/Configuration/bin/Release/

      I select both the dll and the dll.xml files, right-click and drag them into my $/ssw/zzRefs/References/ folder that I just created in step 1.

      use dllsxml
      Figure: Select the dlls that I want to use
      use rightclicktoshare
      Figure: Right drag, and select "Share"

    3. Still in SourceSafe, select the References folder and run 'Get Latest' to copy the latest version onto your working directory.
      use getlatest
      Figure: Get Latest from Visual SourceSafe
      Note: VSS may ask you if you want to create the folder if it doesn't exist. Yes, we do.
    4. Back in VS.NET, select the project and click the show-all-files button in the Solution Explorer, then include the References folder into the project (or get latest if it's already there).
      use includeinvs
      Figure: Include the files into the current project
    5. IMPORTANT! If the files are checked-out to you when you include them into your project, you MUST un-do checkout immediately.

      You should never check in these files, they are for get-latest only.

      use undocheckout
      Figure: Undo Checkout, when VS.NET checked them out for you...

    6. Use 'Add Reference...' in VS.NET, browse to the 'References' subfolder and use the dll there.
    7. IMPORTANT! You need to keep your 'References' folder, and not check the files directly into your bin directory. Otherwise when you 'get latest', you won't be able to get the latest shared component.

    All done. In the future, whenever you do a get-latest on the project, any updated dlls will come down and be linked the next time you compile. Also, if anyone checks out your project from SourceSafe, they will have the project linked and ready to go.

  59. Do you use a Project Portal for your team and client?

    When a new developer joins a project, there is often a sea of information that they need to learn right away to be productive. This includes things like who the Product Owner and Scrum Master are, where the backlog is, where staging and production environments are, etc.

    Make it easy for the new developer by putting all this information in a central location like the Visual Studio dashboard.

    Note: As of October 2021, this feature is missing in GitHub Projects.


    2016 06 06 8 00 55
    Figure: Bad example - Don't stick with the default dashboard, it's almost useless

    2016 06 06 9 15 14
    Figure: Good example - This dashboard contains all the information a new team member would need to get started

    The dashboard should contain:

    1. Who the Product Owner is and who the Scrum Master is
    2. The Definition of Ready and the Definition of Done
    3. When the daily standups occur and when the next Sprint Review is scheduled
    4. The current Sprint backlog
    5. Show the current build status
    6. Show links to:

      • Staging environment
      • Production environment
      • Any other external service used by the project e.g. Octopus Deploy, Application Insights, RayGun, Elmah, Slack

    Your solution should also contain the standard _Instructions.docx file for additional details on getting the project up and running in Visual Studio.

    For particularly large and complex projects, you can use an induction tool like SugarLearning to create a course for getting up to speed with the project.

    2016 06 06 7 18 43
    Figure: SugarLearning induction tool

  60. Do you avoid Trace.Fail and AssertUIEnabled="true" in your web.config?

    Have you ever seen dialogs raised on the server-side? These dialogs hang the thread they are on, and hang IIS until they are dismissed. They are caused by calling Trace.Fail or leaving AssertUIEnabled="true" in your web.config.

    See Scott's blog Preventing Dialogs on the Server-Side in ASP.NET or Trace.Fail considered Harmful.

    public static void ExceptionFunc(string strException)
    {
        System.Diagnostics.Trace.Fail(strException);
    }

    Figure: Never use Trace.Fail

    <configuration>
       <system.diagnostics>
          <assert AssertUIEnabled="true" logfilename="c:\log.txt" />
       </system.diagnostics>
    </configuration>

    Figure: Never set AssertUIEnabled="true" in web.config

    <configuration>
       <system.diagnostics>
          <assert AssertUIEnabled="false" logfilename="c:\log.txt" />
       </system.diagnostics>
    </configuration>

    Figure: Set AssertUIEnabled="false" in web.config

  61. Project setup - Do you containerize your dev environment?

    Developers love the feeling of getting a new project going for the first time. Unfortunately, the process of making things work is often a painful experience. Every developer has a different setup on their PC so it is common for a project to require slightly different steps.

    The old proverb is "Works on my machine!"

    Luckily, there is a way to make all development environments 100% consistent.

    Video: Dev Containers from Microsoft (was Remote Containers) with Piers Sinclair (5 min)

    Dev Containers let you define all the tools needed for a project in a programmatic manner. That gives 3 key benefits:

    ✅ Consistent isolated environments

    ✅ Pre-installed tools with correct settings

    ✅ Quicker setup (hours becomes minutes)

    How do I set it up?

    Microsoft has a great tutorial and documentation on how to get it running.

    How does it work?

    Dev Containers are set up with a few files in the repo (typically a .devcontainer/devcontainer.json, and optionally a Dockerfile or docker-compose.yml).

    These files define an image to use, tools to install and settings to configure.

    Once those files are configured, you can simply run a command to get it running.
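    As a minimal sketch, a .devcontainer/devcontainer.json for a .NET project might look like this (the image tag, extension list and post-create command are illustrative, not prescriptive):

    {
      "name": ".NET Dev Container",
      "image": "mcr.microsoft.com/devcontainers/dotnet:8.0",
      "customizations": {
        "vscode": {
          "extensions": ["ms-dotnettools.csdevkit"]
        }
      },
      "postCreateCommand": "dotnet restore"
    }

    In VS Code, run the "Dev Containers: Reopen in Container" command to build the image and drop your editor inside the fully configured environment.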

    Where to run it - locally or in the cloud?

    There are 2 places that Dev Containers can be run: locally, or in the cloud (e.g. GitHub Codespaces).

    Locally works awesome if you have a powerful PC. However, sometimes you might need to give an environment to people who don't have a powerful PC, or you might want people to develop on an iPad. In that case, it's time to take advantage of the cloud.

    ⚠️ Warning - Supported Tools

    The following tools are not supported yet

    NervousDevs
    Figure: Bad example - Before using Dev Containers you would be missing a lot of pre-requisites!

    HappyDevs 1710232021932
    Figure: Good example - After using Dev Containers you would be as happy as Larry!

    If you have a reason for not doing all of this, you should at least containerize your SQL Server environment.

  62. Project setup - Do you use Docker to containerize your SQL Server environment?

    Often, developers jump onto a new project only to realize they can't get the SQL Server instance running, or the SQL Server setup doesn't work with their machine.

    Even if they are able to install SQL Server, developers have a better option with a smaller footprint on their dev machine. Containers give them the ability to work on multiple projects with different clients. In a word "Isolation" baby!

    Using Docker to run SQL Server in a container resolves common problems and provides numerous benefits:

    Video: Run SQL Server in Docker! (5 min)

    In the video, Jeff walks through how and why to run SQL in a container. However, you should not use the Docker image he chose to use in the video.

    For SQL Server with Docker you have a couple of choices:

    • Azure-Sql-Edge - mcr.microsoft.com/azure-sql-edge (recommended)
    • Microsoft SQL Server - mcr.microsoft.com/mssql/server

    Warning: If you have an ARM chip, the Docker image in the video is not for you. Instead, use Azure SQL Edge.
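    As a sketch, a docker-compose.yml using the recommended Azure SQL Edge image might look like this (the SA password is a placeholder - keep your real one out of source control):

    services:
      sql:
        image: mcr.microsoft.com/azure-sql-edge
        environment:
          ACCEPT_EULA: "1"
          MSSQL_SA_PASSWORD: "yourStrong(!)Password"
        ports:
          - "1433:1433"
        volumes:
          # Named volume so your databases survive container restarts
          - sql-data:/var/opt/mssql
    volumes:
      sql-data:

    Run docker compose up -d, then connect from your app or SSMS with Server=localhost,1433. To get a fresh database for testing, tear it down with docker compose down -v and bring it up again.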

    Benefits

    Isolation: Docker enables you to create separate networks with SQL Server and control access, allowing for multiple instances on a single PC. More importantly, if you are a consultant working on different projects, you need this

    Fast to get Ready to Run (without installing): Docker eliminates the need for repetitive and mundane configuration tasks, speeding up your SQL Server setup. This is especially beneficial for a CI/CD pipeline

    Testing Flexibility: Docker allows for testing against different versions of SQL Server simply by changing an image tag or SQL Server type in the environment variable

    Resetting for Testing: The contents of the image are immutable meaning that it is easy to remove the container, and spin up a new one with the original state. In short, Docker provides the ability to easily reset all changes for fresh testing scenarios

    Transparent Configuration: Docker provides clear and explicit configuration steps in the Dockerfile and docker-compose.yml

    Cross-Platform: These days developers in a team have different Operating Systems. The Docker engine runs on many operating systems, making it ideal for diverse development environments

    runningsqllocally
    Figure: Bad example - Running a SQL Server environment outside a container

    dockersql
    Figure: Good example - Using Docker to containerize a SQL Server environment

    Now you've done this, the next step is to containerize the entire project.

  63. Do you use Minimal APIs over Controllers?

    Traditional controllers require a lot of boilerplate code to set up and configure. Most of the time your endpoints will be simple and just point to a mediator handler.

    Minimal APIs are a simplified approach for building fast HTTP APIs with ASP.NET Core. You can build fully functioning REST endpoints with minimal code and configuration. Skip traditional scaffolding and avoid unnecessary controllers by fluently declaring API routes and actions.

    Check out the Microsoft Docs for more information on Minimal APIs.

    [ApiController]
    [Route("[controller]")]
    public class HelloWorldController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get()
        {
            return Ok("Hello World!");
        }
    }

    Figure: Bad Example - 9 lines of code for a simple endpoint

    app.MapGet("/", () => "Hello World!");

    Figure: Good Example - 1 line of code for a simple endpoint
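    Since most endpoints just delegate to a handler, a Minimal API route can stay close to a one-liner even with a mediator in the mix. A sketch, assuming MediatR is installed and a hypothetical GetHelloQuery and its handler exist elsewhere in the solution:

    ```csharp
    // Program.cs - sketch only; GetHelloQuery is a hypothetical request type
    using MediatR;

    var builder = WebApplication.CreateBuilder(args);

    // Register MediatR handlers from this assembly
    builder.Services.AddMediatR(cfg =>
        cfg.RegisterServicesFromAssemblyContaining<Program>());

    var app = builder.Build();

    // The endpoint only dispatches to the handler - no controller boilerplate
    app.MapGet("/hello", async (IMediator mediator) =>
        Results.Ok(await mediator.Send(new GetHelloQuery())));

    app.Run();
    ```

    The endpoint declaration, its route and its dependencies are all visible in one place, which also suits a Vertical Slice Architecture where each feature owns its own endpoint mapping.
    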

    Minimal APIs are great for:

    • Learning
    • Quick prototypes
    • Vertical Slice Architecture
    • A similar developer experience to NodeJS
    • Performance