Secret ingredients to quality software


Rules to Better .NET Projects - 64 Rules

If you still need help, visit ASP.NET MVC Web Application Development and book in a consultant.

  1. When developing software, we implement a dependency-injection-centric architecture.

    dependency injection structure
    Figure: A Dependency Injection based architecture gives us great maintainability

    solution structure
    Figure: Good Example - The Solution and Projects are named consistently and the Solution Folders organize the projects so that they follow the Onion Architecture

    Dependencies and the application core are clearly separated as per the Onion Architecture.

    In the above example you can clearly see:

    Common Library projects are named [Company].[AssemblyName].
    E.g. BCE.Logging is a shared project between all solutions at company BCE.

    Other projects are named [Company].[Solution Name].[AssemblyName].
    E.g. BCE.Sparrow.Business is the Business layer assembly for company ‘BCE’, solution ‘Sparrow’.

    We have separated the unit tests, one for each project, for several reasons:

    • It provides a clear separation of concerns and allows each component to be individually tested
    • The different libraries can be used on other projects with confidence, as there is a set of tests around them
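    The dependency-injection approach above can be sketched as below. This is a minimal illustration only, assuming hypothetical ILogger and OrderService types - not actual classes from the solution shown:

    ```csharp
    // Minimal constructor-injection sketch (hypothetical types).
    public interface ILogger
    {
        void Log(string message);
    }

    public class FileLogger : ILogger
    {
        public void Log(string message) { /* write to a log file */ }
    }

    // The business class depends on the abstraction, not the implementation,
    // so it can be unit tested in isolation with a fake ILogger.
    public class OrderService
    {
        private readonly ILogger _logger;

        public OrderService(ILogger logger)
        {
            _logger = logger;
        }

        public void PlaceOrder()
        {
            _logger.Log("Order placed");
        }
    }
    ```

    Because OrderService only knows about ILogger, the concrete FileLogger can live in an outer layer of the onion and be supplied at application startup.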
  2. Do you name your startup form consistently?

    In every Windows application project, we need a main, start-up or wizard page form for better structure and design.

    Bad Project without Main Form
    Bad example - The entry form is not immediately recognizable because of a non-standard name
    Good with Main Form
    Good example - The entry form follows the naming convention rule

    We have a program called SSW Code Auditor to check for this rule.

    Note: In Code Auditor we check for Forms named: Startup, MainForm and WizardPage.
  3. All the DLL references and files needed to create a setup.exe should be included in your solution. However, just including them as solution items is not enough - they will look very disordered (especially when you have a lot of solution items). And from the screenshot below, you might be wondering what the _Instructions.docx is used for...

    SSW Rules NET Projects Bad Solution

    Bad example - An unstructured solution folder

    An ideal way is to create "sub-solution folders" for the solution items; the common ones are "References" and "Setup". This will make your solution items look neat and in order. Look at the screenshot below - now it makes sense, because we know that the _Instructions.docx contains instructions for creating the setup.exe.

    SSW Rules NET Projects Good Solution

    Good example - A well structured solution folder has 2 folders - "References" and "Setup"

    We have a program called SSW Code Auditor to check for this rule.
  4. When programming in a .NET environment, it is good practice to remove the default imports that aren't used frequently in your code.

    This is because IntelliSense lists become harder to use and navigate when there are too many imports. Keep only the namespaces you use widely - for example, in VB.NET, Microsoft.VisualBasic is a good item to keep in the imports list, because it is used in most areas of your application.

    To remove all the default imports, load Project Property page and select Common properties - Imports.

    Imports VB
    Figure: Using aliases with the Imports statement

    The Imports statement makes it easier to access methods of classes by eliminating the need to explicitly type fully qualified names. Aliases let you assign a friendlier name to just one part of a namespace.

    For example, the carriage return-line feed sequence that causes a single piece of text to be displayed on multiple lines is part of the ControlChars class in the Microsoft.VisualBasic namespace. To use this constant in a program without an alias, you would need to type the following code:

    MsgBox("Some text" & Microsoft.VisualBasic.ControlChars.CrLf _
        & "Some more text")

    Imports statements must always be the first lines immediately following any Option statements in a module. The following code fragment shows how to import and assign an alias to the Microsoft.VisualBasic.ControlChars namespace:

    Imports CtrlChrs=Microsoft.VisualBasic.ControlChars

    Future references to this namespace can be considerably shorter:

    MsgBox("Some text" & CtrlChrs.CrLf & "Some more text")

    If an Imports statement does not include an alias name, elements defined within the imported namespace can be used in the module without qualification. If the alias name is specified, it must be used as a qualifier for names contained within that namespace.

  5. The designer should be used for all GUI design. Controls will be dragged and dropped onto the form and all properties should be set in the designer, e.g.

    • Labels, TextBoxes and other visual elements
    • ErrorProviders
    • DataSets (to allow data binding in the designer)

    Things that do not belong in the designer:

    • Connections
    • Commands
    • DataAdapters

    Connection, Command and DataAdapter objects should not be dragged onto forms, as they belong in the business tier. Strongly typed DataSet objects should be in the designer, as they are simply passed to the business layer. Avoid writing code for properties that can be set in the designer.

    Figure: Bad example - Connection and Command objects in the Designer

    Good example - Only visual elements in the designer

  6. There are many ways to reference images in ASP.NET. There are two different situations commonly encountered by developers when working with images:

    • Scenario #1: Images that are part of the content of a specific page, e.g. a picture used only on one page
    • Scenario #2: Images that are shared across different pages in a site, e.g. a shared logo used across the site (commonly in user controls or master pages)

    Each of these situations requires a different referencing method.

    Option #1: Absolute Paths (Root-Relative Paths)

    Often developers reference all images by using an absolute path (prefixing the path with a slash, which refers to the root of the site), as shown below.

    <img src="/Images/spacer.gif" height="1" width="1">

    Bad example - Referencing images with absolute paths

    This has the advantage that <img> tags can easily be copied between pages. However, it should not be used in either situation, because it requires that the website has its own IIS site and is placed at the root (not just as an application), or that the entire site is in a subfolder on the production web server. For example, the following combinations of URLs are possible with this approach:

    Staging Server URL | Production Server URL

    As shown above, this approach makes the URLs on the staging server hard to remember, or increases the length of URLs on the production web server.

    Verdict for Scenario #1: Bad - should not be used

    Verdict for Scenario #2: Bad - should not be used

    Option #2: Relative Paths

    Images that are part of the content of a page should be referenced using relative paths, e.g.

    <img src="../Images/spacer.gif" height="1" width="1">

    Good example - Referencing images with relative paths

    However, this approach is not possible with images on user controls, because the relative paths will map to the wrong location if the user control is in a different folder to the page.

    Verdict for Scenario #1: Good - use relative paths for page-specific images

    Verdict for Scenario #2: Bad - relative paths break in user controls

    Option #3: Application-Relative Paths

    In order to simplify URLs, ASP.NET introduced a new feature, application relative paths. By placing a tilde (~) in front of a path, a URL can refer to the root of a site, not just the root of the web server. However, this only works on Server Controls (controls with a runat="server" attribute).

    To use this feature, you need either use ASP.NET Server controls or HTML Server controls, as shown below.

    <asp:Image ID="spacerImage" ImageUrl="~/Images/spacer.gif" Runat="server" />
    <img id="spacerImage" src="~/Images/spacer.gif" runat="server">

    Good example - Application-relative paths with an ASP.NET Server control

    Using an HTML Server control creates less overhead than an ASP.NET Server control, but the control does not dynamically adapt its rendering to the user's browser, or provide such a rich set of server-side features.

    Verdict for Scenario #1: OK - but adds server-control overhead compared to plain relative paths

    Verdict for Scenario #2: Good - use application-relative paths on server controls

    Note: A variation on this approach involves calling the Page.ResolveUrl method with inline code to place the correct path in a non-server tag.

    <img src='<%# Page.ResolveUrl("~/Images/spacer.gif") %>'>

    Bad example - Page.ResolveUrl method with a non-server tag

    This approach is not recommended, because the data binding will create overhead and affect caching of the page. The inline code is also ugly and does not get compiled, making it easy to accidentally introduce syntax errors.

  7. The Microsoft.VisualBasic library is provided to ease the implementation of the VB.NET language itself. It provides methods familiar to VB developers and can be seen as a helper library. It is a core part of the .NET redistributable and maps common VB syntax to framework equivalents; without it, some code may seem foreign to VB programmers.

    Microsoft.VisualBasic  →  .NET Framework
    CInt, CStr             →  Convert.ToInt32(...), .ToString()
    vbCrLf                 →  Environment.NewLine, or "\r\n"
  8. This is where you should focus your efforts on eliminating whatever VB6 baggage your programs or developer habits may carry forward into VB.NET. There are better framework options for performing the same functions provided by the compatibility library. You should heed this warning from the VS.NET help file: Caution: It is not recommended that you use the VisualBasic.Compatibility namespace for new development in Visual Basic .NET. This namespace may not be supported in future versions of Visual Basic. Use equivalent functions or objects from other .NET namespaces instead.

    Areas covered by the compatibility library include:

    • InputBox
    • ControlArray
    • ADO support in Microsoft.VisualBasic.Compatibility.Data
    • Environment functions
    • Font conversions
  9. As we do more and more .NET projects, we discover that we are re-doing a lot of things we've done in other projects. How do I get a value from the config file? How do I write it back? How do I handle all my uncaught exceptions globally, and what do I do with them?

    Corresponding with Microsoft's release of their application blocks, we've also started to build components and share them across projects.

    Sharing a binary file with SourceSafe isn't a breeze, so here are the steps you need to take. It can be a bit daunting at first.

    As the component developer, there are five steps:

    1. In Visual Studio.NET, Switch to release build
      build release
      Build Release
      Figure: Switch to release configuration
    2. In your project properties, make sure the release configuration outputs to the bin\Release folder. While you are here, also make sure XML docs are generated. Use the same name as your dll but change the extension to .xml (e.g. for SSW.Framework.Configuration.dll, add SSW.Framework.Configuration.xml)
      build projectproperty small
      Build Project Property
      Figure: Project properties

      Note: The following examples assume C#. Visual Basic, by default, does not have \bin\Release and \bin\Debug, which means the debug and release builds will overwrite each other unless the default settings are changed to match C# (recommended). VB does not support XML comments either; please wait for the next release of Visual Studio (Whidbey).
      Change to C#
      Figure: Force change to match C#
    3. If this is the first time, include/check-in the release directory into your SourceSafe
      build include
      Build Include
      Figure: Include the bin\Release directory into source safe
    4. Make sure everything is checked in properly. When you build new versions, switch to Release mode, check out the release dlls, overwrite them, and when you check them back in they will be the new dlls shared by other applications.
    5. If the component is part of a set of components located in one solution, with dependencies between them, you need to check out ALL the bin\Release folders for all projects in that solution, do a build, and then check them all in. This ensures dependencies between these components don't conflict with projects that reference the component set. In other words, a set of components should increment versions AS A WHOLE - a change to one component in the set causes the whole set to re-establish internal references with each other.
  10. In VSTS 2005/2008, the MS Project integration was very bad - you could not publish your hierarchies with your work items. In VSTS 2010, this was fixed. With native support for hierarchical work items in TFS 2012, all of your work in MS Project will be published to TFS 2012.

    VSTS2010 MSProject
    Figure: VSTS 2010 has better MS Project integration support - you can publish your hierarchies to TFS now

  11. You should use the SharePoint portal in VSTS2012 because it provides you dashboards to monitor your projects as well as quick access to a lot of reports. You are able to create and edit work items via the portal as well.

    VS2012 SharePointPortal
    Figure: SharePoint portal in VSTS 2012

  12. Do you keep your Assembly Version Consistent?

    VersionConsistent1
    Figure: Keep these versions consistent

    If you are not using the GAC, it is important to keep AssemblyVersion, AssemblyFileVersion and AssemblyInformationalVersion the same; otherwise it can lead to support and maintenance nightmares. By default these version values are defined in the AssemblyInfo file. In the following examples, the first line is the version of the assembly, and the second line is the actual version displayed in file properties.

    [assembly: AssemblyVersion("2.0.0.*")]
     [assembly: AssemblyFileVersion("2.0.0.*")]
     [assembly: AssemblyInformationalVersion("2.0.0.*")]

    Bad example - AssemblyFileVersion and AssemblyInformationalVersion don't support the asterisk (*) character

    If you use an asterisk in the AssemblyVersion, the version will be generated as described in the MSDN documentation. If you use an asterisk in the AssemblyFileVersion, you will see a warning, and the asterisk will be replaced with zeroes. If you use an asterisk in the AssemblyInformationalVersion, the asterisk will be stored as-is, because this version property is stored as a string.

    AssemblyFileVersion Warning
    Figure: Warning when you use an asterisk in the AssemblyFileVersion

    [assembly: AssemblyVersion("2.0.*")]
     [assembly: AssemblyFileVersion("")]
     [assembly: AssemblyInformationalVersion("2.0")]

    Good example - MSBuild will automatically set the Assembly version on build (when not using the GAC)

    Having MSBuild or Visual Studio automatically set the AssemblyVersion on build can be useful if you don't have a build server configured.

    If you are using the GAC, you should adopt a single AssemblyVersion and AssemblyInformationalVersion and update the AssemblyFileVersion with each build.

    [assembly: AssemblyVersion("")]
     [assembly: AssemblyFileVersion("")]
     [assembly: AssemblyInformationalVersion("My Product 2015 Professional")]

    Good example - the best way for Assembly versioning (when using the GAC)

    If you're working with SharePoint farm solutions (2007, 2010, or 2013), in most circumstances the assemblies in your SharePoint WSPs will be deployed to the GAC. For this reason development is much easier if you don't change your AssemblyVersion, and increment your AssemblyFileVersion instead.

    The AssemblyInformationalVersion stores the product name as marketed to consumers. For example, for Microsoft Office this would be "Microsoft Office 2013", while the AssemblyVersion stays fixed and the AssemblyFileVersion is incremented as patches and updates are released.

    Note: It would be good if Microsoft changed the default behaviour of AssemblyInformationalVersionAttribute to default to the AssemblyVersion. See Mike's suggestion for improving the version number in the comments here.
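
    For the GAC scenario, the versioning pattern can be sketched as below. This is an illustrative fragment only; the version numbers are hypothetical examples, not values from the rule:

    ```csharp
    // AssemblyInfo.cs - sketch only; the version numbers below are hypothetical.
    using System.Reflection;

    // AssemblyVersion stays fixed so GAC references don't break between builds.
    [assembly: AssemblyVersion("2.0.0.0")]
    // AssemblyFileVersion is incremented with every build/patch.
    [assembly: AssemblyFileVersion("2.0.150.0")]
    // AssemblyInformationalVersion carries the marketing name.
    [assembly: AssemblyInformationalVersion("My Product 2015 Professional")]
    ```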

  13. How do you get a setting from a configuration file? What do you do when you want to get a setting from the registry, or a database? Everyone faces these problems, and most people come up with their own solution. We used to have a few different standards, but when Microsoft released the Configuration Application Block, we found that extending it and using it in all our projects saves us a lot of time! Use a local configuration file for machine- and/or user-specific settings (such as a connection string), and use a database for any shared values, such as tax rates.

    See how we configured this reset default settings functionality with the Configuration Block in the .NET Toolkit

  14. In almost every application we have a user settings file to store the state of the application. We want to be able to reset the settings if anything goes wrong.

    See how we configured this reset default settings functionality with the Configuration Block in the .NET Toolkit

  15. Do you version your .xml files?

    It is good to store program settings in an .xml file. But developers rarely worry about future schema changes, or how they will tell the user that a file uses an old schema.

    What is wrong with this?

    <?xml version="1.0" standalone="yes"?>
    <NewDataSet>
      <xs:schema id="NewDataSet" xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema"
                 xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
        <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:Locale="en-AU">
          <xs:complexType>
            <xs:choice maxOccurs="unbounded">
              <xs:element name="Table1">
                <xs:complexType>
                  <xs:sequence>
                    <xs:element name="DateUpdated" type="xs:dateTime" minOccurs="0" />
                    <xs:element name="NewDatabase" type="xs:boolean" minOccurs="0" />
                    <xs:element name="ConnectionString" type="xs:string" minOccurs="0" />
                    <xs:element name="SQLFilePath" type="xs:string" minOccurs="0" />
                    <xs:element name="TimeOut" type="xs:int" minOccurs="0" />
                    <xs:element name="TurnOnMSDE" type="xs:boolean" minOccurs="0" />
                    <xs:element name="KeepXMLRecords" type="xs:boolean" minOccurs="0" />
                    <xs:element name="UserMode" type="xs:boolean" minOccurs="0" />
                    <xs:element name="ReconcileScriptsMode" type="xs:boolean" minOccurs="0" />
                    <xs:element name="FolderPath" type="xs:string" minOccurs="0" />
                    <xs:element name="SelectedFile" type="xs:string" minOccurs="0" />
                    <xs:element name="UpdateVersionTable" type="xs:boolean" minOccurs="0" />
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
            </xs:choice>
          </xs:complexType>
        </xs:element>
      </xs:schema>
      <Table1>
        <ConnectionString>Provider=SQLOLEDB.1;Integrated Security=SSPI;
    Persist Security Info=False;
    Data Source=(local);Initial Catalog=master</ConnectionString>
        <FolderPath>C:\Program Files\SSW SQL Deploy\Samples\DatabaseSQLScripts\</FolderPath>
        <SelectedFile />
      </Table1>
    </NewDataSet>

    Bad example - XML file without version control

    <?xml version="1.0" standalone="yes"?>
    <NewDataSet>
      <xs:schema id="NewDataSet" xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema"
                 xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
        <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:Locale="en-AU">
          <xs:complexType>
            <xs:choice maxOccurs="unbounded">
              <xs:element name="Table1">
                <xs:complexType>
                  <xs:sequence>
                    <xs:element name="Version" type="xs:string" minOccurs="0" />
                    <xs:element name="DateUpdated" type="xs:dateTime" minOccurs="0" />
                    <xs:element name="NewDatabase" type="xs:boolean" minOccurs="0" />
                    <xs:element name="ConnectionString" type="xs:string" minOccurs="0" />
                    <xs:element name="SQLFilePath" type="xs:string" minOccurs="0" />
                    <xs:element name="TimeOut" type="xs:int" minOccurs="0" />
                    <xs:element name="TurnOnMSDE" type="xs:boolean" minOccurs="0" />
                    <xs:element name="KeepXMLRecords" type="xs:boolean" minOccurs="0" />
                    <xs:element name="UserMode" type="xs:boolean" minOccurs="0" />
                    <xs:element name="ReconcileScriptsMode" type="xs:boolean" minOccurs="0" />
                    <xs:element name="FolderPath" type="xs:string" minOccurs="0" />
                    <xs:element name="SelectedFile" type="xs:string" minOccurs="0" />
                    <xs:element name="UpdateVersionTable" type="xs:boolean" minOccurs="0" />
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
            </xs:choice>
          </xs:complexType>
        </xs:element>
      </xs:schema>
      <Table1>
        <ConnectionString>Provider=SQLOLEDB.1;Integrated Security=SSPI;
    Persist Security Info=False;
    Data Source=(local);Initial Catalog=master</ConnectionString>
        <FolderPath>C:\Program Files\SSW SQL Deploy\Samples\DatabaseSQLScripts\</FolderPath>
        <SelectedFile />
      </Table1>
    </NewDataSet>

    Good example - XML file with version control

    The Version element identifies what version the file is. This version should be hard-coded into the application. Every time you change the format of the file, you increment this number.

    The code below shows how this would be implemented in your project.

    Public Function IsXMLFileValid() As Boolean
      Dim fileVersion As String = "not specified"
      Dim dsSettings As New DataSet
      Dim isMalformed As Boolean = False
      ' Is the file malformed altogether (version included)?
      Try
        dsSettings.ReadXml(mXMLFileInfo.FullName, XmlReadMode.ReadSchema)
      Catch ex As Exception
        isMalformed = True
      End Try
      If Not isMalformed Then
        ' Load the expected schema from the embedded resource
        Dim Asm As Assembly = Assembly.GetExecutingAssembly()
        Dim strm As Stream = Asm.GetManifestResourceStream(Asm.GetName().Name _
          & "." & "XMLFileSchema.xsd")
        Dim dsXMLSchema As New DataSet
        dsXMLSchema.ReadXmlSchema(strm)
        If dsSettings.Tables(0).Columns.Contains("Version") Then
          fileVersion = dsSettings.Tables(0).Rows(0)("Version").ToString()
        End If
        If fileVersion = "" Then
          fileVersion = "not specified"
        End If
        ' The same version but a different schema means the file is invalid
        If fileVersion = Global.XMLFileVersion AndAlso _
            Not dsSettings.GetXmlSchema() = dsXMLSchema.GetXmlSchema() Then
          Return False
        End If
      End If
      If isMalformed OrElse fileVersion <> Global.XMLFileVersion Then
        If mshouldConvertFile Then
          ' Convert the file here; otherwise report the version mismatch
          Throw New XMLFileVersionException(fileVersion, Global.XMLFileVersion)
        End If
      End If
      Return True
    End Function

    Figure: Code to illustrate how to check if the xml file is valid

    Note: to allow backward compatibility, you should give the user an option to convert old xml files into the new version structure.

  16. Both controls can represent hierarchical XML data and support Extensible Stylesheet Language (XSL) templates, which can be used to transform an XML file into the correct format and structure. However, the TreeView can apply styles more easily, and provides special properties that simplify customizing the appearance of elements based on their current state.

    <asp:Xml ID="Xml1" runat="server" DocumentSource="~/Web.xml" />

    Figure: Bad Code - Using the Xml control to represent an XML document via XSL Transformations

    <asp:TreeView ID="TreeView1" runat="server" DataSourceID="siteMapDataSource"
        ImageSet="Faq" SkipLinkText="">
      <ParentNodeStyle Font-Bold="False" />
      <HoverNodeStyle Font-Underline="True" ForeColor="Purple" />
      <SelectedNodeStyle Font-Underline="True" HorizontalPadding="0px"
          VerticalPadding="0px" />
      <NodeStyle Font-Names="Tahoma" Font-Size="8pt" ForeColor="DarkBlue"
          HorizontalPadding="5px" NodeSpacing="0px" VerticalPadding="0px" />
    </asp:TreeView>
    <asp:SiteMapDataSource ID="siteMapDataSource" runat="server" />

    Figure: Good Code - Use TreeView to represent XML hierarchical data

  17. There are three types of settings files that we may need to use in .NET :

    1. App.Config/Web.Config is the default .NET settings file, including any settings for the Microsoft Application Blocks (e.g. the Exception Management Block and the Configuration Management Block). These are for settings that don't change from within the application. In addition, the System.Configuration classes don't allow writing to this file.
    2. ToolsOptions.Config (an SSW standard) is the file that holds the user's own settings, which users can change in Tools - Options.

      Eg. ConnectionString, EmailTo, EmailCC

      Note: We read and write to this using the Microsoft Configuration Application Block. If we didn't use this Block, we would store it as a plain XML file and read and write to it using the System.Xml classes. The idea is that if something does go wrong when you are writing to this file, at least the App.Config would not be affected. Also, this separates our settings (which are few) from the App.Config (which usually has a lot of stuff that we really don't want a user to stuff around with).

    3. UserSession.Config (an SSW standard). These are for additional setting files that the user cannot change.

      e.g. FormLocation, LastReportSelected

      Note: This file is overwritable (say, during a re-installation), and deleting it will not affect the user.
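
    A minimal sketch of how the three files might look (the keys and values are hypothetical examples, not SSW's actual settings; elided values are shown as "..."):

    ```xml
    <!-- App.config: application-level settings that users should not change -->
    <configuration>
      <appSettings>
        <add key="ExceptionLogPath" value="..." />
      </appSettings>
    </configuration>

    <!-- ToolsOptions.Config: user-changeable settings (edited via Tools - Options) -->
    <ToolsOptions>
      <ConnectionString>...</ConnectionString>
      <EmailTo>...</EmailTo>
    </ToolsOptions>

    <!-- UserSession.Config: state the application writes itself; safe to delete -->
    <UserSession>
      <FormLocation>...</FormLocation>
      <LastReportSelected>...</LastReportSelected>
    </UserSession>
    ```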

  18. Windows Communication Foundation (WCF) extends the .NET Framework to enable building secure, reliable and interoperable web services. WCF has demonstrated interoperability using Web Services Security (WSS), including the UsernameToken over SSL, UsernameToken with X.509 Certificate, and X.509 Mutual Certificate profiles.

    WSE is outdated and has been replaced by WCF, which provides its own set of attributes that can be plugged into any web service application.

    1. Security - message-layer security has several policies that can suit any environment, including:

      1. Windows Token
      2. UserName Token
      3. Kerberos Token
      4. X.509 Certificate Token

      At SSW we implement UserName Token using the standard login screen that prompts for a Username and a Password, which then gets passed into the SOAP header (at message level) for authorization.

      This requires SSL which provides a secure tunnel from client to server.

      However, message-layer security does not throttle authentication attempts, so it does not stop a determined hacker from trying username/password combinations forever. Custom policies set up at the application level can prevent brute-force attacks.

    2. Performance

      WCF (codenamed Indigo) has the smarts to negotiate the most performant serialization and transport protocol that both sides of the web service conversation can accommodate, so all things being equal it will have the best performance. You can configure the web service's SSL session simply in the web.config file.

      After configuring an SSL certificate (in the LocalMachine store of the server), the following lines are required in the web.config:

    <system.serviceModel>
      <services>
        <service type="WCFService" name="WCFService" behaviorConfiguration="ServiceBehaviour">
          <endpoint contract="IWCFService" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IWCFServiceBinding" />
        </service>
      </services>
      <bindings>
        <wsHttpBinding>
          <binding name="WSHttpBinding_IWCFServiceBinding">
            <security mode="Message">
              <message clientCredentialType="UserName" />
            </security>
          </binding>
        </wsHttpBinding>
      </bindings>
      <behaviors>
        <behavior name="ServiceBehaviour" returnUnknownExceptionsAsFaults="true">
          <serviceCertificate findValue="CN=SSW" storeLocation="LocalMachine" storeName="My" x509FindType="FindBySubjectDistinguishedName" />
        </behavior>
      </behaviors>
    </system.serviceModel>

    Figure: Setting the SSL to Web Service for Message Layer Security.

  19. Did you know that if you use DataSets throughout your application (not data readers), you don't need any code to open or close connections?

    Some say it is better to be explicit. However the bottom line is less code is less bugs.

      try { sqlConnection.Open(); sqlDataAdapter.Fill(dataSet); }
      catch (SqlException ex) { /* handle the exception */ }
      finally { sqlConnection.Close(); }  // I'm in the finally block so that I always get called even if the fill fails.

    Bad code - The connection code is not needed

      try { sqlDataAdapter.Fill(dataSet); }  // the adapter opens and closes the connection itself
      catch (SqlException ex) { /* handle the exception */ }

    Good code - letting the adapter worry about the connection

    Note: A common comment for this rule is... "Please tell users to explicitly open and close the connection - even when the .NET Framework can do it for them"

    The developers who prefer the first (more explicit) code example give the following reasons:

    • Explicit Behaviour is always better. Code maintainability. Explicit code is more understandable than implicit code. Don't make your other developers have to look up the fact that data adapters automatically maintain the state of your connection for them.
    • Consistency (or a lack of) - not all Framework classes are documented to behave like this. For example, the IDBCommand.ExecuteNonQuery() will throw an exception if the connection isn't open (it might be an interface method, but interface exceptions are documented as a strong guideline for all implementers to follow). The SqlCommand help doesn't mention anything further about this fact, but considering it's an inherited class, it would be fair to expect it to behave the same way. A number of the other methods don't make mention of connection state, making it difficult to know which basket to put your eggs into...
    • Developer Awareness - it's healthy for the developer to be aware that they have a resource that needs to be handled properly. If they learn that they don't need to open and close connections here, then when they move onto using other resource types where this isn't the case then many errors may be produced. For example, when using file resources, the developer is likely to need to pass and open stream and needs to remember to close any such streams properly before leaving the function.
    • Efficiency (sort of) - code will often populate more than one object at a time, so if you open the connection once, execute multiple fills or commands, and then close it, the intent of the developer is clearer. If you leave it to the framework, the connection will likely be opened and closed multiple times; opening from the connection pool is really cheap, so the explicit version is only slightly (an itty bitty bit) more efficient, but it demonstrates the developer's intention more clearly.

    Bottom line - I won't be swayed - but it is a controversial one. People who agree with me include:

    • Ken Getz
    • Paul Sheriff
    • Bill Vaughan
    • George Doubinski

    People who don't:

    • Chris Kinsman
    • Richard Campbell
    • Paul Reynolds

    See Microsoft's online guide to Improving ADO.NET Performance for their opinion and other tips.

    One final note: this argument is a waste of time... With code generators developing most of the data access layer of the application, the errors, if any, will be long gone, and the developer is presented with a higher level of abstraction that allows them to concentrate on more important things than mucking around with connections. Particularly considering that, when we start using the Provider model from Whidbey, it won't even be clear whether you're talking to SQL Server or to an XML file.

  20. Do you use one class per file?

    Each class definition should live in its own file.


    Benefits:

    • Easy to locate class definitions outside the Visual Studio IDE (e.g. SourceSafe, Windows Explorer)

    The only exception: classes that collectively form one atomic unit of reuse should live in one file. For example:

    class MyClass
    class MyClassAEventArgs
    class MyClassBEventArgs
    class MyClassAException
    class MyClassBException

    Bad example - 1 project, 1 file.
  21. There are 5 common methods of inserting rows into your database:

    1. Use SqlCommand with an SQL INSERT statement and parameters:

      public void SQLInsert(string customerID, string companyName, string contactName)
      {
          SqlConnection sqlcon = new SqlConnection();
          sqlcon.ConnectionString = "Persist Security Info=False; Integrated Security=SSPI;" +
              "database=northwindJV; server=(local);Connect Timeout=5";
          SqlCommand sqlcmd = new SqlCommand();
          sqlcmd.CommandText = "INSERT Customers(CustomerID, CompanyName, ContactName) " +
              "VALUES(@CustomerID, @CompanyName, @ContactName)";
          sqlcmd.Connection = sqlcon;
          sqlcmd.Parameters.Add("@CustomerID", customerID);
          sqlcmd.Parameters.Add("@CompanyName", companyName);
          sqlcmd.Parameters.Add("@ContactName", contactName);
          ... // for all columns
          try
          {
              sqlcon.Open();
              MessageBox.Show("The number of records updated was: " +
                  sqlcmd.ExecuteNonQuery().ToString());
          }
          finally
          {
              sqlcon.Close();
          }
      }

      Figure: Inserting rows using INSERT

      This approach has two problems - the SQL is inline in the code, and if the database schema is changed, the INSERT statement will have to be manually updated.
    2. Use SqlCommand and a stored procedure on the SQL Server:

      public void SPInsert(string firstName, string surname)
      {
          SqlConnection sqlcon = new SqlConnection();
          sqlcon.ConnectionString = "Persist Security Info=False;" +
              "Integrated Security=SSPI;database=northwind;" +
              "server=mySQLServer;Connect Timeout=30";
          SqlCommand sqlcmd = new SqlCommand();
          sqlcmd.CommandText = "proc_InsertCustomer";
          sqlcmd.CommandType = CommandType.StoredProcedure;
          sqlcmd.Connection = sqlcon;
          sqlcmd.Parameters.Add("@firstName", firstName);
          sqlcmd.Parameters.Add("@surname", surname);
          ... // for all columns
          try
          {
              sqlcon.Open();
              sqlcmd.ExecuteNonQuery();
          }
          finally
          {
              sqlcon.Close();
          }
      }

      Figure: Inserting rows using SqlCommand and a stored procedure on the SQL Server
      This method is better because the SQL is not mixed up with the code (it is in a stored procedure), but it will still break if the database schema is changed, and all of the parameters to the stored procedure have to be added manually.
    3. Use DataAdapter with SQL INSERT statement, then use DataApdater.Update (strongly-typed-dataset)

      public void DASQLInsert(string firstName, string surname)
      {
          SqlConnection sqlcon = new SqlConnection();
          sqlcon.ConnectionString = "Persist Security Info=False;" +
              "Integrated Security=SSPI;database=northwind;" +
              "server=mySQLServer;Connect Timeout=30";
          SqlCommand sqlcmd = new SqlCommand();
          sqlcmd.CommandText = "INSERT Customers(firstName, surname) " +
              "VALUES(@firstName, @surname)";
          sqlcmd.Connection = sqlcon;
          SqlDataAdapter sqladp = new SqlDataAdapter();
          sqladp.InsertCommand = sqlcmd;

          NorthWindCustomer dst = new NorthWindCustomer();
          NorthWindCustomer.CustomerRow row = dst.Customer.NewCustomerRow();
          row.FirstName = firstName;
          row.Surname = surname;
          dst.Customer.AddCustomerRow(row);
          try
          {
              sqlcon.Open();
              sqladp.Update(dst);
          }
          finally
          {
              sqlcon.Close();
          }
      }

      Figure: Inserting rows using DataAdapter with an SQL INSERT statement, then DataAdapter.Update
      In this example, the SQL is mixed up with the .NET code and has to be manually updated if the database schema is changed. However, the strongly typed DataSet automatically updates when the database schema changes.
    4. Use DataAdapter with a stored procedure for INSERT, then use DataAdapter.Update (strongly-typed-dataset)

      public void DASPInsert(string firstName, string surname)
      {
          SqlConnection sqlcon = new SqlConnection();
          sqlcon.ConnectionString = "Persist Security Info=False;" +
              "Integrated Security=SSPI;database=northwind;" +
              "server=mySQLServer;Connect Timeout=30";
          SqlCommand sqlcmd = new SqlCommand();
          sqlcmd.CommandText = "proc_InsertCustomer";
          sqlcmd.CommandType = CommandType.StoredProcedure;
          sqlcmd.Connection = sqlcon;
          SqlDataAdapter sqladp = new SqlDataAdapter();
          sqladp.InsertCommand = sqlcmd;

          NorthWindCustomer dst = new NorthWindCustomer();
          NorthWindCustomer.CustomerRow row = dst.Customer.NewCustomerRow();
          row.FirstName = firstName;
          row.Surname = surname;
          dst.Customer.AddCustomerRow(row);

          try
          {
              sqlcon.Open();
              sqladp.Update(dst);
          }
          catch
          {
              MessageBox.Show(
                  "Unable to open connection.",
                  "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
          }
          finally
          {
              sqlcon.Close();
          }
      }

      Figure: Inserting rows using DataAdapter with a stored procedure for INSERT, then DataAdapter.Update (strongly typed DataSet) - best for SQL Server
      This is the best approach for Microsoft SQL Server. The parameters for the stored procedure are automatically generated, and the strongly typed DataSet updates when the database schema changes.
    5. Use DataAdapter with SQL SELECT statement, then use command builder to automatically create INSERT, UPDATE and DELETE statements as required. Then use DataAdapter.Update (strongly-typed-dataset).

      public void DACmdb(string firstName, string surname)
      {
          SqlConnection sqlcon = new SqlConnection();
          sqlcon.ConnectionString = "Persist Security Info=False;" +
              "Integrated Security=SSPI;database=northwind;" +
              "server=mySQLServer;Connect Timeout=30";
          SqlCommand sqlcmd = new SqlCommand();
          sqlcmd.CommandText = "SELECT firstName, surname FROM Customers";
          sqlcmd.Connection = sqlcon;
          SqlDataAdapter sqladp = new SqlDataAdapter();
          sqladp.SelectCommand = sqlcmd;
          SqlCommandBuilder cmdb = new SqlCommandBuilder(sqladp);

          NorthWindCustomer dst = new NorthWindCustomer();
          NorthWindCustomer.CustomerRow row = dst.Customer.NewCustomerRow();
          row.FirstName = firstName;
          row.Surname = surname;
          dst.Customer.AddCustomerRow(row);

          try
          {
              sqlcon.Open();
              sqladp.Update(dst);
          }
          catch
          {
              MessageBox.Show(
                  "Unable to open connection.",
                  "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
          }
          finally
          {
              sqlcon.Close();
          }
      }

      Figure: Inserting rows using DataAdapter with an SQL SELECT statement, then a command builder to automatically create INSERT, UPDATE and DELETE - best for Jet (Access)
      This is the best approach for Jet (Access) databases, as stored procedures in Access are difficult to implement and unreliable. The INSERT statement is automatically generated by .NET, and the strongly typed DataSet updates when the database schema is changed.
  22. Do you put all images in the \images folder?

    Instead of images sitting all around the solution, we put all the images in the same folder.

    Bad example - Images under Product root folder.

    Good example - Images under \Images folder.

    We have a program called SSW Code Auditor to check for this rule.
  23. Do you keep \images folder image only?

    We want to keep a clear and simple file structure in our solution. Never put any files other than image files in the \images folder.

    Bad example - HTML file in \Images Folder.

    Good example - Images only, clean \Images folder.

    We have a program called SSW Code Auditor to check for this rule.
  24. All setup files should be stored in the \setup folder under your project root directory.

    Good example - All the Wise setup files in the \setup folder.

    We have a program called SSW Code Auditor to check for this rule.
  25. Do you deploy your applications correctly?

    Many applications end up working perfectly on the developer's machine. However, once the application is deployed with a setup package and released to the public, it can suddenly give the user a horrible experience. There are plenty of issues that developers don't take into consideration. Among the many issues, three stand above the rest if the application isn't tested thoroughly:

    1. The SQL Server Database or the Server machine cannot be accessed by the user, and so developer settings are completely useless to the user.
    2. The user doesn't install the application in the default location. (i.e. instead of C:\Program Files\ApplicationName, the user could install it on D:\Temp\ApplicationName)
    3. The developer has assumed that certain application dependencies are installed on the user's machine. (i.e. MDAC; IIS; a particular version of MS Access; or SQL Server runtime components like sqldmo.dll)

    To prevent these issues from arising and having to re-deploy continuously (which only embarrasses yourself and the company), follow these procedures to make sure you give the user a smooth experience when installing your application.

    1. Have scripts that can determine the path of the .exe where the user has installed the application

    Wise has a Dialog that prompts the user for the installation directory:

    Figure: Wise Prompts the user for the installation directory and sets the path to a property in wise called "INSTALLDIR"

    An embedded script must be used if the path is needed inside the application (e.g. in .reg files that set registry keys):

      'The .reg file includes the following hardcoded line:
      '@="\"C:\\Program Files\\SSW NetToolKit\\WindowsUI\\bin\\SSW.NetToolkit.exe\" /select \"%1\""
      'This should be replaced with the following line:
      '@="\"REPLACE_ME\" /select \"%1\""
      Dim oFSO, oFile, sFile, regStream, appPath
      Set oFSO = CreateObject("Scripting.FileSystemObject")
      sFile = Property("INSTALLDIR") & "WindowsUI\PartA\UrlAcccess.reg"
      Set oFile = oFSO.OpenTextFile(sFile)
      regStream = oFile.ReadAll()
      appPath = Replace(Property("INSTALLDIR") & "WindowsUI\bin\SSW.NetToolkit.exe", "\", "\\")
      regStream = Replace(regStream, "REPLACE_ME", appPath)
      Set oFile = oFSO.OpenTextFile(sFile, 2)
      oFile.Write regStream

    Figure: The "REPLACE_ME" string is replaced with the value of the INSTALLDIR property in the .reg file
    2. After setting up the Wise file and running the build script, the application must first be tested on the developer's own machine. Many developers never test the application outside the development environment and don't bother to install it using the installation package they have just created. Doing so allows them to fix issues such as image paths that were set relative to the running process rather than relative to the actual executable.

        this.pictureReportSample.Image = Image.FromFile(@"Reports\Images\Blank.jpg");

      Bad code - FromFile() method (as well as Process.Start()) give the relative path of the running process. This could mean the path relative to the shortcut or the path relative to the .exe itself, and so an exception will be thrown if the image cannot be found when running from the shortcut.
      string appFilePath = System.Reflection.Assembly.GetExecutingAssembly().Location;

    string appPath = Path.GetDirectoryName(appFilePath);

    this.pictureReportSample.Image = Image.FromFile(appPath + @"\Reports\Images\Blank.jpg");

    Good code - GetExecutingAssembly().Location will get the pathname of the actual executable and no exception will be thrown.
    This exception would never have been found if the developer didn't test the actual installation package on their own machine.

    3. Having tested on the developer's machine, the application must be tested on a virtual machine in a clean environment, without dependencies installed in the GAC, the registry, or anywhere else. Users may have MS Access 2000 installed, and the application may behave differently on an older version of MS Access even though it works perfectly on MS Access 2003. The most appropriate way of handling this is to use programs like VMware or MS Virtual PC. This helps the developer test the application on all possible environments to ensure that it caters for **all** users, minimizing the number of assumptions made.
  26. Do you distribute a product in Release mode?

    We like to have debugging information in our application so that we can view line number information in the stack trace. However, we won't release our product in Debug mode - for example, if we use an "#if DEBUG" statement in our code, we don't want it compiled into the release version. If we want line numbers, we simply need debugging information: you can change an option in the project settings so a PDB is generated even for a Release build.

    #if DEBUG
        MessageBox.Show("Application started");
    #endif

    Figure: Code that should only run in Debug mode, we certainly don't want this in the release version.

    Figure: Set "Generate Debugging Information" to True on the project properties page (VS 2003).

    Figure: Set "Debug Info" to "pdb-only" on the Advanced Build Settings page (VS 2005).
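    For reference, the same Release settings appear in the VS 2005 project file roughly like this (a sketch of the MSBuild properties involved):

      <!-- Release configuration: optimized code plus a .pdb for line numbers -->
      <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
        <DebugType>pdbonly</DebugType>
        <Optimize>true</Optimize>
        <DefineConstants>TRACE</DefineConstants>
      </PropertyGroup>

    Because DEBUG is not in DefineConstants here, the "#if DEBUG" code is excluded from the Release build, while line numbers remain available via the .pdb.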

    We have a program called SSW Code Auditor to check for this rule.
  27. Hungarian notation was used in VB6. In .NET there are over 35,000 classes, so we can't refer to them all with three-letter abbreviations. We suggest developers use the full class name as in the example below. As a result, the code is much easier to read and follow.

    // Bad code
    DateTime dt = DateTime.Now;
    DataSet ds = new DataSet();
    // "dt" could be confused with the DateTime above
    DataTable dt = ds.Tables[0];

    Bad code - without meaningful names.

    // Good code
    DateTime currentDateTime = DateTime.Now;
    DataSet employmentDataSet = new DataSet();
    DataTable contactDetailsDataTable = employmentDataSet.Tables[0];

    Good code - with meaningful names.

    More information on naming convention.

    We have a program called SSW Code Auditor to check for this rule
  28. Whenever we rename a file in Visual Studio .NET, the file becomes a new file in SourceSafe. If the file was checked out, the old file's status will remain checked-out in SourceSafe.

    Steps to rename a file that is under SourceSafe control:

    1. Save and close the file in Visual Studio .NET, and check in the file if it is checked-out.
    2. Open Visual SourceSafe Explorer and rename the file.
    3. Rename it in Visual Studio .NET, click "Continue with change" to the 2 pop-up messages:

    RenameVSS1 small
    Figure: Warning message of renaming files under source control.

    RenameVSS2 small
    Figure: You are seeing this as the new file name already exists in SourceSafe, just click "Continue with change".

    Visual Studio .NET should then find the file under source control, and it will show a lock icon.

  29. Imagine that you have just had a User Acceptance Test (UAT), and your app has been reported as being "painfully slow" or "so slow as to be unusable". Now, as a coder, where do you start to improve the performance? More importantly, do you know how much your massive changes have improved performance - if at all?

    We recommend always using a code profiling tool to measure performance gains while optimising your application. Otherwise, you are flying blind and making subjective, unmeasured decisions. Use a tool such as JetBrains dotTrace profiler to guide you in optimising any code that is lagging behind the pack. You can run it on both ASP.NET and Windows Forms applications. The optimisation process is as follows:

    1. Profile the application with Jetbrains dotTrace using the "Hot Spot" tab to identify the slowest areas of your application

    Figure: Identify which parts of your code take the longest (Hot Spots)

    2. Some parts of the application will be out of your control e.g. .NET System Classes. Identify the slowest parts of code that you can actually modify from the Hot Spot listing
    3. Determine the cause of the poor performance and optimise (e.g. improve the WHERE clause or the number of columns returned, reduce the number of loops or use a StringBuilder instead of string concatenation)
    4. Re-run the profile to confirm that performance has improved
    5. Repeat from Step 1 until the application is optimised
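    As an example of the optimisation in step 3, string concatenation in a loop is a classic hot spot; a StringBuilder removes the repeated copying (a minimal sketch):

      using System.Text;

      // Before: each += allocates and copies a brand new string (O(n^2) overall),
      // which dotTrace will typically surface as a hot spot.
      string slow = "";
      for (int i = 0; i < 10000; i++)
      {
          slow += i + ",";
      }

      // After: StringBuilder appends into one growing buffer (roughly O(n)).
      StringBuilder sb = new StringBuilder();
      for (int i = 0; i < 10000; i++)
      {
          sb.Append(i).Append(',');
      }
      string fast = sb.ToString();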
  30. SSW Code Auditor, NUnit and Microsoft FxCop are tools to keep your code "healthy". That is why they should be easily accessible in every solution so that they can be run with a double click of a mouse button.

    To add a SSW Code Auditor file to your solution:

    1. Start up SSW Code Auditor
    2. Add a new Job
    3. Add the solution file to be scanned
    4. Select the rules to be run
    5. Configure email (not required)
    6. Select File > Save As (into the solution's folder as "codeauditor.SSWCodeAuditor")
    7. Open your Solution in Visual Studio
    8. Right click and add existing file
    9. Select the SSW Code Auditor project file
    10. Right click the newly added file and select "Open With"
    11. Point it to the SSW Code Auditor executable

    See Do you run SSW Code Auditor? and Do you check your code with Code Auditor before check-in?

    To add a Microsoft FxCop file to your solution:

    1. Start up Microsoft FxCop
    2. Create a New Project
    3. Right click the project and Add Target
    4. Select the Assembly (DLL/EXE) for the project
    5. Select File > Save Project As (into the solution's folder as "fxcop.FxCop")
    6. Open your Solution in Visual Studio
    7. Right click and add existing file
    8. Select the Microsoft FxCop project file
    9. Right click the newly added file and select "Open With"
    10. Point it to the Microsoft FxCop executable

    To add an NUnit file to your solution:

    1. Start up NUnit
    2. Create a New Project by selecting File > New Project and save it to your solution directory as "nunit.NUnit"
    3. From the Project menu select Add Assembly
    4. Select the Assembly (DLL/EXE) for the project that contains unit tests
    5. Select File > Save Project
    6. Open your Solution in Visual Studio
    7. Right click and add existing file
    8. Select the NUnit project file
    9. Right click the newly added file and select "Open With"
    10. Point it to the NUnit executable

    Now you can simply double click these project files to run the corresponding applications.

    We have a program called SSW Code Auditor that implements this rule.
  31. Do you know what files not to put into VSS?

    The following files should NOT be included in SourceSafe, as they are user-specific files:

    • *.scc;*.vspscc - Source Safe Files
    • *.pdb - Debug Files
    • *.user - User settings for Visual Studio .NET IDE
  32. Do you put long strings such as JavaScript into resource files?

    Building long literal strings (such as JavaScript emitted from code-behind) inline makes the code hard to read and maintain. Put them in a resource file and format in the dynamic values instead.

    StringBuilder sb = new StringBuilder();
    sb.AppendLine(@"<script type=""text/javascript"">");
    sb.AppendLine(@"function deleteOwnerRow(rowId)");
    sb.AppendLine(string.Format(@"{{ {0}.Delete({0}.GetRowFromClientId(rowId)); }}",
        OwnersGrid.ClientID));
    sb.AppendLine(@"</script>");

    Bad example - hard to read (the string is surrounded by rubbish) and inefficient because you have an object and 6 strings

    string script = string.Format(@"<script type=""text/javascript"">
        function deleteOwnerRow(rowId)
        {{ {0}.Delete({0}.GetRowFromClientId(rowId)); }}
        </script>", OwnersGrid.ClientID);

    Good example - slightly easier to read, but it is still 1 code statement across many lines
    string scriptTemplate = Resources.Scripts.DeleteJavascript;
    string script = string.Format(scriptTemplate, OwnersGrid.ClientID);

    <script type="text/javascript">
    function deleteOwnerRow(rowId)
    { {0}.Delete({0}.GetRowFromClientId(rowId)); }
    </script>

    Figure: The code in the first box, and the string in the resource file in the 2nd box. This is the easiest to read, and you can localize it, e.g. if you need to localize an alert in the JavaScript

    CreateResource small
    Figure: Add a resource file into your project in VS2005

    ReadResource small
    Figure: Read value from the new added resource file

  33. In v1.0 and v1.1 of the .NET Framework, when serializing DateTime values with the XmlSerializer, the local time zone of the machine was always appended. When deserializing on the receiving machine, DateTime values were automatically adjusted based on the time zone offset relative to the sender's time zone.

    See below example:

    DataSet returnedResult =
        webserviceObj.GetByDateCreatedAndEmpID(DateTime.Now, empID);

    Figure: Front-end code in .NET v1.1 (front end time zone: GMT+8)

    [WebMethod]
    public DataSet GetByDateCreatedAndEmpID(DateTime DateCreated, String EmpID)
    {
        EmpTimeDayDataSet ds = new EmpTimeDayDataSet();
        m_EmpTimeDayAdapter.FillByDateCreatedAndEmpID(ds, DateCreated.Date, EmpID);
        return ds;
    }

    Figure: Web service method (web service server time zone: GMT+10)

    When the front end calls this web method with the current local time (14/01/2006 11:00:00 PM GMT+8) for the parameter 'DateCreated', it expects a returned result for the date 14/01/2006, while the service end returns data for 15/01/2006, because 14/01/2006 11:00:00 PM (GMT+8) is adjusted to 15/01/2006 01:00:00 AM at the web service server (GMT+10).

    In v1.0/v1.1 you have no way to control this serializing/deserializing behaviour for DateTime. In v2.0, with the new DateTimeKind enumeration, you can work around the above example.

    DateTime unspecifiedTime = DateTime.SpecifyKind(DateTime.Now, DateTimeKind.Unspecified);
    DataSet returnedResult =
        webserviceObj.GetByDateCreatedAndEmpID(unspecifiedTime, empID);

    Figure: Front-end code in .NET v2.0 (front end time zone: GMT+8)

    In this way, the server end will always receive a DateTime value of 14/01/2006 11:00:00 without a GMT offset and return what the front end expects.
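    The workaround hinges on DateTimeKind.Unspecified: in .NET 2.0 the XmlSerializer only appends a zone offset for Local values (and "Z" for Utc), so an Unspecified value round-trips unchanged (a minimal sketch):

      // Kind = Local: serialized with the sender's zone offset, e.g. "...+08:00",
      // so the receiver shifts it into its own time zone.
      DateTime localTime = DateTime.Now;

      // Kind = Unspecified: serialized with no offset, so the receiving
      // server will not adjust it.
      DateTime unspecifiedTime = DateTime.SpecifyKind(localTime, DateTimeKind.Unspecified);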

  34. Do you know how to use Connection Strings?

    There are 2 types of connection strings. The first contains only address-type information without authorization secrets. These can use any of the simpler methods of storing configuration, as none of this data is secret.

    Option #1 - Connection Strings without secrets (Managed Identities)

    When deploying an Azure hosted application we can use Azure Managed Identities to avoid having to include a password or key inside our connection string. This means we really just need to keep the address or URL of the service in our application configuration. Because our application has a Managed Identity, it can be treated in the same way as a user's Azure AD identity, and specific roles can be assigned to grant the application access to the required services.

    This is the preferred method wherever possible because it eliminates the need for any secrets to be stored. The other advantage is that for many services the level of access control available with Managed Identities is much more granular, making it easier to follow the Principle of Least Privilege.

    Option #2 - Connection Strings with passwords or keys

    If you have to use some sort of secret or key to log in to the service being referenced, then some thought needs to be given to how those secrets can be secured. Take a look at Do you store your secrets securely to learn how to keep your secrets secure.

    Example - Integrating Azure Key Vault into your ASP.NET Core application

    In .NET 5 we can use Azure Key Vault to securely store our connection strings away from prying eyes.

    Azure Key Vault is great for keeping your secrets secret because you can control access to the vault via Access Policies. The access policies allows you to add Users and Applications with customized permissions. Make sure you enable the System assigned identity for your App Service, this is required for adding it to Key Vault via Access Policies.

    You can integrate Key Vault directly into your ASP.NET Core application configuration. This allows you to access Key Vault secrets via IConfiguration.

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.ConfigureAppConfiguration((context, config) =>
                {
                    // To run the "Production" app locally, modify your launchSettings.json file
                    // -> set ASPNETCORE_ENVIRONMENT value as "Production"
                    if (context.HostingEnvironment.IsProduction())
                    {
                        IConfigurationRoot builtConfig = config.Build();

                        // ATTENTION:
                        // If running the app from your local dev machine (not in Azure AppService),
                        // -> use the AzureCliCredential provider.
                        // -> This means you have to log in locally via `az login` before running the app on your local machine.
                        // If running the app from Azure AppService
                        // -> use the DefaultAzureCredential provider
                        TokenCredential cred = context.HostingEnvironment.IsAzureAppService() ?
                            new DefaultAzureCredential(false) : new AzureCliCredential();

                        var keyvaultUri = new Uri($"https://{builtConfig["KeyVaultName"]}.vault.azure.net/");
                        var secretClient = new SecretClient(keyvaultUri, cred);
                        config.AddAzureKeyVault(secretClient, new KeyVaultSecretManager());
                    }
                });
            });

    Good Example - For a complete example, refer to this sample application

    Tip: You can detect if your application is running on your local machine or on an Azure AppService by looking for the WEBSITE_SITE_NAME environment variable. If null or empty, then you are NOT running on an Azure AppService.

    public static class IWebHostEnvironmentExtensions
    {
        public static bool IsAzureAppService(this IWebHostEnvironment env)
        {
            var websiteName = Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME");
            return string.IsNullOrEmpty(websiteName) is not true;
        }
    }

    Setting up your Key Vault correctly

    In order to access the secrets in Key Vault, you (as User) or an Application must have been granted permission via a Key Vault Access Policy.

    Applications require at least the LIST and GET permissions, otherwise the Key Vault integration will fail to retrieve secrets.

    access policies
    Figure: Key Vault Access Policies - Setting permissions for Applications and/or Users

    Azure Key Vault and App Services can easily trust each other by making use of System assigned Managed Identities. Azure takes care of all the complicated logic behind the scenes for these two services to communicate with each other - reducing the complexity for application developers.

    So, make sure that your Azure App Service has the System assigned identity enabled.

    Once enabled, you can create a Key Vault Access policy to give your App Service permission to retrieve secrets from the Key Vault.

    Figure: Enabling the System assigned identity for your App Service - this is required for adding it to Key Vault via Access Policies

    Adding secrets into Key Vault is easy.

    1. Create a new secret by clicking on the Generate/Import button
    2. Provide the name
    3. Provide the secret value
    4. Click Create

    add a secret
    Figure: Creating the SqlConnectionString secret in Key Vault.

    Figure: SqlConnectionString stored in Key Vault. Note the ApplicationSecrets section is indicated by ApplicationSecrets-- instead of ApplicationSecrets:
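    Once loaded, the -- in the secret name is translated into the : section separator, so the secret can be read like any other configuration value (a sketch; Configuration here is the injected IConfiguration, and the key names come from the figure above):

      // Key Vault secret "ApplicationSecrets--SqlConnectionString"
      // surfaces in IConfiguration under the nested key below.
      // (Key Vault secret names cannot contain ':', hence the '--' convention.)
      string connectionString = Configuration["ApplicationSecrets:SqlConnectionString"];

      // Or bind the whole section to an options class (class name is illustrative):
      // services.Configure<ApplicationSecrets>(Configuration.GetSection("ApplicationSecrets"));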

    As a result of storing secrets in Key Vault, your Azure App Service configuration (app settings) will be nice and clean. You should not see any fields that contain passwords or keys. Only basic configuration values.

    Figure: Your WebApp Configuration - no passwords or secrets, just a name of the Key vault that it needs to access

    Watch SSW's William Liebenberg explain Connection Strings and Key Vault in more detail

    History of Connection Strings

    In .NET 1.1 we used to store our connection string in a configuration file like this:

    <add key="ConnectionString" value="integrated security=true;
        data source=(local);initial catalog=Northwind"/>

    ...and access this connection string in code like this:

    SqlConnection sqlConn = new SqlConnection(
        System.Configuration.ConfigurationSettings.AppSettings["ConnectionString"]);

    Historical example - old ASP.NET 1.1 way, untyped and prone to error

    In .NET 2.0 we used strongly typed settings classes:

    Step 1: Setup your settings in your common project. E.g. Northwind.Common

    ConnStringNET2 Settings
    Figure: Settings in Project Properties

    Step 2: Open up the generated App.config under your common project. E.g. Northwind.Common/App.config

    Step 3: Copy the content into your entry application's app.config, e.g. Northwind.WindowsUI/App.config. The new setting is updated in app.config automatically in .NET 2.0:

             <add name="Common.Properties.Settings.NorthwindConnectionString"
                  connectionString="Data Source=(local);Initial Catalog=Northwind;
                  Integrated Security=True"
                  providerName="System.Data.SqlClient" />

    ...then you can access the connection string like this in C#:

    SqlConnection sqlConn =
     new SqlConnection(Common.Properties.Settings.Default.NorthwindConnectionString);

    Historical example - access our connection string by strongly typed generated settings class...this is no longer the best way to do it

  35. Since there are many ways to use connection strings in .NET 2.0, it is likely that we end up with duplicate connection strings in web.config.

    <add name="ConnectionString" connectionString="Server=(local);
        Database=NorthWind;" />
    <add key="ConnectionString" value="Server=(local);Database=NorthWind;"/>

    Bad example - duplicate connection strings in web.config.
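    Instead, keep a single entry in the connectionStrings section and read it from code wherever it is needed (a sketch using the .NET 2.0 ConfigurationManager API; the entry name matches the example above):

      // Requires a reference to System.Configuration.dll
      using System.Configuration;

      // One <connectionStrings> entry, read from code - no duplicate
      // <appSettings> copy to drift out of sync.
      string connString =
          ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;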
    We have a program called SSW Code Auditor to check for this rule.
  36. Both SQL Server authentication (standard security) and Windows NT authentication (integrated security) are SQL Server authentication methods that are used to access a SQL Server database from Active Server Pages (ASP).

    We recommend you use Windows NT authentication by default. Because Windows security services operate by default with the Microsoft Active Directory directory service, it is a best practice to authenticate users against Active Directory. Although you could use other types of identity stores in certain scenarios (for example Active Directory Application Mode (ADAM) or Microsoft SQL Server), these are not recommended in general because they offer less flexibility in how you can perform user authentication.

    If you cannot, then add a comment explaining the reason.

       <add name="ConnectionString" connectionString="Server=(local);
        Database=NorthWind;uid=sa;pwd=sa;" />

    Figure: Bad example - not using a Windows Integrated Authentication connection string, and with no comment.

        <add name="ConnectionString" connectionString="Server=(local);
         Database=NorthWind;Integrated Security=SSPI;" />

    Figure: Good example - using a Windows Integrated Authentication connection string by default.

        <add name="ConnectionString" connectionString="Server=(local);
         Database=NorthWind;uid=sa;pwd=sa;" />
        <!--It can't use the Windows Integrated because they are using Novell -->                

    Figure: Good example - not using a Windows Integrated Authentication connection string, but with a comment explaining why.

  37. Do you store your secrets securely?

    Most systems will have variables that need to be stored securely; OpenId shared secret keys, connection strings, and API tokens to name a few.

    These secrets must not be stored in source control in plain text - it is insecure by nature, and basically means the secret is sitting there for anyone with access to read.

    There are many options for managing secrets in a secure way:

    Bad Practices

    Store production passwords in source control


    Pros:

    • Minimal change to existing process
    • Simple and easy to understand

    Cons:

    • Passwords are readable by anyone who has either source code or access to source control
    • Difficult to manage production and non-production config settings
    • Developers can read and access the production password

    Figure: Bad practice - Overall rating: 1/10

    Store production passwords in source control protected with the ASP.NET IIS Registration Tool


    Pros:

    • Minimal change to existing process - no need for DPAPI or a dedicated Release Management (RM) tool
    • Simple and easy to understand

    Cons:

    • Need to manually give the app pool identity the ability to read the default RSA key container
    • Difficult to manage production and non-production config settings
    • Developers can easily decrypt and access the production password
    • Manual transmission of the password from the key store to the encrypted config file

    Figure: Bad practice - Overall rating: 2/10

    Use Windows Identity instead of username / password

    Pros:

    • Minimal change to existing process – no need for DPAPI or a dedicated RM tool
    • Simple and easy to understand

    Cons:

    • Difficult to manage production and non-production config settings
    • Not generally applicable to all secured resources
    • Can hit firewall snags with Kerberos and AD ports
    • Vulnerable to DoS attacks related to password lockout policies
    • Has key-person reliance on the network admin

    Figure: Bad practice - Overall rating: 4/10

    Use External Configuration Files

    Pros:

    • Simple to understand and implement

    Cons:

    • Makes setting up projects the first time very hard
    • Easy to accidentally check the external config file into source control
    • Still need DPAPI to protect the external config file
    • No clear way to manage the DevOps process for external config files

    Figure: Bad practice - Overall rating: 1/10

    Good Practices

    Use Octopus / VSTS RM secret management, with passwords sourced from KeePass

    Pros:

    • Scalable and secure
    • General industry best practice - great for most organizations below large corporate

    Cons:

    • Password reset process is still manual
    • DPAPI still needed

    Figure: Good practice - Overall rating: 8/10

    Use an Enterprise Secret Management Tool – LastPass / HashiCorp Vault / etc.

    Pros:

    • Enterprise grade – supports cryptographically strong passwords, auditing of secret access and dynamic secrets
    • Supports a hierarchy of secrets
    • API interface for integrating with other tools
    • Password transmission can be done without a human in the chain

    Cons:

    • More complex to install and administer
    • DPAPI still needed for config files at rest

    Figure: Good practice - Overall rating: 8/10

    Use .NET User Secrets

    Pros:

    • Simple secret management for development environments
    • Keeps secrets out of version control

    Cons:

    • Not suitable for production environments

    Figure: Good practice - Overall rating: 8/10
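    As a minimal sketch (the key name "Sparrow:ApiKey" is illustrative), running `dotnet user-secrets init` and then `dotnet user-secrets set "Sparrow:ApiKey" "local-dev-value"` in the project folder stores the secret in a secrets.json under your user profile, outside the repository:

    ```json
    {
      "Sparrow:ApiKey": "local-dev-value"
    }
    ```

    At runtime, the default configuration builder merges this file with appsettings.json in the Development environment, so application code reads the value the same way in every environment.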

    Use Azure Key Vault

    See the SSW Rewards mobile app repository for how SSW is using this in a production application.

    Pros:

    • Enterprise grade
    • Uses industry best-practice encryption
    • Dynamically cycles secrets
    • Access granted based on Azure AD permissions - no need to 'securely' share passwords with colleagues
    • Can be used to inject secrets into your CI/CD pipelines for non-cloud solutions

    Cons:

    • Tightly integrated into Azure, so this may be a concern if you are running on another provider or on-premises
    • Authentication into Key Vault itself still needs to be secured

    Figure: Good practice - Overall rating: 9/10

    Avoid using secrets with Azure Managed Identities

    The easiest way to manage secrets is not to have them in the first place. Azure Managed Identities allow you to assign an Azure AD identity to your application and then let it use that identity to log in to other services. This avoids the need for any secrets to be stored.

    Pros:

    • Best solution for cloud (Azure) solutions
    • Enterprise grade
    • Access granted based on Azure AD permissions - no need to 'securely' share passwords with colleagues
    • Roles can be granted to your application by your CI/CD pipelines at the time your services are deployed

    Cons:

    • Only works where Azure AD RBAC is available. NB: There are still some Azure services that don't support this yet, but most do.

    Figure: Good practice - Overall rating: 10/10
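    Where a connection string is still required, some client libraries let the identity replace the password. For example, with Microsoft.Data.SqlClient a hedged sketch looks like this (server and database names are illustrative):

    ```xml
    <add name="ConnectionString"
         connectionString="Server=tcp:myserver.database.windows.net;Database=Northwind;
         Authentication=Active Directory Managed Identity;" />
    ```

    No password appears anywhere; access is controlled by granting the app's managed identity access on the database server.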


    The following resources show some concrete examples of how to apply these principles:

  38. Do you highlight strings in your code editor?

    It is good practice to highlight string variables and constants in the Visual Studio code editor to make them stand out. Highlighted strings are much easier to find, especially in long source files.

    Default string appearance

    HighlightString good small
    Highlighted string appearance

    Tools | Options form of Visual Studio

  39. Windows Command Processor (cmd.exe) cannot run batch files (.bat) from Visual Studio because it does not accept the file as an argument. One way to run batch files in Visual Studio is to use PowerShell instead.

    BadBatch small
    Bad example - Using Windows Command Processor (cmd.exe) for running batch files.

    goodbatch small
    Good example - Using PowerShell for running batch files

  40. Developers are better at coding than creating documentation. However, project instructions are very important to enable developers to get up and running quickly.

    In the prior rule, Do you make awesome documentation?, we looked at the kinds of documents you need and how to structure them.

    There are 5 levels of project documentation. Documentation can start simple but often ends up having a lot of manual steps. The best projects have simple documentation but automate as many steps as possible (levels 4 and 5).

    Level #1 - Lots of documentation step by step

    Add a document as a solution item and name it '_Instructions.docx'

    Tip: Microsoft Word documents are preferred over .txt files because images and formatting are important

    You can also break up this document into 4 smaller documents

    • _Business.docx - Explaining the business purpose of the app
    • _Instructions_Compile.docx - Contains instructions on how to get the project to compile
    • _Instructions_Deployment.docx - Describes the deployment process
    • _Technologies.docx - Explaining the technical overview e.g. Broad architecture decisions, 3rd party utilities, patterns followed etc

    Here's a suggestion of what these documents could contain.

    1. Project structure - All the parts that compose the project and how they work with each other
    2. Third party components - Any software, tools and DLL files that this project uses (e.g. NHibernate, ComponentArt, KendoUI)
    3. Database configuration
    4. Other configuration information
    5. Deployment information and procedures
    6. Other things to take care of

    Bad example - A project without instructions

    Good example - A project with instructions

    Add a readme.md to your solution (use this as a guide for markdown)

    Level #2: Lots of documentation (and the *exact* steps to Get Latest and compile)

    When a new developer starts on a project you want them to get up and running as soon as possible.

    If you were at Level 2 you might have a document that says: "Dear Northwind Developer, this documentation describes what is required to configure a developer PC."

    Problems to check for:

    • Windows 8 not supported
    • Many things to build
    • Lots of dependencies

    You are at Level 2 when you have some static Word documents with the steps to compile. The _Instructions_Compile.docx contains the steps required to get latest and compile.

    Level #3: Lots of documentation (and the exact steps to Get Latest and compile with the *database*)

    instructions level2
    Figure: Level 3 documentation includes database build scripts. We use SSW SQL Deploy to make keeping all databases on the same version simple. Check out how to use SQL Deploy here

    Level #4: Less documentation (and Get Latest and compile with a PowerShell script)

    A perfect solution would need no static documentation. Perfect code would be so self-explanatory that it did not need comments. The same rule applies with instructions on how to get the solution compiling: the best answer would be for the solution to contain scripts that automate the setup.

    Example of Level 4: PowerShell Documentation

    Recommendation: All manual workstation setup steps should be scripted with PowerShell (as per the below example)

    Recommendation: You should be able to get latest and compile within 1 minute. Also, a developer machine should not HAVE to be on the domain (to support external consultants)

    PS C:\Code\Northwind> .\Setup-Environment.ps1

    Problem: Azure environment variable run state directory is not configured (_CSRUN_STATE_DIRECTORY).

    Problem: Azure Storage Service is not running. Launch the development fabric by starting the solution.

    WARNING: Abandoning remainder of script due to critical failures.

    To try and automatically resolve the problems found, re-run the script with a -Fix flag.

    Figure: Good example - you see the problems in the devs environment

    PS C:\Code\Northwind> .\Setup-Environment.ps1 -fix

    Problem: Azure environment variable run state directory is not configured (_CSRUN_STATE_DIRECTORY).

    Fixed: _CSRUN_STATE_DIRECTORY user variable set

    Problem: Azure Storage Service is not running. Launch the development fabric by starting the solution.

    WARNING: No automated fix available for 'Azure Storage Service is running'

    WARNING: Abandoning remainder of script due to critical failures.

    Figure: Good example - when running with -fix this script tries to automatically fix the problem

    PS C:\Code\Northwind> .\Setup-Environment.ps1 -fix

    Problem: Azure Storage Service is not running. Launch the development fabric by starting the solution.

    WARNING: No automated fix available for 'Azure Storage Service is running'

    WARNING: Abandoning remainder of script due to critical failures.

    Figure: Good example - Note that on the 2nd run, issues resolved by the 1st run are not re-reported

    Level #5 Docker Containerization

    docker logo
    Figure: Docker Logo

    Docker can make the experience even better for your developers. Development environments are liable to break easily or have documentation fall out of date. This problem is exacerbated when a developer comes back to a project after a long time away.

    Docker containerization helps to standardize development environments. By using Docker containers, developers won't need to worry about the technologies and versions installed on their device. Everything will be set up for them at the click of a button. Microsoft has a great tutorial and documentation on setting up Docker containers as development environments for VS Code.

    Further Reading

    To see other documentation Rules, have a look at Do you make awesome documentation?

  41. Stored procedure names in code should always be prefixed with the owner (usually dbo). This is because if the owner is not specified, SQL Server will look for a procedure with that name for the currently logged on user first, creating a performance hit.

    SqlCommand sqlcmd = new SqlCommand();
    sqlcmd.CommandText = "proc_InsertCustomer";
    sqlcmd.CommandType = CommandType.StoredProcedure;
    sqlcmd.Connection = sqlcon;

    Bad example

    SqlCommand sqlcmd = new SqlCommand();
    sqlcmd.CommandText = "dbo.proc_InsertCustomer";
    sqlcmd.CommandType = CommandType.StoredProcedure;
    sqlcmd.Connection = sqlcon;

    Good example

    We have a program called SSW Code Auditor to check for this rule.

  42. Do you always make file paths @-quoted?

    In C#, backslashes in strings are special characters used to produce "escape sequences", for example \r\n creates a line break inside the string. This means that if you want to put a backslash in a string you must escape it out by inserting two backslashes for every one, e.g. to represent C:\Temp\MyFile.txt you would use "C:\\Temp\\MyFile.txt". This makes file paths hard to read, and you can't copy and paste them out of the application.

    By inserting an @ character in front of the string, e.g. @"C:\Temp\MyFile.txt" , you can turn off escape sequences, making it behave like VB.NET. File paths should always be stored like this in strings.

    We have a program called SSW Code Auditor to check for this rule.
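    A minimal sketch showing the two forms produce the same string (the file path is illustrative):

    ```csharp
    using System;

    class VerbatimStringDemo
    {
        static void Main()
        {
            // Escaped form: every backslash must be doubled
            string escaped = "C:\\Temp\\MyFile.txt";

            // Verbatim (@-quoted) form: backslashes are taken literally
            string verbatim = @"C:\Temp\MyFile.txt";

            Console.WriteLine(escaped == verbatim); // prints True - same string, easier to read
        }
    }
    ```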
  43. Do you always use Option Explicit?

    Option Explicit should always be turned on in VB.NET.

    This will turn many of your potential runtime errors into compile time errors, thus saving you from potential time bombs!

    We have a program called SSW Code Auditor to check for this rule.
  44. Web services are more and more popular as distributed systems become widely deployed. However, invoking a web method synchronously can be a disaster, because transmitting data over the Internet may cause your program to hang for a couple of minutes.

    private static string LoadContentFromWeb(string strUri)
    {
        WebRequest request = WebRequest.Create(strUri);
        WebResponse response = request.GetResponse(); // Blocks until the full response arrives
        ...
    }

    Figure: Bad example - Invoking a web method synchronously (this will hang your UI thread)

    The correct way to invoke a web method is to send the request asynchronously and read the response in a callback delegate, see the code below:

    public static void GetOnlineVersionAsync(string strUri)
    {
        WebRequest request = WebRequest.Create(strUri);
        try
        {
            IAsyncResult r = request.BeginGetResponse(new AsyncCallback(ResCallBack), request);
        }
        catch (WebException ex)
        {
            Console.WriteLine(ex.ToString());
        }
    }

    private static void ResCallBack(IAsyncResult ar)
    {
        string content = string.Empty;
        try
        {
            WebRequest req = (WebRequest)ar.AsyncState;
            WebResponse response = req.EndGetResponse(ar);
        }
        catch (WebException ex)
        {
            Console.WriteLine(ex.ToString());
        }
    }

    Figure: Good example - Invoke web method by using asynchronous method and CallBack (UI thread will be free once the request has been sent)

    When working with a web service, asynchronous methods will be automatically generated by your web service proxy.

    Figure: Automatically generated asynchronous methods
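    On modern .NET, the same goal is usually met with async/await rather than Begin/End callbacks. A hedged sketch (the fetch below simulates the network round trip with Task.Delay so it is self-contained; with a real service you would await HttpClient.GetStringAsync instead):

    ```csharp
    using System;
    using System.Threading.Tasks;

    class AsyncInvokeSketch
    {
        // Hypothetical fetch - Task.Delay stands in for the slow network call
        public static async Task<string> LoadContentAsync(string uri)
        {
            await Task.Delay(50); // the calling thread is free while the "request" is in flight
            return "content from " + uri;
        }

        static async Task Main()
        {
            string content = await LoadContentAsync("https://example.com/version");
            Console.WriteLine(content);
        }
    }
    ```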

  45. Every application has different settings depending on the environment it is running on, e.g. production, testing or development. It is much easier and more efficient if an App.config is provided for each environment type, so the developer can just copy and paste the required App.config.


    Figure: Bad Example - Only 1 App.config provided

    App config

    Figure: Good Example - Several App.config files are provided
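    One way to keep the per-environment copies manageable is a config transform. A hedged sketch of an App.Release.config (standard XDT transform syntax; the server name is illustrative) that overrides the connection string at build time:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <!-- App.Release.config: overrides the matching entry in App.config -->
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <connectionStrings>
        <add name="ConnectionString"
             connectionString="Server=prod-sql;Database=Northwind;Integrated Security=SSPI;"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </connectionStrings>
    </configuration>
    ```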

  46. Do you make your projects regenerated easily?

    If your project is generated by a code generator (CodeSmith, RAD Software NextGeneration, etc.), you should make sure it can be regenerated easily.

    Code generators can be used to generate whole Windows and web interfaces, as well as data access layers and frameworks for business layers, making them an excellent time saver. However, getting the code generator to generate your project the first time takes considerable time and involves lots of configuration.

    In order to make the generation easier next time, we recommend putting the command-line operations into a file called "_Regenerate.bat". When you want to regenerate, just run the batch file and everything is done in a blink.

    cs D:\DataDavidBian\Personal\New12345\NetTiers.csp

    Figure: An example CodeSmith command line for NorthWind

    Thus a "_Regenerate.bat" file must exist in your projects (along with any other necessary resources).

    Figure: Good - Have _Regenerate.bat in the solution

  47. When running code analysis you may need to ignore some rules that aren't relevant to your application. Visual Studio has a handy way of doing this.

    code analysis bad example
    Figure: Bad Example - A Code Analysis warning is suppressed without a Justification
    code analysis good example

    public partial class Account
    {
        [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Usage", "CA2214:DoNotCallOverridableMethodsInConstructors", Justification = "Gold Plating")]
        public Account()
        {
            // element types are illustrative - the originals were stripped from the source
            this.Centres = new HashSet<Centre>();
            this.AccountUsers = new HashSet<AccountUser>();
            this.Campaigns = new HashSet<Campaign>();
        }
    }

    Figure: Good Example - The suppressed warning includes a Justification

  48. Debug compilation considerably increases memory footprint since debug symbols are required to be loaded.

    Additionally, it will hurt performance because the optional debug and trace statements are included in the output IL code.

    In debug mode the compiler emits debug symbols for all variables and compiles the code as is. In release mode some optimizations are included:

    • unused variables do not get compiled at all
    • some loop variables are taken out of the loop by the compiler if they are proven to be invariants
    • code written under #debug directive is not included etc.

    The rest is up to the JIT.

    As per: C# debug vs release performance.
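    In an MSBuild project file, the Release configuration typically looks like this sketch (the property names are standard MSBuild; the exact values are illustrative):

    ```xml
    <PropertyGroup Condition="'$(Configuration)' == 'Release'">
      <Optimize>true</Optimize>                <!-- enable compiler optimizations -->
      <DebugType>pdbonly</DebugType>           <!-- no full debug symbols in the output -->
      <DefineConstants>TRACE</DefineConstants> <!-- DEBUG is not defined, so #if DEBUG code is excluded -->
    </PropertyGroup>
    ```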

    debug bad
    Figure: Bad Example

    debug good
    Figure: Good Example

    We have a program called SSW Code Auditor to check for this rule.

  49. Do you know BAK files must not exist?

    Finding a file with a BAK extension is a "call sign" that your folders need a tidy up.

    bak bad
    Figure: Bad example

    bak good
    Figure: Good example

    We have a program called SSW Code Auditor to check for this rule.

  50. Keeping your projects tidy says good things about the team's maturity. Therefore any files and folders that are prefixed with zz must be deleted from the project.

    zzed bad
    Figure: Bad example - Zz-ed files should not exist in Source Control

    zzed good
    Figure: Good example - No zz-ed files in Source Control

  51. The built-in Process Templates in TFS will not always fit into your environment, so you can fix it by creating your own.

    SSWAgile Baseline 1
    Figure: Good - The "Baseline work (hours)" field was added to keep the original estimate

    SSWAgile Additional
    Figure: Good - "Additional Task" was added to track scope creep

    SSWAgile URL
    Figure: Good - The "URL" field has been added to allow reverse view from the web page

    SSWAgile RichText
    Figure: Good - Rich text has been enabled in the "Description" field to allow users to enter better text for the requirement. Note: the URL field is used in the SSW Smashing Barrier product

    If you want help customizing your own Process Template, call a TFS guru at SSW on +61 2 9953 3000.

  52. When you decide to use TFS 2012, you have the option to choose from different methodologies (aka. Process Templates).

    Choosing the right template to fit into your environment is very important.

    Figure: Built-in Process Templates in Visual Studio 2012 with TFS 2012

    It is recommended to use the top option, the Scrum one. If you think the built-in template is not going to fulfil your needs, customize it and create your own.

    If you want help customising your own Process Template, call a TFS guru at SSW on +61 2 9953 3000.

  53. Do you use TFS 2012 instead of TFS 2010?

    With the release of TFS 2012, you should always use TFS 2012 instead of TFS 2010.

    These are the top 3 features:

    **Local workspaces** Local workspaces allow many operations to be done offline (add, edit, rename, delete, undo, diff) and are recommended only for workspaces with fewer than 50,000 files. Local workspaces are now the default with TFS 2012, but you can control that if you want server workspaces to be the default.

    **Async checkout for Server Workspaces** There is a new TFS 2012 feature so that VS 2012 will do checkouts in the background for server workspaces.  That eliminates the pause when you start typing and VS checks out the file.  Turning it on turns off checkout locks, but you can still use checkin locks.

    **Merge on Unshelve** Shelvesets can now be unshelved into a workspace even if there are local changes on files in the shelveset.  Conflicts will be created for any items modified both locally and in the shelveset, and you will resolve them as you would any other conflict.

  54. Do you always say "Option Strict On"?

    One of the most annoying aspects of the Visual Basic development environment relates to Microsoft's decision to allow late binding. Because Option Strict is Off by default, many type-casting errors are not caught until runtime. You can make VB work the same as the other Microsoft languages (which always do strict type-checking at design time) by modifying the project templates.

    So, always set Option Strict On right from the beginning of the development.

    Before you do this, you should first back up the entire VBWizards directory. If you make a mistake, then the templates will not load in the VS environment. You need to be able to restore the default templates if your updates cause problems.

    To configure each template to default Option Strict to On rather than Off, load each .vbproj template with VB source code into an editor like Notepad and then change the XML that defines the template. For example, to do this for the Windows Application template, load the file: Windows Application\Templates\1033\WindowsApplication.vbproj

    Technically, you do not have to add the Option Explicit directive, because this is the default for VB; but I like to do it for consistency. Next, you must save the file and close Notepad. Now, if you load a new Windows Application project in the VS environment and examine Project Properties, you will see that Option Strict has been turned on by default.

    Figure: Bad Example - Option Strict is Off

    Figure: Good Example - Option Strict is On
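    In the .vbproj template, the change amounts to a pair of standard MSBuild properties for VB projects (sketch only; the surrounding template elements are omitted):

    ```xml
    <PropertyGroup>
      <OptionExplicit>On</OptionExplicit>
      <OptionStrict>On</OptionStrict>
    </PropertyGroup>
    ```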

    In order for this setting to take effect for all project types, you must update each of the corresponding .vbproj templates. After making the changes on your system, you will need to deploy the new templates to each of your developers' machines in order for their new projects to derive from the updated templates.

    However, sometimes we don't do this because of too much work. In some scenarios, such as Wrappers around the COM code, and Outlook stuff with object model, there is going to be lots of work to fix all the type-checking errors. Actually it is necessary to use Object type as parameters or variables when you deal with COM components.

  55. Do you keep your NuGet packages small?

    When creating NuGet packages, it is better to create few small packages instead of creating one monolithic package that combines several relatively independent features.

    When you decide to package your reusable code and publish it to NuGet, it is sometimes worth splitting it into a few smaller packages. This improves the maintainability and transparency of your package. It also makes it much easier to consume and contribute to.

    Let's assume you have created a set of libraries that add extra functionality to web applications. Some library classes work with both ASP.NET MVC and ASP.NET WebForms projects, some are specific to ASP.NET MVC and some are related to security. Each library may also have external dependencies on some other NuGet packages. One way to package your libraries would be to create a single YourCompany.WebExtensions package and publish it to NuGet. Sounds like a great idea, but it has a number of issues. If someone only wants to use some MVC-specific classes from your package, they still have to add the whole package, which pulls in external dependencies they will never use.

    A better approach would be to split your libraries into 3 separate packages: YourCompany.WebExtensions.Core, YourCompany.WebExtensions.MVC and YourCompany.WebExtensions.Security. YourCompany.WebExtensions.Core will only contain core libraries that can be used in both ASP.NET WebForms and MVC. YourCompany.WebExtensions.MVC will contain only MVC-specific code and will have a dependency on the Core package. YourCompany.WebExtensions.Security will only contain classes related to security. This gives consumers a choice as well as better transparency of the features you are trying to offer. It also improves maintainability, as one team can work on one package while you are working on another. Patches and enhancements can also be introduced much more easily.

    Figure: Bad Example - One big library with lots of features, where most of them are obsolete with a release of ASP.NET MVC 5

    Figure: Good Example - Lots of smaller self contained packaged with a single purpose

  56. You need Process Monitor to track down permissions problems.

    E.g. You need to hunt down a problem where the IIS server can't write to a directory, even after you have given permissions to the app pool account.


    1. Install and run Process Monitor
    2. Apply filter
    3. Rejoice

    process monitor filter
    Figure: Apply filter to only show "ACCESS DENIED" results

    event properties
    Figure: And here we have the offending account

  57. At SSW we evaluate and use a lot of 3rd party libraries. Before considering a library for further evaluation we ask the following questions:

    • Is it open source?
    • Is the licence LGPL, Apache, or MIT? Comparison of licences
    • Is there a quick start guide?
    • Is there a FAQ?
    • Is there an API guide?
    • Is it easy to install? Is there a NuGet package?
    • Is there an active user community?

    If the answer is yes to all of these questions then the library is definitely worth further evaluation.

  58. Do you know the best sample applications?

    Before starting a software project and evaluating a new technology, it is important to know what the best practices are. The easiest way to get up and running is by looking at a sample application. Below is a list of sample applications that we’ve curated and given our seal of approval.

    Northwind Schema


    SQL Server

    SQL Server and Azure SQL Database



    UI - Angular

    UI - React

  59. Do you reference "most" .dlls by Project?

    When you obtain a 3rd party .dll (in-house or external), you sometimes get the code too. So should you:

    • reference the Project (aka including the source) or
    • reference the assembly?

    When you face a bug, there are 2 types of emails you can send:

    1. "Dan, I get this error calling your Registration.dll", or
    2. "Dan, I get this error calling your Registration.dll and I have investigated it. As per our conversation, I have changed this xxx to this xxx."

    The 2nd option is preferable. The simple rule is:

    • If there are no bugs then reference the assembly, and
    • If there are bugs in the project (or any project it references [See note below]) then reference the project.

    Since most applications have bugs, therefore most of the time you should be using the second option.

    If it is a well tested component and it is not changing constantly, then use the first option.

    1. Add the project to solution (if it is not in the solution).
      Add existing project
      Figure: Add existing project
    2. Select the "References" folder of the project you want to add references to, right click and select "Add Reference...".
      Add reference
      Figure: Add reference
    3. Select the projects to add as references and click OK.
      Select projects to reference
      Figure: Select the projects to add as references

    Note: We have run into a situation where we reference a stable project A, and an unstable project B. Project A references project B. Each time project B is built, project A needs to be rebuilt.

    Now, if we reference stable project A by dll, and unstable project B by project according to this standard, then we might face referencing issues: Project A will look for the version of Project B it was built against, rather than the current build, which will cause Project A to fail.

    To overcome this issue, we then reference by project rather than by assembly, even though Project A is a stable project. This will mitigate any referencing errors.

  60. If we lived in a happy world with no bugs, I would recommend this approach of using shared components from SourceSafe. As per the prior rule, you can see we like to reference "most" .dlls by project. However, if you do choose to reference a .dll without the source, the important thing is that when the .dll gets updated by another developer, there is *nothing* for the other developers to do - they get the latest version on their next build. Therefore you need to follow this:

    As the component user, there are a few steps, but you only need to do them once:

    1. First, we need to get the folder and add it to our project, so in SourceSafe, right click your project and create a subfolder using the Create Project menu (yes, it is a very silly name).
      use createvssfolder
      Figure: Create a 'folder' in Visual Source Safe and name it 'References'
      use referencesfolder
      Figure: 'References' folder
    2. Share the dll from the directory, so if I want SSW.Framework.Configuration, I go to $/ssw/SSWFramework/Configuration/bin/Release/

      I select both the dll and the dll.xml files, right-click and drag them into my $/ssw/zzRefs/References/ folder that I just created in step 1.

      use dllsxml
      Figure: Select the dlls that I want to use
      use rightclicktoshare
      Figure: Right drag, and select "Share"

    3. Still in SourceSafe, select the References folder and run 'Get Latest' to copy the latest version onto your working directory.
      use getlatest
      Figure: Get Latest from Visual Source Safe. VSS may ask if you want to create the folder if it doesn't exist - yes, we do.
    4. Back in VS.NET, select the project and click the show-all-files button in the Solution Explorer, then include the References folder into the project (or get latest if it's already there)
      use includeinvs
      Figure: Include the files into the current project
    5. IMPORTANT! If the files are checked out to you when you include them into your project, you MUST undo checkout immediately.

      You should never check in these files, they are for get-latest only.

      use undocheckout
      Figure: Undo Checkout, when VS.NET checked them out for you...

    6. Add Reference in VS.NET: browse to the 'References' subfolder and use the dll there.
    7. IMPORTANT! You need to keep your 'References' folder, and not check the files directly into your bin directory. Otherwise when you 'get latest', you won't be able to get the latest shared component.

    All done. In the future, whenever you do a Get Latest on the project, any updated dlls will come down and be linked the next time you compile. Also, if anyone checks out your project from SourceSafe, they will have the project linked and ready to go.

  61. Do you turn Edit and Continue OFF?

    With VS2013, you get the long awaited 64 bit edit and continue, and it is turned on by default. Edit and Continue is great when you need to make a quick change to executing code. However, it has its downsides too:

    • Web Development - Kills IISExpress when you stop
    • Can lead to bad development practices (trying to debug instead of doing RED, GREEN, REFACTOR)

    This is why we recommend that it is turned OFF by default.

  62. When a new developer joins a project, there is often a sea of information that they need to learn right away to be productive. This includes things like who the Product Owner and Scrum Master are, where the backlog is, where staging and production environments are, etc.

    Make it easy for the new developer by putting all this information in a central location like the Visual Studio dashboard.

    Note: As of October 2021, this feature is missing in GitHub Projects.


    2016 06 06 8 00 55
    Figure: Bad Example - Don't stick with the default dashboard, it's almost useless

    Figure: Good Example - This dashboard contains all the information a new team member would need to get started

    The dashboard should contain:

    1. Who the Product Owner is and who the Scrum Master is
    2. The Definition of Ready and the Definition of Done
    3. When the daily standups occur and when the next sprint review is scheduled
    4. The current sprint backlog
    5. Show the current build status
    6. Show links to:

      • Staging environment
      • Production environment
      • Any other external service used by the project e.g. Octopus Deploy, Application Insights, RayGun, Elmah, Slack

    You should also add the standard _Instructions.docx to your solution file for additional details on getting the project up and running in Visual Studio.

    For particularly large and complex projects, you can use an induction tool like SugarLearning to create a course for getting up to speed with the project.

    Figure: SugarLearning induction tool

  63. Do you use Slack as part of your DevOps?

    Figure: See how Slack can be set up to improve your DevOps

    With so many different tools collecting information about your application, a developer frequently needs to visit many different sites to answer questions like:

    • Was the last build successful?
    • What version is in production?
    • What errors are being triggered on the app?
    • Is the server running slow?
    • What is James working on?

    This is where a tool like Slack comes in handy. It can help your team aggregate this information from many separate sources into one dedicated channel for your project. Another benefit is that a new team member instantly has access to the channel's full history, so no conversations are lost.

    At SSW we integrate Slack with:

    • Octopus Deploy
    • TeamCity
    • Visual Studio

    Even better, you can create bots in Slack to manage things like deployments and updating release notes.
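    The simplest way to push this kind of information into a channel is a Slack incoming webhook: you POST a small JSON payload to a webhook URL and the message appears in the channel. A minimal sketch in Python (the project name, version, and the SLACK_WEBHOOK_URL environment variable are assumptions for illustration; a real webhook URL must be generated in your Slack workspace):

    ```python
    import json
    import os
    import urllib.request


    def build_release_note(project: str, version: str, environment: str) -> dict:
        """Build a Slack incoming-webhook payload announcing a deployment."""
        return {"text": f"*{project}* {version} deployed to {environment}"}


    def post_to_slack(payload: dict, webhook_url: str) -> None:
        """POST the payload to a Slack incoming webhook."""
        req = urllib.request.Request(
            webhook_url,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)


    # Hypothetical deployment notification
    payload = build_release_note("Sparrow", "v2.3.1", "Staging")
    print(json.dumps(payload))

    # Only send when a real webhook URL is configured, e.g. one created at
    # https://api.slack.com/messaging/webhooks
    url = os.environ.get("SLACK_WEBHOOK_URL")
    if url:
        post_to_slack(payload, url)
    ```

    A deployment tool like Octopus Deploy can call a script like this as a post-deployment step, so the channel records every release automatically.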

    Figure: Good example - One centralized location for team chat, deployment issues, exceptions and TFS changes

  64. Have you ever seen dialogs raised on the server-side? These dialogs hang the thread they are on, and hang IIS until they are dismissed. This happens when you use Trace.Fail or set AssertUIEnabled="true" in your web.config.

    See Scott's blog Preventing Dialogs on the Server-Side in ASP.NET or Trace.Fail considered Harmful.

    ```csharp
    public static void ExceptionFunc(string strException)
    {
        System.Diagnostics.Trace.Fail(strException);
    }
    ```

    Figure: Never use Trace.Fail

    ```xml
    <configuration>
       <system.diagnostics>
          <assert AssertUIEnabled="true" logfilename="c:\log.txt" />
       </system.diagnostics>
    </configuration>
    ```

    Figure: Never set AssertUIEnabled="true" in web.config

    ```xml
    <configuration>
       <system.diagnostics>
          <assert AssertUIEnabled="false" logfilename="c:\log.txt" />
       </system.diagnostics>
    </configuration>
    ```

    Figure: Should set AssertUIEnabled="false" in web.config
