Secret ingredients to quality software

Rules to Better System Administrators - 18 Rules

System Administrators (SysAdmins) are the lifeblood of any business. They maintain the infrastructure, networks, systems, and cloud services of businesses. This is why we have developed these standards for better System Administrators.

If you still need help, visit our Network Architecture consulting page and book in a consultant.

  1. For unplanned outages, see Unplanned - Do you have an unplanned outage process?

    If your servers are down or have to go down during business hours you should notify the users at least 15 minutes beforehand so you will not get 101 people all asking you if the computer is down.

    For short outages (under 15 minutes) that affect only a few people (under 5), or happen outside of business hours, IM is the best method. If you use Microsoft Teams or Skype, a quick message will do.

    Note: If they are not online on Teams or Skype, then they can't complain that they were not warned.

    For extended or planned outages, or if you have a larger number of users (50+), email is the suggested method.
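    The thresholds above boil down to a simple decision rule. A minimal sketch (the function and parameter names are mine, not an SSW tool):

```python
def notification_channel(minutes: int, affected_users: int, business_hours: bool) -> str:
    """Pick a notification channel using the thresholds in the rule above."""
    # Outside business hours, or short outages affecting few people: IM is enough
    if not business_hours or (minutes < 15 and affected_users < 5):
        return "IM"
    # Extended/planned outages or larger audiences (50+): email
    return "email"
```

    So a 10-minute outage affecting 3 people gets an IM, while an hour-long outage affecting the whole company gets an email.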


    If you send an email, it is a good idea to tell the users a way to monitor the network themselves, e.g. software solutions like SCOM or WhatsUp Gold.

    Include a "To myself" email. It gives visibility to others who are interested in what needs to be done to fix the problem, and makes it easier to remember to send the 'done' email, e.g. "Done - CRM is alive again".


    Immediately before the scheduled downtime, check for logged in users, file access, and database connections.


    Open 'Windows Task Manager' (Run > taskmgr) and select the 'Users' tab. Check with users if they have active connections, then have them log off.

    Figure: Connected users can be viewed in Task Manager


    Open 'Computer Management' (Run > compmgmt.msc), then 'System Tools > Shared Folders'. Check 'Session' and 'Open Files' for user connections.

    Figure: Computer Management 'Open Files' View


    Open SQL Server Management Studio on the server. Connect to the local SQL Server. Expand 'Management' and double-click 'Activity Monitor'.

    Figure: SQL Management Studio 'Active Connections' View

    Once these have been checked for active users, and users have logged off, maintenance can be carried out.
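    The Task Manager check above can also be scripted: on Windows, the built-in `quser` command lists logged-on users, and its output can be parsed. A sketch (the sample output format is an assumption based on typical Windows Server output):

```python
def parse_quser(output: str) -> list[str]:
    """Extract usernames from 'quser' output (header row + one row per session)."""
    users = []
    for line in output.strip().splitlines()[1:]:  # skip the header row
        # The current user's row is prefixed with '>'; strip it before splitting
        users.append(line.lstrip(" >").split()[0])
    return users

# Sample output, format assumed from a typical Windows Server
sample = (
    " USERNAME              SESSIONNAME        ID  STATE   IDLE TIME  LOGON TIME\n"
    ">adamc                 rdp-tcp#1           2  Active          .  1/2/2024 9:01 AM\n"
    " jatin                 rdp-tcp#2           3  Disc           15  1/2/2024 8:30 AM\n"
)
```

    In a real script, `output` would come from running `quser` via `subprocess.run`.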

    Restarts should only be performed during the following time periods:

    1. Between 7am and 7:05am
    2. Between 1pm and 1:05pm
    3. Between 7pm and 7:05pm

    If a scheduled shutdown is required, use the PsShutdown utility from Microsoft's Sysinternals page.
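    The restart windows above are easy to enforce in a script before invoking PsShutdown. A minimal sketch (the function name is mine):

```python
from datetime import datetime, time

# Allowed restart windows from the rule above: five-minute slots at 7am, 1pm and 7pm
RESTART_WINDOWS = [
    (time(7, 0), time(7, 5)),
    (time(13, 0), time(13, 5)),
    (time(19, 0), time(19, 5)),
]

def in_restart_window(now: datetime) -> bool:
    """Return True if 'now' falls inside one of the allowed restart windows."""
    return any(start <= now.time() <= end for start, end in RESTART_WINDOWS)
```

    A wrapper script would check this before calling PsShutdown, and abort (or wait) otherwise.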

    Always reply 'Done' when you finish the task.

  2. For planned outages, see

    During your time as a SysAdmin, you will come across many outages. Some of them will impact BAU (Business As Usual) and others will just be minor service outages. Do you know what to do? Do you have a plan in the event of these outages?

    Below is a process for these types of outages. Some amount of common sense is required here: an outage would be when services that affect BAU work are disrupted for SSW and/or some hardware has failed.

    Hardware Outage:

    • Firewall
    • Switch
    • Blade Servers
    • SAN Storage
    • UPS

    Service Outage:

    • Active Directory Domain Services
    • O365 Services: Teams, SharePoint, Exchange, OneDrive
    • File Servers
    • SQL Servers
    • IIS Servers

    Determining What Services are Disrupted

    Many services can be used for device monitoring e.g. WhatsUp Gold, Solarwinds, SCOM. You would do the following in any of them:

    1. Log in to the monitoring service
    2. Check to see which services are down

    First Contact

    After you have determined what services have been disrupted it is time to call your SysAdmin team and organise a quick conference call. This will allow you to have a discussion prior to making any changes/fixes that could cause the outage to become worse.

    Key Discussion Points:

    1. What services have been disrupted?
    2. What is the impact of these disruptions?
    3. Is an email to everyone in your company required?
    4. What are your next steps?

    What if you cannot reach anyone?

    If you cannot reach anyone move onto the Email section.


    If the previous discussion determined that an email needs to be sent to your entire company, or you have decided this is necessary because you could not contact anyone above, send an email in the following format:

    A separate email needs to be sent to SysAdmins outlining what was discussed on the call. If no one was contactable, please proceed with what you have determined on your own.

    Next steps did NOT resolve the issue

    If you have completed your tasks but the issue has not been resolved, please try to make contact with the SysAdmin team again and send an updated To Myself email.

    Next steps resolved the issue

    If your actions have resolved the issue, please notify everyone that the services have been restored and update your To Myself email.

  3. Do you have a disaster recovery plan?

    At some point every business will experience a catastrophic incident. At these times it is important to have a plan that explains who to contact, the priority of restore and how to restore services.

    At the time of a disaster, you should have a few objectives established and measure some results - The objectives are RPO (Recovery Point Objective) and RTO (Recovery Time Objective); and the measurements to take are RPA (Recovery Point Actual) and RTA (Recovery Time Actual).

    It's recommended to practice your disaster recovery at least once every 12 months. This way you make sure that you are investing in the minimum amount of required resources, and that your plan actually works.

    So what do these terms mean?

    Figure: RTOs vs RPOs


    RPO or Recovery Point Objective, is a measure of the maximum tolerable amount of data that the business can afford to lose during a disaster. It also helps you measure how long it can take between the last data backup and a disaster without seriously damaging your business. RPO is useful for determining how often to perform data backups.


    RTO or Recovery Time Objective, is a measure of the amount of time after a disaster in which business operation is retaken, or resources are again available for use. This measurement determines the amount of resources that are required for the recovery to happen within the timeframe required.


    RPA or Recovery Point Actual, is the actual measurement of the amount of data lost during a disaster recovery.


    RTA or Recovery Time Actual, is the actual measurement of downtime during a disaster recovery.

    Note: these may all be different for different services. For example, a bank may have a transaction database that can only ever tolerate an RPA/RTA of a few minutes, as even in those few minutes thousands of transactions could be lost. However, the same bank may have a website where an RTA/RPA of several hours is acceptable, as it is much less critical to the bank's overall operation.

    How to calculate these values?

    RTO and RPO are determined via a consultation called BIA (Business Impact Analysis). The organization needs to work out what the maximum amount of data that they are prepared to lose and also the maximum amount of time that they are prepared to be without services. These are both measured in time, and could be seconds, minutes, hours or days depending on the organization's requirements. This is a balancing act as generally the shorter the timeframe required, the more resources the organisation will need in order to achieve the target.

    After this a disaster should be simulated to test that the RTA/RPA values match the RTO/RPO required by the organization.
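    That comparison of actuals against objectives is mechanical. A minimal sketch (function and message wording are mine; the figures match the North Sales row in the worked example):

```python
from datetime import timedelta

def check_recovery(rpo, rto, rpa, rta):
    """Compare actual recovery figures against the objectives and report breaches."""
    breaches = []
    if rpa > rpo:
        breaches.append(f"RPA {rpa} exceeded RPO {rpo}: too much data was lost")
    if rta > rto:
        breaches.append(f"RTA {rta} exceeded RTO {rto}: recovery took too long")
    return breaches

# North Sales: objectives RPO 4h / RTO 8h, actuals RPA 8h / RTA 8h
sales = check_recovery(timedelta(hours=4), timedelta(hours=8),
                       timedelta(hours=8), timedelta(hours=8))
```

    Here `sales` contains a single breach: the data loss (RPA) exceeded the objective, while the recovery time was exactly on target.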

    Example: Mr Bob Northwind's company experienced a catastrophic incident. The failure occurred at 8pm local time on a Friday night. Their website and sales transaction software were affected.

    In his Disaster Recovery Plan he had the following objectives:

    Service            RPO       RTO
    Northwind Website  2 days    4 hours
    North Sales        4 hours   8 hours

    It is important that these objectives are signed off by the product owner as per

    After the recovery was complete they then analyzed the downtime which showed the following:

    Service            RPA       RTA
    Northwind Website  8 hours   2 days
    North Sales        8 hours   8 hours

    After analyzing the data, they discovered a few issues with their Disaster Recovery Plan:

    1. They didn't have any spare hardware on premises, which meant that to get the website back up and running they needed to find a shop on a weekend to buy a server and then start the recovery process. This delayed them by an entire day.
    2. Mr Northwind's IT Manager had mistakenly set the backups to 12-hour backups (at midnight and midday each day). This meant that the most recent backup for both services had occurred at 12pm on Friday and they had 8 hours of missing transactions. The greatest allowable data loss should have only been 4 hours.

    This explains why it is important to practice your disaster recovery plan. A real incident is not the ideal time to realize that your backup/procedures are inadequate.

  4. We recommend enforcing strict password policies.

    Below is a capture of the settings we use:


    When passwords have to be changed they should meet the following complexity requirements:

    • Not contain all or part of the user's account name
    • Be at least 6 characters in length
    • Contain characters from 3 of the following 4 categories:

      • English uppercase characters (A through Z)
      • English lowercase characters (a through z)
      • A number (0 through 9)
      • Non-alphanumeric characters (e.g., !, $, #, %)

    Complexity requirements are enforced when passwords are changed or created. We also enforce a lockout policy so if a user gets their password wrong 5 times, their account will be locked out for 15 minutes.
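    The complexity rules above can be sketched as a checker. A minimal sketch (note: the real Windows policy also rejects passwords containing *parts* of the account name; this simplified version only checks for the full name):

```python
import re

def meets_policy(password: str, account_name: str) -> bool:
    """Check the complexity requirements listed above."""
    if len(password) < 6:
        return False
    # Simplification: the real policy also blocks partial account-name matches
    if account_name and account_name.lower() in password.lower():
        return False
    categories = [
        re.search(r"[A-Z]", password),         # English uppercase
        re.search(r"[a-z]", password),         # English lowercase
        re.search(r"[0-9]", password),         # a number
        re.search(r"[^A-Za-z0-9]", password),  # non-alphanumeric
    ]
    return sum(1 for found in categories if found) >= 3
```

    For example, "Passw0rd!" passes (four categories), while "Jatin123!" fails for the user jatin because it contains the account name.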

    Passphrases are better than passwords - they are even more difficult to crack than complex passwords.

    MFA is essential. You should use it everywhere possible. Check

    Bad practice: Requiring users to change their passwords e.g. every 180 days does not improve security. If you already have a strong password (as above) and a second factor of authentication (e.g. MFA) changing it does very little to make you more secure. Generally, you should change your password only when you believe it has been compromised.

  5. Do you protect your users and administrator accounts with more than one authentication method?

    What is Multi-Factor Authentication (MFA)?

    MFA is another layer of security for your users and administrators. It adds another 'password' or code that you receive on a device you possess - a phone, for example - to make it more difficult for attackers to steal your account. If they guess or brute-force your password, they still need the second code to get into your account.

    Generally, every time you log in on a service, it will ask for your normal password and an additional code. This code can be retrieved through an authenticator app, through an SMS, or even through a phone call to your mobile.

    It is best practice to apply MFA to your Administrators first, as their accounts are the most important in the company and have access to all resources, and to your users second, as they still benefit from the added security.

    Figure: Good Example - Be secure by using MFA apps like Microsoft Authenticator and Google Authenticator

  6. Do you have Password Writeback enabled in your Azure AD Connect?

    If you want to let your users reset their own, on-premises passwords directly from the cloud, you need to have Password Writeback enabled in Azure AD Connect!

    You can read more about Password Writeback from the Microsoft Documentation:

    When setting up Azure AD Connect, you need to set the "Password Writeback" option:


    Good Example: Setting up Password Writeback in Azure AD Connect

  7. Security - Do you use service accounts?

    Do you use service accounts for recurring tasks and systems, or do you use user and personal accounts?

    As a rule, you should never use a user account for accessing systems, reports, tasks and other long-running applications that do not need human or user interaction to run.

    Service accounts provide a security context where the applications run, without the need to worry about their passwords or privileges. If a user changes their password, you will not break anything, because service account passwords normally do not expire and changing them is rarely needed.

    Also, if the security of a user account is breached, you do not have to worry about any other systems being compromised - that account was not being used to run any applications. Always keep your service account passwords safe and complex, and you will never need to worry about them.

  8. How often do you find files on your network file server that clearly shouldn't be there? Developers are notorious for creating temporary files and littering your file system with them. So how can you identify exactly who created or modified the file, and when?

    Figure: Who created this file?

    Figure: Terminal into your file server using Terminal Services

    Figure: It was Jatin!

    The easiest way is to configure Windows file auditing.

    Thankfully, Windows Server comes with built-in file auditing. Any change, creation, or deletion can be logged to your system event log. Here's how to set it up.

    How to implement auditing on your file server

    1. Terminal Server into the file server
    2. In Windows Explorer, locate the directory you want to configure logging for (e.g. C:\Inetpub\wwwroot for logging changes to your website files)
    3. Select Security tab | Advanced
      Figure: Select the folder you want to configure auditing for
    4. Click the Auditing tab
    5. Select the users whose usage you want to monitor (usually all users, so select Everyone )
      Figure: Select Everyone so that anyone who modifies any of the files will be logged
    6. Select what you want to monitor. For best performance, we only tick the options shown in the figure below - there's no need to log when someone opens a file.
      Figure: Select these 4 options (only audit the events you need to audit - there's no need to log when someone opens a file)
    7. Click OK and OK again to apply the changes. The process may take some time depending on the number of subfolders and files selected. Now you need to configure the system event log.
    8. Open Control Panel->Administrative Tools->Event Viewer
    9. Right-click the Security node and select Properties
    10. Set the maximum log size and make sure 'Overwrite events as needed' is checked
      Figure: Keep your log file to about 250MB - otherwise, your system performance may suffer

    Checking who created the file

    Now test to see if auditing is working.

    1. On the server, create a file called "test.aspx" somewhere in the path that is being audited
    2. Open Control Panel->Administrative Tools->Event Viewer
    3. Select the Security node, and notice the entries that have been created. They will have a similar format to the figure below.
      Figure: Any creates, deletes and updates now get logged to the Event Log

    That's all! It is also great for finding out who accidentally deleted files from the file system.

    Furthermore, we can dump the event log to an Access or SQL Server database to make it easier to handle. Here is how to do it:

    • Download the scripts: one for Access database and the other for SQL Server.
    • Find and change the strEventDBConn variable to your connection string; also, modify the strEventDB and tblEvents variables to your database name and table name.
    • Write down the names of the servers to monitor in EventHosts.txt.

    Done - now you only need to double-click the script to start it.

    Figure: Caught an action on remote server and logged it to database
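    The filtering step those scripts perform can be sketched in Python. A minimal sketch (the Event IDs are assumptions based on the standard Windows Security log's object-access auditing; the field and table names are illustrative, not the scripts' actual schema):

```python
# Event IDs assumed from the standard Windows Security log:
# 4656 = handle requested, 4660 = object deleted, 4663 = object access attempt
FILE_AUDIT_EVENT_IDS = {4656, 4660, 4663}

def to_db_rows(events):
    """Shape raw event records into (time, event_id, user, path) rows
    ready for a bulk INSERT into the events table."""
    rows = []
    for event in events:
        if event.get("EventID") in FILE_AUDIT_EVENT_IDS:
            rows.append((event["TimeGenerated"], event["EventID"],
                         event["User"], event["ObjectName"]))
    return rows

# Hypothetical sample records as dicts
sample_events = [
    {"EventID": 4663, "TimeGenerated": "2024-01-02 09:15", "User": "SSW\\Jatin",
     "ObjectName": "C:\\Inetpub\\wwwroot\\test.aspx"},
    {"EventID": 4624, "TimeGenerated": "2024-01-02 09:16", "User": "SSW\\Adam",
     "ObjectName": ""},  # a logon event - filtered out
]
```

    Only the object-access events survive the filter; logon noise like 4624 is dropped before insertion.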

  9. At SSW we have moved away from paid certificates for our websites and web apps. We now use Let's Encrypt managed by Certify The Web.

    Previously we managed our certificates using a SharePoint list plus calendar reminders to tell us when they were going to expire. The issue with this system is that maintaining the SharePoint list and keeping the certificates up to date was a manual process. This left a lot of room for human error, especially when managing hundreds of certificates. There are of course commercial solutions to manage certificates, but these haven't been economical for our environment.

    With Certify the Web and Let's Encrypt, we remove this human error and manual handling, ensuring that our certificates never expire.

    You should use Certify the Web.

    Figure: Bad example - Keeping a database is unnecessary

    Figure: Good example - Using Certify The Web
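    The expiry check that Certify The Web automates for us can be approximated in a few lines. A minimal sketch using Python's standard library (`ssl.cert_time_to_seconds` parses the 'notAfter' string returned by `ssl.getpeercert()`; the 30-day threshold is an assumption, roughly matching when ACME clients typically renew):

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """'not_after' is the notAfter string from ssl.getpeercert(),
    e.g. 'Jun  1 12:00:00 2025 GMT'."""
    expires = ssl.cert_time_to_seconds(not_after)
    now = datetime.now(timezone.utc).timestamp()
    return int((expires - now) // 86400)

def needs_renewal(not_after: str, threshold_days: int = 30) -> bool:
    # 30 days is an assumption - roughly when ACME clients renew by default
    return days_until_expiry(not_after) <= threshold_days
```

    A scheduled script could run this against each site's certificate and alert well before expiry.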

  10. What is the best option for your business when it comes to securing your website with HTTPS?

    When you create a website, you can only access it through HTTP (http://), and not securely through HTTPS (https://) if you do not own an SSL Certificate.

    When it comes to website certificates, you can choose from free or paid SSL certificates!

    Free certificates can be obtained from Certificate Authorities like Let's Encrypt, which is helping provide free and automated certificates for the web.

    Free certificates:

    • provide the same level of SSL encryption as paid certificates;
    • provide HTTPS with a green padlock on the address bar of your browser, just like paid certificates;
    • can be automatically renewed, easily.

    Good Example: Let's Encrypt Free Certificate Authority

    Why would anyone use paid certificates, then?

    If you are operating a big business, paid certificates give you some more assurances over free ones, and you can obtain them through reputable Certificate Authorities like Comodo, GeoTrust, Symantec, etc:

    Paid certificates:

    • give you a warranty against misuse or wrongly issued certificates;
    • are normally valid for 1 year or more, while free certificates are only valid for 3 months;
    • offer support for any errors or problems you have with your certificates.

    Good Example: Comodo Paid Certificate Authority

    SSL Certificates are an important part of any reputable website, so if you are operating a small website, blog, testing environment, personal site, anything that doesn't need too much support, getting a free certificate is the way to go.

    If your business or site does not fit the above description, getting a paid certificate is the best option!

  11. Do you know if your computer should be joined to the domain or not?

    Joining your company's domain is a trade-off:

    Option #1: If you join the domain, the company is the one responsible for managing your device, so all company rules and policies will be applied to it (Windows Update frequency, users, password resets, etc) and you will need to go through your SysAdmins if you have troubles with it.

    Option #2: If you choose not to join the domain, the PC management is all yours, giving you more freedom, but any automatic scripts would need to be run manually.

    Below are the pros and cons of joining the domain:

    Area              | Pros (+)                                                               | Cons (-)
    PC Management     | Client management through GPOs (Group Policy Objects)                  | Lack of freedom/autonomy
    Resource Access   | Direct access to resources (e.g. fileserver)                           | Needs to sign in first, or be attached to a VPN or the network to access resources
    Automatic Scripts | GPOs apply automatic scripts like the Login Script and Backup Scripts  | Need to run Login and Backup scripts manually
    Support Level     | More support available from your SysAdmins - you have someone to rely on for troubleshooting all computer applications | Less support available from SysAdmins - you can run any obscure application on your computer that may not be supported by your company
  12. Occasionally, one server and its drives will not have sufficient space to store all related files in a network share. For example, you may have a "SetupFiles" directory that stores all Setup executables on your network e.g. \\bee\SetupFiles. There are problems with this approach.

    1. You will run out of space - which means you will have to copy or move old (but still used) setup files around to other drives (\\bee\d$\SetupOld\) or other machines e.g. \\tuna\SetupFiles. This fragmentation of your setup files can cause confusion for your users.
    2. When you retire or rename the old server, links to the old server location will not work

    So how do you get around this problem? The answer is in the Distributed File System (DFS). Instead of having several server-specific file share locations, you can have a domain-wide setup location that offers a seamless experience to your users. DFS will even track a history of when and where file locations were moved.

    Figure: The Distributed File System consolidates many separate file shares into one convenient location for your users

  13. For any kind of backups, it is important to log a record on success so you can check for backups that have failed.

    Without some kind of logging e.g. on a SQL database, on a txt file, on a SharePoint list, it is impossible to tell which backups have been completed or not. This applies to backups of any kind e.g. servers, personal computers, emails.

    Some important stats to log:

    1. Date - Date backup has run
    2. Username - If a personal backup, which user was logged in when the backup ran
    3. PC Name - The name of the server (or PC) the backup came from

    Having entries logged in a database is better than having an email sent because entries are easier to see and manage, and emails might get lost in the noise.
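    A minimal sketch of such a log using SQLite (the table and column names are my own, matching the stats listed above; a real setup would point at SQL Server or a SharePoint list instead):

```python
import sqlite3
from datetime import datetime

def log_backup(conn: sqlite3.Connection, username: str, pc_name: str) -> None:
    """Record a successful backup with the stats listed above: date, user, PC."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS BackupLog (Date TEXT, Username TEXT, PCName TEXT)"
    )
    conn.execute(
        "INSERT INTO BackupLog VALUES (?, ?, ?)",
        (datetime.now().isoformat(timespec="seconds"), username, pc_name),
    )
    conn.commit()

# Usage: an in-memory database stands in for the real backup log
conn = sqlite3.connect(":memory:")
log_backup(conn, "JamesZhou", "SSW-PC01")
```

    A backup job would call this as its final step, so any missing row means a missing (or failed) backup.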

    Figure: Bad example - an email is sent on completion

    Figure: Good example - a record is logged on completion

    Figure: Best example - the latest completion is logged in a SharePoint list

    Now you are able to be aware of missing backups. You can set up automatic notifications based on the above table, e.g. via a SQL Reporting Services data-driven subscription.

    It is also important to review the state of your backups at least on a weekly basis, ensuring that backups are not failing and that you are able to restore them when necessary. This is part of a good disaster recovery process.

    To see the best backup tools currently available, check

    If you need any help with your backups or disaster recovery process, check

    Figure: Good Example - No critical or warnings in your backups

  14. It is important that system administrators can easily find out how reliable their servers are. This can be achieved using tools like WhatsUp Gold to monitor many statistics e.g.:

    • Uptime - Ping, Interface monitor
    • Performance - RAM usage, CPU usage
    • Network - Bandwidth, Interface throughput
    • Storage - Disk usage, health

    For example, here is a report in WhatsUp Gold you can use to monitor servers on a daily basis.

    Figure: Good example - WhatsUp Gold - Green indicates servers are healthy

    WUG can also send email alerts about servers.

    It is also possible to use SQL Reporting Services to create a custom report that can be emailed via a data-driven subscription, which sends a nicely formatted email when there's a problem.

    Figure: Good example - Email - Red indicates servers are not healthy

  15. Wireless networks are everywhere now. You can't drive down the street without finding a network which is insecure. However, in an office environment, there is a lot more to lose than a bit of bandwidth. It is vital that wireless is kept secure.

    WEP, disabling SSID broadcast, and MAC address allow-lists are all OK, but these are more home-grade security measures.

    Figure: Bad example - the above settings are not suitable for a company's wireless access point

    For the office, you need something a bit more robust that doesn't require much management overhead.

    It is recommended to use RADIUS authentication to integrate with your Active Directory.

    Figure: Good example - configure your wireless access point to authenticate against AD

    This article explains how to set up your wireless AP to use WPA2-Enterprise. WPA2-Enterprise verifies network users (AD accounts) through a server (Domain Controller).

    The recommended method of authentication is PEAP (Protected Extensible Authentication Protocol), which authenticates wireless LAN clients using only server-side digital certificates (In our case we used an AD CA) by creating an encrypted SSL/TLS tunnel between the client and the authentication server. The tunnel then protects the subsequent user authentication exchange.

    You will need:
    • 802.1X-capable 802.11 wireless access points (APs)
    • Active Directory with group policy
    • Network Policy Server (NPS) servers
    • Active Directory Certificate Services based PKI for Server certificates for NPS computer/s and your wireless PC's


    This document assumes you have some knowledge of how to configure your wireless access points and install server roles. It also assumes that you have already configured an Enterprise Certificate Authority on your Active Directory Domain.

    1. Configure your wireless access points. At SSW we use UniFi APs (UniFi AP AC Lite).
    2. Install NPS on your server. On Windows Server 2008 or 2008 R2, open Server Manager and:

      1. Add the "Network Policy and Access Services" role. Under role services, add:
        • Network Policy Server
        • Routing and Remote Access Services
    3. Configure RADIUS Clients on NPS. Open the NPS console, right-click "RADIUS Clients", then click "New". Fill out the Friendly Name (the name of the wireless access point) and Address (IP address), then add the shared secret you configured on your access point (keep this safe - for example, we use KeePass as a password repository).

    Figure: Radius client settings

    4. Configure 802.1X on the NPS server. In the NPS server's Server Manager, open "Roles", then "Network Policy and Access Services", then click NPS (Local). In the right-hand pane under standard configuration, choose "RADIUS Server for 802.1X Wireless or Wired Connections", then click "Configure 802.1X" to start a wizard-based configuration.

      1. Select the top radio button "Secure Wireless Connections", then click Next
      2. On the Specify 802.1X Switches page, check that the APs you configured under RADIUS Clients are in the list, then click Next
      3. Now the authentication method: from the drop-down list, select Protected EAP (PEAP). Note: this method requires a computer certificate on the RADIUS server and either a computer or user certificate on the client machine
      4. Select the groups (e.g. Domain\WirelessAccess) you would like to give wireless access to. You can do this by user, computer, or both
      5. VLANs can be configured in the next step if needed - it wasn't required in my case, so I just used the defaults
      6. You then need to register the server with Active Directory: right-click NPS (Local) and select Register Server in Active Directory

    Figure: How to register NAP server with AD
    You should now have a Connection Request Policy and a Network Policy. Remove the MS-CHAP v1 authentication method from the network policy (under the Constraints tab).

    5. Configure Certificate Auto-enrolment. First open Group Policy Management.

      1. Create a new GPO, name it "CertEnrollmentWireless" (or whatever name you deem suitable), and link it to the root of the domain or a specific OU depending on your needs and OU structure
      2. Under the security filtering scope, remove "Authenticated Users" and add the AD groups you created. This ensures that the policy, once configured, is applied only to members of those groups
      3. Edit the settings of the group policy and go to Computer Configuration\Policies\Windows Settings\Security Settings\Public Key Policies. In the details pane, right-click "Certificate Services Client - Auto-Enrolment" and select Properties. In the Properties dialog box, select Enabled from the drop-down box and tick all the remaining tick boxes. This makes sure that the computer auto-enrols for a certificate from the AD CA
      4. Now navigate to Computer Configuration\Policies\Windows Settings\Security Settings\Public Key Policies\Automatic Certificate Request Settings. Right-click in the details pane and select New | Automatic Certificate Request. This opens a wizard where you can select a Computer Certificate

    Figure: Group policy settings

    6. Create a Windows Wireless 802.1X GPO Policy

      1. Go to Computer Configuration\Policies\Windows Settings\Security Settings\Wireless Network (IEEE 802.11) Policies. Right-click and create a new policy for Windows Vista and later (if you only have XP machines, create only an XP one). If you have Vista or later, you must create a Vista policy, or else Vista will try to use the XP policy (not recommended).
      2. Enter a Policy Name (e.g. BeijingWifiSettings) and description and link to the root of the domain.

    Figure: GP link and scope settings
      3. Click "Add", enter a Profile Name, then add the SSID name from the Wireless Access Point/s. Make sure the tick box "Connect Automatically when this network is in range" is ticked
      4. Click on the Security tab. Make sure Authentication is "WPA2-Enterprise" and Encryption is "AES". Under "Select a network authentication method", choose "Microsoft: Protected EAP (PEAP)". Under Authentication Mode, choose whether you want to authenticate computers and/or users with your digital certs - then select "Computer Authentication"
      5. Click on the "Properties" button. Tick "Validate server certificate", then tick "Connect to these servers" and enter the FQDN of the NPS. Then under Trusted Root Certification Authorities, tick your Root CA certificate. Click OK

    Figure: Connection security settings
      6. Click OK twice. Optional: under the Network Permission tab you can use the tick boxes to restrict clients to infrastructure networks or only GPO-profiled allowed networks if you desire
      7. Click OK and you have completed your Windows Wireless Policy

    Figure: Wifi_Settings settings

  16. When guests come to an SSW office, we provide them with easy Wifi access using a QR code. This saves people manually typing in a password and can have them up and running in a matter of moments. QR codes can easily be created with services like QR Code Monkey.

    Bad Example: Providing an SSID and password for guests to manually sign in with

    Good Example: QR Code for Guest Wifi
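    The QR code's payload is just a string in the de facto 'WIFI:' format. A sketch of building it (the escaping rules follow the common ZXing convention; the function name is mine):

```python
def wifi_qr_payload(ssid: str, password: str, auth: str = "WPA") -> str:
    """Build the string a Wi-Fi QR code encodes (the de facto 'WIFI:' format)."""
    def esc(value: str) -> str:
        # Backslash-escape the format's special characters (backslash itself first)
        for ch in "\\;,\":":
            value = value.replace(ch, "\\" + ch)
        return value
    return f"WIFI:T:{auth};S:{esc(ssid)};P:{esc(password)};;"
```

    Feeding the resulting string to any QR generator (like QR Code Monkey) produces a code that phones join automatically when scanned.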

  17. A company-wide Word template brings many benefits e.g.:

    Figure: Bad Example - creating an email/document does not have the company templates

    Figure: Good Example - creating an email/document with the company templates

    How to have a company-wide Word template:

    • Modify your Normal.dotm file to have the headings and format that you want for Word documents
    • Create standard employee email footer files e.g. JamesZhou.htm or JamesZhou.txt
    • Put the files on a network location - this is the place that will have the master copies
    • Have a logon script, set up through Group Policy, that copies the files to the users' computers when they log on. For example, this snippet from an old batch login script:
    ECHO Copy Office Templates To Workstation >> %LogonLogFile%
    call %ScriptFolder%\SSWLogonScript\BatchScript\SafeCopyNewerFile.bat "\\fileserver\DataSSW\DataSSWEmployees\Templates\" "%APPDATA%\Microsoft\Templates\" %LogonLogFile%
    call %ScriptFolder%\SSWLogonScript\BatchScript\SafeCopyNewerFile.bat "\\fileserver\DataSSW\DataSSWEmployees\Templates\Normal.dotm" "%APPDATA%\Microsoft\Templates\Normal.dotm" %LogonLogFile%
    call %ScriptFolder%\SSWLogonScript\BatchScript\SafeCopyNewerFile.bat "\\fileserver\DataSSW\DataSSWEmployees\Templates\ProposalNormalTemplate.dotx" "%APPDATA%\Microsoft\Templates\ProposalNormalTemplate.dotx" %LogonLogFile%
    call %ScriptFolder%\SSWLogonScript\BatchScript\SafeCopyNewerFile.bat "\\fileserver\DataSSW\DataSSWEmployees\Templates\" "%APPDATA%\Microsoft\Templates\" %LogonLogFile%
    call %ScriptFolder%\SSWLogonScript\BatchScript\SafeCopyNewerFile.bat "\\fileserver\DataSSW\DataSSWEmployees\Templates\Microsoft_Normal.dotx" "%APPDATA%\Microsoft\Templates\Microsoft_Normal.dotx" %LogonLogFile%
    call %ScriptFolder%\SSWLogonScript\BatchScript\SafeCopyNewerFile.bat "\\fileserver\DataSSW\DataSSWEmployees\Templates\Blank.potx" "%APPDATA%\Microsoft\Templates\Blank.potx" %LogonLogFile%
    xcopy /Y "\\fileserver\DataSSW\DataSSWEmployees\Templates\NormalEmail.dotm" "%APPDATA%\Microsoft\Templates\" >> %LogonLogFile%
    xcopy /Y "\\fileserver\DataSSW\DataSSWEmployees\Templates\NormalEmail.dotx" "%APPDATA%\Microsoft\QuickStyles\" >> %LogonLogFile%
    ECHO Templates Copied

    Figure: Bad Example - This is a snippet of an old login script

    You can automatically have your SSW Word doc template on sign-in via a script. See:

    Good Example - New Login script on GitHub

    Note #1: We don't want people using .RTF emails, so we include this message in SSW.rtf. For why RTF should be avoided, see the rule: Remove RTF as an option or explain when it is a good choice.

    Note #2: If you use a Mac, a login script will not work. To use a Word template, open the template in Word locally, hit "Save as Template", and then upload that document to Teams.
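
    The SafeCopyNewerFile.bat calls in the old login script above boil down to one idea: copy the master file over the local one only when the master is newer. A minimal Python sketch of that logic (the function name and paths are our own, not the actual script):

```python
import shutil
from pathlib import Path

def safe_copy_newer(master: Path, local: Path) -> bool:
    """Copy 'master' over 'local' only when the master copy is newer,
    or when the local copy is missing. Returns True if a copy happened."""
    if local.exists() and local.stat().st_mtime >= master.stat().st_mtime:
        return False  # local copy is already up to date
    local.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(master, local)  # copy2 preserves the master's timestamp
    return True
```

    Because copy2 preserves timestamps, running the script again is a no-op until the master template changes, which keeps logons fast.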

  18. Do you track and label your assets?

    Most companies have physical assets, and it's crucial to keep track of them: Where is each one? Who has it? Is it assigned somewhere else?

    Businesses generally need to provide their employees with a multitude of assets e.g.:

    1. Keyboards
    2. Mice
    3. Laptops
    4. Workstations
    5. Mobile Phones

    Keeping track of those assets is essential for the business to have any control over them, and a spreadsheet of asset details is not the best approach.

    Figure: Bad Example - Asset Tracking on spreadsheets is bad

    These days, we have better (and free!) systems that allow us to track a business's assets, including:

    1. Purchase Date
    2. Order Number
    3. Serial Number
    4. Model
    5. Which location that asset belongs to
    6. Which user that asset belongs to (or is in possession of/checked out to)
    7. Number of assets
    8. And even their depreciation value

    All this in a nice UI that allows you - or even the users themselves - to edit and check out assets.

    Tracking is all fun and games, but what about knowing which asset is which? You also need to physically label your assets.

    After you create the asset in the system, it gets a unique ID. Generate a label (preferably with a QR code or barcode for easy scanning) and attach it to the asset. This makes it super easy to see the asset ID and name at a glance, and if the asset is lost, anyone can scan the QR code and be taken to a site with instructions on how to return it or notify the company that it was found.

    pxl 20211014 235412634
    Figure: Good Example - A professional label printed with the important asset info e.g. ID, name and serial number

    A good system that does all this is SnipeIT.

    SnipeIT has a nice interface and is easy to use, maintain and upgrade. It generates labels for you, has an API to integrate with your current systems, and is free if you host it yourself!
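
    SnipeIT generates labels for you, but the idea is simple enough to sketch: each label carries the asset ID, name and serial number, plus a QR code pointing at the asset's page in your tracking system. A minimal Python illustration (the function, URL scheme and sample values are our own, not SnipeIT's actual API):

```python
def asset_label(asset_id: int, name: str, serial: str,
                base_url: str = "https://assets.example.com") -> dict:
    """Build the data for a printed asset label: the human-readable
    text, plus the URL to encode in the QR code so that a scan leads
    straight to the asset's page (base_url is a placeholder)."""
    return {
        "text": f"#{asset_id} {name} S/N {serial}",
        "qr_url": f"{base_url}/hardware/{asset_id}",
    }

# Hypothetical laptop asset:
print(asset_label(42, "Dell XPS 15", "ABC123"))
```

    Pointing the QR code at a URL (rather than just the raw ID) is what lets a stranger who finds the asset reach the "how to return this" page with any phone.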
