Rules to Better Azure
To help you out, here is a list of the top 9 Azure services you should be using:
- Computing: App Services
- Best practices: DevOps Project
- Data management: Azure Cosmos DB (formerly known as Document DB)
- Security: Azure AD (Active Directory)
- Web: API Management
- Automation: Logic Apps
- Automation: Cognitive Services
- Automation: Bots
- Storage: Containers
The goal of a modern, complex software project is to build software with great software architecture on top of great cloud architecture. Software developers should focus on good code and good software architecture; Azure and AWS are big beasts, and choosing between their services should be a specialist's responsibility.
Many projects, for budget reasons, have the lead developer making the cloud choices. This runs the risk of choosing the wrong services and baking bad architecture into the project. The associated code is hard and expensive to change, and the monthly bill can be higher than needed.
The focus must be on building solid foundations and a rock-solid API. The reality is that even 1 day of a Cloud Architect at the beginning of a project can save $100K later on.
2 strong developers (say a Solution Architect and a Software Developer), no Cloud Architect, no SpendOps
Figure: Bad example of a team for a new project
2 strong developers (say a Solution Architect and a Software Developer) + 1 Cloud Architect (say 1 day per week, 1 day per fortnight, or even 1 day per month) who, after choosing the correct services, looks after the 3 horsemen:
- Load/Performance Testing
- Security choices
- Cost (SpendOps)
Figure: Good example of a team for a new project
Problems that can happen without a Cloud Architect:
- Wrong tech chosen e.g. nobody wants to accidentally build something that later needs to be thrown away
- Wrong DevOps e.g. using plain old ARM templates that are not easy to maintain
- Wrong Data story e.g. defaulting to SQL Server, rather than investigating other data options
- Wrong Compute model e.g. Choosing a fixed price, always-on, slow scaling WebAPI for sites that have unpredictable and large bursts of traffic
- Security e.g. this word should be enough
- Load/Performance e.g. not getting the performance to $ spend ratio right
Finally, at the end of a project, you should go through a "Go-Live Audit". The Cloud Architect should review and sign off that the project is good to go. They mostly check the 3 horsemen (load, security, and cost).
Azure transactions are CHEAP. You get tens of thousands for just a few cents. What is dangerous though is that it is very easy to have your application generate hundreds of thousands of transactions a day.
Every call to Windows Azure Blobs, Tables, and Queues counts as 1 transaction. Windows Azure diagnostic logs, performance counters, trace statements, and IIS logs are written to Table Storage or Blob Storage.
If you are unaware of this, it can quickly add up and either burn through your free trial account, or even create a large unexpected bill.
Note: Azure Storage Transactions do not count calls to SQL Azure.
Ensure that Diagnostics are Disabled for your web and worker roles
Having Diagnostics enabled can contribute 25 transactions per minute - that is 36,000 transactions per day.
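As a sanity check, the arithmetic above is easy to verify. The sketch below assumes a flat 25 transactions per minute and a hypothetical price of $0.01 per 10,000 storage transactions (check current Azure pricing before relying on the dollar figure):

```python
# Rough cost of leaving Diagnostics enabled.
# Assumptions: a flat 25 transactions/minute, and a hypothetical
# price of $0.01 per 10,000 storage transactions.
TX_PER_MINUTE = 25
PRICE_PER_10K_TX = 0.01

tx_per_day = TX_PER_MINUTE * 60 * 24
tx_per_month = tx_per_day * 30
cost_per_month = tx_per_month / 10_000 * PRICE_PER_10K_TX

print(tx_per_day)                 # 36000 - matching the figure above
print(round(cost_per_month, 2))   # 1.08
```

A dollar a month sounds harmless, but it is per role and per storage account, and it burns through a free trial's transaction quota quickly.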
Question for Microsoft: Is this per Web Role?
Disable IntelliTrace and Profiling
Search bots crawling your site to index it will generate a lot of transactions. Especially for web "applications" that do not need to be searchable, use a robots.txt file to save transactions.
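For an internal web application that should not be indexed at all, a minimal robots.txt that blocks all well-behaved crawlers looks like this:

```
User-agent: *
Disallow: /
```

Place the file at the root of the site (e.g. /robots.txt) so crawlers find it before requesting anything else.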
When deploying to Azure, the deployment package is loaded into the Storage Account. This will also contribute to the transaction count.
If you have enabled continuous deployment to Azure, you will need to monitor your transaction usage carefully.
If you use the default Azure staging web site Url, it can be difficult to remember and a waste of time trying to look up the name every time you access it. Follow this rule to increase your productivity and make it easier for everyone to access your staging site.
Default Azure Url:
Figure: Bad example - Site using the default Url (hard to remember!!)
Customized Url:
- staging.sugarlearning.com
Figure: Good example - Staging Url having production Url with "staging." prefix
How to setup a custom Url
- Add a CName to the default Url to your DNS server
- Instruct Azure to accept the custom Url
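For example, the DNS record for the SugarLearning staging site would look something like the zone-file entry below (the azurewebsites.net hostname is a hypothetical example of the default Url):

```
; CNAME record pointing the custom staging Url at the default Azure Url
staging.sugarlearning.com.  IN  CNAME  sugarlearning-staging.azurewebsites.net.
```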
Figure: Azure being configured to accept the CName
AzureSearch is designed to work with Azure-based data and runs on ElasticSearch. It doesn't expose all of ElasticSearch's advanced search features, so you may be reluctant to choose it as your search engine because of the missing features and what seems to be an expensive monthly fee ($250 as of today). If this is the case, follow this rule:
Consider AzureSearch if your website:
- Uses SQL Azure (or other Azure based data such as DocumentDB), and
- Does not require advanced search features.
Consider ElasticSearch if your website:
- Requires advanced search features that aren't supported by AzureSearch
Keep in mind that:
- Hosting your own full-text search service costs you labour to set up and maintain the infrastructure, and
- A single Azure VM can cost you up to $450. So do not drop the AzureSearch option unless the missing features are absolutely necessary for your site
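The trade-off above is easy to put in numbers. A minimal sketch, assuming both the $250 AzureSearch fee and the $450 VM figure quoted above are monthly, plus a hypothetical figure for maintenance labour:

```python
# Rough monthly cost comparison using the figures quoted above.
# The labour hours and hourly rate are hypothetical assumptions -
# substitute your own numbers.
AZURESEARCH_PER_MONTH = 250          # managed service fee
ELASTIC_VM_PER_MONTH = 450           # a single Azure VM (up to)
MAINTENANCE_HOURS_PER_MONTH = 4      # hypothetical: patching, monitoring
HOURLY_RATE = 150                    # hypothetical labour rate

elastic_total = ELASTIC_VM_PER_MONTH + MAINTENANCE_HOURS_PER_MONTH * HOURLY_RATE
print(f"AzureSearch:   ${AZURESEARCH_PER_MONTH}/month")
print(f"ElasticSearch: ${elastic_total}/month")   # $1050/month
```

Even with modest labour assumptions, self-hosted ElasticSearch costs several times the managed fee - which is why the missing features need to be genuinely necessary.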
Like other services, it is important that your company has a structured and secure approach to managing Azure Permissions.
First a little understanding of how Azure permissions work. For each subscription, there is an Access Control (IAM) section that will allow you to grant overall permissions to this Azure subscription. It is important to remember that any access that is given under Subscriptions | "Subscription Name" | Access Control (IAM), will apply to all Resource Groups within the Subscription.
From the above image, only the main Administrators have been given Owner/Co-administrator access; all other users within the SSWDesigners and SSWDevelopers Security Groups have been given Reader access. The SSWSysAdmins Security Group has also been included as an Owner, which will help in case permissions are accidentally stripped from the current Owners.
We’ve been down this road before where developers had to be taught not to manually create tables and databases. Now, in the cloud world, we’re saying the same thing again. Don’t manually create your Azure resources.
Manually Creating Resources
This is the most common and the worst. This is bad because it requires manual effort to reproduce and leaves margin for human error.
- Creating resources in Azure without saving a script
Manually creating and saving the script
Some people half solve the problem by manually creating and saving the script. This is also bad because it’s like eating ice cream and brushing your teeth – it doesn’t solve the health problem.
Tip: Save scripts in a folder called Azure
So if you aren't manually creating your Azure resources, what options do you have?
Option A: Terraform
- It’s a great tool
- Free for up to 5 users with limited features
Not recommended because:
- Pulumi is better
- Proprietary ‘HCL’ (Hashicorp Configuration Language) which is as bad as YAML
Option B: Ansible
- Proprietary product owned by Red Hat
- First red flag – ‘Contact us for pricing’ – a toxic warning sign of their lack of transparency
Option C: Bicep by Microsoft
- Not a huge step forward from ARM templates
- But is one to watch
Option D: Farmer (Recommended)
- It's a great tool
- Simply add a very short and readable F# project in your solution
- Tip: The F# solution of scripts should be in a folder called .azure
Option E: Pulumi (Recommended)
- It's a great tool that uses real code (C#, TypeScript, Go, and Python) as infrastructure rather than JSON/YAML
- Abstracts the entire Azure REST API to the language of your choice (see above)
- Includes a tool for converting your existing JSON ARM templates into code: Arm2Pulumi
- Free for individual developers (even for commercial use), but is a paid product for teams > 1
It’s early days, so there isn’t much community help out there yet (see Google Trends).
- After you’ve made your changes, don’t forget to visualize your new resources
Organizing your cloud assets starts with good names. It is best to use all lower case, separate words with “-”, and not put the Resource Type in the name. Different resource types can be identified by the resource icon.
Azure defines naming rules and restrictions for Azure resources.
There is no cost saving in grouping databases into one single resource group. It is better to provision the database in the same resource group as the application that uses it.
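The convention above is easy to enforce with a small check. This is a sketch; the list of resource-type suffixes to reject is an assumption you should adapt to your own naming convention:

```python
import re

# Suffixes people commonly bake into names. Rejecting them follows the
# rule above ("don't put the Resource Type in the name") - this list is
# our own assumption, not an Azure requirement.
RESOURCE_TYPE_SUFFIXES = ("sqlserver", "webapp", "storageaccount", "vm")

def is_good_resource_name(name: str) -> bool:
    """All lower case, words separated by '-', no resource type in the name."""
    if name != name.lower():
        return False
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name):
        return False
    return not any(name.replace("-", "").endswith(s) for s in RESOURCE_TYPE_SUFFIXES)

print(is_good_resource_name("northwind-staging"))      # True
print(is_good_resource_name("Northwind_Staging_VM"))   # False
```

A check like this can run in CI against the output of an infrastructure-as-code script, so bad names never reach the portal.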
To help maintain order and control in your Azure environment, applying tags to resources and resources groups is the way to go.
Azure has the Tag feature, which allows you to apply different Tag Names and values to Resources and Resource Groups:
You can leverage this feature to organize your resources in a logical way, not relying on names alone. E.g.
- Owner tag: You can specify who owns that resource
- Environment tag: You can specify which environment that resource is in
Tip: Do not forget to have a strong naming convention document stating how those tags and resources should be named. You can use this Microsoft guide as a starting point: Recommended naming and tagging conventions.
Looking at a long list of Azure resources is not the best way to be introduced to a new project. It is much better to visualize your resources.
You need an architecture diagram, but this is often high level, just outlining the most critical components from the 50,000ft view, often abstracted into logical functions or groups. So, once you have your architecture diagram, the next step is to create your Azure resources diagram.
Option A: Just viewing a list of resources in the Azure portal
- When there are a lot of resources this doesn't work.
Option B: Visually viewing the resources
Figure: Good example - ARM template and automatically generated Azure resources diagram in the SSW Rewards repository on GitHub
Figure: Good example - The Azure resources diagram generated by the ARM Template Viewer extension for SSW Rewards
Suggestion to Microsoft: Add an auto-generated diagram in the Azure portal. Have an option in the combo box (in addition to List View) for Diagram View.
UPDATE: This is now happening.
Scrum Warning: Like the architecture diagram, this is technical debt as it needs to be kept up to date each Sprint. However, unlike the architecture diagram, this one is much easier to maintain as it can be refreshed with a click. You could reduce this technical debt by automatically saving the .png to the same folder as your architecture diagram. It is easy to do this by using Azure Event Grid and Azure Functions to generate these for you when you make changes to your resources.
Azure is Microsoft's Cloud service. However, you have to pay for every little bit of service that you use.
Before diving in, it is good to have an understanding of the basic built-in user roles:
It's not a good idea to give everyone 'Contributor' access to Azure resources in your company. The reason is cost: Contributors can add/modify the resources used, and therefore increase your Azure bill at the end of the month. Although a single change might represent 'just a couple of dollars', everything summed up can increase the bill significantly.
The best practice is to have an Azure Spend Master. This person controls the level of access granted to users: "Reader" access for users that do not need to (or should not) make changes to Azure resources, and "Contributor" access for those users that need to add/modify resources, bearing in mind the cost of every change.
Also, keep in mind that you should be giving access to security groups and not individual users. It is easier, simpler, and keeps things much better structured.
**Built-in Automatic Backup in Azure SQL Database**
Microsoft Azure SQL Database has built-in backups to support self-service Point in Time Restore and Geo-Restore for Basic, Standard, and Premium service tiers.
You should use the built-in automatic backup in Azure SQL Database versus using T-SQL.
T-SQL: CREATE DATABASE destination_database_name AS COPY OF [source_server_name].source_database_name
Figure: Bad example - Using T-SQL to restore your database
Azure SQL Database automatically creates backups of every active database using the following schedule: Full database backup once a week, differential database backups once a day, and transaction log backups every 5 minutes. The full and differential backups are replicated across regions to ensure the availability of the backups in the event of a disaster.
Backup storage is the storage associated with your automated database backups that are used for Point in Time Restore and Geo-Restore. Azure SQL Database provides backup storage of up to 200% of your maximum provisioned database storage at no additional cost.
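The "200% at no additional cost" rule works out as follows (a sketch; the 250 GB provisioned size is a hypothetical example):

```python
# Azure SQL Database includes backup storage up to 200% of your maximum
# provisioned database storage at no additional cost.
provisioned_gb = 250               # hypothetical maximum provisioned size
free_backup_gb = provisioned_gb * 2.0

print(free_backup_gb)              # 500.0 GB of backup storage included
```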
| Service Tier | Geo-Restore | Self-Service Point in Time Restore | Backup Retention Period | Restore a Deleted Database |
|---|---|---|---|---|
| Web | Not supported | Not supported | n/a | n/a |
| Business | Not supported | Not supported | n/a | n/a |
| Basic | Supported | Supported | 7 days | √ |
| Standard | Supported | Supported | 14 days | √ |
| Premium | Supported | Supported | 35 days | √ |
Figure: All the modern SQL Azure Service Tiers support backup. Web and Business tiers are being retired and do not support backup. Check the Web and Business Edition Sunset FAQ for up-to-date retention periods
Learn more:
- Microsoft documentation - Azure SQL Database Backup and Restore
- Video demo on Channel 9 - Restore a SQL Database Using Point in Time Restore
**Other ways to back up Azure SQL Database:**
- Microsoft Blog - Different ways to Backup your Windows Azure SQL Database
Do you configure your web applications to use application specific accounts for database access?
An application's database access profile should be as restricted as possible, so that in the case that it is compromised, the damage will be limited.
Application database access should also be restricted to only the application's database, and none of the other databases on the server.
Bad Example – Contract Manager Web Application using the administrator login in its connection string
Good Example – Application specific database user configured in the connection string
Most web applications need full read and write access to one database. In the case of EF Code first migrations, they might also need DDL admin rights. These roles are built in database roles:
| Role | Description |
|---|---|
| db_ddladmin | Members of the db_ddladmin fixed database role can run any Data Definition Language (DDL) command in a database. |
| db_datawriter | Members of the db_datawriter fixed database role can add, delete, or change data in all user tables. |
| db_datareader | Members of the db_datareader fixed database role can read all data from all user tables. |
Table: Database roles taken from Database-Level Roles
If you are running a web application on Azure, you should configure your application to use its own specific account with restricted permissions. The following script demonstrates setting up a SQL user for myappstaging (repeat it for myappproduction) that also supports EF Code First Migrations:
CREATE LOGIN myappstaging WITH PASSWORD = '****';
GO
CREATE USER myappstaging FROM LOGIN myappstaging;
GO
USE [myapp-staging-db];
GO
CREATE USER myappstaging FROM LOGIN myappstaging;
GO
EXEC sp_addrolemember 'db_datareader', myappstaging;
EXEC sp_addrolemember 'db_datawriter', myappstaging;
EXEC sp_addrolemember 'db_ddladmin', myappstaging;
GO
Script: Example script to create a service user for myappstaging
Note: If you are using stored procedures, you will also need to grant execute permissions to the user. E.g.:
GRANT EXECUTE TO myappstaging
Data Source=tcp:xyzsqlserver.database.windows.net,1433; Initial Catalog=myapp-staging-db; User ID=myappstaging@xyzsqlserver; Password='*************'
Figure: Example connection string
Here's a cool site that tests the latency of Azure Data Centres from your machine. It can be used to work out which Azure Data Centre is best for your project based on the target user audience: http://www.azurespeed.com
As well as testing latency it has additional tests that come in handy like:
- CDN Test
- Upload Test
- Large File Upload Test
- Download Test
Setting up a WordPress site hosted on Windows Azure is easy and free, but you only get 20MB of MySQL data on the free plan.
References: John Papa: Tips for WordPress on Azure
Data in Azure Storage accounts is protected by replication. Deciding how far to replicate it is a balance between safety and cost.
Locally redundant storage (LRS)
- Maintains three copies of your data.
- Is replicated three times within a single facility in a single region.
- Protects your data from normal hardware failures, but not from the failure of a single facility.
- Less expensive than GRS.
- Use when:
  - Data is of low importance – e.g. for test websites, or testing virtual machines
  - Data can be easily reconstructed
  - Data is non-critical
  - Data governance requirements restrict data to a single region
Geo-redundant storage (GRS)
- The default when you create a storage account.
- Maintains six copies of your data.
- Data is replicated three times within the primary region, and is also replicated three times in a secondary region hundreds of miles away from the primary region.
- In the event of a failure at the primary region, Azure Storage will fail over to the secondary region.
- Ensures that your data is durable in two separate regions.
- Use when:
  - Data cannot be recovered if lost
Read access geo-redundant storage (RA-GRS)
- Replicates your data to a secondary geographic location (same as GRS).
- Provides read access to your data in the secondary location.
- Allows you to access your data from either the primary or the secondary location, in the event that one location becomes unavailable.
- Use when:
  - Data is critical, and access is required to both the primary and the secondary regions
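The decision above can be summarised as a small helper. This is a sketch of the rule's own guidance, not an Azure API:

```python
def recommend_redundancy(*, critical: bool, reconstructable: bool,
                         must_stay_in_region: bool,
                         need_secondary_reads: bool) -> str:
    """Map the criteria above to a storage redundancy option."""
    if must_stay_in_region or (reconstructable and not critical):
        return "LRS"       # cheapest; single-facility replication only
    if critical and need_secondary_reads:
        return "RA-GRS"    # GRS plus read access to the secondary region
    return "GRS"           # the default: six copies across two regions

print(recommend_redundancy(critical=False, reconstructable=True,
                           must_stay_in_region=False, need_secondary_reads=False))  # LRS
print(recommend_redundancy(critical=True, reconstructable=False,
                           must_stay_in_region=False, need_secondary_reads=True))   # RA-GRS
```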
Often we use Azure VMs for presentations, training, and development. As there is a cost involved to store and run the VM, it is important to ensure that the VM is shut down when it is no longer required.
Shutting down the VM will prevent compute charges from accruing. There is still a cost for storing the VHD files, but these charges are a lot less than the compute charges.
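To see why this matters, compare the compute and storage charges. All the rates below are hypothetical round numbers; check current Azure pricing:

```python
# Hypothetical round-number rates - substitute current Azure pricing.
COMPUTE_PER_HOUR = 0.20        # a mid-sized VM while it is running
VHD_STORAGE_PER_MONTH = 5.00   # storing the VHD while the VM is off

hours_per_month = 24 * 30
always_on = COMPUTE_PER_HOUR * hours_per_month
# 8 hours a day, 22 working days, plus VHD storage for the whole month
office_hours_only = COMPUTE_PER_HOUR * 8 * 22 + VHD_STORAGE_PER_MONTH

print(f"Always on:         ${always_on:.2f}/month")
print(f"Shut down nightly: ${office_hours_only:.2f}/month")
```

Under these assumptions, shutting the VM down outside office hours cuts the bill to roughly a quarter.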
The following is stated on http://www.windowsazure.com/en-us/pricing/member-offers/msdn-benefits/: "Stop your virtual machines and we will stop billing them within a minute." Please note that this is for MSDN Azure subscriptions.
You can shut down the VM either by making a Remote Desktop connection to the VM and shutting down the server, or by using the Azure portal to shut down the VM. Note that only stopping the VM from the portal (or via PowerShell/CLI) puts it into the "Stopped (deallocated)" state that ends compute billing - shutting down from inside the OS leaves it merely "Stopped", which can still incur compute charges.
If you use a strong naming convention and are using Tags to their full extent in Azure, then it is time for the next step.
Azure Policy is a powerful tool to help in governing your Azure subscription. With it, you make it easier to fall into The Pit of Success when creating or updating resources. Some of its features:
- You can deny creation of a Resource Group that does not comply with the naming standards
- You can deny creation of a Resource if it doesn't possess the mandatory tags
- You can append tags to newly created Resource Groups
- You can audit the usage of specific VMs or SKUs in your Azure environment
- You can allow only a set of SKUs within Azure
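For example, the "mandatory tags" idea above can be expressed as the policy rule portion of a definition like this (a sketch; Owner is the example tag name from earlier):

```json
{
  "if": {
    "field": "tags['Owner']",
    "exists": "false"
  },
  "then": {
    "effect": "deny"
  }
}
```

Assigned at the subscription scope, this rule blocks the creation of any resource that does not carry an Owner tag.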
Azure Policy also allows you to create initiatives (groups of policies) that together achieve an objective, e.g. an initiative to audit all tags within a subscription, or to allow the creation of only certain types of VMs.
You can delve deeper into it here: https://docs.microsoft.com/en-us/azure/governance/policy/overview