Microsoft Azure Fundamentals AZ-900 Full Course | #azure #az900

Hello everyone, this is an introduction to Microsoft Azure Fundamentals. We will also see the examination topics for AZ-900. Azure is a cloud computing platform that offers more than 200 services on a public cloud infrastructure. The services can be classified as infrastructure as a service, platform as a service, and software as a service. Under infrastructure as a service are the core services, which include compute, networking, and storage. Under platform as a service there are services like web applications, containers, and Kubernetes, and under software as a service we have Office 365. There are also services for artificial intelligence and machine learning.

So why would you want to learn Azure fundamentals? It is the start of your cloud learning path: you understand the basics of the Azure cloud infrastructure and the services being offered. Whether you are preparing for the AZ-900 Azure Fundamentals certification or planning to go on to the next level of Azure certification, this course provides the basic ground for you to enhance and upgrade your cloud skills.

The AZ-900 Azure Fundamentals exam focuses on three domains: describe cloud concepts, which has a weightage of 25 to 30%; describe Azure architecture and services, which has a weightage of 35 to 40%; and describe Azure management and governance, which has a weightage of 30 to 35%. The questions come from these domains, and you need to understand the topics related to them. I have created the entire course as per the Microsoft study guide, so you can go through the course and clear the certification. Thank you, all the best, and see you in the course.

Let us see the modules and topics required for the AZ-900 Azure Fundamentals course and the skills required to pass the examination. This is a fundamentals exam, not a difficult one to clear; we just need to understand the basics of cloud computing and the services offered by Microsoft Azure. There is no need to deep dive into the concepts; all we require is to understand the services and why they are used. There are three modules, and each module has multiple topics.

In cloud concepts we cover shared responsibility, cloud models, the consumption-based model, CapEx versus OpEx, pricing models, and serverless. Under benefits of cloud services we take a look at high availability, scalability, reliability, predictability, security, governance, and managing the Azure cloud. In cloud service types we understand infrastructure as a service, platform as a service, and software as a service, and the use cases for each of these service types.

In the second module, Azure architecture and services, we have the core architectural components of Azure: regions, region pairs, sovereign regions, availability zones, Azure data centers, resources, resource groups, subscriptions, and management groups. In Azure compute and networking we learn about compute types (containers, virtual machines, and functions), virtual machine options, and VM scale sets. In application hosting we learn about web apps, containers, and virtual machines. For virtual networking we go through topics related to virtual networks, subnets, DNS, VPN gateways, ExpressRoute, and public and private endpoints. For storage services we learn about the storage services, storage tiers, redundancy options, storage accounts, and storage types, and we also learn about AzCopy, Azure Storage Explorer, and Azure File Sync for moving files.
For data migration we learn about services like Azure Migrate and Data Box. For identity and access management we learn about Microsoft Entra ID, single sign-on, multi-factor authentication, and passwordless authentication; we understand B2B and B2C for external access, conditional access, RBAC, the concepts of Zero Trust and defense in depth, and Microsoft Defender for Cloud.

For the third module there are topics related to Azure management and governance. We will look into cost management in Azure and the factors that can affect cost, the TCO and pricing calculators, management capabilities in Azure, the purpose of tags, governance and compliance, Microsoft Purview, Azure Policy, and resource locks, plus topics related to management tools, the concept of Azure Arc, infrastructure as code (which includes ARM templates and Bicep), and monitoring tools. Let us continue with the topics for the AZ-900 Azure Fundamentals course. All the best and good luck.

In this topic we will see what cloud computing is. Cloud computing is the delivery of computing services over the internet. We have the cloud service provider who provides the services; there are more than 200 services offered by Microsoft Azure, and if we are accessing those services over the internet then it is called a public cloud. Computing services include common IT services such as virtual machines (commonly referred to as VMs), storage, databases, and networking. There are three main core services: compute, storage, and networking. Cloud services also expand the traditional IT offerings to include things like IoT, machine learning, and artificial intelligence, so there are newer services, such as artificial intelligence, machine learning, and IoT services, which can also be accessed over the internet.

Since cloud computing uses the internet to deliver these services, there is no need for the constraints of physical infrastructure like you have in your traditional data center. In a traditional data center you need to deploy all these services yourself, which can take months. Suppose you wanted to deploy services for artificial intelligence, machine learning, IoT, or Kubernetes: it is going to take many days or months to deploy them all. Since the infrastructure is already available in the cloud, you use the internet to access these services, you pay only for the services you use, and you can deploy them rapidly. In summary, cloud computing is the on-demand delivery of IT services. There are three main service models, infrastructure as a service, platform as a service, and software as a service, which are all accessed over the internet.

What are the benefits of cloud computing? A broad range of services: you have more than 200 services in Azure, including compute, storage, networking, Kubernetes, Docker, and web apps, which you will be able to access, deploy, configure, and set up to run your business applications. There are also next-generation applications which use artificial intelligence, machine learning, IoT sensors, and Kubernetes, and all of these services can be deployed using the cloud service provider.

When you consume services on the cloud, how do you pay for what you have consumed? In the cloud it is called the OpEx model, which is nothing but the operational expenditure model. A traditional data center follows the capital expenditure model, because we have to deploy the hardware and software licenses, and we also need to have our own facility.
That facility can either be rented or a colocation, but we need some kind of facility where we can place our hardware: servers, storage, and networking. On top of this we have to buy the licenses for the operating systems and applications we are going to use. In the cloud, by contrast, it is an operational expenditure model: you use the services and you pay for what you use. You use a web application, you pay for the web application; if you are running a virtual machine, you pay for the virtual machine based on per-hour billing charges.

Rapid deployment: build, deploy, and consume the services in less time. If you wanted to build the same services in your own data center it would take days or months, whereas in the cloud all these services are already available; you just use the console or the CLI to access and deploy them for your application or business requirement.

Rapid elasticity: in the cloud it is very easy to scale up or scale down the resources. Resources are nothing but CPU, memory, and disk. If I want to increase or decrease the CPU speed (in GHz) for a virtual machine, I can; if I want to increase or decrease the memory or the disk size, I can. This is scale up and scale down: if I am decreasing the resources of a virtual machine it is scale down, and if I am increasing them it is scale up. Scale out and scale in are about adding or removing compute power in the form of more virtual machines: if I add more virtual machines it is scale out, and if I decrease the virtual machine count it is scale in.

For example, say a virtual machine is a B2 Standard and I want to increase its size by adding more CPU and more memory; that is scale up. Now suppose there is a big billion day sale on my e-commerce platform and there is a lot of traffic, so I want to add more virtual machines based on the demand. I add virtual machines 1 to 100 for an Independence Day sale running from August 15th to August 31st, and I retain virtual machines 1 to 100 because the sale is going on between those dates. Once the sale is done there is no more demand, so I decrease the count to 25 virtual machines and remove or delete the remaining 75. Removing virtual machines is scale in; adding virtual machines is scale out.
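As a rough illustration of scale up with the Azure CLI (a minimal sketch: the resource group, VM name, and target size below are hypothetical, and the sizes actually offered depend on the region and the host the VM is running on):

# list the sizes this VM can be resized to
az vm list-vm-resize-options --resource-group rg-demo --name vm-demo --output table
# scale up: move the VM to a larger size (more vCPUs and memory)
az vm resize --resource-group rg-demo --name vm-demo --size Standard_B4ms

Resizing usually restarts the virtual machine, so scale up and scale down tend to be planned changes, whereas scale out and scale in add or remove whole instances without touching the ones already running.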
This was an overview of cloud computing and some of the benefits you get when you use a cloud service provider, whether it is Microsoft Azure, AWS, or Google.

In this topic we'll discuss the shared responsibility model. Shared responsibility is about which responsibilities lie with the provider offering the cloud services and which lie with the end user consuming them. Whenever we are dealing with an on-prem environment, we know that the company is responsible for maintaining the physical space, which is the data center itself. In the data center the company is responsible for power, cooling, cabling, and all the connections, and the company is also responsible for ensuring security. This includes end-to-end security: physical security, network security, and the security of the data itself. If there are any hardware issues, the company is responsible for replacing that hardware; they need to keep a proper inventory of all the hardware devices in the infrastructure and have a contract with a third-party vendor to maintain those devices. The IT department is responsible for maintaining all the infrastructure-related hardware and software needed to keep the data center up and running. That is the on-prem environment.

Let us look at the Azure cloud. Whenever we deploy any resources on the Azure cloud, we are operating in a shared responsibility model: the responsibility gets shared between the cloud provider and the consumer. Physical security, power, cooling, and network connectivity are the responsibility of the cloud provider, and there are some areas for which the consumer is responsible. The picture shows what is the responsibility of the customer and what is the responsibility of the cloud service provider. We know that in an on-prem environment everything is the end-to-end responsibility of the customer, that is, of the organization.

When it comes to infrastructure as a service, there are three areas for which Microsoft, the cloud service provider, is responsible, and from the operating system upwards the customer is responsible. Whenever we create a virtual machine (infrastructure as a service is essentially the virtual machine), we choose the image and deploy the operating system, so maintaining the virtual machine, patching it with security patches and updates, and taking backups are all customer responsibilities; from the operating system onwards the customer is responsible.

For platform as a service, there are some areas for which Microsoft and the customer are both responsible, which include network controls, applications, and identity and directory infrastructure, while the operating system, physical hosts, physical network, and data center are the responsibility of the cloud service provider. Anything physical is always the responsibility of the cloud service provider, because the end user is not going to host any physical devices in the cloud infrastructure; it is all maintained and managed by the cloud service provider itself.

When it comes to software as a service, the information and data, the devices you use to access the SaaS applications, and the accounts and identities all come under customer responsibility. There is one area, identity and directory infrastructure, which is a shared responsibility between the cloud service provider and the customer, and everything below that, from the applications down to the physical data center, is the cloud service provider's responsibility.

Whichever cloud service provider you use, you will always be responsible for the data you store, the devices you use to access the data and applications, and the access you grant to users and groups. The cloud provider is always responsible for the physical data center, the physical network, and the physical hosts.
The service model you use determines the responsibility for the operating system, network controls, applications, and identity and infrastructure: whether you use infrastructure as a service, platform as a service, or software as a service decides who is responsible for each of these areas. That was an overview of the shared responsibility model.

In this topic we'll discuss Azure. So what is Azure? Azure is Microsoft's public cloud computing platform, or cloud service platform. Azure provides more than 200 products and services, and these services are offered in multiple geopolitical locations called regions. A region is not specific to a country; if I take the US for example, there are multiple regions in the US itself, such as East US, Central US, and West US. Each region has multiple availability zones, at minimum three, and these zones have multiple data centers. If a customer wants to use the Azure cloud platform to host their services, they can deploy them in their nearest location; choosing the nearest location is for data security and locality, and also to avoid latency.

Whenever we choose to deploy our services onto the cloud platform we work in the operational expenditure (OpEx) model; we don't use the CapEx model. We don't require any hardware, we don't require the data center, and we don't require the licenses, so we eliminate the capital cost of buying and maintaining physical IT systems. Azure is far more cost-efficient, powerful, agile, reliable, and secure than an on-prem environment.

Let us see the types of cloud services offered by Azure. There is infrastructure as a service, called IaaS. With IaaS we rent the infrastructure, which is the servers and virtual machines; storage and network also come under infrastructure as a service. All we need to do is provision the virtual machines, and we pay as we go: whatever we consume on the cloud, we pay only for that. When we choose infrastructure as a service, we need to maintain and manage the virtual machines, apply security updates and patches, ensure that the data is secure, and take backups of the virtual machines; all of these responsibilities fall on the customer.

If I want to quickly provision services without worrying about the underlying infrastructure, the servers, storage, network, and database, and I don't want to provision or maintain a virtual machine, take care of service pack updates, or handle backups, then under platform as a service everything underneath is managed by Azure. I can quickly provision services like web applications, mobile apps, containers, functions, and Kubernetes as a service; all of these come under platform as a service.

Software as a service is a method of delivering software applications over the internet. Office 365 is the best example of software as a service; we can also take the example of CRM applications such as Salesforce. These are all cloud-based applications and they come under software as a service. With software as a service, the cloud provider hosts and manages the software application and the underlying infrastructure, and even handles maintenance like software upgrades and security patching; users connect to the application over the internet, usually with a web browser.
Users use their own devices and their browsers to connect to these software as a service applications over the internet.

Let us see which services come under infrastructure as a service, platform as a service, and software as a service. Under infrastructure as a service we have compute, storage, and networking, the three main core services, along with Azure VMware Solution and VM scale sets. Under platform as a service we have databases (database as a service, such as SQL and Cosmos DB), web applications, functions, containers (Azure Container Instances), and Kubernetes as a service, which is Azure Kubernetes Service. Under software as a service we have Office 365, Azure DevOps, Azure Migrate, the database migration tooling, Site Recovery (also called Azure Site Recovery), and Azure Backup. These are the services that fall under infrastructure as a service, platform as a service, and software as a service.

To understand more about regions you can visit the Microsoft website, where you will see the listing of each geographic location, the regions available under each geography, and for each region the availability zones and where exactly it is located. Some regions do not have availability zones yet, while others do.
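If you want to see the same list of regions from the command line instead of the website, the Azure CLI can print them (a small sketch; the exact set of regions returned depends on your subscription):

# list every Azure region (location) available to your subscription
az account list-locations --output table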
This was an overview of the Azure cloud.

In this topic we'll discuss cloud deployment models. There are three main cloud deployment models: private, public, and hybrid.

In a private cloud the organization owns the entire infrastructure. The organization is solely responsible for its infrastructure and does not share it with any other customer; the private cloud is solely owned by the organization and used by the organization for its services. A private cloud provides greater control of the IT infrastructure: the hardware and the data are under the control of the organization, but the cost to procure and host the infrastructure is high. The organization can build the private cloud in its own data center or choose a third-party data center. What they do is take the hardware layer, put a virtualization layer on top of it (this can be Hyper-V, VMware, or Citrix), and on top of the virtualization layer they have an orchestration layer which deploys the services. Whenever we build a private cloud infrastructure, very often these private clouds are capable of delivering only infrastructure as a service; it is rare to see a private cloud used for platform as a service or software as a service.

Let us see what a public cloud is. In a public cloud there is no hardware to manage and no data center space is required; all the resources are shared between multiple customers. To access a public cloud, say the Azure cloud, all we need is a subscription and the internet, and we can deploy services onto the cloud. A public cloud is built, controlled, and maintained by a third-party cloud provider; in this particular case it is Microsoft who owns the public cloud infrastructure for the Azure cloud. We can also deploy services very fast, which is not possible in a private cloud: if I want to deploy web applications or functions, it is far easier to deploy them on the public cloud platform than in a private cloud.

Let us see what a hybrid cloud is. A hybrid cloud is a combination of a private cloud and a public cloud. Say we have our own private cloud infrastructure in our own data center, and there are some services which are not available in the private cloud, for which I use the public cloud, the Azure public cloud. I can connect the private and public clouds using a site-to-site connection or ExpressRoute. Suppose I have hundreds of virtual machines created inside my private cloud and I want to back them all up; I can configure the backup to move all the backups to the public cloud environment, creating a storage account and moving all my backups to that storage account. Or if I want to use the public cloud for identity and access management, I can use Entra ID for identity as a service, so that all the services deployed in the private cloud can authenticate using the public cloud. And if I want to use some of the services in the public cloud, container as a service, Kubernetes as a service, functions, Office 365, I can integrate with the public cloud, because building all of these services in my private cloud environment would take time and is not easy. Basically, for the services which are very difficult to deploy and manage inside the private cloud, I use the public cloud, and the combination of using services in both the private cloud and the public cloud is the hybrid cloud environment.

Let us do a comparison of the cloud models. In a public cloud there is no capital expenditure; in a private cloud there is capital expenditure for the data center, the hardware, the operating system licenses, and the software licenses; the organization has to procure and manage everything and keep an inventory of all the infrastructure resources. A hybrid cloud provides the most flexibility, because we combine what the private cloud lacks with those services in the public cloud. In a public cloud we can quickly provision and deprovision services and resources. In a private cloud, data is not collocated with other organizations' data: in a public cloud we share the resources with other customers, whereas in a private cloud it is solely one company's infrastructure and the data is completely under the control of the organization. When it comes to a hybrid cloud, we have the option to choose where our application runs: if an application is business critical, I can choose to run it in my private cloud environment, and I can run less critical applications on the public cloud infrastructure, so I decide whether each application runs in the public cloud or the private cloud. As for the payment method, in the public cloud we pay only for the resources we have consumed and we are charged on a monthly basis.
In a private cloud we need to purchase the infrastructure-related hardware and we also need to maintain that hardware. When it comes to control over resources and security, in a public cloud the customer does not have full control; it is a shared responsibility between the cloud service provider and the customer. In a private cloud the organization has full control of its hardware and infrastructure, and it also has to do the maintenance and updates for any hardware devices and software. In a hybrid cloud, since the organization can decide which application runs on which cloud platform, it has control over security, compliance, and legal requirements.

Let us see what multicloud is. In a multicloud scenario you use multiple cloud service providers. If you are running your services inside the Azure cloud, you might require a high availability or DR setup for your infrastructure and not want to deploy all the resources with the same cloud service provider, so you choose another cloud service provider for your HA/DR requirements. It can also be that a service you need is not available from one cloud service provider, so you integrate your application with services available from another provider. Or it can be that you have hosted all your services and resources with one cloud service provider, say AWS or Google, and you are in the process of migrating from AWS or Google to Azure. Basically, in a multicloud environment we deal with two or more public cloud providers: where a hybrid cloud combines private and public, multicloud combines public with public, for example Azure with AWS or Google.

Let us see what Azure Arc is; I have a separate topic on Azure Arc in an upcoming session. Azure Arc is a separate service: it is a unified management portal where you can manage your on-prem environment and also resources from other cloud providers. You need to onboard the resources you want to manage using Azure Arc; basically it gives you a single view where you manage on-prem resources and even other cloud providers' resources in a single management portal. All you need to do is onboard the resources you want to manage. Not all resources are supported at this time, so you need to check the documentation for the resources supported by Azure Arc. We can onboard resources like physical servers: if a physical server is running inside a data center or on-prem environment, we can onboard it into Azure Arc to manage it. If we are using VMware vCenter, we can manage all the VMware virtual machines using Azure Arc, and we can also onboard SQL Server. Always check the documentation for the resources supported by Azure Arc; a number of resources are supported at this time, so it is better to check the documentation and onboard those resources into Azure Arc.
Let us see what Azure VMware Solution is. If an organization is already using VMware infrastructure, with VMware clusters running in their data center, say an ESXi or vSphere environment, and they want to leverage that VMware solution on the Azure cloud, they can use Azure VMware Solution. What Azure provides is a validated design for VMware: at minimum you get three hosts provided by Azure, and it can go up to 16 hosts. On these bare-metal servers vSphere (ESXi) is installed, and you are also provided with vCenter, vSAN, and NSX. These services are provided by Microsoft; it is a fully managed service by Microsoft Azure, so there is no need to worry about the licenses for vSphere, vCenter, vSAN, and NSX. All you need to do is pay for the usage, and you can build your virtual machines seamlessly, in your data center or inside Azure VMware Solution. And if you have ExpressRoute, you can seamlessly integrate the applications running in your on-prem environment with Azure VMware Solution. That was an overview of cloud deployment models.

In this video we will see what capital expenditure (CapEx) and operational expenditure (OpEx) are. In the capital expenditure model the organization pays upfront costs: it pays upfront to purchase the hardware and software, and it also manages the data center. In capital expenditure the organization owns and maintains the assets: the hardware, the software, and the data center.

The cloud, on the other hand, works under a consumption-based model, which is basically the operational expenditure model. There is no need to pay an upfront cost for using the services; we pay only for the resources we use. We use a virtual machine, we pay for the virtual machine; we use managed disks, we pay for the managed disks; if we are using databases, we pay only for the database usage. This is the consumption-based model under which the cloud works. There is also the flexibility of utilizing more resources when required: for example, if I have a project that requires a hundred virtual machines, I can configure and install a hundred virtual machines in very little time using the cloud service provider. If I have a requirement for terabytes of space, I can use the managed disks or the blob storage offered by Azure and pay for whatever consumption I have made on the cloud. And if there is a requirement to test some databases in a short time, I can build those databases using Azure or any other cloud service provider, test them on the cloud, and once the testing is complete I can destroy all of them so that there is no more cost attached to the resources I consumed.
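A common way to make that "build it, test it, tear it down" pattern cheap and safe is to put the short-lived resources in their own resource group, so a single delete removes everything and stops the charges. A minimal Azure CLI sketch (the group name and region are made up for illustration):

# create a resource group to hold the temporary test resources
az group create --name rg-db-test --location eastus
# ... deploy the test databases and other resources into rg-db-test ...
# when testing is done, delete the whole group so nothing keeps billing
az group delete --name rg-db-test --yes --no-wait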
In this video we will see the cloud pricing models available for Azure. Whenever you are preparing for an Azure exam you also need to think about costing. Operational expenditure cost depends on these factors: the resource type you use, the consumption, maintenance, the geography where the services are hosted, the subscription type (whether it is pay-as-you-go or an Enterprise subscription, and what discounts you have with your Enterprise agreement), and also the Azure Marketplace. There are services which are not available directly from Azure itself but are offered by third-party vendors, for example Salesforce or Oracle; they provide their services through the Azure Marketplace, and from there you can choose those third-party services and deploy them on the Azure infrastructure.

When it comes to a virtual machine, you can deploy it as a single virtual machine or deploy virtual machines in multiple zones. For example, if this is an application tier, you have an application hosted on this virtual machine, and you want zone redundancy, you will deploy two virtual machines, one in availability zone 1 and one in availability zone 2, with a load balancer in front. When you deploy virtual machines in two different availability zones, you pay twice the cost; if you deploy only one virtual machine, you pay for one. This is how costing works in Azure: it is based on the resource type you select and how many resources you use for a particular workload or service.

If you are using a web app, you have the option to select the tier, starting from Free, then Shared, Standard, Premium, and Isolated; depending on the tier or SKU you select, you will be charged. If you select Free, not all options are available; with Shared you get the options available in Free plus some additional ones, whereas with Standard and Premium you get many more options for deploying the web application, which may include backups, SSL/TLS termination, and CDN integration. These are the options you get based on the tier you select. When it comes to managed disks, there are multiple types of managed disks available and you are charged based on the type; we will see with the pricing calculator how these charges differ. If you are using Kubernetes as a service, the number of nodes, the size of each node, and the networking cost are all involved. When you choose directory as a service, container as a service, database as a service, or load balancer as a service, there is a different cost attached to each of these services. Basically, all of these costs depend on the resource type you use, the consumption of the resources you provision, and how you maintain those resources. Each geography also has different pricing: a US region has one cost for the services you deploy, and Europe may have a different one. The subscription type matters as well, since pay-as-you-go and Enterprise agreement subscriptions have different costing, and the Azure Marketplace, as I said, also comes into the picture.

I am now in the Microsoft portal which has the pricing calculator for Azure. You can select the services you want to deploy; for example, if I select the compute option I can select the virtual machine, and when I select the virtual machine it gets added to the estimate.
If I select the region, this is basically the geography I talked about earlier: selecting UK South gives one price, and selecting a US region gives a different cost. You can also select the operating system you want, Linux or Windows, the tier, Basic or Standard, and from the SKU you can select how many cores and how much RAM you require for the virtual machine; based on the size you select, the cost changes. Whenever you are checking costs, you come to this page and select the pricing calculator; under pricing you also have the option to calculate the total cost of ownership and to look at options for optimizing cost.

Now in the pricing calculator, if I select networking, even a public IP address has a charge. A virtual network itself doesn't cost anything when you deploy it on Azure, but when you use VNet-to-VNet peering there is a cost. If you select Traffic Manager you can see the cost attached to it: a monthly cost of around $170 in this example for deploying Traffic Manager in the East US location, depending on how many DNS queries you want it to serve, whether you require health checks, which endpoints you want to connect (internal or external), and some other options you can choose.

If I select storage, you go to storage accounts, and from there you can see multiple options. If I select managed disks, you see the different tiers: Standard HDD, Standard SSD, Premium SSD, and Ultra Disk. Based on the disk type you select, the cost differs: selecting Ultra Disk pushes the monthly cost up, Premium SSD is much cheaper than Ultra Disk for the same 4 GiB, and Premium SSD v2 is cheaper still per month for the same capacity. One thing to remember when selecting disks is that the minimum capacity differs by disk type: Premium SSD v2 can start from 1 GiB, whereas Ultra Disk starts from 4 GiB and goes up in 4 GiB increments. If I select a web application, you can select the App Service plan and see the cost per month; with a Basic instance size the cost is one value, and if I select a larger instance such as a B3 with 4 cores, or a Premium v3 instance, the monthly cost changes. This is how you use the pricing calculator: visit the website and check, for each and every service offered by Azure, what the cost will be.
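To make the per-hour billing concrete, here is a rough worked example with a made-up rate (the real prices come from the pricing calculator and vary by region, size, and operating system): suppose a small virtual machine costs $0.05 per hour. A month is roughly 730 hours, so one VM costs about 730 × 0.05 ≈ $36.50 per month, and the zone-redundant pair of VMs behind a load balancer from the earlier example costs about 2 × 36.50 = $73 per month for compute alone, before adding managed disks, public IP addresses, and bandwidth.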
In this video let us discuss serverless compute and see what serverless compute is all about. Serverless compute doesn't mean there are no servers. Whenever you provision a serverless service on the Azure cloud, Azure provides the underlying infrastructure, which is nothing but the virtual machines, the storage, and the networking; these underlying components give the serverless service its compute power, memory, storage space, and networking. For example, if I choose a web app as the serverless service, the underlying infrastructure is provisioned and managed by Azure, because this is a platform service. Whether it is a server-based application or a serverless application, you require three main components, compute, storage, and networking, to run it; when you deploy a serverless service onto the Azure cloud, Azure provides that underlying part.

The developer or end user should not be concerned about the platform the application runs on: Azure provides the platform and also the runtime for the application. The developer writes the code for the application, which can be a .NET application, a Java-based application, or a Python or Ruby based application; the runtime for that application is provided by Azure. So the developer chooses the preferred language, Java or Python for example, writes the code, and deploys it onto the Azure serverless service, which can be a web application. Once the code is deployed, Azure provides the runtime; the developer only chooses the size for the web application to run, and the size is nothing but the underlying compute, memory, storage, and networking. There are many services offered as serverless services, such as Kubernetes services, queue storage, functions, multi-factor authentication, and search. When it comes to serverless computing, remember that the end user or organization does not manage the underlying infrastructure; it is always managed by the Azure cloud or the cloud service provider. The end user or organization only consumes the service; they do not maintain and manage the infrastructure, and this is one of the advantages of platform as a service.
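As a small illustration of the "you bring the code, Azure brings the servers and the runtime" idea, here is a hedged Azure CLI sketch for creating a consumption-plan (serverless) function app; the names are hypothetical and the exact flags can vary a little between CLI versions:

# a function app needs a storage account behind it
az storage account create --name stfuncdemo123 --resource-group rg-demo --location eastus --sku Standard_LRS
# create the serverless (consumption plan) function app with a Python runtime on Linux
az functionapp create --name func-demo-123 --resource-group rg-demo \
  --storage-account stfuncdemo123 --consumption-plan-location eastus \
  --runtime python --os-type Linux --functions-version 4

Notice that you never pick a VM size here; you pay only while your functions actually run, which is the consumption-based billing the serverless model is built on.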
In this video we will see what high availability and scalability are. High availability means that whenever you have an application deployed on the cloud, the application is always available, no matter what: even if there is an event, an incident, or an outage which causes an Azure region or a particular data center to go down, the application you have deployed is not impacted. You need to deploy your application in such a way that there is no impact whatsoever; this is called high availability.

So how do you deploy a web application as a highly available application? For example, if I have an application running in Europe, for high availability I also create that application in a US region, and for this I use something called Traffic Manager. We also know that when we use databases we can create a highly available database, with a primary copy and a read-only copy. The Traffic Manager profile has a domain, for example learncloudtech.com, and these are my two apps. Whenever a user browses learncloudtech.com they hit Traffic Manager; Traffic Manager itself is highly available, and we can check its SLA in the Azure portal. Based on the user's geographic location, the connection is directed either to the EU region or to the US region. Since we also have a highly available database, even if one of the regions fails, say the EU region, the other region takes over: Traffic Manager keeps running, the user queries are redirected to the US, and the read-only copy can be made primary. Once the EU region comes back we can fail back, and requests can again be directed via the EU region to the primary database. This is how we create high availability.

If I want to create highly available virtual machines, I create one virtual machine in availability zone 1 and one in availability zone 2, and in front of the application I have a load balancer. The queries arrive at the load balancer; the virtual machines of both zones are added to the load balancer as the backend configuration, and the frontend configuration is the load balancer's IP address. Whenever a user's query hits that IP address, the query is redirected either to AZ1 or AZ2 based on the health checks, and if availability zone 2 has an outage, all the queries are redirected to zone 1. This is how a high availability configuration is done; whenever you create a design for your architecture, you use the high availability concept in that design.

When it comes to scalability, there are two types of scaling available. The first is scale up and scale down. Take scale up as an example: if I have a virtual machine with a 3 GHz CPU and 2 GB of RAM, let me call this size B2 Standard (I am naming the size at random for this example). If I want to increase the size of this virtual machine, I can go to a B3 which has 4.8 GHz of CPU speed and 8 GB of RAM; increasing from the B2 to the B3 like this is called scale up. If I am already using the 4.8 GHz / 8 GB size and I want to go back to the B2 with 3.0 GHz and 2 GB of RAM, that is scale down. Scale up is increasing the CPU speed, memory, and disk space of the same virtual machine; scale down is decreasing them. There are multiple sizes available for virtual machines across the A series, B series, D series, and E series, and you can scale up or scale down between these sizes.

There is also the concept of scale out and scale in. Say we have a virtual machine with 4 GHz, 8 GB of RAM, and a 128 GB disk. In scale out we add more compute power by adding more virtual machines of the same size, say B3: I add one more virtual machine with 8 GB of RAM and a 128 GB disk, and then another, and so on. If initially there was one virtual machine and I add three more, I now have four virtual machines in total. If I then decide I don't need that much capacity and delete some of them, say bringing the count back down, that is scale in. Scale in is decreasing the virtual machine count; scale out is increasing it. In scale out and scale in you change the count, whereas in scale up and scale down you change the size of the same virtual machine: you do not add any new virtual machines, you increase or decrease the capacity, the CPU speed, RAM, and disk, of the one you have, and the billing goes up or down accordingly.
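In Azure, scale out and scale in are usually done with a virtual machine scale set rather than by adding machines one at a time. A hedged Azure CLI sketch (the names, image alias, and sizes are illustrative and may differ by CLI version and region):

# create a scale set of two identical Ubuntu VMs
az vmss create --resource-group rg-demo --name vmss-demo \
  --image Ubuntu2204 --vm-sku Standard_B2s --instance-count 2 \
  --admin-username azureuser --generate-ssh-keys
# scale out to five instances for the sale, then scale back in afterwards
az vmss scale --resource-group rg-demo --name vmss-demo --new-capacity 5
az vmss scale --resource-group rg-demo --name vmss-demo --new-capacity 2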
In this video we will see what SLA, RPO, RTO, reliability, predictability, and performance are. SLA is the service level agreement. When you consume services from Azure, each service has a different SLA. Take the virtual machine itself: if you are using a standard HDD you have a 95% SLA, if you are using a standard SSD for the same virtual machine it is 99.5%, and if the virtual machine has a premium SSD it is 99.9%. So for deploying a single virtual machine there are different SLAs, calculated based on the type of disk you use. If the virtual machine is provisioned in an availability set, the SLA is 99.95%; an availability set is within the same data center but spread across multiple racks. If the virtual machine is provisioned across different zones, that is availability zones, you have a 99.99% SLA, because with availability zones you are creating virtual machines in multiple zones, zone 1 and zone 2, which are in different data centers, whereas an availability set is within a single data center with the virtual machines spread across multiple racks; that is the difference between an availability set and an availability zone.

If I take other examples like Functions, Kubernetes, web apps, and containers, which are platform as a service offerings, all of these have a 99.95% SLA. For Traffic Manager the SLA is 99.99%, and for Azure DNS it is 100%: Azure DNS is always online, with no outage whatsoever, whereas Traffic Manager allows a little outage and the platform services allow a little more. Whenever you are designing any infrastructure based on the Azure cloud services, you need to check the SLAs provided for each of the services you are going to deploy in your environment; based on that you will achieve your high availability and scalability.
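To see what those percentages mean in practice, here is a rough worked conversion (using an approximate 730-hour month): a 99.9% SLA allows 0.1% downtime, which is 730 × 0.001 = 0.73 hours, roughly 44 minutes per month; 99.95% allows about 22 minutes; 99.99% allows about 4.4 minutes; and a 95% SLA would allow around 36.5 hours of downtime in a month.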
Let us now understand RPO and RTO. RPO is the recovery point objective and RTO is the recovery time objective. Let me share an example to understand how RPO and RTO work. Suppose there is a bank, let us call it Cloud Bank, and they run their services on Azure across zone 1 and zone 2. They have virtual machines for the app tier and a SQL database; for the SQL database, high availability (Always On) has been configured, with the primary copy in East US and another copy in Central US. The app-tier virtual machines are in East US, and there is also a load balancer in front. Multiple users are accessing this banking site over the internet.

Now suppose there is an outage or an event that causes zone 1 to fail, so the app-tier virtual machine in zone 1 has failed. There is still one virtual machine available in zone 2, and that virtual machine is able to write to the database and commit whatever transactions the users are doing. In this case, say the users logged into the banking website at 9:30 a.m. and it was working fine; at 10:00 a.m. there was an outage of zone 1, and at 11:00 a.m. the outage was restored. For the end user there was no impact, because there was another virtual machine, and through it the transactions were still being committed to the database. The time difference between the outage and the point when it was restored is the recovery time objective: here it is a one-hour outage, so the RTO is one hour, the time it took to bring the services back.

In that scenario there was no impact: users were able to do their transactions without any issues. Now consider the same example, but both zones, zone 1 and zone 2, have failed. How much time does it take to bring these two zones back? Say the website was working fine at 9:30 a.m.
At 10:00 a.m. there was an outage, and it took almost one hour to restore the services, so the difference is again the RTO, which is one hour. This is also fine in the sense that it is a one-hour outage with no impact to the data; the only problem is that users are not able to connect through either of the virtual machines. In the first scenario only one zone failed and users could still transact to the database via the other virtual machine; in the second scenario both zone 1 and zone 2 failed and users could not transact through either virtual machine, so users were unable to do any transactions. Even so, because both zones were down, there was no impact to the data, and after one hour the services were restored.

Now comes RPO. Take the same scenario again: there is a data center with zone 1 and zone 2, a virtual machine in each, a web application used by multiple users, and a SQL database with a read-only copy. During the one-hour outage from 10 to 11 when both zones were down, whatever the users tried to transact was not getting committed to the database; the user gets an error in the browser asking them to retry after some time. That is still okay, because nothing inconsistent is committed: the data as it stood during the outage remains the same, the users simply get errors for the transactions they attempt, and no new transactions are allowed by the web application, since they would not be consistent.

But now assume that during this one-hour outage the entire East US region failed: both zones failed and the East US SQL database failed as well. The data committed to that SQL database was being replicated to Central US, and while sending the data from East US to Central US, the last 5 or 15 minutes of transactions were lost; the Central US copy is missing that data. This is data loss, and the business may not accept that much RPO. RPO is basically how much data loss a business can accept. In the banking industry, a 5-minute or 15-minute data loss has a huge impact, which is why for banking or any financial industry the RPO will always be zero. For non-critical businesses, even one day of data loss may be acceptable, but for stock exchanges, financial institutions, other finance-related institutions such as forex, and airports, the RPO and RTO are very strict: the RPO will be zero, and the RTO will be five minutes, fifteen minutes, or at most one hour of outage.
This is the agreement the organization will have with the cloud service provider for RPO and RTO. To summarize: RPO is how much data loss your business can accept, and RTO is how much time it takes to restore the services.

Let us understand what reliability is. Whenever you deploy services on the cloud, say you have deployed a web application in both Europe and the US and connected them using Traffic Manager, the end user accesses the web application through Traffic Manager. If one of the regions fails, there is no impact for the user accessing the website; the website has only failed in that one region. The ability to withstand failure is called reliability: you build your infrastructure in such a way that even if any service fails, another region or another zone takes over, so your services are decentralized.

Let us now understand predictability. Whenever you deploy services like virtual machines, web applications, or SQL, you estimate a certain performance and cost when designing your infrastructure, and you need to get the performance and cost you predicted. This is called predictability: even if there is an event at Azure, an outage of a region or of a particular service, your performance and cost should not be impacted. If you need the performance you estimated, you can simply scale the virtual machine, and even with that scaling you can still manage the cost you estimated. So basically you predict the performance and the cost for the services you are going to deploy on the Azure cloud.

Let us now understand performance. Before going to performance we need to know the five pillars of cloud architecture: whenever you design any architecture on the cloud you need to consider these five pillars, which are reliability, cost, operational excellence, performance, and security; these are the five pillars on which you design your infrastructure in the cloud. Now, performance: whenever you deploy any cloud service, you need to understand the business requirement and the level of performance required by the services you are deploying, and based on that you size your services. If you are sizing a virtual machine, you can scale it either with manual scaling or with autoscaling. Some services, like the load balancer, are scaled automatically by Azure itself, with no need for user intervention: whenever there is high traffic at the load balancer, Azure scales it automatically. Also, if we are using a standard HDD for a virtual machine, we cannot expect the virtual machine to give the performance of a premium SSD. For each service we need to estimate the performance required at that service level, and based on that requirement we deploy the service onto the Azure cloud platform.
our services on Azure Cloud. We know that Azure offers multiple services and we are able to deploy them, so how do we deploy all these services? We have multiple options. There is a web portal, https://portal.azure.com, to access and deploy the services. We also have command line options: we can use the Azure CLI to deploy all the services. The CLI tools can be installed on our desktop or laptop; we authenticate to Azure and then run commands to deploy the services. There is also an option to use PowerShell. PowerShell can likewise be installed on our own laptop, and just as we authenticate to the cloud with the Azure CLI, we authenticate to the Azure cloud with PowerShell and run commands there. PowerShell works with modules, so we need to install the Azure (Az) modules: for Active Directory you install the Active Directory module, and similarly for Azure you install the Azure modules, and only then will the commands for the Azure cloud services run. We can also integrate through the APIs: there are SDKs, and using the APIs we can provision these services programmatically. If you do not want to install the Azure CLI and PowerShell on your laptop or desktop, there is an option to use these command line tools from the browser itself: when you access portal.azure.com you are provided with both PowerShell and the Azure CLI in the portal, so you can open these tools there, run the commands and provision the services. So to manage services on Azure Cloud we have the portal, the Azure CLI tools, the PowerShell modules, and the SDKs and APIs. There is also a tool called Terraform to automate the infrastructure; this is infrastructure as code, also called IaC. We can use Terraform to automate the build of our infrastructure and provision services on Azure Cloud; Terraform is a third-party IaC tool. Within Azure there is the Azure Resource Manager template, also called an ARM template, which is the IaC option provided by Azure itself; you can use an ARM template to provision resources.
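As a small illustration of the SDK option mentioned above, here is a minimal sketch using the Azure SDK for Python (the azure-identity and azure-mgmt-resource packages). The subscription ID, resource group name and location are placeholders; the same action could equally be done in the portal, the Azure CLI, PowerShell, an ARM template or Terraform.

```python
# pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder

# DefaultAzureCredential picks up whatever authentication is available
# (Azure CLI login, environment variables, managed identity, ...).
credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, subscription_id)

# Create (or update) a resource group, the logical container discussed later on.
rg = client.resource_groups.create_or_update(
    "rg-demo-test-dev",          # hypothetical name
    {"location": "eastus"},
)
print(rg.name, rg.location)
```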
In this video we will see what IaaS, PaaS and SaaS services are on the Azure Cloud. When you deploy infrastructure as a service on Azure, you have complete control over the services you deploy, and the cloud provider is only responsible for maintaining the hardware. For example, if you are deploying a virtual machine you control which operating system image you choose; it can be Linux or Windows, and within Linux there are different flavors you can install, such as Red Hat, SUSE or Debian and many more, while for Windows you can select images such as Windows Server 2012, 2016 or 2019. You also have complete control over the operating system you install: updates, patch management, security and backup of the virtual machine are all taken care of by the organization or company consuming the service. That is infrastructure as a service; the customer has full control over the operating system they choose and install. When it comes to platform as a service, the cloud provider maintains the physical infrastructure. Whenever you deploy SQL services or a web application, the underlying infrastructure of servers, disks and networking is maintained and managed by the cloud provider. You only select the type of web app and the runtime required for your application, which can be Java, Ruby, PHP or .NET; you choose the runtime and its version and deploy the web application, and everything underneath is taken care of by the cloud provider. You do not have control over the operating system, and you cannot configure or tweak it as you could with infrastructure as a service: no control over the operating system, no control over system settings, only limited control over the cloud service you deploy. Multiple services are offered as platform as a service, such as web apps, Batch, Functions, containers, container registry and many more. For databases you select the database type, for example MySQL, PostgreSQL or Microsoft SQL, and the version you want to deploy; the operating system and the virtual machine underneath are controlled by Azure, and Azure takes care of the hardware, patch management, operating system maintenance and even backup. In software as a service, most of the control sits with the cloud service provider. The only things you control under software as a service are your data and your authentication. You only need a device, whether a mobile device, a laptop or a desktop; you connect to the cloud, sign up for the service and consume it. For email you simply sign up, log in, and send and receive mail, while everything underneath is controlled by the provider. There are multiple SaaS offerings such as Office 365, where you consume mail, SharePoint and Word services. So when an organization or an end user selects a SaaS service on the Azure cloud, they do not have full control over the service they consume; Azure maintains and manages the entire underlying stack up to the data layer, and above that it is the responsibility of the user or the organization consuming the service. If you look at the shared responsibility chart, it clearly shows, for each type of service, what is the customer's responsibility, marked in blue, what is Microsoft's (Azure's) responsibility, and what is shared responsibility, which is a combination of Azure and the customer.
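To make that chart concrete, here is a rough sketch of the responsibility split as a small lookup table. The layer names are simplified and the assignments follow the usual on-premises/IaaS/PaaS/SaaS pattern rather than any official Microsoft wording, so treat it as an illustration only.

```python
# Simplified responsibility matrix: who manages each layer under each model.
# "customer", "microsoft" and "shared" mirror the colours in the chart.
RESPONSIBILITY = {
    #  layer                      IaaS          PaaS          SaaS
    "data & access":            ("customer",  "customer",   "customer"),
    "applications":             ("customer",  "customer",   "microsoft"),
    "runtime / middleware":     ("customer",  "microsoft",  "microsoft"),
    "operating system":         ("customer",  "microsoft",  "microsoft"),
    "virtualization & hosts":   ("microsoft", "microsoft",  "microsoft"),
    "physical network & dc":    ("microsoft", "microsoft",  "microsoft"),
    "identity infrastructure":  ("shared",    "shared",     "shared"),
}

def who_manages(layer: str, model: str) -> str:
    """Return the responsible party for a layer under IaaS, PaaS or SaaS."""
    idx = {"iaas": 0, "paas": 1, "saas": 2}[model.lower()]
    return RESPONSIBILITY[layer][idx]

print(who_manages("operating system", "IaaS"))   # customer
print(who_manages("operating system", "PaaS"))   # microsoft
```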
To summarize IaaS, PaaS and SaaS, let us take an example. I want to install a database. My organization is very particular about the database type, and in my on-premises environment I am using MySQL. I have the option of using a virtual machine, selecting the operating system, which can be Linux or Windows, and the database version, and manually installing the database onto the virtual machine. I have complete control over the configuration and settings: this is infrastructure as a service. If I take the same scenario for platform as a service, I want a database but I am not concerned about the underlying infrastructure, the security, the patch management or how it is backed up; I have only limited control over configuration and settings. I just need to ensure the backup and patching settings are configured, select the database type, version and size, and everything else is taken care of by Azure: this kind of scenario is platform as a service. In software as a service, you have your laptop or mobile device, you sign up with the cloud service provider and consume the services, whether CRM, email or DevOps services. You have very limited control over the service itself; what you control is your own laptop, desktop or mobile, the data you store in the cloud, and authentication. That is what the shared responsibility chart briefly tells you. In this video we will see what an Azure account, subscriptions, resource groups and resources are. We know that Azure Cloud provides more than 200 services. To use any of them we need an Azure account, which is a unique email ID and password with which you can create resources in Azure, much like a banking user ID and password give you access to your banking services. Now what is a subscription? There are multiple subscription types available: a free subscription, a student subscription, pay-as-you-go and an Enterprise subscription. Once you have your Azure account you create a subscription. Initially you can start with the free subscription model: Azure provides $200 worth of credit for a period of one month, so you can access and try some of the services within that credit. Once the $200 credit is exhausted, the free account is converted to a pay-as-you-go subscription. For students there are additional benefits from Microsoft, such as 12 months of free usage of some services and free access to developer tools. With pay-as-you-go you are charged for what you use: if I use a virtual machine for one hour I am billed for one hour, and if I use it for a month I am billed for the whole month; that is how the pay-as-you-go subscription works.
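As a tiny worked example of that consumption-based model, the sketch below estimates a bill from the hours actually used; the hourly rate is a made-up placeholder, not an actual Azure price.

```python
def pay_as_you_go_cost(hourly_rate: float, hours_used: float) -> float:
    """Consumption-based billing: you pay only for the hours you actually use."""
    return hourly_rate * hours_used

rate = 0.10  # hypothetical $/hour for some VM size, not a real Azure price

print(pay_as_you_go_cost(rate, 1))        # used for 1 hour     -> billed $0.10
print(pay_as_you_go_cost(rate, 24 * 30))  # used the whole month -> billed $72.00
```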
With the Enterprise agreement, which is an agreement between Microsoft and the customer, the customer can get good discounts on Azure cloud services based on that agreement. I can also have multiple subscriptions under the same account. For example, if my company has multiple departments such as Finance, Marketing, HR and Manufacturing, I can have a separate subscription for each department. So we have an account, under it subscriptions, and we can have one subscription per department. Each of these departments will consume services: virtual machines, IP addresses, networks, storage, load balancers, databases. Everything they create is a resource, and when resources are grouped together, that is a resource group. Why do we need a resource group? Take the Finance department as an example. At the top we have the account, which you log in to with your email ID and password, and under it the subscriptions, one per department; one of them is the Finance subscription. Within Finance itself there can be separate environments such as test and development, QA, and production. If I want resources only for the test and development environment, I group all the resources used for test and development, the virtual machines, disks, IP addresses, subnets and virtual networks, into one group and name it, say, test-and-development. It then becomes very easy to deploy the services and to manage the life cycle of that resource group. Similarly, for QA I create the resources and put them all into a QA resource group. For production I will have different types and sizes of virtual machines than in test and development, where I use smaller sizes, and because of the naming and the separate life cycle of the production environment I segregate it into its own resource group and place all the production resources there. So you create resources and place them, based on their life cycle and environment, into a particular resource group; a resource group is basically a logical container
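To tie the pieces together, here is a small sketch of the account, subscription, resource group and resource hierarchy just described; the names are invented for the Finance example and carry no special meaning.

```python
# Illustrative hierarchy: one account, per-department subscriptions,
# per-environment resource groups, and the resources inside them.
azure_account = {
    "finance-subscription": {
        "rg-test-dev": ["vm-small-1", "disk-os-1", "vnet-testdev", "subnet-a"],
        "rg-qa":       ["vm-small-2", "disk-os-2"],
        "rg-prod":     ["vm-large-1", "vm-large-2", "lb-prod", "sql-prod"],
    },
    "marketing-subscription": {
        "rg-prod": ["webapp-campaigns", "storage-assets"],
    },
}

def delete_environment(account: dict, subscription: str, resource_group: str) -> None:
    """Deleting a resource group removes every resource inside it in one step,
    which is what makes per-environment life-cycle management so easy."""
    removed = account[subscription].pop(resource_group)
    print(f"deleted {resource_group}: {removed}")

delete_environment(azure_account, "finance-subscription", "rg-test-dev")
```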
in which you group related resources together. In this video we will talk about Azure's physical infrastructure. Like any other data center owned and operated by a small, medium or large enterprise, Azure also has to build data centers, and there are some basic entities any data center requires. If I build a data center I need a space, a properly chosen location where I can place all my servers, storage and networking devices. In the data center we have racks of servers, storage devices, networking devices, cables and connectors, power supply and cooling; this is the hardware required for any data center to operate, whether small, medium or large. If it is a small data center, the space, the number of servers, the storage capacity and the networking gear will be less than in a medium or large enterprise. We use the servers for our workloads, which fall into three types: management workloads, infrastructure workloads and business workloads. Management tools go into the management workload; infrastructure services such as Active Directory, DNS and DHCP fall under the infrastructure workload; and business workloads are the critical services related to the business, such as web applications, vendor integrations, load balancers, caching and databases. Storage is for storing data: for small businesses it may be in the range of roughly 500 to 1,000 terabytes, approximately a petabyte, while for medium and large data centers the number of storage devices goes up and capacity can reach petabytes or, for large enterprises, even exabytes. Networking devices interconnect the workloads, and cables and connectors are needed to connect the servers, storage and networking; power supply and cooling are required to drive and cool the hardware. These are the basic requirements for any data center. The difference is that as the company grows from medium to large, the number of servers goes up, storage grows from petabytes toward exabytes, multiple layers of networking are added for redundancy, more cables and connectors are needed depending on the amount of hardware, and power and cooling have to be managed according to the data center location. A small business may have a single data center in one location, call it DC1 in location one, whereas medium and large organizations add another data center for high availability, with a similar setup, essentially a copy of the data center in another
location, which will be location number two. For example, if the first data center is in Pune, there will be another one in Mumbai. This is so that if there is an outage of the data center in location one, there is another data center that is always available, keeping all the deployed workloads, business, infrastructure and management, up and running, and the data is replicated between the two data centers. That is how small, medium and large enterprises set up and operate their data centers. In this video we will discuss Azure's physical infrastructure, continuing the discussion from the previous video about small, medium and large data centers. Like any organization, Azure also has to place its data centers in locations, and the locations it chooses are strategic: Azure ensures the data centers are not prone to natural disasters such as floods or earthquakes and that there is no civil unrest, because the data centers must be safe and secure. Another important point is that Azure uses natural resources as much as possible to power and cool its data centers, drawing energy from sources such as solar, wind and water. Like any data center, racks of servers are required, but an Azure cloud data center will have thousands of racks of servers, or more, and petabytes or even exabytes of storage. All of the resources in the data center are shared among multiple customers. As with any data center, the basic entities such as power and cooling are needed, and there is a robust way of monitoring and securing everything: these data centers have a very high level of security at the physical level and at the infrastructure level, with multiple layers of security just to enter the facility, because the resources placed in the cloud data center are not used by one customer but are shared among many customers and organizations. If there is a serious problem with a data center, the businesses using the Azure cloud services could effectively be out of business; it is that critical, and that is one more reason why strategic locations are chosen. Wherever Azure chooses a location to build data centers, that location is called a region. A region is a strategic location, and an important point to note is that a region is not a country. A country does not necessarily have only one region: the US, for example, has multiple regions such as East US, Central US and West US, and in India we also have three regions, Central India, West India and South India. You should not assume that a region is
limited to a country: a country might have a single region or multiple regions, based on the geopolitical area, so region is not equal to country. If I take the example of India, there are three regions, Central India, West India and South India. So now we understand what a region is: a strategic location where Azure places its cloud data centers, highly secured at the physical and infrastructure level, with a very robust way of monitoring the hardware and infrastructure. Now let us see what an availability zone is. Whenever Azure sets up its cloud data centers, a region will have multiple availability zones, a minimum of three, and there is a reason for choosing a minimum of three. Each availability zone can have multiple data centers: data center 1, data center 2, data center 3 and so on; it is not limited to a fixed number. Let us take an example: in North Europe there is a region in Ireland, and that region has availability zone 1, availability zone 2 and availability zone 3, each with multiple data centers containing the components we have already seen, racks of servers, networking devices, storage, cables and connectors, power supply and cooling. So why does Azure use multiple availability zones within a region? Suppose you have created a virtual machine only in availability zone 3 and that zone fails. Microsoft recommends that when you create virtual machines you deploy them into at least two availability zones. Since you provisioned your virtual machine only in availability zone 3, an outage of that zone takes the virtual machine offline, and whatever services are hosted on it, a web application, a web server or anything interacting with users, become unreachable; there is simply an outage of your service. When you create a virtual machine in only one availability zone you also get a much lower SLA (as low as 95% on standard HDD disks), which matters a great deal when it comes to SLA commitments. There could be another reason as well: each availability zone can be served by a different vendor for power, telecom and networking. If availability zone 3 is served by vendor 1, there could be a vendor 2 providing the power, telecom and networking supply for availability zone
2. So one zone is served by vendor 2 and another by vendor 1. If there is an outage on vendor 1's side and it cannot supply power, that availability zone fails, but because the other zones in the same Ireland region are served by different vendors, they have no outage and stay online. Users who have hosted their services across multiple availability zones therefore see no impact, and when you deploy your virtual machines across multiple availability zones the SLA goes up to 99.99%. So there are many reasons to place your virtual machines or services in multiple availability zones. Some services are zone redundant by design, while some resources, such as managed disks and virtual machines, are limited to one particular availability zone, which is why you need to spread your workload across availability zones yourself; basically it is for your fault tolerance. We understand availability zones; now let us understand region pairs. We know each region is a strategic location with a minimum of three availability zones, AZ 1, AZ 2 and AZ 3, and Azure recommends deploying virtual machines across multiple zones so that you get the 99.99% SLA. But what if there is a region failure? All your services are accessed over the internet through fiber optic connections from global internet providers; if an undersea cable is cut, or something similar happens, there may be no link to the entire region at all, a complete outage. In that scenario we need to configure our workloads and services so they can withstand a regional failure. Take North Europe again, the region in Ireland, and another region, West Europe, in the Netherlands. Each region has three availability zones. Suppose I have deployed my workload in one availability zone in Ireland and in one availability zone in the Netherlands, and these two virtual machines provide web services for customers. Say this is an insurance company: users connect over the internet to view their insurance statements, pay premiums or download certificates, and the web services run on these virtual machines. If one virtual machine fails in one region because of a network outage, the other region is able to take over, and if the failure is only at the availability zone level within a region, the zone redundancy handles it without any redirection.
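A rough way to see why spreading a workload across zones raises availability is the standard composite-availability calculation below. It assumes zone failures are independent, which is a simplification, and it gives an optimistic upper bound; Azure's published 99.99% multi-zone SLA is a contractual commitment, not the output of this formula.

```python
def combined_availability(single: float, copies: int) -> float:
    """Probability that at least one of `copies` independent replicas is up."""
    return 1 - (1 - single) ** copies

single_vm = 0.999  # e.g. one VM on premium SSD in a single zone (99.9%)

print(f"{combined_availability(single_vm, 1):.6f}")  # 0.999000 -> one zone only
print(f"{combined_availability(single_vm, 2):.6f}")  # 0.999999 -> two independent zones
```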
If the virtual machine in AZ 1 fails, the virtual machine running in AZ 2 is still up, and in this setup you anyway have two virtual machines running, one in the Netherlands and one in Ireland, so a user accessing the web services for their insurance account can connect to either of them. Only if there is a regional failure do you need to redirect users to the Netherlands region; for a zone failure there is no issue, because AZ 2 is still available even when AZ 1 has gone down. In this way we can configure our workloads and services for both cases: one is a zone failure, the other is a regional failure. Azure always ensures there is a pairing between regions, and the criterion is that paired regions are at least 300 miles apart, so that an outage caused by a natural disaster or other calamity affects only one region and not its pair, keeping the hosted services up and running. Finally, let us discuss sovereign regions. These are regions isolated from the main Azure regions: whereas the normal cloud regions are shared by multiple customers, these isolated regions are not shared with other customers and are dedicated for defense, government or regulatory purposes. In this video we will see what the Azure management infrastructure is. We have already discussed Azure resources and resource groups: resources are things like virtual machines, IP addresses, virtual networks, virtual NICs, subnets and network security groups, and when you group these resources, that is a resource group. To create any resource you need a resource group, that is the minimum requirement, and if you have not created one beforehand you will be asked to create one while creating the resource. Now let us discuss management groups and why we need them. A management group is used to group subscriptions, for example subscription one and subscription two. Take a real use case: a company called Cloud X operates from two regions, North Europe and West Europe, and in each region it has separate test, development and production environments, each with its own subscription. Now the company wants to apply a policy on the test environments so that storage-optimized virtual machines cannot be created there. In that case you create a management group hierarchy: at the top level there is the root management group, and under it you create a management group for the test subscriptions, covering both north and west, and you bring those two test subscriptions under that test
management group. So we have the test management group created, with the two test subscriptions from north and west under it. Once you apply the policy at that level to block creation of storage-optimized virtual machines, both test subscriptions under the management group are prevented from creating them; this is how you can control virtual machine creation. Similarly, if I apply an RBAC role assignment at the management group level, it is inherited by both subscriptions under it, and it is further inherited down to the resource groups and the resources. You can therefore simplify role-based access control for the entire management group: if I want to keep access to all the test environments simple, I grant it once at the management group, for user one, user two and user three. Structurally, you have the root management group, under it child management groups, and under those further child management groups; you can nest up to six levels deep. Each management group can have only one parent, and a subscription can belong to only one management group; a subscription cannot have two parent management groups. A directory can contain up to 10,000 management groups, and all of them belong to one tenant, where the tenant is essentially your directory (Azure Active Directory). For example, if I have a test subscription and a development subscription, I can create a management group for test and a management group for dev, place each subscription into the appropriate group, and whatever resources I create under the dev subscription fall under the development management group. I can then apply policy, use role-based access control and control the budget at those levels; this is nothing but your organizational governance, and this is how you create management groups and manage the organization's hierarchy.
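Here is a minimal sketch of how a policy or role assignment applied at a management group flows down to the subscriptions beneath it. The group and subscription names mirror the Cloud X example and are purely illustrative; this is a toy model of the inheritance idea, not the Azure Policy or RBAC API.

```python
# Illustrative model of management-group inheritance (all names are made up).
hierarchy = {
    "root-mg": {
        "test-mg": ["test-sub-north-europe", "test-sub-west-europe"],
        "prod-mg": ["prod-sub-north-europe", "prod-sub-west-europe"],
    }
}

assignments = {
    "test-mg": [
        "policy: deny storage-optimized VM sizes",
        "rbac: contributor for user1, user2, user3",
    ],
}

def effective_assignments(subscription: str) -> list[str]:
    """Collect everything a subscription inherits from the management groups above it."""
    inherited = []
    for root, children in hierarchy.items():
        for mg, subs in children.items():
            if subscription in subs:
                inherited += assignments.get(root, []) + assignments.get(mg, [])
    return inherited

print(effective_assignments("test-sub-west-europe"))
# ['policy: deny storage-optimized VM sizes', 'rbac: contributor for user1, user2, user3']
print(effective_assignments("prod-sub-west-europe"))
# []
```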
In this video we will discuss Azure compute. Just as with the physical servers you build in your own data center, where you need power from different sources, network connections, mirrored disks and an operating system installation, Windows or Linux, to make the server available, in Azure the hardware layer is provided by Azure and on top of it sits a virtualization layer; the virtualization platform Azure uses is Hyper-V. You choose the virtual machine size, select the disk, the networking and the firewall rules, and then select the image, the operating system image, which can be Windows or Linux, and the operating system boots on the virtual machine just as it would in your physical data center. Since this is infrastructure as a service there is no need for you to manage the hardware, but you have complete control of the operating system: you can configure settings on both Windows and Linux, install applications, edit the registry and add disks to the virtual machine. When we choose IaaS we control the operating system and can run custom applications; in some scenarios a custom application running in our own on-premises data center has to be installed on the virtual machine we deploy in Azure, and we can do that, along with all the configuration and settings. We are responsible for security, patching, backup and availability; those are customer responsibilities. To summarize, Azure compute here is essentially a virtual machine deployed on the Azure cloud: because it is an IaaS service you have full control over the operating system, you can run custom software and you can configure the operating system as you need. Now let us look at scaling virtual machines in Azure. There are two types of scaling: vertical scaling, also called scale up and scale down, and horizontal scaling, also called scale out and scale in. As discussed earlier, a virtual machine in Azure has an operating system image (Windows or Linux), a network card, a disk and some firewall rules to secure the network; these are the basic requirements when we select a virtual machine. A single virtual machine is fine if we are in a test environment and just want to test an application. But what if we are in a production environment, running critical business services on only one virtual machine, and multiple users are connecting to it? In that case you can either increase the size of the virtual machine, which is vertical scaling, or add virtual machines alongside it, which is horizontal scaling: you add another virtual machine, call them VM 1 and VM 2, next to the existing one.
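To keep the two scaling directions straight, here is a toy model (not an Azure API): vertical scaling changes the size of the one machine, horizontal scaling changes how many identical machines there are. The VM size names are just example Azure SKU names used for illustration.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    vm_size: str        # e.g. "Standard_D2s_v3" (example size name)
    instance_count: int

    def scale_up(self, bigger_size: str) -> None:
        """Vertical scaling: same number of VMs, each one larger."""
        self.vm_size = bigger_size

    def scale_out(self, extra: int) -> None:
        """Horizontal scaling: same VM size, more of them."""
        self.instance_count += extra

web_tier = Deployment(vm_size="Standard_D2s_v3", instance_count=1)
web_tier.scale_up("Standard_D4s_v3")   # VM 1 gets more CPU and memory
web_tier.scale_out(1)                  # add VM 2 next to it
print(web_tier)  # Deployment(vm_size='Standard_D4s_v3', instance_count=2)
```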
By adding to the existing setup you scale out, so that for the user the compute capacity of both virtual machines is combined: some users' requests go to one virtual machine and some to the other. These virtual machines can also be configured in a highly available setup, either in an availability set or across availability zones, for example one virtual machine in availability zone 1 and one in availability zone 2. So we cover scalability either by increasing the size of the virtual machine (vertical scaling) or by adding virtual machines (horizontal scaling), and we cover redundancy with a highly available configuration using availability sets or availability zones. Now that we understand scaling, let us understand virtual machine scale sets. If you want a highly available environment and a group of virtual machines in a load-balanced setup, you can use a virtual machine scale set, also called a VM scale set. A scale set can contain up to a thousand virtual machines, and all of them can be placed into an availability set or across availability zones; you choose how you want to configure the scale set. Each scale set automatically gets a load balancer, with the virtual machines behind it. These virtual machines must all be the same VM size and run the same OS image. You can use an OS image from the Azure Marketplace or your own custom image. A custom image is one where you package the operating system together with additional software and tools; for example, if I need an office application, an HR application, a travel application and some tools, I can package all of them with the operating system into a single image. As another example, if I have a Linux virtual machine and I want Apache or NGINX as the web server, I can install CentOS, add the Apache web server, and capture that as my custom image, then use that image for all the virtual machines in the scale set, because you are deploying for your business requirement.
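The scale set's automatic scaling can be pictured as a simple threshold rule like the one below. This is only a sketch of the idea; the real Azure autoscale rules are configured on the scale set itself (metrics, thresholds and schedules), not hand-written like this, and the 75%/25% thresholds are arbitrary example values.

```python
def desired_instance_count(current: int, avg_cpu: float,
                           minimum: int = 2, maximum: int = 1000) -> int:
    """Toy autoscale rule: scale out above 75% CPU, scale in below 25%."""
    if avg_cpu > 75 and current < maximum:
        return current + 1          # scale out: add one more identical VM
    if avg_cpu < 25 and current > minimum:
        return current - 1          # scale in: remove one VM
    return current                  # otherwise leave the scale set as it is

print(desired_instance_count(current=3, avg_cpu=82))  # 4
print(desired_instance_count(current=3, avg_cpu=15))  # 2
```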
So what are the use cases for VM scale sets? You can use them for highly available applications, for big data and for container workloads. You can scale from 1 up to 1,000 virtual machines using automated scaling or manual scaling, and with automation you can also schedule when you want additional virtual machines. Why would I schedule a scale set? Take amazon.in as an example: if amazon.in were running on Azure and there is a big sale day, say April 10th, I know the virtual machines I normally run are not sufficient for the traffic coming on that date, so I can schedule the VM scale set to add capacity on April 10th so that the sale day runs smoothly. There are multiple use cases like this. To summarize, we use VM scale sets for redundancy, high availability and, most importantly, performance: we increase the number of virtual machines primarily for performance, with redundancy, high availability and load balancing coming along with it. If I want my virtual machines to sit behind a load balancer, I can go with a VM scale set. We have already seen virtual machines, VM scale sets and resource groups, and we know the hierarchy of the Azure cloud: a region, with a minimum of three availability zones, and each availability zone with multiple data centers (availability zone 1, for example, might have two data center facilities, or more depending on the region). When you deploy a single virtual machine on the Azure cloud with a standard HDD, which is a mechanical hard disk, it has a 95% SLA; with a standard SSD the SLA is 99.5%; and with a premium SSD the SLA is 99.9%. These are the SLAs for a single virtual machine. When you deploy a single virtual machine, you are deploying into one data center in one availability zone of one region; in this example take East US, availability zone 1, one data center. If that data center fails, the virtual machine goes offline, you cannot access it, and you simply have to recreate it. For the virtual machine you created there is an OS disk, which is 127 GB in size by default.
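It helps to translate those SLA percentages into allowed downtime; the sketch below does the arithmetic for a 30-day month using the figures quoted above.

```python
def downtime_per_month(sla_percent: float, hours_in_month: float = 30 * 24) -> float:
    """Maximum downtime (in hours) a given SLA allows in a 30-day month."""
    return hours_in_month * (1 - sla_percent / 100)

for option, sla in [("standard HDD", 95.0), ("standard SSD", 99.5),
                    ("premium SSD", 99.9), ("multi-zone", 99.99)]:
    print(f"{option:12s} {sla:6.2f}% SLA -> up to {downtime_per_month(sla):6.2f} h downtime/month")
# standard HDD  95.00% SLA -> up to  36.00 h downtime/month
# standard SSD  99.50% SLA -> up to   3.60 h downtime/month
# premium SSD   99.90% SLA -> up to   0.72 h downtime/month
# multi-zone    99.99% SLA -> up to   0.07 h downtime/month
```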
By default you also get a temporary disk; its size varies with the virtual machine size you choose, and it is for temporary use only. Never store any production data on the temporary disk: even a stop and start of the virtual machine can wipe whatever you stored there. Then there is the data disk, which is the one you choose for storing your data. If the data center fails, the image you selected for the virtual machine and all the application configuration you did on it are lost, so the data disk matters. The data disk comes from managed disks, and the managed disks relevant here come in two redundancy types: locally redundant storage (LRS) and zone-redundant storage (ZRS). LRS provides 11 nines of durability per year and ZRS provides 12 nines. LRS can withstand only a server rack failure or a disk failure, because the data is replicated three times within the same data center; ZRS also replicates the data three times, but across different availability zones, so it can survive the failure of an entire zone. If you deployed your virtual machine with LRS, then even though LRS replicates three times, all copies are inside that one data center; when the data center goes down, your LRS storage and the data disk attached to the virtual machine are unavailable. That is why you should either choose ZRS for the managed disk when creating the virtual machine, or, if you selected LRS, take a snapshot of the managed disk and store that snapshot on ZRS storage. Then, even if there is a data center failure, the snapshot is replicated three times across multiple availability zones, so you can restore the data and reattach the data disk to the virtual machine without issues. In summary, there are two relevant managed disk types for the data disk, LRS and ZRS; LRS replication stays within the data center, so it cannot survive a data center failure, which is why you either create the managed disk as ZRS or keep a snapshot of the LRS disk on ZRS storage.
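The "11 nines" versus "12 nines" durability figures can be turned into numbers the same way; the sketch below only shows what that many nines means as an annual probability of losing an object, it says nothing about how Azure achieves it.

```python
def nines_to_loss_probability(nines: int) -> float:
    """Convert 'N nines of durability' into the annual probability of data loss."""
    durability = 1 - 10 ** (-nines)
    return 1 - durability  # numerically ~ 10 ** (-nines)

print(f"LRS (11 nines): {nines_to_loss_probability(11):.0e} chance of loss per year")
print(f"ZRS (12 nines): {nines_to_loss_probability(12):.0e} chance of loss per year")
# LRS (11 nines): 1e-11 chance of loss per year
# ZRS (12 nines): 1e-12 chance of loss per year
```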
In this video we will discuss virtual machine availability sets. In the previous video we saw the managed disk options available for your virtual machine and how they protect it when there is a storage failure, depending on whether you choose LRS or ZRS. In availability sets there are two important concepts: the fault domain and the update domain. The fault domain is concerned with the hardware, such as the power supply and the network; the update domain is concerned with maintenance of the server, that is, maintenance of the host on which the virtualization platform runs. On each host there are virtual machines deployed on top of Hyper-V, and within the same host there can be virtual machines belonging to multiple customers, customer X, customer Y and so on; a host is not dedicated to one particular customer. The host is a physical server, and on top of it multiple virtual machines run. Let us understand availability sets with an example. Say we have fault domain 0, fault domain 1 and fault domain 2; you can have a maximum of three fault domains and a maximum of 20 update domains. Each rack connects to two different power sources, and each fault domain has its own network switches carrying the management traffic, the data traffic and the control traffic, and providing connectivity to the storage devices. The idea of a fault domain is this: if both power supplies feeding a rack fail, or the networking device for that rack fails, the whole rack is down and every server in it, hosting virtual machines for your organization and possibly others, goes down with it. If I set the fault domain count to three, my virtual machines are spread across three racks: virtual machine 1, virtual machine 2 and virtual machine 3 each land in a different fault domain, so if the rack in fault domain 0 fails, fault domains 1 and 2 are still available and virtual machines 2 and 3 stay online. If I select only two fault domains (you can choose from 1 to 3), my virtual machines are spread across two racks and the third rack holds none of them. Now let us understand what an update domain is. As mentioned, each physical server hosts multiple virtual machines, for example virtual machine 1, 2 and 3 belonging to customer X, customer Y and customer Z.
The physical server is also called the host; whenever I say host, it is the physical server running the virtualization software, in this case Hyper-V, and the virtual machines on it are called guests. Whenever there is maintenance of a physical server sitting in one of the racks, say in rack 0 (fault domain 0) with three virtual machines of customers X, Y and Z running on it, or when the server itself has a problem, those virtual machines go offline. That is why we have the update domain concept. If I select a fault domain count of 3 and an update domain count of 5, my virtual machines are spread across multiple update domains as well as multiple fault domains: if one physical server (call it server 0) goes down, another virtual machine is available on a different physical server (server 1) in a different update domain, and because I also selected three fault domains, my virtual machines sit in different racks too. Even if server 0 and server 1 are in the same rack in fault domain 0, only the failed server is down; the virtual machines on the other update domain remain available. Now, what happens during OS maintenance or OS updates, when the hosts require a reboot? Between the reboots of each update domain, Azure ensures there is a 30-minute interval, so while one host is being updated, Azure will only touch the next update domain after 30 minutes. This ensures the virtual machines running in different update domains within an availability set are not all impacted at once. So the fault domain takes care of hardware failure and the update domain takes care of OS or host maintenance; that is how fault domains and update domains work. Now that we understand what an availability set is, how do we create one? One option is while creating the virtual machine: you create the availability set and choose the fault domain and update domain counts, and once you set, say, fault domains to three and update domains to five, you cannot change that availability set afterwards. The availability set is essentially a placement policy that you create and then attach to the virtual machine at creation time.
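The way virtual machines are spread over fault domains and update domains can be sketched as a simple round-robin assignment. The real placement is decided by Azure; this is only to visualize why a single rack failure or a single host reboot never takes out all the VMs in the set.

```python
def place_vms(vm_count: int, fault_domains: int = 3, update_domains: int = 5):
    """Round-robin placement of VMs across fault domains and update domains."""
    placement = []
    for i in range(vm_count):
        placement.append({
            "vm": f"vm{i}",
            "fault_domain": i % fault_domains,    # which rack (power + network)
            "update_domain": i % update_domains,  # which host-maintenance batch
        })
    return placement

for p in place_vms(6):
    print(p)
# vm0 -> FD 0, UD 0 ... vm5 -> FD 2, UD 0: no single rack or update batch holds every VM
```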
So while creating the virtual machine you have the option to create the availability set, or you can create it beforehand, choosing the fault domain and update domain counts; say you have created availability set AS-01, then while creating the virtual machine you attach that availability set with its fault domain and update domain settings. When you have a multi-tier application, you can have one availability set for application tier one with one combination of fault and update domains, and the virtual machines of another tier, say the database tier or the middleware, in a different availability set with a different combination, for example two fault domains and three update domains for one tier and three fault domains and six update domains for another. Basically you select the availability set based on the virtual machines or workloads. When should you use virtual machines at all? When you want complete control of the operating system, you deploy virtual machines as infrastructure as a service; when you want to do some kind of testing and development; when there is a custom application running in your on-premises environment that you want to bring onto the cloud and install on a virtual machine; when you are extending your data center; and when you are using your on-premises infrastructure and want to use Azure as your DR site. In this video let us compare virtual machine scale sets, availability sets and availability zones. When you configure a virtual machine scale set, a load balancer is deployed automatically and the virtual machines sit behind it; based on the traffic received by the load balancer the virtual machines can scale out or scale in, which is horizontal scaling. The virtual machines behind the load balancer must match in size, disk and application configuration, and as traffic increases the scale set can grow up to 1,000 virtual machines, using either standard images or custom images. To get a highly redundant environment we can combine virtual machine scale sets with availability sets and availability zones: the scale set's instances can be placed in availability zone 1, availability zone 2 and availability zone 3, or within an availability zone we can also use an availability set with whatever fault domain and update domain configuration we have set. Based on the traffic the load balancer receives, the scale set scales out (increases the virtual machine count) or scales in (decreases the count); that is horizontal scaling. Now let us see what availability sets are. Availability sets work at the data center level: they are a logical grouping of virtual machines, and you group your virtual machines based on fault domain and update
Availability sets work at the data center level. They are a logical grouping of virtual machines, and you group your virtual machines based on fault domain and update domain. The maximum is three fault domains and twenty update domains, and with the combination of the two we ensure that the virtual machines we deploy are reliable. When you use an availability set you should place at least two virtual machines in it so that they are spread across multiple racks. Availability sets offer improved virtual-machine-to-virtual-machine latency, because the virtual machines are in the same data center, so the latency between them is very low. Availability sets are still susceptible to data center failures: if there is a data center outage, the availability set you created fails along with it.

A VM can be added to an availability set only when you create the virtual machine. If you want to change the availability set later, you have to delete the virtual machine and recreate it in the other availability set. For example, take availability set as-001 configured with fault domain 2 and update domain 7; if I later want to change that to 3 and 6, I cannot, because once the configuration of an availability set is done it cannot be changed. Likewise, once a virtual machine has been placed in as-001 I cannot move it to as-002; I have to delete the virtual machine and recreate it, which is as good as creating a new virtual machine and placing it in availability set 002.

Now let us see what availability zones are. Availability zones live inside a region; each region has a minimum of three availability zones, and each availability zone has multiple data centers. When you deploy virtual machines into two different availability zones, the latency between them is roughly 2 milliseconds. Zone 1 and zone 2 are separated by different power, cooling and networking infrastructure, and whenever patching or maintenance happens on the hosts of availability zone 1, the hosts in availability zone 2 are not touched at the same time. If you create your virtual machines in two different zones you have a very highly redundant environment, because one virtual machine is in availability zone 1 and the other in availability zone 2, each backed by a different power source and network infrastructure. So this is how you can configure your workloads to be scalable, highly available and redundant.
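A small sketch of placing two virtual machines into two different zones of the same region with the CLI; all names are placeholders:

# one VM in zone 1 and one in zone 2 of the same region
az vm create --resource-group demo-rg --name vm-zone1 --image Ubuntu2204 --zone 1 --admin-username azureuser --generate-ssh-keys
az vm create --resource-group demo-rg --name vm-zone2 --image Ubuntu2204 --zone 2 --admin-username azureuser --generate-ssh-keys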
In this video we will discuss Azure Virtual Desktop. Azure Virtual Desktop is a fully managed desktop solution from Azure. It offers a full desktop, or RemoteApp to deliver individual applications. You can use Microsoft applications inside Azure Virtual Desktop, and you can also bring your own business applications into the virtual desktop solution. We can manage desktops and applications from different Windows client and Windows Server operating systems, and we can use a hybrid configuration to build the virtual desktop infrastructure using both the private and the public cloud. We can use our own custom image or an image from the Azure gallery, and we can also dedicate a desktop to a single user. When we configure the Azure Virtual Desktop environment we can scale automatically based on demand, we can configure application groups to target a certain segment or department, and we can use the Azure CLI or the Azure portal to do the configuration.

Let us understand why we need Azure Virtual Desktop in the first place. Take a company called Loadex with 500 employees. The IT department has to issue 500 laptops, and each laptop carries the operating system, the applications required by the business, some applications for third-party vendor interaction and some applications for management purposes. These laptops have to be secured, they run antivirus, and all 500 of them connect to a central repository for antivirus updates, security patches and service packs. The IT department has to maintain all 500 laptops, install the operating system on each one, and upgrade all 500 operating systems whenever the vendor releases a new upgrade; taking Microsoft as an example, if the laptops run Windows 11 or Windows 10, the service packs and updates have to be installed on every machine. They also need to make sure every laptop is backed up, typically with an agent on the laptop that backs up to a central repository or archival storage.

To avoid maintaining and managing 500 laptops, the IT department can instead use Azure Virtual Desktop on the Azure cloud. With Azure Virtual Desktop you have the flexibility of choosing the virtual machine size and creating a host pool; a host pool is nothing but a group of virtual machines that together provide the VDI solution. We can configure Windows 11 or Windows 10 as the operating system for the virtual desktop environment, so once a user logs in to the virtual desktop infrastructure they get either Windows 10 or Windows 11 as their desktop. The virtual desktop infrastructure is then configured with an application group, and this application group contains the business applications, the management applications and any third-party vendor applications.
You group these applications into an application group and attach it to the virtual desktop environment, so whenever a user logs in to the VDI environment they get a virtual desktop with Windows 10 or Windows 11 plus whatever applications you selected for that application group. These can be Office 365 applications, management applications, or, for a software developer, several IDEs or CI/CD tools already installed as part of the application group. You package all these applications and deliver them as a group, and you can also group applications by department: the finance department gets one set of applications, marketing a different set, IT services another. This grouping is delivered through a workspace: you create an application group, create a workspace and add the application group to that workspace, so finance users who access the virtual desktop get the finance applications, marketing users get their own set, and IT services users get theirs.

The hosts that are part of the host pool are nothing but virtual machines; you select the virtual machine size and place those virtual machines into the host pool. From the host pool you can provide either dedicated virtual desktops or session-based virtual desktops. A dedicated desktop is called a personal desktop and a session-based desktop is called a pooled desktop. From the pooled desktops the user gets a session; from a personal desktop the user gets a dedicated machine, so whenever that user connects they get the same profile, and My Documents, Downloads and so on do not change; whatever documents they saved are still there. With the session-based pooled desktop, on the other hand, when a user logs out and logs back in they get a new session, so whatever was saved in My Documents or Downloads in the previous session is not carried over; every logout and login gives the user a fresh session. There is much more to virtual desktops in Azure, and it is a big topic on its own, but for AZ-900 we only need to understand these basic concepts.
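As a very rough sketch, assuming the desktopvirtualization extension of the Azure CLI is installed, a host pool, application group and workspace could be created along these lines; every name is a placeholder and the exact flags may differ by CLI version:

# pooled host pool for session-based desktops
az desktopvirtualization hostpool create --resource-group avd-rg --name hp-pooled --location eastus --host-pool-type Pooled --load-balancer-type BreadthFirst --preferred-app-group-type Desktop

# application group attached to that host pool
az desktopvirtualization applicationgroup create --resource-group avd-rg --name ag-finance --location eastus --application-group-type Desktop --host-pool-arm-path "/subscriptions/<sub-id>/resourceGroups/avd-rg/providers/Microsoft.DesktopVirtualization/hostPools/hp-pooled"

# workspace that publishes the application group to users
az desktopvirtualization workspace create --resource-group avd-rg --name ws-finance --location eastus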
In this video we will understand Azure containers. Containers are a virtualization environment; they are an operating-system-level virtualization platform. Containers are lightweight, scalable and portable, and Azure supports Docker as the container platform; Docker is an open-source platform for running containers.

So what is a container all about? Take the example of a virtual machine: a virtual machine requires a virtualization platform, in this case Hyper-V, and Hyper-V needs hardware. So we have the hardware, the virtualization platform on top of it, the virtual machine on top of that, and on top of the virtual machine we run the operating system. That operating system is typically tied to one application: if I am using Windows or Linux and on top of it I run a SQL database, that operating system is tied to that database; if I run a Python-based or Java-based application, the OS is again tied to that one application. Containers solve this problem of one application per operating system, which is why a container is called OS-level virtualization, or an operating system virtualization platform. With plain virtual machines, each virtual machine runs an operating system and one application. What if I want to run a Python application and a Java application on the same operating system? This is where containers come into the picture: since a container virtualizes the operating system, on the same virtual machine, on the same underlying hardware, we can run multiple containers, one with a Python-based application, one with Java, one with Node.js, and another running a .NET application. One more important property of containers is portability: if a container runs on a virtual machine with a Linux operating system, you can take the same container and run it on a Windows-based or macOS-based machine as well.

Now, how exactly are we able to run containers? To run a container we need something called an image. The image is a trimmed-down version of the operating system; it contains only the minimal files needed for the application to run, basically the libraries, the dependencies and the runtime, and nothing more. It does not carry everything an operating system carries, such as file services, DHCP, DNS and the many other services, nor the drivers that ship with an operating system: networking drivers, video drivers, sound drivers, USB drivers for all the peripheral devices. None of those files are in the container image. To run a PHP application there is a PHP image, to run Node.js there is a Node.js image, to run SQL there is a MySQL image. So we have the image, using the image we run the container, and the container runs on a virtual machine; there can be multiple containers running on one virtual machine. To run a container on a virtual machine or a bare-metal server, we just issue a docker run command with the image name and a few parameters; for example, to map a port we use the -p port-mapping option.
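For instance, a minimal sketch using a public image from Docker Hub (the image and port choices here are just illustrative):

# pull the image and start a container, mapping port 8080 on the host to port 80 inside the container
docker run -d --name web -p 8080:80 nginx

# list the running containers
docker ps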
The images we use to create containers are stored in something called a registry. There is a public registry at hub.docker.com, a repository holding many publicly available images, and we can simply pull images from hub.docker.com and build our containers from them. The next question is, can we build our own images? Yes: we take a base image, layer whatever files we want on top of it, and package the result as a separate image, which we can then use to run our containers. For example, if I want to use the httpd image, which is the Apache web server image, I can modify the httpd configuration, add my own content such as an index.html under the web root, package all of that as my own image, and run that image in my environment as a container.

So what options are available on Azure to run containers? The first option is infrastructure as a service: build your own virtual machine, install the Docker engine and run the containers yourself. The second is Azure Container Instances; this is the fastest way to run your containers and it is a PaaS solution, so there is no virtual machine for you to install. You create the ACI, choose the registry, which can be hub.docker.com or a private registry such as Azure Container Registry, the image is pulled, and the container runs. Azure Container Instances does not scale and does not offer as many options as Azure Container Apps; when you create a container with ACI you are given an FQDN, and using that FQDN or the IP address you can reach your container. The third option is Azure Container Apps: you create a container app environment, and within it you can scale, deploy multiple versions of your application, and choose from multiple runtimes. The fourth option is Azure Kubernetes Service, a production-grade container orchestration service in which you can run thousands of containers and manage and scale your deployments. With Azure Kubernetes Service the containers run as pods, which is the term used in Kubernetes, and you can scale, monitor and deploy many containers and keep multiple replicas of the same pods.

In summary, a container is a virtualization environment, an operating-system-level virtualization platform; containers are lightweight, portable and scalable; Azure supports Docker as the container platform; and there are multiple ways to deploy containers in the Azure cloud, which we have discussed in this video.
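As a small sketch of the Azure Container Instances path, assuming a Microsoft sample image and placeholder names:

# run a public container image as an Azure Container Instance, exposing port 80 with a public FQDN
az container create --resource-group demo-rg --name aci-demo --image mcr.microsoft.com/azuredocs/aci-helloworld --ports 80 --dns-name-label aci-demo-12345

# show the FQDN that was assigned
az container show --resource-group demo-rg --name aci-demo --query ipAddress.fqdn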
Now let us discuss Azure Functions. When I want control of the operating system and the application, I deploy a virtual machine, so I have control of both. If I want to use microservices, I choose containers, web applications, or Kubernetes as a service. So what are Functions all about? If I want a service that acts on a trigger, for example an HTTP trigger, or real-time data streamed through Event Grid that I want to react to, then I use a Function. A Function basically lets you run your code without worrying about the underlying platform or infrastructure, and Functions are commonly used when you need to perform work in response to an event.

Take a document processing application as an example: multiple users accessing the application through a web browser are constantly sending documents, and each document has to be stored in Azure Blob storage. Whenever a document is sent, an event fires and the function stores that document in a particular container of the blob storage account; this is about the simplest use case for a Function. Functions also scale automatically based on demand, and a Function runs only when there is a trigger; otherwise the resources are automatically deallocated. Functions can be used with stateless or stateful applications; in a stateful application a context is passed through the function to track prior activity, so a sequence of state is maintained across invocations. And as we know, Functions are a key component of serverless computing.

Let us look at the types of triggers that make a Function run: there are HTTP triggers, queue triggers, blob triggers, timer-based triggers and Event Grid based triggers. Which programming languages do Functions support? Functions support Python, C#, Java and JavaScript, and we can also run things like batch scripts; for example, if we want a Function to carry out a certain action on a schedule, we can write the script and use a timer-based trigger, and we can use any of these languages to write the code that runs when the function is triggered.
When you create a function app you need to provide a name for it, and the domain azurewebsites.net is automatically appended; that is the default domain name you get with a function app, so the name you choose plus azurewebsites.net becomes the URL you use to reach it from a web browser. When you create a function app you are offered three hosting plans. The first is the consumption-based hosting plan: this is event driven, so when there is an event the function app runs, and once it completes the activity the resources are automatically deallocated. The second is the premium plan, which gives us network isolation and lets us scale the function app based on demand. The third is the app service plan; this is the same app service plan we use to host web applications, mobile applications and web jobs, so one app service plan can host any of these services, and for a function app it means you are effectively choosing the server, the virtual machine, on which the function runs. So for a Function there are three hosting options: consumption, premium, and an app service plan.
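A minimal sketch of creating a function app on the consumption plan from the CLI; the storage account and app names are placeholders and must be globally unique:

# storage account that the function app uses internally
az storage account create --resource-group demo-rg --name funcdemostorage123 --location eastus --sku Standard_LRS

# function app on the consumption (serverless) hosting plan
az functionapp create --resource-group demo-rg --name func-demo-12345 --storage-account funcdemostorage123 --consumption-plan-location eastus --runtime node --functions-version 4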
In this video we will understand the application hosting options available in the Azure cloud. If I want full control of my application I can use a virtual machine and install the application on it. I can also install the application in containers, by running the image of my application, and that image can come from hub.docker.com. Then there is another service called Azure App Service, which is platform as a service: with Azure App Service we bring our code and deploy it onto the service. App Service enables us to build and host web applications, it supports automatic scaling and high availability, and it supports both Windows and Linux, so you select the operating system and deploy your code onto it. You can also integrate with GitHub, Git or Azure DevOps for the CI/CD pipeline; all you have to do is build the code and deploy it onto App Service, and Azure focuses on keeping the environment up and running for the application you have deployed, which is what you expect from platform as a service. Azure App Service is an HTTP-based service for hosting your web applications, and it supports REST APIs and mobile back ends as well.

We keep saying apps, apps, apps, so what are these apps? Apps are nothing but applications; app is simply the short form of application. When we deploy an application the traditional way, on a virtual machine or a physical server, the application is basically modules of code, which can be combined into an executable that we run to install the application on the machine. To access that application we have something called an application client: the client application interacts with the application installed on the server, the application server does the business or logical processing, we give some input from the client, all the logic and processing happens at the application layer, and the output is sent back to the client machine. That is how a traditional application works, with an application server and an application client. In Azure the same kind of application is called a web app, because it is accessible via the browser: we access the app using the browser, provide the input, the logic is performed at the web application layer, and the result comes back to the browser.
The supported languages are .NET, .NET Core, Java, Ruby, Node.js, PHP and Python; we use any one of these, deploy the application, and it runs as a web application accessible from the browser. Usually a plain browser page is just static HTML with CSS, the cascading style sheet; when we embed logic that interacts with the web application, the page the browser loads contains scripts that talk to the application. Let us understand this with a banking application. I have a banking application running in the cloud, perhaps the Azure cloud; how do I access it? I use the web browser to open the net banking website and enter my user ID and password to connect to the banking application. This is a dynamic web page, with embedded scripts that interact with the banking application; it is not a static page like a news portal or a blog. So through the browser and those embedded scripts you are interacting with the bank's web application.

Now let us understand API apps. Much like hosting a website, you can build REST-based APIs using your choice of language and framework, you can use the Azure Marketplace to publish your APIs, and the resulting apps can be consumed from HTTP or HTTPS based clients. For a use case we can take the Azure cloud itself: how do we interact with Azure? We use the CLI, the portal or the SDKs, and we issue commands to create virtual machines, virtual networks, storage accounts, Kubernetes clusters, containers and so on. Whenever you issue any of these commands, the request goes to Azure's management layer, Azure Resource Manager, which exposes exactly this kind of API and which in turn has multiple resource providers, for compute, for storage, for networking, for containers and Kubernetes; everything you issue from the CLI, portal or SDK is sent as an API request. Let us take another use case, a mobile wallet. With a mobile wallet on my phone I can make payments to vendors such as the phone company, the water supply or the electricity board. On the mobile there is an API integration which acts as the client; this client talks to the API app over HTTPS or HTTP using REST-based API requests to carry out the transaction, and on the back end the API app serves the requests coming from the mobile device. These are the use cases for API apps.
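To make the Resource Manager point concrete, even the CLI is ultimately sending REST calls; a small sketch, where the api-version shown is just an example value:

# the same kind of call the portal, CLI and SDKs make under the hood:
# a REST request against the Azure Resource Manager endpoint
az rest --method get --url "https://management.azure.com/subscriptions?api-version=2020-01-01"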
Next, web jobs. Web jobs are often used to run background tasks as part of your application logic. Suppose I have a document processing application that accepts documents from multiple users, and those documents arrive in different sizes. When the application accepts the documents it hands them to a web job, and the web job's job is to convert them to a standard size such as A4 or A3. The main front-end application gives an immediate response to the users, whereas the web job only works on a trigger: it converts documents only when it is triggered to do so. For web jobs multiple languages are supported: executables, Java, PHP, Python and Node.js, and we can also use commands and scripts such as batch, PowerShell or Bash.

Now let us understand mobile apps. We can use mobile apps to build a back end for iOS and Android applications, and we can store the content of the mobile apps in a cloud-based SQL database. There are a number of authentication providers we can integrate with, such as Google, Microsoft, Twitter and Facebook; we can also configure push notifications and execute C# or Node.js based custom logic for the mobile apps.

These are the four types of apps you can deploy on the Azure cloud: web apps, web jobs, mobile apps and API apps. To deploy any of them you create something called an app service plan, and using the same app service plan you can deploy all four types. The app service plan is essentially the SKU you choose for running your web applications; the SKUs are Free, Shared, Basic, Standard, Premium and Isolated. The virtual machine size behind your apps follows from the SKU you choose, and each SKU gives different configuration options: more disk space for the application files, virtual network integration, SSL configuration and other options that come with the higher SKUs.
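As a small sketch of that flow from the CLI, with placeholder names and a Basic-tier SKU:

# app service plan (the SKU / underlying compute)
az appservice plan create --resource-group demo-rg --name plan-demo --sku B1

# web app hosted on that plan; the name must be globally unique
az webapp create --resource-group demo-rg --plan plan-demo --name webapp-demo-12345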
In this video we will discuss Azure virtual networking. An Azure virtual network is also called an Azure VNet, and it is an infrastructure-as-a-service resource. Whenever you create a virtual network, it is similar to a traditional or legacy data center: there you have physical servers, and those servers connect to an Ethernet switch, the LAN switch. In the same way, when you create a virtual network you are effectively given a router, and with this router you can create multiple networks called subnets; the virtual network is one big network, and it can contain multiple subnets, subnet 1, subnet 2, subnet 3 and so on. This router provides DHCP and DNS, just like the Wi-Fi router at home, which hands out DHCP IP addresses to your mobile or your laptop and does DNS forwarding so those devices can reach the internet. Whatever virtual machines you deploy in the virtual network can communicate with each other: any virtual machine created in one subnet can talk to any other virtual machine within the same VNet. The virtual machines can also reach the internet, google.com, microsoft.com or any other website, because outbound connections to external networks are allowed by default. Apart from communication within the virtual network and access to the internet, we can also connect the virtual network to our own data center, either through a virtual private network or through ExpressRoute.

So what does a virtual network give us? It provides isolation and segmentation; it provides internet communication, since any virtual machine inside the VNet can reach the internet; it lets Azure resources communicate with each other inside the VNet; it lets us communicate with on-premises resources in our own data center; we can manage and route the network traffic the way we want; and using network security groups we can filter inbound and outbound traffic. We can also connect virtual networks to each other: if I have virtual network 01 and another virtual network 02, each with its own subnets and virtual machines, I can connect the two. Azure virtual networking supports both public and private endpoints. The virtual machines you create and deploy inside the virtual network get a private IP address from the address range you defined for the VNet: for example, if I used 10.18.0.0/16 for the virtual network and 10.18.0.0/24 for a subnet, the virtual machine gets an IP address from that range, for example 10.18.0.6/24.
For each subnet in the virtual network, five IP addresses are reserved for Azure's internal use; apart from those five, the virtual machines get their addresses from the remaining range. You can also create and assign public IP addresses; this is for an external computer to reach a virtual machine sitting inside the virtual network. Once I assign a public IP address to a virtual machine, then from the outside world, the internet, I can use my PC or laptop to connect to that machine over the public IP.

On isolation and segmentation: we can carve multiple subnets out of the virtual network we created, and each subnet gets its own range of IP addresses from the range defined at the VNet level. If 10.18.0.0/16 is the virtual network, then for subnet 01 I can use 10.18.0.0/24, for subnet 02 I can use 10.18.1.0/24, for subnet 03 I can use 10.18.2.0/24, and I can keep creating subnets out of that range; from each subnet's range one of the IP addresses is assigned to each virtual machine in it. For name resolution we can use the Azure internal DNS or an external DNS. As I said, when we create the virtual network we are basically creating a router in the Azure cloud, and that router provides DHCP and DNS; I can reach any resource created inside the subnets using its DNS FQDN through the internal Azure DNS, or, if I have an on-premises setup in a hybrid cloud with my own DNS server, I can configure that external DNS for these subnets instead. For internet access, all resources created inside the virtual network have outbound connectivity enabled by default, so any virtual machine in the VNet can reach the internet, and we can also create a public IP address and assign it to a virtual machine so that external users can connect in.
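A minimal sketch of creating the virtual network and two of the subnets described above; names are placeholders and flag spellings can differ slightly between CLI versions:

# virtual network with a first subnet
az network vnet create --resource-group demo-rg --name vnet-01 --address-prefixes 10.18.0.0/16 --subnet-name subnet-01 --subnet-prefixes 10.18.0.0/24

# second subnet carved out of the same address space
az network vnet subnet create --resource-group demo-rg --vnet-name vnet-01 --name subnet-02 --address-prefixes 10.18.1.0/24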
It is not only that the resources inside the virtual network can talk to each other. You have the virtual network with, say, subnet 01 and subnet 02, and you may also want to reach other Azure services, the PaaS-based services such as web applications, Azure SQL, containers or Kubernetes, which do not sit inside your virtual network; you do not configure a virtual network for those PaaS solutions. If I want to reach one of those services from a virtual machine, I can allow virtual network access on that resource itself: when I configure the web application, for example, I have the option to allow access from the virtual network and from a certain IP address range. If my virtual network is 10.18.0.0/16 and I allow only the subnet 10.18.2.0/24, then all the resources in that subnet can talk to the PaaS solution, the web application, and nothing else can; this is what communicating between Azure resources securely means. Whatever communication happens between the virtual network and these other services stays within the Azure backbone network. We can also use service endpoints to connect to other Azure resource types such as Azure SQL Database and storage accounts: you enable the endpoints on the subnet from which you want to reach those services, and the resources in that subnet can then reach them over the Azure backbone.
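A small sketch of enabling service endpoints on a subnet so it can reach storage and SQL over the backbone; the names are placeholders:

# turn on service endpoints for Storage and SQL on subnet-01
az network vnet subnet update --resource-group demo-rg --vnet-name vnet-01 --name subnet-01 --service-endpoints Microsoft.Storage Microsoft.Sql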
Beyond all this, you can also configure the virtual network to connect to your own data center. From the Azure cloud, your virtual network can connect to the data center over a VPN, either point-to-site or site-to-site, or over ExpressRoute, so there are three options for connecting your virtual network to your on-premises data center. ExpressRoute provides high bandwidth because it is a direct connection between the Azure data center and your data center, through a colocation facility or a direct link. Point-to-site is for users connecting from remote locations: there can be many users with laptops or desktops who want to reach your virtual network and access resources; they run a VPN client on their machine, connect to the virtual network, and everything travels encrypted over the internet through the VPN. Site-to-site also goes over the internet and is also encrypted; you set up a router at the data center and a VPN gateway on the Azure virtual network side, and then the whole virtual network can communicate with your data center. The difference between point-to-site and site-to-site is that in point-to-site individual users with a VPN client access the Azure resources, whereas in site-to-site you configure the router so that the resources in your data center can reach the Azure resources; one is individual users, the other is data-center-to-Azure-network communication. The third hybrid option is ExpressRoute, which you use for greater bandwidth and an even higher level of security; the only drawback is that it takes time to get the connection provisioned at the Azure network side and at your data center.

So how does the Azure virtual network route traffic? By default, all the resources you create inside the virtual network are allowed to communicate with each other, within the network and within the subnets; they can also communicate with your on-premises network if you have configured any of the three hybrid connection types, point-to-site VPN, site-to-site VPN or ExpressRoute; you can also connect two virtual networks; and by default outbound connectivity to the internet is allowed, so every resource deployed in the virtual network can reach the internet. For more control over routing you can define route tables, and you can use BGP, the Border Gateway Protocol, to route between networks. After creating the virtual network and subnets and connecting to the on-premises data center, how do I control or filter the network traffic? You can filter traffic with a network security group, controlling inbound and outbound traffic through security rules, or you can set up a network virtual appliance so that all inbound and outbound traffic flows through that appliance; that is another way to filter the traffic.

We can also connect two different virtual networks: if this is virtual network vnet-01 with some subnets and resources, and this is virtual network vnet-02 with its own subnets, and I want them to communicate, I configure something called VNet peering. A virtual machine in a subnet of vnet-01 can then reach a virtual machine in vnet-02 over the peering, and this traffic stays on the Azure backbone network, the network internal to Azure, so it never enters the public internet. Peering enables resources in each virtual network to communicate with each other, and the virtual networks can even be in separate regions; I can have one virtual network in one region and the other in a different region and still configure VNet peering and let the resources communicate. Finally, there are user-defined routes, UDRs; these are different from the system routes and can be configured for greater control over network traffic flow, when you want to decide yourself which resources talk to which resources over which path.
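A minimal sketch of the VNet peering just described, configured in both directions; the names are placeholders:

# peer vnet-01 to vnet-02 and back, so traffic flows over the Azure backbone
az network vnet peering create --resource-group demo-rg --name vnet01-to-vnet02 --vnet-name vnet-01 --remote-vnet vnet-02 --allow-vnet-access
az network vnet peering create --resource-group demo-rg --name vnet02-to-vnet01 --vnet-name vnet-02 --remote-vnet vnet-01 --allow-vnet-access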
In this video let us understand Azure virtual private networks; this is a continuation of the Azure virtual network topics from the earlier videos. Before going to Azure, let us see what a VPN is in general. Say I am an individual user and I want to connect to my office network. What are the possibilities? Could I run a cable directly from my laptop to the office network? Not really, because there are thousands of users working from remote locations or from home, so we have to use the internet as the medium; the internet connection from your provider already exists, and you use it to reach your office network. What you actually do is create a tunnel over that internet connection, and it is a secure tunnel, because the internet is a public network and nobody should be able to read your corporate data. When you use the internet to reach your office network you also need to authenticate, typically through a VPN client installed on your laptop or desktop; you authenticate through the VPN client, connect to the office network, and then you can access all the services, mail, SharePoint, web services, everything, over the internet through the VPN tunnel. When an individual user accesses the corporate or remote office network like this, it is called point-to-site, because a single user, the point, is connecting to the office network, the site.

But what if I have two offices, a head office and a branch office, and I want to connect the two networks? You connect them using routers, again over the internet, or over a leased line. If you are using the internet, you create an end-to-end tunnel across it, and again it is a secure tunnel, because the data moving from one office to the other must not be sniffed or hacked. At each office location you configure a router, the routers connect over the internet, and that connection is the tunneled, encrypted network.

Now let us apply the same concept to Azure. Again, here I am, and there is the Azure network, the Azure cloud. If I want a point-to-site connection to the Azure network, I again go over the internet from my laptop. For that, on the Azure side, you need to create something called a VPN gateway; before that you need a virtual network, and inside it you create a special subnet called the gateway subnet. In any given virtual network you can have only one VPN gateway, and only one gateway subnet is allowed per virtual network; from that gateway subnet you create the VPN gateway.
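A rough sketch of those steps from the CLI; gateway creation takes a long time to provision, the SKU and IP settings are only one reasonable combination, and all names are placeholders:

# dedicated gateway subnet inside the existing virtual network
az network vnet subnet create --resource-group demo-rg --vnet-name vnet-01 --name GatewaySubnet --address-prefixes 10.18.255.0/27

# public IP used by the gateway, then the VPN gateway itself
az network public-ip create --resource-group demo-rg --name pip-vpngw --sku Standard
az network vnet-gateway create --resource-group demo-rg --name vpngw-01 --vnet vnet-01 --public-ip-address pip-vpngw --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 --no-wait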
When you create the VPN gateway, two instances are actually deployed; Azure does this automatically, and the pair can run as active-passive (active-standby) or as active-active. For an end user connecting to the virtual network, a secure tunnel is created over the internet, and through that tunnel the user reaches the VPN gateway and, through it, the resources in the other subnets of the virtual network. Whenever you see a question about point-to-site, remember that point-to-site means individual users connecting to your Azure network; there can be one user or many. There are multiple SKUs available for the gateway, starting at around 128 connections and going up through 500 and 1,000 connections, so if you need a thousand individual users to connect to your virtual network you pick the SKU that supports that.

Now, what if I have an on-premises data center and I want to connect it to the Azure cloud? I use the same building blocks: deploy the virtual network, create the gateway subnet, and deploy the VPN gateway, which again creates two instances automatically. On the on-premises side you have a router, and you configure that router and the VPN gateway to talk to each other using public IP addresses; the VPN gateway has a public IP as its endpoint and so does your router, and of course the individual end users also have public IPs whenever they are on the internet. So over the public network you connect into your Azure network and access the private resources. The authentication can be Azure AD, Azure Active Directory, certificates, or RADIUS, the remote access dial-in user service that has been around in Windows for a very long time, since Windows 2000 or even NT 4.0. For site-to-site we use a pre-shared key configured at both ends, and a tunnel is created; this is for security, because this traffic must not be sniffed or tampered with, which is why the tunnel is encrypted. The only drawback here is bandwidth: whatever bandwidth your internet connection provides is the bandwidth you get for connecting the networks. From the same VPN gateway you can have multiple connections, so if you have another branch office or another head office, you can add another connection to the same VPN gateway.
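A small sketch of the site-to-site piece, assuming the gateway created above and a placeholder on-premises router IP, address range and pre-shared key:

# object that represents the on-premises VPN device and its address space
az network local-gateway create --resource-group demo-rg --name onprem-router --gateway-ip-address 203.0.113.10 --local-address-prefixes 192.168.0.0/16

# site-to-site connection secured with a pre-shared key
az network vpn-connection create --resource-group demo-rg --name azure-to-onprem --vnet-gateway1 vpngw-01 --local-gateway2 onprem-router --shared-key "Replace-With-A-Secret"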
That other site can be your on-premises data center in location one, with another office in location two; you use the same VPN gateway, configure the router at each location with its public IP address and the pre-shared key provided on the Azure side, and you can reach your network.

Let us get back to the notes and see what a VPN gateway is. A VPN gateway is a type of virtual network gateway, and with it we can build site-to-site connections and point-to-site connections, which we have discussed. Apart from those two, we can also connect one virtual network to another: we create VPN gateways and connect vnet-01 and vnet-02. In the previous video we also saw how to connect two virtual networks with VNet peering, which uses the Azure backbone network and does not go over the internet; here, in contrast, we need public IP addresses on both sides and the traffic goes over the public internet, but the data transferred from source to destination is always encrypted. So if you want to connect VNet to VNet, a VPN gateway is another option. You can deploy only one VPN gateway per virtual network, as we have seen, and you can have multiple connections from the same VPN gateway, which we have also discussed.

When setting up a VPN gateway you must specify either policy-based or route-based; this basically determines how the traffic to be encrypted is chosen, and in Azure, regardless of the VPN type, a pre-shared key is employed. Policy-based VPN gateways use policies on IP address prefixes to decide which packets are encrypted through each tunnel, whereas route-based VPNs use IP routing, static or dynamic, to decide which tunnel each packet should use. Route-based VPNs are the preferred choice for site-to-site connections, and a route-based VPN gateway is recommended when you have point-to-site connections, when you have multi-site connections, and when the VPN gateway coexists with an Azure ExpressRoute gateway.

Now let us look at the high-availability scenarios for a VPN gateway. A VPN gateway can be configured as active-standby, and by default, when you create a VPN gateway, it is deployed as active-standby; in that mode you use only one public IP address for your point-to-site or site-to-site connections. For active-active you need two public IP addresses, one for each instance, since two instances are created when you deploy the gateway. Another high-availability scenario is combining the VPN gateway with ExpressRoute: you can use a VPN gateway to connect to your on-premises data center and at the same time use ExpressRoute to connect to the same on-premises network; these are two different types of virtual network gateway, one exclusively for site-to-site over the internet and the other for ExpressRoute, which requires physical cabling and colocation facilities. If the ExpressRoute connection fails, the VPN gateway over the internet can be used to reach your on-premises resources or the Azure cloud, or vice versa, so that is another high-availability scenario.
Also, since the VPN gateway is deployed as two instances, one instance can be placed in availability zone 1 and the other in availability zone 2, which again improves availability. So that is how we can configure Azure VPN gateways and use them in a highly available design.

In this video let us understand Azure ExpressRoute. Azure ExpressRoute is a dedicated connection from your on-premises network, your on-premises data center, to the Azure network; you use a direct connection to reach your Azure resources. In the previous video we discussed the VPN gateway, and it has a constraint: it is comparatively slow, because the bandwidth is whatever the internet gives you and the traffic is encrypted, so it is best used for less critical workloads or for development and test. If you want high availability when connecting to your data center, you can use both a virtual private network gateway and ExpressRoute on the Azure end, each connecting to the on-premises network through its router; ExpressRoute is the high-bandwidth path and the VPN is the low-bandwidth path, and if there is any problem with the ExpressRoute connection you can fail over to the lower-bandwidth VPN.

What are the features and benefits of ExpressRoute? You get connectivity to Microsoft cloud services across all regions, and global connectivity to Microsoft services, because with ExpressRoute you have access not only to your Azure virtual networks but also to Office 365. ExpressRoute uses dynamic routing, basically BGP, to route the IP traffic, and there is built-in redundancy at every peering location. As I said, you get direct access to the Office 365 applications because ExpressRoute gives you something called Microsoft peering, and with private peering you connect to your Azure virtual network and can reach all the resources in its subnets.

Let us look at global connectivity. Say I have a data center DC1 located in the US and a data center number two in Europe; these are two on-premises data centers in different geographies, not Azure regions. The European data center uses ExpressRoute to reach Azure resources, and the US data center also uses ExpressRoute. With ExpressRoute Global Reach I can use those same existing ExpressRoute connections, through the Microsoft network, to connect my own data centers to each other and exchange data between the US and Europe over the ExpressRoute circuits rather than over the internet; there is no need for a separate internet connection between the data centers, and that is why Global Reach is useful. ExpressRoute uses the BGP protocol for dynamic routing.
Built-in redundancy: whenever you use ExpressRoute to connect your data center to the Azure network, there is a service provider supplying the connectivity from your on-premises site to Azure, and those providers build their networking stacks from highly available, redundant components. That is why the connectivity from your on-premises data center to the Azure network is highly available.

Let us see the ExpressRoute connectivity models; there are four of them. Let us start with colocation at a cloud exchange. A colocation facility is a place where you host your equipment; multiple customers, say customer X and customer Y, can host their resources in the same facility, which is rented out by a third-party vendor. So you have hosted some of your services in a colocation facility, a large data center, and you have your network configured there; that network connects to the cloud exchange, which belongs to the service provider. The colocation side is your customer equipment (CE) and the cloud exchange is the partner or service provider equipment; through the partner equipment you connect to the Microsoft edge equipment, and that connects into the Azure network, to your ExpressRoute virtual network gateway.

You might say: I am not hosting my services in a colocation facility, I have my own data center from which I want to connect to the Azure network. In your data center you already have a physical network configured, sometimes referred to as a spine-and-leaf network, or as the access layer and layer-3 network. If you have an existing network like this and you do not want to use a colocation facility, service providers offer a point-to-point Ethernet connection: you connect to the point the service provider gives you, your traffic travels through the provider's network, and eventually it reaches the Microsoft edge network, which is the backbone network for the entire Azure cloud. Once connected, you can access all the resources you have provisioned in the Azure cloud.
Similar to colocation at a cloud exchange and the point-to-point connection, in an any-to-any network you again start from your own data center and existing infrastructure. It works just like extending your network to another branch office or remote site: you use your wide area network, with routers or MPLS, and the Azure network becomes reachable like another site on that WAN. So this model is an extension of your data center using your existing infrastructure, with the help of a service provider, to connect to the Azure network. All three of these options, colocation at a cloud exchange, point-to-point Ethernet, and any-to-any network, go through a service provider, so you need a provider's help; you can check in the portal for the service providers authorized by Microsoft to offer ExpressRoute in each country.

The fourth model is ExpressRoute Direct. In this case the access-layer or layer-3 network you have configured in your data center is extended via a direct connection to the Microsoft network; there is no service provider involved, so no partner equipment is required. Your customer equipment connects straight to the Microsoft edge equipment, and from the Microsoft edge network you reach your Azure virtual network. This gives more bandwidth than the partner-based connections, because with colocation, point-to-point, and any-to-any the bandwidth is shared among different customers, whereas with ExpressRoute Direct you are not sharing bandwidth with anyone.

Security considerations: whenever you connect to the Azure data centers over ExpressRoute, the data is not exchanged via the internet; it is a private, dedicated, high-speed network between your on-premises network and the Microsoft Azure network. However, even with an ExpressRoute connection, some traffic, such as DNS queries, certificate revocation list checks, and Azure CDN requests, is still sent over the public internet; this is a point to remember for the examination.

In summary, Azure ExpressRoute is a dedicated private connection used for mission-critical workloads. Whenever you need replication, for example storage replication for backup, or high availability and disaster recovery, you can use ExpressRoute; the bandwidth it provides is sufficient for the data that has to be transferred from your on-premises network for backup or replication, and if you run Azure Site Recovery you use ExpressRoute to fail over and fail back between on-premises and Azure. There are four connectivity models: colocation at a cloud exchange, point-to-point Ethernet, any-to-any, and ExpressRoute Direct. The first three go through a service provider, and with Direct you connect to the Microsoft network without one, as summarized below.
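To keep the four models straight, the same comparison can be written down as data; this only restates the discussion above, it is not an official matrix:

```python
# Summary of the four ExpressRoute connectivity models discussed above.
EXPRESSROUTE_MODELS = {
    "colocation at a cloud exchange": {"via_service_provider": True,  "bandwidth_shared": True},
    "point-to-point Ethernet":        {"via_service_provider": True,  "bandwidth_shared": True},
    "any-to-any (IPVPN / WAN)":       {"via_service_provider": True,  "bandwidth_shared": True},
    "ExpressRoute Direct":            {"via_service_provider": False, "bandwidth_shared": False},
}

for model, props in EXPRESSROUTE_MODELS.items():
    print(f"{model:32s} provider={props['via_service_provider']}  shared={props['bandwidth_shared']}")
```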
So that was an overview of Azure ExpressRoute connections. In this video let us understand what Azure DNS is. Azure DNS is a big topic in the Azure cloud by itself, so we will cover only the overview needed for the AZ-900 exam. Azure DNS is a hosting service for DNS domains. You may have come across the term DNS, which stands for Domain Name System; there are two kinds of DNS, public DNS and private DNS.

Have you ever wondered what happens when you sit at home with a laptop or desktop, connected through your Wi-Fi router, and type www.google.com or www.microsoft.com into the browser? The domain name is sent via your router to your ISP. The ISP runs DNS servers with a resolver cache, because thousands or lakhs of users connect through the ISP and send requests back and forth for many different domain names, not only google.com or microsoft.com but news portals, blogs, any website. Whatever requests the ISP's DNS server resolves, it stores in that cache, so when the same domain name is requested again it can answer straight from the cache. If you are accessing something less common, say your company website contoso.com, and the ISP's DNS server does not know where contoso.com is, it sends the query to the root DNS servers; the trailing dot in a fully qualified name represents the root. Under the root there are multiple top-level domains (TLDs), such as .com, .edu, .org, .net, .in, .jp, and .au. Since contoso.com falls under .com, the root knows which servers are authoritative for the .com domain, and the query is sent to the .com DNS servers. The .com servers do not know the exact IP address for contoso.com, but they hold a record of which DNS server is authoritative for contoso.com, because the TLD has to maintain the registry, the DNS records, for all the public domain names registered under it.
Just as the .edu servers maintain the records for .edu domains, the .com servers maintain the records for .com domains, so they know which DNS server is authoritative for contoso.com. Since contoso.com was purchased from GoDaddy, the GoDaddy DNS servers are the authoritative DNS servers for contoso.com. The query now reaches the GoDaddy DNS server, which knows the IP address, because in the GoDaddy portal you map the domain name contoso.com to an IP address. If you buy contoso.com and never map it to anything, paying a one-year, three-year, or ten-year subscription, the name exists but it is of no use until you run some service behind it, such as a web service or mail service. Once you run a web server you definitely get a public IP address, and that public IP address has to be mapped to the domain name you purchased from GoDaddy. Once the public IP address is mapped to contoso.com under GoDaddy's authoritative DNS servers, the domain registrar's DNS server knows which public IP belongs to this domain name and routes the request using that public IP address. If you have hosted the contoso.com website in the Azure cloud, say on a virtual machine to which you assigned a public IP address, that is the address mapped here, so the request leads to the web server in Azure. The answer is returned via the .com servers and the root back to the ISP's DNS server, and from then on the ISP caches contoso.com in its DNS server, so the next time a user browses contoso.com it is resolved either from the local cache on their machine or, if that has been flushed, from the ISP's DNS server.

So now we understand that the public domain name system is the mapping of domain names to public IP addresses: we register the domain with a registrar such as GoDaddy, Namecheap, or Hostinger, we create a service that requires a public IP address, whether web services, mail services, or a load balancer, and we map the domain name to that public IP address. The domain name is helpful because I can remember www.contoso.com; I do not want to remember the IP address. This is how public domain names work, and they are routable names, meaning routable on the public network, which is the internet.
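If you want to watch that resolution happen from your own machine, a couple of lines of Python are enough; the call below simply asks the operating system's resolver, which walks through the local cache, the configured DNS servers, and ultimately the root, TLD, and authoritative servers as described above. The hostnames are only examples.

```python
# Resolve a few public hostnames to their public IP addresses using the OS resolver.
import socket

for host in ("www.microsoft.com", "www.google.com"):
    addrinfo = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
    ips = sorted({entry[4][0] for entry in addrinfo})   # de-duplicate the answers
    print(host, "->", ips)
```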
Now let us see what a private domain name is. It is similar to a public domain name, but it is limited to the enterprise or organization at the local level; it is not routable outside the enterprise network. For example, I can create contoso.local as my domain name and map it to private, non-routable IP addresses. For this, the organization runs its own DNS servers; there can be multiple DNS servers for high availability, kept in sync so that all records are copied between them, and you can place DNS servers in multiple data centers so that if one fails you can point your clients at another. You configure your clients with multiple DNS servers, so if one server has failed, name resolution still works through the others. If you run ipconfig /all you can see how many DNS servers are configured on your client, and ipconfig /flushdns is the command to flush the local DNS resolver cache, because there is a local cache on your laptop as well.

This is how public and private DNS work, and because Azure has a global DNS service in its infrastructure, it can provide both: Azure offers a private DNS service and a public DNS service. Whenever you create a virtual machine in Azure it gets a private IP address, and that VM is automatically assigned the default domain name from the private DNS, so there is nothing for you to configure; by default every VM you create in Azure gets a private domain name. For public domain names you have to register with a public domain registrar; by the way, Azure does not provide the feature of registering public domain names, so you have to use a third party for that.

So what are the benefits of Azure DNS? Let us look at them one by one. Reliability and performance: DNS domains in Azure DNS are hosted on Azure's global network of DNS name servers. If a server name starts with "ns", that naming convention tells you it is a name server, and there are many of them, so if one DNS server fails, other DNS servers handle the name resolution. Azure DNS also uses anycast networking, which provides name resolution in the quickest way because the closest available DNS server answers the query. When it comes to security, Azure DNS is based on Azure Resource Manager (ARM), so we can control access to the DNS service using role-based access control (RBAC), and we can use resource locks and activity logs
to monitor whether any DNS records have been modified. We can also lock the resources: if we apply a lock at the subscription level, the lock applies down the hierarchy to every resource; if we place the lock only on a resource, then only that particular resource is locked. For example, I can lock only the DNS zone, so my resource group is not locked, only that resource; I can place the lock at different levels of the hierarchy, at the subscription, the resource group, or the resource.

Ease of use: if I have registered my domain name contoso.com with GoDaddy, I can host its DNS zone in Azure and manage it from my Azure subscription, maintaining the whole domain from the Azure portal or the CLI. If you say you do not want to manage it from the Azure portal and prefer the GoDaddy portal, that is also fine; you manage the domain name with GoDaddy and maintain the record sets using the Azure portal. You can also use PowerShell cmdlets and the Azure CLI to manage any service in the Azure cloud.

Customizable virtual networks with private domains: you can create a private domain, say contoso.local, in the Azure portal or with the Azure CLI, and link that private domain name to a virtual network. Whenever you create resources in an Azure virtual network, by default they get the Azure-provided default domain name, which comes from the Azure DNS service and ends in internal.cloudapp.net, so the fully qualified domain name for a resource you create in the virtual network looks like <name>.internal.cloudapp.net. But I do not want to use that long default name; I want my own custom domain name, contoso.local, as sketched below.
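As a rough sketch of how that could be done from a script rather than the portal (assuming the Azure CLI is installed and logged in; the resource group, virtual network, and link names are placeholders):

```python
# Minimal sketch: host the custom private domain contoso.local in Azure Private DNS
# and link it to a virtual network, driven from Python via the Azure CLI.
import subprocess

def az(*args: str) -> None:
    subprocess.run(["az", *args], check=True)

# Create the private DNS zone for the custom domain name.
az("network", "private-dns", "zone", "create",
   "-g", "demo-rg", "-n", "contoso.local")

# Link the zone to the VNet; with auto-registration enabled, VMs in the VNet
# get A records in contoso.local created for them automatically.
az("network", "private-dns", "link", "vnet", "create",
   "-g", "demo-rg", "-n", "contoso-link",
   "--zone-name", "contoso.local",
   "--virtual-network", "demo-vnet",
   "--registration-enabled", "true")
```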
To do that, I create a private DNS zone for contoso.local and link it to the virtual network. Once the private DNS zone is linked, the virtual machines have to be rebooted, because they had already taken the default domain name; after the reboot, VM01 and VM02 get the private domain name in their fully qualified domain names.

Okay, let us see what alias records are. Whenever you work with a DNS zone you will see there are multiple record types: A, AAAA, CNAME, MX, and alias records, among others. An A record maps a host name to an IPv4 address, and an AAAA record maps it to an IPv6 address. A CNAME is a canonical name record, with which I can map one domain name to another domain name. For example, my domain is contoso.com, and I have created a Traffic Manager profile; by default the profile gets a fully qualified domain name like myapp.trafficmanager.net, the name I chose appended with the default domain, and I can access it at https://myapp.trafficmanager.net. That works, but I want to use my company's domain name instead of myapp.trafficmanager.net, which looks awkward, so I use a CNAME to point my own domain name at the actual Traffic Manager domain name. In the same way, if I have created a CDN profile, it gets a default domain name like myapp.azureedge.net, the default domain appended to the instance name I chose, and I could access it with that URL; but since I want to use the company name as the standard, I can use cdn.contoso.com to point at the CDN profile. If you notice, in both cases I am actually using a subdomain, so let me talk about subdomains.
If contoso.com is the domain name, it is called the apex domain, and names like www.contoso.com, ftp.contoso.com, and mail.contoso.com are subdomains under contoso.com. A CNAME can only be used on a subdomain to point at another domain name, and that is the reason we use alias records: with an alias record I can use contoso.com itself, the apex domain, and point it at the default domain name created for the Traffic Manager profile, or at the CDN endpoint, or even at a public IP address resource, whereas a CNAME cannot be mapped to a public IP address. That is why alias records are so useful. Now you might ask: since I can map an alias to another domain, why not map contoso.com to microsoft.com? Note one thing: an alias record set can refer only to an Azure resource. You cannot map it to any other resources or domain names; only to Azure resources such as a Traffic Manager profile, a CDN endpoint, or a public IP address. Also, whenever the underlying IP address changes, for example if you have mapped a public IP address to a virtual machine and the VM gets a different public IP when it restarts, the alias record is updated automatically.

Let us see what a storage account is. Whenever you create a virtual machine you attach disks: an operating system disk and, if needed, data disks, whether the operating system is Windows or Linux. Those disks come from managed disks, so for an operating system disk or a data disk there is no need to create a storage account. Storage is infrastructure as a service. Why do I need storage? If I have data, such as .doc files, PDFs, XLS spreadsheets, movie files, audio files, or image files, I need somewhere to store it, and that storage should be persistent. Why persistent? There are two types of storage: volatile and non-volatile. RAM is volatile; once the laptop or server is rebooted, the data that was in RAM is gone, it is only there while the machine is powered. Non-volatile storage is persistent: once you write the data it is always available, which is why we store all this data on non-volatile, persistent storage. In the early days we had the floppy disk, with only about 1.44 MB of capacity, then the CD at around 650 MB, then the DVD at about 4.7 GB, with dual-layer DVDs around 8.5 GB. Operating system vendors like Microsoft and Red Hat used to ship their packages on CD, as did applications such as antivirus and financial accounting software, and DVD was very popular for movies. Then came the hard disk drive,
with capacities starting from gigabytes and now available in terabytes. The hard disk is a mechanical drive and came with multiple interfaces such as IDE, SATA, and SAS. Later the same hard disk drive appeared with a little flash added, the hybrid drive; if you bought drives from Seagate or Western Digital you would have noticed hybrid models. Then we had the SSD, and the current trend is flash drives; SSD and flash are non-mechanical, whereas the hard disk is a mechanical drive with an arm, a head, and magnetic platters, and the hybrid disk is a little SSD plus HDD. These are the media that have been used to store data, from the floppy to flash.

The data you store on these media can be classified as structured data, semi-structured data, and unstructured data. Structured data means databases, with rows, columns, tables, and a schema you can query. Semi-structured data includes XML and JSON; it does not have a schema, but it still has some form of structure. Unstructured data covers text, movie files such as AVI, MP3, PDF, and Word documents; it has no structure, no schema, and cannot be queried. All this data has to reside on some storage medium. A database needs very high-performance storage, which is why databases are stored on flash or SSD; for semi-structured data we do not require such high performance, so we can go with standard SSD or standard hard disks, and standard hard disks also work for unstructured data. Whenever data is written in small chunks and needs high write performance, flash storage is required.

When we create a storage account, we need to select what exactly we are going to do with it: whether we will use it for binary large object (blob) storage, queue storage, table storage, or Azure Files. If I want to use queue storage, Azure Files, table storage, and blob storage, I have to choose the standard general-purpose v2 account. Under performance you are given the options Standard and Premium; when you choose Standard you can select standard general-purpose v2, which lets you use blob storage, queue storage, table storage, and Azure Files, and all the redundancy options are available: LRS, GRS, RA-GRS, ZRS, GZRS, and RA-GZRS (we will discuss redundancy in the next video), and the usage guidance is clearly stated. If you are going specifically for Azure Files with premium performance, you need to choose the premium file shares account type. If my requirement is only to store block blobs, I have to create the premium block blob storage account; the supported service is block blob storage and the only redundancy options available are LRS and ZRS.
As for usage, the premium block blob account is the right choice if you have smaller objects, require low storage latency, or need a high transaction rate. The next account type is premium file shares: the supported service is Azure Files, the redundancy options are LRS and ZRS, and premium file shares are recommended for enterprise, high-scale, high-performance applications such as web services. Whenever you want to share files using the SMB protocol, which is very familiar from Windows, or the NFS protocol for Linux or Unix, you use the Azure Files service, and for premium performance you create a premium file share storage account; this is mainly for enterprise users. With SMB you create shares such as \\share\marketing or \\share\finance, the kind of shared folders you map into user profiles for enterprise users on Windows; with NFS you export paths and mount them on Unix or Linux, for example for web services or for mounting any file shares, so NFS is for Linux and SMB for Windows. When you select a premium page blob account, the supported service is page blobs only and the redundancy you get is LRS. Page blobs were used to provide disks for virtual machines: you created a page blob disk and attached it to the virtual machine, and that is the only usage for premium page blobs. There is no longer any need to create page blobs for VM disks; in the earlier days of Azure cloud services we used to create the storage account, create the page blobs, and attach them to virtual machines, but now we simply use managed disks, which support both LRS and ZRS.

What are storage account endpoints? When we create a storage account we need to provide a name, for example demo123, and if that name is already taken by some other user or organization we cannot repeat it; the name has to be globally unique across the entire Azure cloud. The storage account name must be between 3 and 24 characters, can contain only lowercase letters and numbers, and cannot contain uppercase letters. The combination of the account name and the storage service endpoint forms the endpoint of your storage account: for the blob service the endpoint suffix is blob.core.windows.net, so an account named demo123 has the blob endpoint demo123.blob.core.windows.net, and if I am creating the storage account for Data Lake Storage the service endpoint is dfs.core.windows.net, so together with my account name the storage account endpoint becomes <account name>.dfs.core.windows.net. The same pattern applies to every service, as sketched below.
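That pattern is easy to capture in a few lines of Python; the account name here is only an example:

```python
# The storage account endpoint is just the globally unique account name combined
# with the per-service endpoint suffix.
SERVICE_SUFFIXES = {
    "blob":  "blob.core.windows.net",
    "dfs":   "dfs.core.windows.net",    # Data Lake Storage
    "file":  "file.core.windows.net",   # Azure Files
    "queue": "queue.core.windows.net",
    "table": "table.core.windows.net",
}

def storage_endpoint(account_name: str, service: str) -> str:
    return f"https://{account_name}.{SERVICE_SUFFIXES[service]}"

for svc in SERVICE_SUFFIXES:
    print(storage_endpoint("demo123", svc))
```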
Similarly, for Azure Files the endpoint is the storage account name followed by file.core.windows.net, for queue storage it is the account name followed by queue.core.windows.net, and for table storage it is the account name followed by table.core.windows.net. In the next video we will discuss storage redundancy.

In this video we will discuss Azure storage redundancy. In the previous video we talked about the Azure storage account: when creating one we are given the performance option with two choices, Standard and Premium. When I choose Standard, the standard general-purpose v2 account, I am able to create blobs, tables, queues, files, and page blobs, and I can also use Data Lake Storage; the redundancy options available are LRS (locally redundant storage), ZRS (zone-redundant storage), GRS (geo-redundant storage), RA-GRS (read-access geo-redundant storage), GZRS (geo-zone-redundant storage), and RA-GZRS. If I choose Premium performance, I again have multiple types to choose from: premium block blobs, which support LRS and ZRS; premium file shares, which also support LRS and ZRS; and premium page blobs, which support only LRS.

Whenever you store any data in Azure Storage, multiple copies are created. With LRS there is a minimum of three copies, which protects you from unplanned events such as hardware failures, network or power outages, and even natural disasters. Redundancy ensures that your storage account meets its availability and durability targets: since the data is copied to multiple places, even if one copy is unavailable the other copies are, and those copies must be durable. Durability is about integrity: is the data we created and copied still exactly what we wrote, or has it changed? For example, if the Word file is there but has no content, the data is not durable. When deciding which redundancy option is best for your scenario, consider the trade-off between lower cost and higher availability. If you select LRS, it is the cheapest option, because your data is stored in the same location, the same availability zone, and the same data center, and it performs well because everything is in the same region and data center. The factors that determine which redundancy option you need are set while creating the storage account itself: you have the option to select the region where you want to place your data, for example North Europe, and that becomes the primary region.
If you have selected LRS, the three copies are placed in that same region, in the same availability zone. If I want to copy the same data to another region, region pairing comes into the picture: North Europe is paired with West Europe, where West Europe is the Netherlands and North Europe is Ireland, and data is replicated only to the paired region, so three more copies are placed in the Netherlands. The paired region is geographically distant from the primary region, since paired regions are typically at least 300 miles apart. If your application needs read access from the secondary region, you can use RA-GRS, read-access geo-redundant storage (or the zone version, RA-GZRS): the copy in the primary region is read-write, while the copy stored in the other region can be accessed with read-only permission. This is useful when you run some kind of data analytics: I do not want to touch the primary data, because reading from the primary data source would impact the performance of my primary workload, so for analytics I use the read-only copy in the secondary region, run my analysis, and generate the reports. Since the access is read-only, there is nothing to worry about in terms of writes being replicated in the reverse direction; nothing is written back to the primary region.

Let us go through the redundancy options one by one, starting with redundancy in the primary region, where we have LRS and ZRS; both are specific to one region, say North Europe, and a region has three availability zones, AZ1, AZ2, and AZ3. If I select LRS, the data stays in the data center of one availability zone, where we have fault domains and update domains, racks and racks of storage, the disks, sometimes called storage stamps. When you store data with LRS, one copy goes to one rack, say rack 0, the second copy to rack 1, and the third copy to rack 2. If availability zone one fails entirely, all three copies are lost; the data is not recoverable, so there is complete data loss when you use LRS and the whole availability zone goes down. That is why, if I want higher availability, I choose ZRS instead of LRS; there is a trade-off of cost and performance, and ZRS costs a little more compared with LRS, but you gain availability. Locally redundant storage replicates your data three times within a single data center in the primary region and provides eleven nines of durability.
In the picture, copy one, copy two, and copy three are stored in the same data center: that is locally redundant storage. When you create a storage account and select LRS, three copies are created in the same data center using the fault domain and update domain strategy. LRS is the lowest-cost redundancy option and offers the least durability compared with the other options; it protects your data against server, rack, and drive failures, so if a rack or a drive in a rack fails you are protected, unless and until the entire availability zone fails.

Let us see what zone-redundant storage is. When you choose ZRS, your data is placed across multiple zones in the region, say North Europe: one copy is stored in availability zone 1, the second copy in availability zone 2, and the third copy in availability zone 3. So even if a rack fails, or the availability zone itself fails, the data is still available in the other zones. ZRS offers twelve nines of durability in a given year. What are the benefits of ZRS? Your data is still accessible even if one of the availability zones fails, and if you are using Azure file shares there is no need to unmount and remount them, because the DNS records are updated automatically. Microsoft recommends using ZRS in the primary region, because even if one zone fails the other zones are available and your data is still accessible; that is where you get higher availability compared with LRS. ZRS is also recommended when you want to keep the data within a particular region for governance, whether government regulations or other security or compliance criteria.

Now let us see why we would store the data in a different region. For applications requiring high availability you can choose to keep additional copies in a secondary region. The primary region is North Europe, which is Ireland, and I want to replicate the data from Ireland to West Europe, which is the Netherlands; we know the two regions are hundreds of miles apart, which is a strategic decision in how Azure places its data centers in different regions. Why place the data in a different region when I already have three copies in the primary? I keep the fourth, fifth, and sixth copies in a different region, West Europe, so that if there is a catastrophe, an outage, or a disaster in North Europe and I cannot access it, my data is still accessible. That is why we replicate the data with geo-redundant storage from one region to another hundreds of miles away. When you select the primary region you cannot change the pairing: based on the primary region you select, the data is replicated to its paired secondary region. If I have selected North Europe as my primary region and want to replicate the data to the Singapore region, that is not possible; the other three copies are replicated only to the paired region.
There are two options offered by Azure to store the data in multiple regions: geo-redundant storage (GRS) and geo-zone-redundant storage (GZRS). GRS is similar to running LRS in two regions, and GZRS is similar to running ZRS in the primary region and LRS in the secondary region. When you choose GRS you get sixteen nines of durability for your data in a given year. In the image for GRS you can see there are three copies in the primary region and three copies in the secondary region, with geo-replication between them. If there is an outage in the primary region your data still exists, but either Microsoft has to fail over to the secondary region or the customer has to fail over manually; only then will you have read-write access, and until that failover happens you cannot write to the data in the other region.

Now let us see what geo-zone-redundant storage is. With GZRS the three copies in the primary region are spread across the three availability zones, while in the secondary region the three copies are stored within a single availability zone, like LRS: in the primary region one copy is in availability zone 1, the second in zone 2, and the third in zone 3. Microsoft recommends GZRS for applications requiring maximum consistency, durability, and availability, and if you want a disaster recovery capability you can go with this type of redundancy; GZRS is also designed to provide at least sixteen nines of durability in a given year.

As I said, if I want to do some kind of analytics or reporting, I need read access to one of the copies placed in the secondary region. The three copies in the primary region are the hot, live data that the applications are continuously writing to, and I do not want to touch that for analytics because it would impact the performance of my applications. So I replicate the data with a read-access redundancy option, either RA-GRS or RA-GZRS, which makes one of the secondary copies read-only, and I point my reporting or business analytics application at that read-only copy, getting my reports without touching the primary data. That is why the RA type of redundancy option is so useful. With that we have covered the redundancy options for Azure Storage, summarized below.
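As a compact recap, the options and the copy counts and durability figures quoted above can be written down as data:

```python
# Redundancy options as discussed above: number of copies, where they live,
# and the approximate durability figures quoted in the discussion.
REDUNDANCY = {
    "LRS":     {"copies": 3, "scope": "one data center, one zone",             "durability_nines": 11},
    "ZRS":     {"copies": 3, "scope": "three availability zones, one region",  "durability_nines": 12},
    "GRS":     {"copies": 6, "scope": "LRS in primary + LRS in paired region", "durability_nines": 16},
    "GZRS":    {"copies": 6, "scope": "ZRS in primary + LRS in paired region", "durability_nines": 16},
    "RA-GRS":  {"copies": 6, "scope": "GRS + read-only access to secondary",   "durability_nines": 16},
    "RA-GZRS": {"copies": 6, "scope": "GZRS + read-only access to secondary",  "durability_nines": 16},
}

for sku, info in REDUNDANCY.items():
    print(f"{sku:8s} copies={info['copies']}  {info['scope']}  ~{info['durability_nines']} nines")
```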
In this video we will see the Azure storage services offered by the Azure cloud. When you create a storage account you have options to create multiple storage services: Azure Blobs, Azure Files, Azure Queues, Azure Disks, and Azure Tables.

What are Azure Blobs? When you create a storage account you can store data in binary format: you create the storage account, create a container for blob storage, and you can upload data into blob storage as binary objects. The data can be a simple text file, a PDF, a Word document, an Excel file, a movie file, an MP4, or image files; all of these are stored in binary format, and each object you store in blob storage is called a blob.

What are Azure Files? Azure Files can be used as a central file share location that supports NFS as well as SMB. If I have file servers fs01 and fs02, whether in a cluster or standalone, with shares I have created on them, say an HR share and a finance share, I can move all of those shares to Azure Files as a replacement for my enterprise file servers on-premises.

Azure Queues are a messaging service for applications that helps store large numbers of messages. For example, suppose I have a web application, on-premises or in the Azure cloud: synchronous requests are sent to the database, while asynchronous requests have to be sent to worker nodes or worker roles. Say it is a document-processing application with many users or clients: all the transaction-related requests go to the database, and the asynchronous requests, such as users uploading documents of different sizes, are sent to the Azure queue so that the workers can convert the documents to A4 or Letter format. If you have an application and want to integrate an asynchronous feature, Azure Queues are the best fit.

What are Azure Disks? Azure Disks are block-level storage that supports Azure virtual machines: when you create virtual machines you need operating system disks and data disks, and those are provided by Azure Disks. These used to be page blobs in a storage account, but since we now use managed disks we no longer create page blobs in a storage account for virtual machine disks.

What are Azure Tables? Azure Tables are used for storing large amounts of structured data, up to terabytes; it is a NoSQL, non-relational database and it has no fixed schema.

What are the benefits of using Azure Storage? We have multiple redundancy options such as LRS, ZRS, GRS, and RA-GRS, so we can protect our data from any catastrophe or natural disaster because the data is replicated multiple times, giving high availability and high durability. When we upload or store any data in an Azure storage account it is encrypted by default with a Microsoft-managed encryption key, and we can also use a customer-managed key to encrypt our data.
We can also use fine-grained control, which is the RBAC policy, to control access to the storage account. In today's application architectures we do not know in advance how much data will land in a storage account, so Azure Storage scales massively: we can store around five petabytes of data per storage account, which matters because today's microservices architectures and streaming applications need massive amounts of storage capacity. For the end user or customer there is no need to maintain or manage the storage: Azure takes care of hardware maintenance and updates, and even if there are critical issues on the Azure storage platform, Azure handles and takes care of them.

We have stored our data in Azure Storage, but how do we access it? We can access the storage account or the blobs using the HTTP or HTTPS protocol, we can use PowerShell or the Azure CLI, and there is also a GUI tool, Azure Storage Explorer, to work with the data we have stored in the account.

In this video we will discuss Azure Blobs. In the previous video we saw the storage services offered by the Azure cloud, and Azure Blob storage is one of them. Before going to blobs, consider this: if I have a virtual machine, or for that matter physical hardware, and I install an operating system, the first thing I need is a file system; only then can I store any kind of files. For Windows that is NTFS, and for Linux it is ext3, ext4, or ReiserFS. In Azure Storage I select the Azure Blob storage service, which can store the maximum amount of data, and that data is unstructured. First I need to create a storage account, and then I create a container; the container is similar to a Windows folder, but it is not a Windows folder, and once you create it you can store millions of objects inside it. You might name your containers audio, video, documents, images, or vhd, and whatever you store in them is a blob. This is unstructured data, so you can store objects of any kind: log files, audit logs, application logs, system logs, video files, documents, audio files, and image files, in any format and any size. There is no need to manage the storage infrastructure, Azure takes care of it; we can do simultaneous multiple uploads to blob storage, blob storage is not limited to any particular file format, data is uploaded as blobs, and Azure takes care of the physical storage needs.

Let us see the use cases for blob storage. Blob storage can be used to serve images and documents directly to a browser, since the blobs stored in containers can be accessed via the browser; it can be used for storing files for distributed access by multiple users or clients; we can stream audio and video files from it; and it is very useful for backup and recovery. For example, if I have a backup server, on-premises or in the cloud, running a product such as Veeam or NetBackup, I can point it at a storage container I have created in the Azure cloud and take all my backups into that blob storage. We can also use blob storage for data analysis: if there are audit logs or application logs that need to be analyzed, we can store the log files in blob storage. And how do we access blob storage? From anywhere, using the HTTP or HTTPS protocol; with client libraries such as Ruby, PHP, Python, Node.js, Java, and .NET; and with the built-in command-line tools in the Azure cloud, PowerShell and the Azure CLI.
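As a minimal sketch of the Python client library route (assuming the azure-storage-blob package is installed and you have copied a connection string from the storage account in the portal; the container and file names are placeholders):

```python
# Upload a local file into a blob container using the Azure Storage Python SDK.
from azure.core.exceptions import ResourceExistsError
from azure.storage.blob import BlobServiceClient

conn_str = "<storage-account-connection-string>"   # placeholder, copied from the portal
service = BlobServiceClient.from_connection_string(conn_str)

container = service.get_container_client("documents")
try:
    container.create_container()                   # idempotent: ignore if it already exists
except ResourceExistsError:
    pass

with open("invoice.pdf", "rb") as data:
    container.upload_blob(name="invoices/invoice.pdf", data=data, overwrite=True)
print("uploaded invoices/invoice.pdf")
```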
Now let us look at blob storage tiers. When you store data in blob storage, estimate how frequently you are going to access it. Data arrives from multiple sources, log files, video files, audio files, documents, all kinds of objects, and some of it is accessed very frequently while some is accessed infrequently. Based on that estimate of the access pattern, we can configure a lifecycle policy to move the data to a different tier, whether the cool tier, the cold tier, or the archive tier, as the data ages and stops being accessed; Azure provides multiple access tiers and we can move our data between them.

Let us see the access tiers one by one. The hot access tier is for data that is accessed frequently, for example the images and documents used by your website. The cool access tier is for data that is accessed infrequently and stored for at least 30 days, for example invoices or tickets. The cold access tier is optimized for data that is accessed infrequently and stored for at least 90 days; if data has sat in the cool tier for 90 days, a lifecycle policy can move it to the cold tier. The archive access tier is for objects that are rarely accessed and stored for at least 180 days, for example long-term backups and audit logs that have to be retained for security or compliance reasons.

Let me give an example of how access tiers work in a traditional environment. Suppose I have a backup server configured to back up many clients, a thousand or even twenty thousand of them; clients here means the hosts or services that need to be backed up, Windows or Linux production servers. The backup server takes backups of all the production servers, and it has to store that data somewhere.
Let me give an example of how the access tiers work, starting from a traditional environment. Say I have a backup server on which I have configured backups for many clients; it can be a thousand clients or twenty thousand clients. Clients here are just the hosts or servers that need to be backed up, and they can be Windows or Linux. The backup server takes the backups from the production servers, and it has to store that backup data somewhere. I will have a policy for the first 7 days of backups: say the weekly full backup runs on Friday, then I want to keep all of that backup data on disk, so it goes to my disk storage and stays there for 7 days. Once the 7-day retention is completed, I move the data to another tier, which can be a virtual tape library (VTL); a VTL is also disk-based, but it is not accessed exactly like a disk, and its latency is a little higher than plain disk storage. I keep the data on the VTL for another retention period, say 35 days, and once the 35 days are complete I copy the data to tape and ship the tapes offsite. That is exactly how access tiers work, and the same scenario applies to the different access tiers in Azure storage. I have a hot tier, a cool tier, a cold tier, and an archive tier. In the hot tier I pay the cost of storing the data, but there is no charge for downloading it back. Data that is still being kept after 30 days can be moved to the cool tier; storing data there costs less than the hot tier, but when you download the data you pay some charges. You keep the data in the cool tier for 90 days, then move it to the cold tier, where the cost of storing the data is lower than the cool tier but accessing or downloading it costs more. After 180 days you can move the data to the archive tier. The archive tier is very cheap, the cost per GB is very low, but the data has to be rehydrated before you can access it, and you pay an additional cost for accessing data from the archive tier. So you create a lifecycle policy, and in that policy you define rules: for example, after 30 days move the data to the cool tier, after 90 days move it to the cold tier, and after 180 days move it to the archive tier. You can also define a rule in the other direction: for example, if data in the cool tier has been accessed within the last 25 days, move it back to the hot tier; just as you defined the rule to move data from hot to cool, you can define a rule to move it back to hot. You can have up to 100 rules in a lifecycle policy. The numbers used here, 30, 90, and 180, are just defaults for the example; you can change them. You can keep data in the archive tier indefinitely, or set a long-term retention of 7 years, 10 years, or even longer. What are the considerations for access tiers? The hot and cool access tiers can be set as the default at the storage account level; the cold and archive access tiers are not available at the account level; and hot, cool, cold, and archive can all be set at the individual blob level.
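To make the lifecycle rules concrete, here is a minimal sketch of a lifecycle management policy applied with the Azure CLI; the account, resource group, prefix, and day counts are illustrative, and the tierToCold action assumes your account supports the cold tier in lifecycle management.

# policy.json - move blobs under backups/ through the tiers as they age
{
  "rules": [
    {
      "enabled": true,
      "name": "age-out-backups",
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": [ "blockBlob" ], "prefixMatch": [ "backups/" ] },
        "actions": {
          "baseBlob": {
            "tierToCool":    { "daysAfterModificationGreaterThan": 30 },
            "tierToCold":    { "daysAfterModificationGreaterThan": 90 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 180 }
          }
        }
      }
    }
  ]
}

# apply the policy to the storage account
az storage account management-policy create --account-name mystorage --resource-group myrg --policy @policy.json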
Data in the cool and cold access tiers can tolerate slightly lower availability, but it still requires high durability, and the retrieval latency and throughput characteristics are similar to hot data. Archive storage stores the data offline and offers the lowest storage cost: the archive tier has the lowest cost per GB, but when you want to access data stored in the archive tier it has to be rehydrated before you can read it. So that was an overview of blob storage. In this topic we will understand Azure Files. Azure Files is one of the storage services under Azure cloud storage. Azure Files offers fully managed file shares that support the SMB as well as the NFS protocol. Azure file shares can be mounted concurrently by cloud or on-prem deployments; the clients, which are just the servers mounting the share, can be on-prem or in the cloud. SMB Azure file shares are accessible from Windows, Linux, and macOS; NFS Azure file shares are accessible from Linux and macOS. SMB file shares can also be cached on Windows servers using Azure File Sync; Azure File Sync is a separate service in the Azure cloud that you can use to sync the data coming from different file servers. If you have seen an on-prem environment, you will have file servers there, created on Windows or on Linux; for Windows we use the SMB protocol and for Linux we use the NFS protocol. Let me take the Windows example first. I have a standalone file server fs01 where I am going to create a share: I create the share, define the share name, and then add users and groups so that this particular share can be accessed only by those users or groups. Users who are part of the group can connect to the file share; say the share is for the finance department, then the users and groups belonging to finance are given access through Active Directory users and groups. The finance users authenticate to the shared folder and can then access it; they can also map the shared folder as, say, the S: drive (they can use any letter for the mapping), and that mapping points to the shared folder, which is how they connect to and access it. If I want to add some redundancy to the file server, I add one more file server, fs02, and make the two into a cluster; the disks for file server 01 and 02 come from shared storage, so both fs01 and fs02 have access to the shared storage, and whatever shares you create on either server can be mounted or mapped by the clients on their machines. If you are running Linux instead, say a Linux file server named linuxfs01, I create an NFS share, for example named nfsshare and mounted under /data/nfsshare; I then need to export this mount point in the exports file, and also mention from which IP addresses the share can be accessed and who can access it.
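Here is a minimal sketch of that on-prem Linux side, assuming the export path /data/nfsshare, a client subnet of 10.0.0.0/24, and a client mount point of /mnt/nfsshare; all of these names are hypothetical.

# on the Linux file server: export the share to a client subnet (example values)
echo "/data/nfsshare 10.0.0.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra    # re-read /etc/exports and apply the exports

# on a Linux (or macOS) client: mount the exported share under a local mount point
sudo mkdir -p /mnt/nfsshare
sudo mount -t nfs linuxfs01:/data/nfsshare /mnt/nfsshare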
By default, if I give the permission only to root, then the clients connecting to the NFS share, which are also Linux machines, have to connect to the exported NFS share and use the root account to mount the export onto their client machines. Each client that wants to access the NFS share mounts it under a local mount point and accesses the data using the file server's IP address with the root account. That is how we connect to a Linux file server in the on-prem environment, and similarly we can use a macOS machine as the client: with macOS I can use the same method to access the NFS share on the Linux file server. Now, if I want to create a file share in Azure, the first thing I need is a storage account, because we know a storage account offers multiple storage services: blob, file, disk, table, and queue. So you create the storage account, then you select File shares, enter a name for the file share, and define the quota. By default the quota is 5 TB, so every file share you create has a 5 TB limit; you can decrease that limit by editing the quota, and the share limit becomes whatever size you chose. If you select 100 GB as the share size, you get 100 GB of capacity for that file share. Say I have created a file share named azfileshare: by default it has the 5 TB limit, and I can restrict it by changing the quota to 100 GB, so instead of 5 TB only 100 GB is assigned to this share. I can go on creating file shares in the same storage account, and the storage account itself has a limit of 5 petabytes. So a single file share has a 5 TB limit and the storage account has a 5 PB limit; that is the maximum for a storage account, and if we need more we have to contact Microsoft and ask them to increase the limit. What are the benefits of using Azure Files? Shared access: multiple users can connect to the shared folder they have created using Azure Files and read and write to it simultaneously. Fully managed service: Azure file storage is a fully managed service, so there is no need for the end user or customer to worry about hardware or maintenance, and no need to manage the underlying storage; Azure takes care of the hardware and the maintenance. Scripting and tooling: we can use PowerShell and the Azure CLI for scripting, to create, mount, and manage Azure file shares. Resiliency: Azure storage has multiple redundancy options, and Azure Files is built from the ground up to be always available. And the last point, which I had not covered yet: how do you access the file share from a client machine once you have created it?
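Here is a minimal sketch with the Azure CLI, assuming a resource group myrg, a storage account mystorage, and a share named azfileshare, all hypothetical; it creates the share with a 100 GB quota and then mounts it over SMB from a Linux client using one of the storage account keys.

# create the file share with a 100 GB quota (names are examples)
az storage share-rm create --resource-group myrg --storage-account mystorage --name azfileshare --quota 100

# fetch one of the storage account keys
STORAGE_KEY=$(az storage account keys list --resource-group myrg --account-name mystorage --query "[0].value" --output tsv)

# mount the share over SMB 3.0 from a Linux client
sudo mkdir -p /mnt/azfileshare
sudo mount -t cifs //mystorage.file.core.windows.net/azfileshare /mnt/azfileshare -o vers=3.0,username=mystorage,password=$STORAGE_KEY,serverino

On Windows the share would instead be mapped as a drive letter, which is what the connection script described next automates.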
When you create a file share, Azure automatically provides the options for accessing it: you can integrate with Active Directory, and there is also something called the storage account keys, which you can use to access the file share. There is a script you can download from the portal itself, and using that script you can connect the client to the file shares you have created. In this topic let us understand Azure Queues, Azure Tables, and Azure Disks. The Azure Queue service is used to store large numbers of messages, and whatever queue storage we create in the storage account can be accessed via HTTP or HTTPS. A queue can contain millions of messages, and each message can be up to 64 KB in size. Whenever you want to integrate your application with asynchronous types of requests, you use Queue storage; queues are commonly used to create a backlog of work to be processed asynchronously. Let me give a simple example for Azure Queue storage. Take a banking application where multiple users are submitting an application form with their name, date of birth, and address; all these users want to open an account with bank XYZ using this form. Once a user submits the application form it has to go through a certain process, so we have integrated Queue storage into this application: every application that the users submit goes into the queue as messages, each up to 64 KB. Once the messages are in the queue storage, the user who submitted the application has to be notified, so I integrate another service called an Azure Function; the function is triggered and sends a notification to the user who submitted the form. In this way we can integrate Queue storage into different application architectures; this is one of the simplest examples, for a banking application. What are Azure Tables? Azure Table storage is a NoSQL database: structured data without a schema. Because it is a NoSQL store it can hold massive amounts of structured data, and that data is non-relational: unlike a relational database, Azure Tables does not enforce a fixed schema or relationships between tables.
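Going back to the queue example above, here is a minimal sketch using the Azure CLI; the queue name account-applications and the message body are hypothetical, and the commands assume the AZURE_STORAGE_CONNECTION_STRING environment variable has been exported so no extra authentication flags are needed. The notification function itself is not shown.

# create the queue that the banking application writes to
az storage queue create --name account-applications

# enqueue one submitted application form as a message (up to 64 KB)
az storage message put --queue-name account-applications --content '{"name":"Jane Doe","dob":"1990-01-01","city":"Chennai"}'

# a worker such as an Azure Function would then read and process the message
az storage message get --queue-name account-applications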
What about Azure Disks? In the earlier days of using the Azure cloud, we used to create a storage account, then create blob storage, and under blob storage there was an option to create page blobs. That was a confusing term, because the virtual machine you create requires block storage: the operating system disk and the data disk of a virtual machine are block storage, and the page blob was the blob type you had to create so it could be attached to the virtual machine as the OS disk or the data disk. Since we now use managed disks, there is no need to create a storage account, create blob storage, and then use page blobs: we simply create the virtual machine and use managed disks. We assign managed disks for the operating system and for storing data, so we create a data disk for storing data and an OS disk to run the operating system; by default the operating system disk is 128 GB. Azure takes care of these managed disks, and they are stored with local redundancy or zone redundancy. So that was the overview of Azure Queues, Azure Tables, and Azure Disks. In this topic we will see what Azure data migration is and what options are available. Azure supports migration of infrastructure, which is nothing but virtual machines, as well as applications and data. We can move data from on-prem to Azure cloud storage, which is the blob storage; if it is a virtual machine, we can move the virtual machine from on-prem to the Azure cloud; if I have an application I can move the application to Azure; and if I want to migrate only the data, I can migrate just the data. There is a service called Azure Migrate that helps us migrate from on-prem to the Azure cloud, and we can also migrate from the Azure cloud back to on-prem: in case I want to move data back to on-prem, I can use Azure Migrate in that direction too. Azure Migrate provides a unified migration platform, a single portal to start, run, and track the progress of your migration, and it includes a range of tools. Azure Migrate: Discovery and assessment lets us do the assessment first; if the assessment looks fine, we go ahead with the migration plan, and if we get stuck in the assessment phase and decide not to migrate, we can stop right there. There is also a tool called Azure Migrate: Server Migration, used for migrating servers, whether physical servers or virtual machines from different hypervisors. In the Migrate hub you can assess the on-prem infrastructure and decide whether you want to migrate it to the Azure cloud, and there are integrated tools that help us migrate from on-prem to Azure. Azure Migrate: Discovery and assessment discovers and assesses on-prem servers running on the VMware or Hyper-V platforms, and even physical servers, so we can prepare to migrate to the Azure cloud. Azure Migrate: Server Migration migrates VMware virtual machines, Hyper-V virtual machines, physical servers, other virtualized servers such as Citrix and KVM virtual machines, and also public cloud virtual machines, which can be AWS EC2 instances or Google Cloud virtual machines, so we can migrate virtual machines from those two cloud service providers to Azure as well. Data Migration Assistant is a standalone tool to assess SQL Servers; it helps us check for any issues or potential problems blocking the migration, checks for unsupported features as well as new features that would benefit us after the migration, and suggests the right path for our database migration. With Azure Database Migration Service we can migrate our on-prem databases to Azure virtual machines, and we can also migrate to Azure SQL Database, which is a platform as a service.
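As a rough sketch only: creating a Database Migration Service instance from the Azure CLI might look like the following, assuming the dms command group is available in your CLI version; the resource names, SKU, and subnet ID are hypothetical, and the actual migration projects and tasks are configured afterwards.

az dms create --resource-group myrg --name my-dms-service --location eastus --sku-name Premium_4vCores --subnet /subscriptions/<subscription-id>/resourceGroups/myrg/providers/Microsoft.Network/virtualNetworks/myvnet/subnets/dms-subnet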
There is another platform-as-a-service offering called SQL Managed Instance, and we can also use this service as a target when migrating our databases from on-prem to SQL Managed Instance. Azure App Service Migration Assistant is another standalone tool, used to assess on-prem websites or web applications and migrate them to Azure App Service; we can migrate .NET and PHP web applications to Azure. Azure Data Box is physical hardware that is shipped to your location; when you want to migrate terabytes of data to the Azure cloud, you use Azure Data Box: the box is sent to your on-prem location, you copy the data onto it, and the data is then moved into the Azure cloud. We will look at Azure Data Box in more detail in the next video. In this topic we will discuss Azure Data Box. Azure Data Box is a physical migration device; Microsoft ships the box, which is a hardware appliance, to you. Data Box is one type of device, and there are multiple types: Azure Data Box, Azure Data Box Disk, Azure Data Box Heavy, and Azure Data Box Gateway. The first three are used for offline migration. If you have a huge amount of data, it is very difficult to move it to the cloud over the network: if you have more than 40 TB, or even 20 TB, the network bandwidth between your site and Azure will usually not be sufficient to move all of your data from on-prem to the Azure cloud. Since the data at on-prem is very large, you choose Azure Data Box, Azure Data Box Disk, or Azure Data Box Heavy. Azure Data Box Gateway, on the other hand, is for online transfer of your data: you set up a virtual appliance to move your data from on-prem to Azure. So options one, two, and three are all for offline migration: you copy the data onto the Data Box, Data Box Disk, or Data Box Heavy, and the copied data is then taken to the Azure cloud. Whenever you order a Data Box, you choose the region: if your source location is in US East, you select US East while ordering the Data Box, so that Azure can ship the box to your on-prem location in US East and you can move your data into the Azure cloud from there; if you select Singapore as the region but ask for the box to be shipped to US East, that will be a difficult task, so choose the region properly. How do you order the Data Box? You use the Import/Export tooling in the Azure portal: you order the Data Box and it is sent to your on-prem location. The Data Box has an RJ45 connection and supports up to 80 TB of data copied onto the device. If you use Azure Data Box Disk, it supports about 40 TB of data. Azure Data Box Heavy is like a container that is shipped to your on-prem location, meant for moving around one petabyte of data, so if you have a petabyte or more you can move it using Azure Data Box Heavy. If you want an online transfer facility, go with Azure Data Box Gateway: you install the appliance on-prem, and you need connectivity from your on-prem network to the cloud so that the gateway can transfer the data from on-prem to Azure. Azure Data Box can be used for transferring data both into and out of the cloud.
I can use the Azure Data Box to bring data into the cloud, and I can use it to take data out of the cloud. When you connect the Data Box to your local network, there is an RJ45 port, which is nothing but an ethernet port, and you connect it with a UTP or STP cable. There is a GUI portal on the device which you can access to move the data from your on-prem location onto the Data Box; through that portal you can copy up to 80 TB of data from on-prem onto the box and ship it to Azure, and whatever you copied onto the Data Box is imported into the Azure cloud. This is what import and export mean: when you use the Data Box to move your data into the Azure cloud it is an import, and when you want to move data out of the Azure cloud it is an export. Let us see the use cases for the Data Box. The Data Box is ideally suited for transferring data sets larger than 40 TB. For data sets of 40 TB or less we can use Azure Data Box Disk: it comes as a package of five disks, so you get a bundle of five disks giving around 40 TB in total, the disks are encrypted with BitLocker, and you copy the data, equal to or less than 40 TB, from on-prem and move it to the Azure cloud. These are the scenarios where we use Azure Data Box: a one-time migration, when we want to move an entire data center's data to the Azure cloud; moving a media library off offline tapes, so if you have tape backups and want to move those offline tapes into Azure, you can do that; migrating a virtual machine farm, SQL Servers, or application servers to Azure; and moving large data sets that we want to analyze in the Azure cloud using HDInsight or other tools such as machine learning or artificial intelligence services. Another scenario is an initial bulk transfer: if I have decided which data I want to move to the cloud, I first move the bulk of it using the Data Box, and after that I use incremental transfers to the Azure cloud; so once the big chunk, say 40 TB sitting on-prem, has gone across on the Data Box, I use the network to transfer the incremental data. There are also scenarios where the Data Box is used to export, which means moving data from the cloud back to on-prem. If I have data in the Azure cloud and want a copy on-prem, for example because I have a DR site on-prem, I can use the export option of the Import/Export tooling to export the data from the Azure cloud and move it to on-prem, keeping one copy in the cloud and moving the other copy to on-prem so that I can set up the DR site at the on-prem location. This is also very important for security requirements: if there is a government regulation that requires you to keep your data at an on-prem location, you can export the data from the Azure cloud and store it on-prem. Another export scenario is migrating back to on-prem or to another cloud service provider:
if I have decided not to use the Azure cloud anymore and I want to migrate all my data out of Azure, either back to on-prem or to another cloud service provider, I can use the export service to move the data to my on-prem location or on to the other provider. One more point: if you have used the import facility and moved your on-prem data using a Data Box that you shipped to the Azure cloud, then once the data has been copied into Azure, the data that was on the Data Box is wiped in accordance with the applicable data-erasure standard, no data remains on the device, and the same Data Box is then given to another customer who has requested one through the Import/Export migration tooling. So that was an overview of Azure Data Box. In this topic we will see the options available for Azure file movement, that is, for moving files from a source to a target, from one location to another. The tools available from the Azure cloud are AzCopy, Azure Storage Explorer, and Azure File Sync. Let us look at the AzCopy tool first. AzCopy is a command-line utility: you simply download it from the Microsoft website and install it, and it can be installed on Windows, Linux, and even macOS. Once you install the tool on any of these operating systems, you can move data from one location to another. Why do we need AzCopy? In the previous video I discussed Azure Data Box, which is used to offload data of a very large size: if the data is around 40 TB and you want to move it from on-prem to Azure, you use the Data Box services. If I have smaller chunks of data, for example 100 GB, 1 TB, or 10 TB, and I want to move them incrementally, I do not want to bulk-offload that data using hardware; I use the internet to transfer the data to the Azure cloud, because the data size is small. If I have larger data, more than 40 TB, then it is fine to go with Azure Data Box, Azure Data Box Gateway, or Azure Data Box Heavy, which are basically hardware devices; but to transfer small chunks of data over the internet, the tool I have is AzCopy. What can I do with the AzCopy tool? I can copy data to Azure and I can download data from Azure, and I can use wildcards to copy specific files and folders or to download specific files and folders from the Azure cloud. AzCopy supports two services, the Blob service and the File service, so basically AzCopy works with blobs and files. AzCopy is a command-line utility that you download and install, and you use it to copy blobs or files to or from your storage account: you can upload files, download files, and copy files between storage accounts; if you have two storage accounts, storage account one and storage account two, I can use the AzCopy tool to move data from one storage account to the other. I can even use AzCopy to sync files: if my files are on-prem, I use AzCopy to copy the data to the Azure cloud and keep it in sync.
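Here is a minimal sketch of typical AzCopy commands; the storage accounts mystorage and otherstorage, the container backups, and the <SAS> placeholders are all hypothetical.

# upload a local folder to a blob container (recursive)
azcopy copy "./reports" "https://mystorage.blob.core.windows.net/backups?<SAS>" --recursive

# download a single blob back to the local machine
azcopy copy "https://mystorage.blob.core.windows.net/backups/reports/jan.csv?<SAS>" "./jan.csv"

# copy between two storage accounts (service-to-service)
azcopy copy "https://mystorage.blob.core.windows.net/backups?<SAS>" "https://otherstorage.blob.core.windows.net/backups?<SAS2>" --recursive

# one-way sync from a local folder up to the container
azcopy sync "./reports" "https://mystorage.blob.core.windows.net/backups?<SAS>" --recursive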
Whenever you use the AzCopy tool to sync, you always need to remember that it is a one-way sync, not a bidirectional sync; it is always one way. If you sync from Azure to on-prem, then it is a one-way sync from Azure to on-prem; if you sync from on-prem to Azure, then it is a one-way sync from on-prem to the Azure cloud. You can also use the AzCopy tool to sync or copy data from another cloud service provider to the Azure cloud, or from the Azure cloud to another cloud service provider: if you are on AWS or GCP you can move the data to Azure, or copy data from Azure out to the other provider, in either direction. As I said, synchronizing blobs or files with AzCopy is one-direction synchronization; syncing in both directions is not possible. Let me give an example of what one-way synchronization means. Say you have a 10 MB file created at 10 a.m. on one of your on-prem servers, and you have set the target as the Azure cloud, so you have moved the data from on-prem to Azure; that 10 MB copy now stays in the Azure cloud, and if you ever delete the local file you can restore it from Azure. Now suppose you access that copy from Azure, edit the file, make some changes, and the file becomes 5 MB at around 11 a.m.; that edited 5 MB version will not get synced back to on-prem. Whatever target you have chosen, the sync is always one way: if you selected the Azure cloud as the target from your on-prem server, it flows in that direction, and if you use the Azure cloud as the source with on-prem as the target, it flows the other way, but it is always one way. One more important point: if you are using Azure Cloud Shell, the AzCopy tool is already pre-installed there. So what do you do after installing AzCopy? You open a command prompt and run azcopy login: you are shown a code, you enter that code on the verification page, and it takes you to the authentication page where you sign in with your Azure account credentials to connect to the Azure cloud services. If instead you are using a shared access signature (SAS), you do not sign in; you append the SAS token to the URL of the storage service you want to reach, whether that is blob storage or file storage. The storage administrator or the Azure cloud administrator generates the SAS token and provides it to the user who needs to work with the storage services, and they can limit which services it covers and also limit how long it is valid.
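A minimal sketch of those two options; the tenant ID, the SAS token, and the resource names are placeholders, and the SAS query string shown is abbreviated.

# option 1: interactive sign-in with your Azure account
azcopy login --tenant-id "<your-tenant-id>"
# AzCopy prints a device code and a verification URL; open the URL, enter the code, and sign in

# option 2: no sign-in - append the SAS token issued by the administrator to the resource URL
azcopy copy "./invoices" "https://mystorage.blob.core.windows.net/backups?sv=...&se=...&sig=..." --recursive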
If the token was generated for 12 hours, it can be used for 12 hours; after that the token expires and you can no longer connect to the storage account. It could equally be 24 hours or one day: the administrator generates the key based on the requirement and gives it to the person who wants to connect to the storage services. Once you use the SAS token, you can connect only to the services, and only until the time, that were defined when the key was generated. AzCopy also has many command options: we can limit the bandwidth, and we can limit the concurrency, that is, how many parallel connections we want to open to copy the data from the source to the destination. Whenever an internet service provider gives a connection to its customers, it has a certain total bandwidth, say 10 Gbps, and each customer may get a 1 Gbps connection. When you use AzCopy you can limit how much of that bandwidth the copy consumes, because if it takes the whole 1 Gbps it might hamper your other services, and since you do not want to hamper the other services you are running, you can cap the bandwidth used by the data copy to the Azure cloud. So there are multiple options when you use the AzCopy tool to move data from one location to another. If you look at the syntax, it is azcopy copy followed by the local file path, which is the source, and then the target; when you see a file endpoint in the target URL, you know it is pointing at the File service. Now let us see what Storage Explorer is. Azure Storage Explorer is a standalone tool, a GUI tool, and for this Storage Explorer tool to work, behind the scenes, in the background, it is basically AzCopy that does the work: the AzCopy command-line tool is working for Storage Explorer. The tool provides a graphical interface to manage the files and blobs in your storage account. Like the AzCopy tool, you can install it on Windows, macOS, and Linux, and as I said it uses AzCopy in the background to perform all the file and blob management tasks. With Storage Explorer you can upload to Azure, download from Azure, or move data between storage accounts. In the GUI you can create containers, delete containers, create lifecycle policies, and move data or blobs from one storage account to another. You use your Azure account credentials to connect to the Azure storage accounts; once you have connected to the Azure cloud services, all your storage accounts are displayed, so if you have created multiple storage accounts, one, two, three, they are all listed, and under each storage account it displays the services you have created there: the blob service, the file service, and, if you have created them, the table and queue services as well. If any data needs to be moved from storage account one to storage account two, from blob or file, you can move it to the other storage account, and as I said you can also configure lifecycle policies for your storage accounts using Storage Explorer.
In this topic we will see what Azure File Sync is. Azure File Sync is a tool that lets you centralize your file shares in Azure Files; you can turn your Windows file server into a miniature CDN, which is nothing but a content delivery network. We install Azure File Sync on our local Windows Server and we can replicate the data bidirectionally: with Azure File Sync this is a two-way sync. With Azure File Sync we can use any protocol available on the Windows file server, such as SMB, NFS, or FTPS, and access the data locally, and whenever the local server is connected to the file share it syncs with the file share. So if this is my local server on which I am working and updating some data, then once the connection to the file share is back, whatever files I have updated locally are synced up to the file share, and if something is updated in the file share it is synced down to my local directory as well. We can have as many caches as we need around the world, and we can replace a failed file server and use Azure File Sync on the new one: there is an agent, the Azure File Sync agent, that we need to install, so if any of the servers fails we simply download the agent, install it on the replacement server, and connect it to the Azure file share. We can also do cloud tiering: data that is frequently accessed is cached locally, and data that is infrequently accessed is moved out of the local cache into the cloud, where it can sit in a cooler tier. Let us see how Azure File Sync works. Say I have multiple on-prem locations, US East, Europe, and Asia, and in each location I have a Windows file server running: file server 01, file server 02, and file server 03. The users in these different locations are collaborating and working together on one project, and for this project they need a file share. To create the file share I need a storage account; once I create the storage account I can create the file share, and I will set a quota of 1 TB for it. Let us name this file share azfs01, for Azure file share 01. After creating the file share I am provided with a script, a PowerShell script, and I have to run it on each of the servers so that each one can connect to the file share. The connection to the file share should stay up: if I am changing or updating any document on file server 01, 02, or 03, then only while the file share is connected can I copy or update the files to the Azure file share. But what about users in these locations who are working offline? If there is no connection to the Azure file share and I update files on fs01, fs02, or fs03, then when the connection to the file share comes back, whatever data changed on those servers has to be updated automatically. If I use Azure File Sync, we know that it is a two-way sync: if the storage administrator or cloud administrator deletes a file in the share, that change is synced to all the file servers, and if any user changes any file on any of the file servers, that change is also synced to the Azure file share. To make this two-way sync between the file servers and the file share work, I have to install the agent, called the Azure File Sync agent; this is the tool I have to install on every Windows Server that is going to connect to the Azure file share.
So the agent goes on the servers, and on the Azure side I need to create something called a sync group. I select the service and create the sync group; while creating it I am asked for the sync group name, the storage account, and the file share: the file share will be azfs01 and the storage account will be the one I created. Once the sync group is created, the agents installed on fs01, fs02, and fs03 have to authenticate to the storage account using the Azure account credentials. The file share is one endpoint, the cloud endpoint, and we also have the server endpoints: once you authenticate these servers to the Azure cloud you can register all three of them, so you register fs01, fs02, and fs03 as server endpoints. That is the connection you are making in the sync group: you define your cloud endpoint and you define your server endpoints, and the server endpoints are the servers that are running the Azure File Sync agent and have registered to the Azure cloud using the credentials. Once they are registered, all three servers are part of the sync group; if a server cannot register, it will not be part of the sync group. So: download the Azure File Sync agent, authenticate to the Azure cloud services, and then register the servers in the sync group. Any data you copy or update on the Azure file share is synced to the file servers, and any data you create, modify, or update on the file servers in the different locations, whichever server you change a file on, is synced to the Azure file share. This is why we define the sync group: it is a logical connection between the source and the target, between the cloud endpoint and the server endpoints, made for two-way sync. Users who want to work offline can connect to the file server in their respective location and update their files; once they are online again and the server can connect to the file share, the data gets synced. On each server endpoint you also specify a specific path for the files to be synced; say I name this folder "sync location" on the E: drive. Whatever data lands under that sync location is synced to the Azure file share; this is the path you define or configure on each of the file servers, and it is the path you use when configuring the sync group. Whatever data arrives in azfs01 is synced to this particular path on all three file servers, so all three file servers have to be defined with this path. Other folders on the same E: drive, folder one, folder two, or whatever else is there, are not touched: if you make changes there, they are not part of the sync group and will not be synced. This is how Azure File Sync works once you install and configure the Azure File Sync agent on the Windows file servers and register them so they can communicate with the Azure file share service.
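For reference, here is a rough sketch of how that sync-group setup might look from the command line, assuming the storagesync extension for the Azure CLI is installed; all the resource names are hypothetical and the exact parameter names may differ in your CLI version, so treat this as an outline rather than a recipe. Installing the agent and registering the servers is still done on the Windows servers themselves.

# create the Storage Sync Service and a sync group (illustrative names)
az storagesync create --resource-group myrg --name mysyncservice --location eastus
az storagesync sync-group create --resource-group myrg --storage-sync-service mysyncservice --name project-sync

# cloud endpoint: points at the azfs01 file share in the storage account
az storagesync sync-group cloud-endpoint create --resource-group myrg --storage-sync-service mysyncservice --sync-group-name project-sync --name cloud-ep --storage-account mystorage --azure-file-share-name azfs01

# server endpoint: the "sync location" path on a registered server such as fs01
az storagesync sync-group server-endpoint create --resource-group myrg --storage-sync-service mysyncservice --sync-group-name project-sync --name fs01-ep --registered-server-id <server-guid> --server-local-path "E:\sync location" --cloud-tiering on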
So the idea is that the files get cached on file servers 01, 02, and 03, and users can modify or update the files at each of their locations; once a server is able to communicate with the file share again, whatever changes were made on file server 01, 02, or 03 are synced to the Azure file share service. Basically we are setting up caching servers: whatever data we update or modify at the on-prem locations is synced with the Azure file share service. This video is the primer for the upcoming series of videos related to identity and access management services. In my next series of videos I will be covering topics related to Microsoft Entra ID, Microsoft Entra Domain Services, and Microsoft Entra Connect; these are services in the Microsoft Azure cloud. First we will see what the difference is between Windows Active Directory and Microsoft Entra ID; I will be referring to Microsoft Entra ID simply as Entra ID from now on. Active Directory and Entra ID are two different, separate directory services; you should not get confused whenever you see Active Directory and Entra ID, because these are completely different services, so please do not mix up these topics. What is Active Directory? Active Directory has been a flagship product since Windows 2000, so it has been around for almost 25 years now; from the time Microsoft launched Windows 2000 we have had this flagship service called Active Directory. It is similar to a telephone directory, which holds names, contacts, and addresses: in the earlier days, if you purchased telecom equipment or service from a particular vendor, they would give you a telephone directory containing names, contacts, and addresses. In a Windows domain controller, which is Windows Active Directory, you have users, groups, and computers, and these are called objects. Microsoft Entra ID was earlier called Azure Active Directory, and since that name closely resembled the Active Directory installed on-prem, Microsoft decided to change the name to Entra ID: Entra as in enter, and ID as in identity, so you have an identity to enter the Microsoft Azure cloud. Entra ID also supports users, groups, and devices. In the first place, why do we require directory services? For example, if I take a laptop or desktop for my personal use, I have one user account, and the machine can be running Linux or Windows; since it is for my personal use, I log in to the laptop or desktop with that single user account and use the computer. If I want to share the laptop with my family members, I can create an account for each of them: one account for me and one for each family member, say five accounts in total. Creating and managing five accounts is fine, but when we have hundreds or thousands of users, managing the user accounts and credentials for every one of them becomes very difficult. In an organization with more than a few hundred employees, creating and managing a user account for each employee on each of their laptops or desktops would be very difficult, so instead you create the users and groups in one centralized location:
the domain controller. You manage users, groups, and devices in one location, the users authenticate to the domain controller, and they access the resources; the resources can be printers, shared folders, or applications. Each employee has their laptop or desktop and logs in with their individual user ID and password, but instead of authenticating locally they authenticate to the domain controller, and then they can access the centralized resources. This is why we require a domain controller: using it we can manage users and groups that can run into the thousands. To run the Active Directory domain controller I require Windows Server, so to install a domain controller I need a Windows Server machine. Microsoft started supporting Active Directory with Windows 2000, and after Windows 2000 we have Windows Server 2003, 2008, 2012, 2016, and 2019; Active Directory is supported on all these server platforms. We install the Active Directory domain controller, and the protocol used for authentication is Kerberos. There is also one more protocol called NTLM, the NT LAN Manager protocol, which was used by legacy applications: prior to Windows 2000 Server, NTLM was the protocol used by Windows NT 4.0 and NT 3.51, so NTLM was used by the versions that came before Windows 2000. In Entra ID, on the other hand, since it is a cloud-based directory service, we use the HTTP and HTTPS protocols for authentication. One more important point to consider: for Active Directory to work, we need DNS. DNS can work without Active Directory, but for Active Directory, for the domain controller, to work, we need a DNS server; without a DNS server, Active Directory does not work, as simple as that. When you create a domain controller, what exactly do you require? You require a namespace. You can use a verified namespace: if the domain name is fabrikam.com and it is verified, meaning I have registered the domain through a registrar such as GoDaddy or Namecheap so that it is a verified domain accessible in public DNS, then I can use the same naming convention for my domain controller, so if I build a domain controller I can name it fabrikam.com. If instead I want to isolate my entire network, I do not want to use a verified domain name, and my users are going to authenticate in an isolated environment, I can choose an internal name and create the domain controller with a domain name such as fabrikam.local.
That is an unverified domain, but I can still use it for my domain controller: the users authenticate to this domain controller and, whatever resources they are permitted to use, they can access after authenticating. When you create an account in the Microsoft Azure cloud, on the other hand, you have to provide your email ID. If the email ID you use is, for example, cloudtech@contoso.com, then by default Microsoft creates a tenant whose namespace ends with onmicrosoft.com, so the name plus .onmicrosoft.com is the namespace given to you by Microsoft Azure; this is your default namespace, your default tenant. If you want to use a custom domain name, you can: for example, if contoso.com is registered and you are able to verify it, then you can use it as the verified custom domain name. Windows Active Directory is much more robust than Entra ID in its structure. For example, if I have multiple branch offices, say in London, in the Netherlands, and in Brussels, then I have a branch-office domain controller in London, one in the Netherlands, and one in Brussels, and I can create child domains: using, say, the first three characters of the location prefixed to fabrikam.com, the namespace continues into the child domains, one child domain for the Netherlands under fabrikam.com and one for London. This kind of hierarchy and logical structure is something Entra ID does not provide. We have users and employees at the London office, at the Netherlands office, and at the Brussels office, and these users authenticate to their branch, or child, domain controller: London users authenticate to the London domain controller and Netherlands users authenticate to the Netherlands domain controller, and whatever permissions they have been granted, whether for shared folders, printers, or applications, those are the shared resources they can access. This concept is called a tree, a domain tree. Now, if Fabrikam as an organization wants to expand and acquires another company called Contoso, then contoso.com also has its own domain controller and its own child domains: contoso.com is the parent domain, with a child domain in Houston, houston.contoso.com, and another child domain, for example for Texas. The fabrikam.com and contoso.com domain controllers can have a two-way trust relationship, and the parent and child domains trust each other by default.
So if london.fabrikam.com users want to access some applications in contoso.com and they have been permitted to, then because we have the two-way trust relationship they are able to access those resources: these users can access resources in a different domain. This combination, the two-way trust relationship between two different trees, is a scenario called a forest. This kind of logical and hierarchical structure is not available in Entra ID: Entra ID does not support that hierarchy, it is a flat structure. When you run an Active Directory domain controller on-prem you also have the global catalog, organizational units, and Group Policy Objects; these features are not available in Entra ID, so Entra ID does not support the global catalog, organizational units, or Group Policy Objects. That was an overview of the Windows Active Directory domain controller versus Entra ID; before moving on to the next set of topics related to identity and access management, I just wanted to give a heads-up about these differences. If you want to learn more about Entra ID and its features, there is a separate course, AZ-500, which you can refer to. This will be the video series for identity and access management, so let us start with Azure directory services. What are Azure directory services? Earlier we had Azure Active Directory, which was also called Azure AD; now it is called Microsoft Entra ID. What does Entra ID mean? It simply means "enter": we are entering Azure, and we have the identity to enter the Azure cloud. What are the identities? Identities can be users, groups, computers, and applications; all of these are referred to as identities. When I have a user who is performing some actions, say creating virtual machines or creating storage accounts, and there are other users who have to perform similar actions, then we can create a group: a user has a role to perform some action, whether it is creating a virtual machine, creating a storage account, or creating a virtual network, and when we have a set of users who are all doing the same actions, we can add all those users into one group. So a user is a single identity, and a group contains multiple users who share a common goal. Microsoft Entra ID is a directory service. So when you create an account, what exactly do you need to create an account in the Azure cloud? You need an email ID and a password. Let us assume your email ID is cloudtech@hotmail.com; the default tenant that Azure creates automatically is cloudtech.onmicrosoft.com. Anyone who creates an account has a default tenant created automatically for them. With this identity you can sign in and access both Microsoft cloud applications and the cloud applications that you develop yourself. If you are using an on-prem environment, then you are running the Windows operating system installed with the Active Directory software: we know that in the on-prem environment there will be an Active Directory running on Windows Server, so on the machine that is to become the domain controller you install the Windows Server operating system, run the DC promotion tool (DCPROMO), and deploy your domain controller
with a domain name such as contoso.com, if contoso.com is the domain you want to deploy for your environment. Microsoft Entra ID is a cloud-based identity and access management service. When you use Active Directory on-prem and connect it to the Azure cloud, the on-prem sign-ins can be monitored, and Microsoft can help protect you by detecting suspicious sign-in attempts at no extra cost. I also said that an identity can be an application or a computer: whenever you have an application, such as a web app, that needs to access a storage account or a SQL database, we provide an identity for the web app so that it can connect to the SQL database or the storage account; that is the reason we have identities for computers and for applications as well. Who can use Microsoft Entra ID? IT administrators: an IT administrator can use Microsoft Entra ID to control access to users and groups, resources, and applications. Application developers: developers can use Microsoft Entra ID as a standards-based approach for adding functionality to their applications, for example single sign-on with existing user credentials. This is very useful: say I have a web application in the Azure cloud and I have integrated its code with Entra ID authentication, then any user who wants to connect to the application can sign in with their own credentials to the web application I have deployed in the Azure cloud. Users: users can manage their own identities, and if permitted they can even do self-service password reset. Online service subscribers: whenever you create an account in the Azure cloud, the first step, by default, is that a Microsoft Entra ID tenant is created. You can imagine it like a hotel: the first step to enter the hotel is the security gate, where you give your mobile number and your name to get in; once you are inside, there is a reception, and at the reception you use your credit card to book the room. It is exactly the same here: you enter the Azure cloud using your email ID and password to log in, then you create a subscription, and you manage that subscription by paying the charges for the resources you have consumed. The same Microsoft Entra ID is used as the identity and access management service for Microsoft 365, Office 365, and Microsoft Dynamics CRM Online, so those are some of the services that use Microsoft Entra ID when you subscribe to the online subscriptions. What does Microsoft Entra ID do? Entra ID handles authentication, which includes verifying identity to access applications and resources, and it also provides features such as self-service password reset, multifactor authentication, smart lockout, conditional access, and just-in-time access, depending on the license you have. It provides single sign-on, so you remember just one username and password to access multiple applications. It provides application management: you can manage your cloud and on-prem applications using Microsoft Entra ID. And it provides device management: along with accounts for individual people and groups, Microsoft Entra ID supports the registration of devices.
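Coming back to users and groups as the basic identities, here is a minimal sketch of creating them with the Azure CLI; the display names, the UPN, the password, and the cloudtech.onmicrosoft.com tenant are all placeholders, and the signed-in account is assumed to have sufficient directory permissions.

# create a user in the tenant (values are examples)
az ad user create --display-name "Jane Doe" --user-principal-name jane.doe@cloudtech.onmicrosoft.com --password "<initial-password>"

# create a group and add the user to it
az ad group create --display-name "vm-operators" --mail-nickname "vm-operators"
az ad group member add --group "vm-operators" --member-id $(az ad user show --id jane.doe@cloudtech.onmicrosoft.com --query id --output tsv)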
Registering devices means you register them with Entra ID so that only authorized devices can authenticate to Azure resources and Azure services.

Can I connect my on-premises Active Directory with Microsoft Entra ID? Before answering, let me highlight some terminology: when I say Microsoft Entra ID, that is what used to be Azure Active Directory; when I say Microsoft Entra Domain Services, that used to be Azure Active Directory Domain Services; and when I say Windows domain controller or Windows Active Directory, I always mean the on-premises domain controller. So, can you connect your on-premises AD with Microsoft Entra ID? Yes. This is referred to as the hybrid identity model: you have your on-premises data center and you connect it to your Azure cloud, that is, to your Entra ID tenant. To connect the on-premises domain controller with Microsoft Entra ID there is a tool called Microsoft Entra Connect, which was earlier called Azure AD Connect. Microsoft Entra Connect synchronizes user identities between the on-premises Active Directory and Microsoft Entra ID, and once connected you get features such as single sign-on, multi-factor authentication, and self-service password reset. There is a picture that shows how this works: on the Entra ID side you have the default tenant that was created when you signed up with your email ID and password, and on-premises you have the domain controller. You install the Entra Connect tool on one of the servers on-premises, configure it, and authenticate it to the Azure cloud, and then you can sync users, groups, and passwords. This sync can go in both directions. When we use a managed domain, which is Microsoft Entra Domain Services, we can only sync in one direction, from Entra ID to the managed domain.

There is a service called Microsoft Entra Domain Services. This service is similar to the on-premises domain controller: if you have deployed Windows Server with a domain controller and you want similar features in the Azure cloud, you can use Microsoft Entra Domain Services. The features available are domain join, Group Policy, LDAP, and Kerberos authentication. Since this is a managed service, there is no need to deploy, manage, or patch the domain controllers yourself. And if you want to run a legacy application that needs to authenticate to a domain controller, which used to live on-premises, you can lift and shift that application to the Azure cloud and authenticate it against Microsoft Entra Domain Services. Microsoft Entra Domain Services also integrates with your Microsoft Entra tenant. How does it work? When you create a Microsoft Entra Domain Services managed domain, you need to provide a namespace; the namespace is basically the domain name. You can
provide a name such as contoso.com or something based on your organization name; that will be your namespace. Two virtual machines are then deployed in the same region where you are creating the managed Microsoft Entra Domain Services instance. These two virtual machines are called a replica set; they are the domain controllers, and there are two of them for redundancy. Since this is a managed service there is no need for us to manage, configure, or update these domain controllers; their backups are taken care of and their disks are encrypted at rest, all handled by Azure. Is information synchronized? When you configure Microsoft Entra Domain Services it is always a one-way synchronization, from Entra ID to the managed domain. If you also have Microsoft Entra Connect deployed on-premises, that hybrid sync is two-way, but the managed domain itself is configured to perform a one-way sync from Microsoft Entra ID to Microsoft Entra Domain Services. So in a hybrid environment the users, groups, and passwords are synced from the on-premises domain controller to Entra ID, and then from Entra ID to the managed domain, which is the Entra Domain Services instance. In summary: Microsoft Entra ID provides the identity and access management service, where we can create users and groups and also service principals for computers and applications; Microsoft Entra Domain Services gives us the same kind of features as an on-premises domain controller; and Microsoft Entra Connect is the tool used to sync users, groups, and passwords from the on-premises domain controller to Entra ID, from where they are synced on to the managed domain.

In this topic we'll discuss authentication methods. Authentication is the process of establishing the identity of a person, service, or device; the person, service, or device has to prove who they are using credentials. For example, if you are traveling by train or plane, you hold a valid ticket and it is you who is authorized to board; authentication is similar, in that you need to prove who you are to access any service or resource. Azure supports multiple authentication methods, including the standard password, single sign-on, multi-factor authentication, and passwordless authentication. Newer authentication solutions provide both security and convenience, so let's see which methods are convenient and which are secure. There is a diagram that compares the security level against convenience. If I use a standard password plus two-factor authentication, that is multi-factor authentication: I need a device, which can be a hardware token, an RSA token, or a mobile device, to get the code, and I need to keep that device with me at all times to authenticate, so it is somewhat inconvenient. And if
the internet is not working and I am not able to receive the code, I will not be able to authenticate. But two-factor authentication is considered a highly secure way of authenticating. If I use the standard way of authenticating with just a password, I might keep all my passwords in a notepad file; that is why this is categorized as low security, because the file where I store the passwords for multiple applications can easily be copied by someone. It is very convenient, since I can just copy the password from a text file, but it is weak. There is another method, passwordless authentication: you use Windows Hello for Business with a PIN or a biometric to sign in. This is considered high security and it is also very convenient, because you do not have to remember passwords to access multiple applications or resources; the devices you use to authenticate are registered with the authentication provider.

Let us see what single sign-on is. SSO, or single sign-on, enables a user to sign in one time and use that credential to access multiple resources. For example, suppose my organization is contoso.com and it has multiple applications: an HR application, a time-tracking application, a travel application, and also Office 365. For a user such as john@contoso.com to access all these applications, John does not need multiple credentials; he needs only his john@contoso.com credential and he can reach all of them. For SSO to work, the different applications and providers must trust the initial authenticator: only if those application providers trust contoso.com will the employees be able to access all of the applications. Single sign-on is very useful because there is no need to manage multiple credentials; you remember only one user ID and password. However, single sign-on is only as secure as the initial authenticator, because all the subsequent connections rely on the security of that initial authenticator; in this case the initial authenticator is the organization's directory service. If john@contoso.com is the real employee who has access to all these applications and his credentials are compromised by a hacker, the hacker gains access to all of those applications, because once they authenticate to the initial authenticator, single sign-on lets them in everywhere John could go.
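As a rough sketch of what "single sign-on with existing user credentials" looks like from an application developer's side (this is illustrative, not part of the course; the client ID and tenant name below are placeholders, not real values), an interactive sign-in with the MSAL Python library could look like this:

```python
# A minimal sketch, assuming a hypothetical app registration in Microsoft Entra ID.
# The client ID and tenant name below are placeholders, not real values.
import msal

app = msal.PublicClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",               # hypothetical
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
)

# Opens a browser so the user signs in once with their Entra ID credentials.
result = app.acquire_token_interactive(scopes=["User.Read"])

if "access_token" in result:
    print("Signed in; the token can now be used to call Microsoft Graph.")
else:
    print("Sign-in failed:", result.get("error_description"))
```

The application never handles the password itself; it simply trusts the token issued by the initial authenticator, which is exactly why the security of that initial authenticator matters so much.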
What is multi-factor authentication? Multi-factor authentication is the process of prompting a user for an extra form of identification during the sign-in process. When you have entered your password and the application or website requires an additional credential, that extra credential can be a phone call, an SMS, a PIN, a biometric, or a security question; all of these count as multi-factor authentication. Along with the standard password you used to authenticate, you use one of these additional elements. We need to configure multi-factor authentication so that the identity provider knows which option you are going to use, for example Windows Hello for Business or your registered phone. Multi-factor authentication provides additional security because you need two or more elements to prove you are the valid user; it adds an extra layer of protection. It also protects you against password compromise: even if your standard password is hacked, just knowing the password is not enough for the attacker to intrude into your system; the attacker would also need your other credential, such as your phone or your biometric.

How does multi-factor authentication work? Once the user has entered the standard password, there are three categories that can be configured. Something the user knows: the user is challenged with a question to which only they have the answer. Something the user possesses: typically the mobile phone; after entering the standard password the user receives a phone call to answer, or a one-time passcode (OTP) is sent to the phone via SMS. Banking websites are a common example: after entering your user ID and password you are asked for another credential, either a phone call or the OTP received by SMS. Something the user is: a physical characteristic used to authenticate, which is biometrics, such as a fingerprint or a face scan. So we now know what the standard credential is and how multi-factor authentication is layered on top of it. Microsoft Entra multi-factor authentication is a service in Entra ID that provides these capabilities: it lets the user choose one more step of authentication to prove who they are during sign-in, whether that is a phone call, an SMS, the Microsoft Authenticator app installed on the phone, or a hardware token.

Now let us see passwordless authentication. Passwordless authentication is a convenient way of authenticating: we no longer need a password; it is replaced with something you have, such as your personal computer or your mobile device. To use passwordless authentication I need to register these devices with Entra ID and change the security settings for the user so that the authentication type is passwordless. If I have selected passwordless for the user, I then choose what is used for it: the Microsoft Authenticator app, Windows Hello for Business, or a FIDO2 security key. Passwordless authentication has to be set up on the device, so we register and prepare the device before it can be used.
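One of the "something the user possesses" options mentioned above is a one-time passcode. Just to make the idea concrete, here is a small sketch (not from the course) of how a time-based one-time password, the kind an authenticator app generates, can be produced and verified with the pyotp library; the secret is generated on the spot purely for illustration:

```python
# A minimal sketch of a time-based one-time password (TOTP), the mechanism behind
# many authenticator apps used as a second factor. The secret is a made-up example.
import pyotp

secret = pyotp.random_base32()        # in practice, shared once between server and app
totp = pyotp.TOTP(secret)

code = totp.now()                     # 6-digit code, changes every 30 seconds
print("Current one-time code:", code)

# The server later verifies the code the user typed in:
print("Code accepted:", totp.verify(code))
```

Because the code changes every few seconds and depends on a secret that never leaves the device and the server, stealing the user's password alone is not enough to sign in.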
Once the devices are registered and passwordless authentication is enabled, you sign in only from a registered device: if that device uses Windows Hello for Business, the user signs in with a PIN or a fingerprint. So those are the three methods that can be used for passwordless authentication; let us look at each.

Windows Hello for Business provides a biometric or PIN credential for the user, and it lives on the device itself: most laptops nowadays come with a fingerprint scanner, and if you are running Windows 10 or 11 you have Windows Hello for Business, which supports either a PIN or a biometric. With it you can access the applications in your organization and also resources in Azure. Another method is the Microsoft Authenticator app: you download the app on your iOS (Apple) or Android phone, register that device with Entra ID, and configure it for your work or school account; when you try to access an application, a number is shown during sign-in and you confirm it through the app on your phone. The third method is FIDO2 security keys. FIDO2 comes from the FIDO Alliance, which promotes open standards for authentication, including passwordless authentication. FIDO2 typically uses USB devices, and it also supports Bluetooth and NFC; you just plug the USB key into your phone or computer and use it for passwordless sign-in. FIDO2 is the latest standard and incorporates the WebAuthn web authentication standard. In summary, we have discussed authentication methods: single sign-on, multi-factor authentication and its methods, and passwordless authentication. I'll see you in the next topic.

In this topic we'll see what Azure external identities are. As the name suggests, external identities are identities that are not part of your Entra ID tenant: a person, a device, or a service that is outside your organization and not within your tenant is referred to as an external identity. Why would I need external identities? If my organization wants to communicate or collaborate with another organization, such as a partner, distributor, supplier, or vendor, or with users outside my organization, then I use external identities. Let us take an example and see how this works. Suppose I have a manufacturing company running its services in the Azure cloud, called tailspintoys.com. They manufacture toys: cars, planes, jets, and diggers. They have their own applications and services running, and they have several databases, among them an inventory database and a catalog database. Now they want to give access to another company, an e-commerce company, called tiptoptoys.com.
tiptoptoys.com has its own users. This company can also be on Azure, or it can be on-premises, or its users can simply have their own personal accounts, such as Gmail, Outlook, LinkedIn, or Facebook accounts. They can even have an app that connects to the inventory database and fetches the inventory; since this is an e-commerce company, the app pulls the inventory from the database and simply lists it: 1,000 cars, 2,000 jets, 5,000 planes, and so on, so that shoppers on the e-commerce site can see what stock is available through tiptoptoys.com. To grant that access on the tailspintoys.com side, we use B2B collaboration to manage the external identities. The external identities can be people, they can be devices such as the virtual machines the partner runs, and they can be services such as applications. This is how we use B2B to give access to external identities, and once they have an identity they can access only the resources that tailspintoys.com has allowed. The tailspintoys.com administrator, for example the global administrator who manages the entire Entra ID tenant, grants access only to the specific applications or services that tiptoptoys.com may use. To give B2B access, I add the external users and devices as guest accounts; once they are added as guests, those users, devices, or services can access the permitted resources hosted by tailspintoys.com.

Now that we understand Microsoft Entra ID B2B, let us see what Microsoft Entra ID B2C, business to customer, is. Suppose I have a platform running on the Azure cloud; let us take the example of a ticketing platform such as bookmyshow.com. This platform hosts its services on Azure: it has a database, virtual machines, and a front-end application that is accessed by public users. Anyone who has a Facebook, Gmail, Microsoft, or other supported third-party account can access the application and book tickets; they can print the tickets, have them emailed, or receive them as a QR code. The public users who want to watch a movie book their tickets through the portal, and their identities are managed by their own identity provider, whether that is Google, Microsoft, Facebook, or another third party that Microsoft supports as an external identity partner. Those identity providers manage the individual user accounts, so the bookmyshow tenant does not have to manage them; the users sign up to the application and are permitted only to book tickets and print, email, or download the QR code. That is the access they have been given.
In B2B, on the other hand, we add the user as a guest account: in my tailspintoys.com tenant I add the users, services, or devices coming from tiptoptoys.com as guest accounts. So in B2C the identity provider manages the individual user identities, while in B2B the tailspintoys.com tenant manages the tiptoptoys.com identities as guest accounts; that is the difference between B2B and B2C. I hope B2B and B2C are clear; let us continue with the notes. In B2B collaboration, we work with external users by letting them use their preferred identity to sign in to our SaaS applications or custom-developed applications, and, importantly, B2B collaboration users are represented in your directory as guest users, just as the tiptoptoys.com users are represented in the tailspintoys.com tenant as guests.

There is also something called B2B direct connect. In B2B direct connect there is a two-way trust relationship between two or more organizations, say example.com and sun.com. If some activity is going on in sun.com and users from example.com want to join the Microsoft Teams meeting, they are given access: they can join the Teams meeting and chat with the sun.com tenant. B2B direct connect currently supports Teams shared channels, enabling external users to access your resources from within their home instance of Teams; example.com has its own instance of Teams, and its users can collaborate in sun.com's shared channel from there. Unlike B2B collaboration, B2B direct connect users are not represented in your directory, but they are visible from within the Teams shared channel and can be monitored in Teams admin reports. And with Azure Active Directory B2C, business to customer, as we discussed, we publish SaaS or custom-developed applications and provide access to the public users, who can be consumers, customers, or third-party organizations, based on our requirements. So depending on how you want to provide access to external organizations, customers, consumers, or third-party vendors and suppliers, there are multiple options: B2B, B2C, or B2B direct connect. That was an overview of Microsoft Entra ID B2B, B2C, and B2B direct connect.

In this topic we'll discuss Conditional Access. Conditional Access is a tool that Microsoft Entra ID uses to allow or deny access to resources based on identity signals. It is basically an if-then statement: if you want access, then prove it with an action. The action can be that you need to be in a certain location or a certain IP network range to access the application, or that the device you are using must comply with the security policy.
For example, the security patches released by Microsoft must be applied to the device you are using, or the operating system version must meet the required level, before you can access the application or resource. How does Conditional Access help IT administrators? Conditional Access lets users be productive wherever and whenever. When an IT administrator is called on, they are not always sitting inside the office network: they could be at home, at a remote location, or in a coffee shop, and they still need to be able to access the applications. With a Conditional Access policy, the administrator is given access, but with an additional security measure. So Conditional Access not only grants access to the administrator wherever and whenever, it also protects the organization's assets by making sure it really is the right administrator or the right user who is being given access. Conditional Access also provides a more granular multi-factor authentication experience for users: for example, a user might not be challenged for a second authentication factor if they are at a known location. Say the users are in an office located in an IT park in Pune, and that office network uses a known internal IP range. Since the users are already inside the office network, they are not asked for the second factor: with only the first factor, their user ID and password, they can log in to the office network and access the virtual machines and applications that have been permitted for employees. During sign-in, Conditional Access collects signals from the user. The signal here is that they are in the office location and are allowed to access the permitted applications, and Conditional Access makes its decision based on those signals: you signed in with your first factor, your email ID and password or Windows Hello for Business, from within the office network, so single-factor authentication is enough for you to access the applications. Based on the signals detected, it then enforces the decision by allowing or denying access. If you are not in the office network, that is, not inside that IT park's network, you might be challenged for a second factor. There is a diagram that shows the signal, the decision, and the enforcement handled by a Conditional Access policy. The signal can be the user's location: if the employees are in the office network, the first factor is enough to prove the user is the right user and they are authenticated. But an IT administrator can be called on wherever and whenever, and should still be able to do configuration changes or
troubleshooting; if there is a P1 or P2 incident going on, they should be able to work on it from a remote location, a coffee shop, or an airport. Suppose there is an emergency email that needs an approval in a workflow, and the manager is at the airport; how do you prove it is really them? That location is not part of the office network. Only when users connect from the office network is single-factor authentication enough; when they connect from a remote location, a coffee shop, or an airport network, you challenge them with a second factor, and by answering that challenge the user proves their authenticity. That is the action in the if-then statement: once the user has proved who they are with the second factor, they are given access to the resources. The next signal can be the user's device: the device must be fully compliant with the corporate security policy, with updates applied, the antivirus software running with the latest definitions, and all security patches installed; only then is it allowed to reach the resources and applications. Any application that requires access has to pass these conditions before access is granted. Based on these signals the decision is made whether to allow or deny access: the signals are the user, the user's device, and the application; the decision is either to grant access or to deny it. If access is granted, the condition might be multi-factor authentication: you are challenged for the second factor because you are not in the office network, whereas inside the office network single-factor authentication is fine. Since you are in a remote location, you have to prove you are the rightful owner of the account; if the user is signing in from an unusual location that is not the office network, they are prompted for a second factor. Enforcement is the action that carries out the decision. When can I use Conditional Access? When you want to enforce multi-factor authentication whenever a user is not in the office network; when you want to allow access to services only through approved client applications; when you want to permit users to access applications only from managed devices, that is, mobile devices, PCs, or laptops that meet the corporate security and compliance standards; and when users are in an untrusted remote location whose IP addresses you have blacklisted, you can use Conditional Access to deny access from those blocked locations and IP ranges. In summary, there is a signal, there is a decision, and there is an enforcement.
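To make the if-then idea tangible, here is a small conceptual sketch (this is illustrative logic, not an Azure API; the location names are made up) of how a Conditional Access decision flows from signals to enforcement:

```python
# A conceptual sketch (not an Azure API) of the if-then logic Conditional Access
# applies: collect signals, make a decision, enforce it. Names are illustrative.
TRUSTED_NETWORKS = {"office-pune"}          # hypothetical named location
BLOCKED_LOCATIONS = {"blacklisted-range"}   # hypothetical blocked IP range

def evaluate_sign_in(user, location, device_compliant):
    """Return the enforcement for a sign-in attempt based on its signals."""
    if location in BLOCKED_LOCATIONS:
        return "deny access"
    if not device_compliant:
        return "deny access"
    if location in TRUSTED_NETWORKS:
        return "grant access"                        # first factor is enough
    return "grant access, require multi-factor authentication"

# The manager approving a workflow from the airport is challenged for MFA:
print(evaluate_sign_in("john@contoso.com", "airport-wifi", device_compliant=True))
# A user already on the office network is not:
print(evaluate_sign_in("john@contoso.com", "office-pune", device_compliant=True))
```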
That is Conditional Access: I have a policy, and for that policy to be satisfied the signals about the user, the device, and the application are evaluated; the decision is to grant or deny access; and the enforcement is the condition attached to that decision. If I am granting access, the enforcement might be a challenge with a second factor, multi-factor authentication; if I am denying access, the reason might be that the request is not coming from the office network but from an untrusted network or a blacklisted IP range. To enable Conditional Access you need at least a P1 license. Microsoft Entra ID comes with license tiers, Free, P1, P2, and Microsoft Entra ID Governance, and these are billed per user per month. So that was an overview of Conditional Access policies.

In this topic we'll discuss Azure role-based access control, also called RBAC. When you have multiple IT and engineering teams, how do you control access to resources? There is a principle called least privilege: if an IT admin is part of a team responsible only for creating virtual machines, give them access only to create virtual machines; don't give them access to create storage accounts or web applications. Good security practice says that if someone only needs read access to a storage blob, you give only read access to that blob: no write access, and no access to other storage accounts. You limit access to exactly the resources and actions needed. Managing this level of permission by hand would be a tedious job, because there is never just one IT admin or one engineering team: there can be admins for Entra ID (the directory), for compute, for networks, for storage, for databases, and for security, and each of those teams has its own hierarchy of L1, L2, and L3 engineers. Managing access for all of those admins individually would be tedious, and that is why Azure has built-in roles. Instead of defining detailed access for each and every admin at every level who joins a team, Azure has a feature called RBAC: Azure enables you to control access through role-based access control and provides built-in roles that describe common access rules for cloud resources. There are hundreds of built-in roles, covering each service, virtual machines, storage, SQL, networking, and so on, which you can use to manage access to your resources. You can also define your own roles.
These are custom roles. A custom role is useful when an IT admin wears multiple hats: he needs to create storage accounts, he can also create virtual machines, and he can work on networks. You take the built-in role for storage, the built-in role for virtual machines, and the built-in role for networking, combine those roles, and associate the combination with that IT admin. Let us say this admin's name is John: with the combined role, John can create storage accounts, create virtual machines, and work on networks. Each role has an associated set of access permissions that relate to that particular role: a storage role applies to storage accounts, a role for creating virtual machines is specific to creating virtual machines, and that set of permissions is what gets associated with the user. If instead I want to assign a built-in role such as Backup Operator, I can associate that built-in role with John; that role can be used only to take backups, for example backups of virtual machines and SQL databases. When you assign individuals or groups to one or more roles, they receive all the associated access permissions. We have seen a custom role associated with an individual user, but you can also associate a role with a group of users. Say there is a group of IT admins who are the database administrators: I choose an appropriate built-in role for databases and associate it with that group, and it applies to all the users in the group rather than to one user at a time. Let us call it the IT database admin team, the L3 team: I associate the role with the L3 group, and now all the users in that group can create, delete, and take any other permitted action on the databases they manage.
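Custom roles in Azure are essentially a named list of allowed actions plus the scopes they can be assigned at. As a rough illustration of that shape (the action strings and subscription ID below are hypothetical examples, not a role taken from the course), a combined role like John's might be sketched as:

```python
# A rough sketch of the shape of an Azure custom role definition, expressed as a
# Python dict. The actions listed and the subscription ID are hypothetical examples.
custom_role = {
    "Name": "VM, Storage and Network Operator (example)",
    "Description": "Combined role for an admin who wears multiple hats.",
    "Actions": [
        "Microsoft.Compute/virtualMachines/*",        # create/manage virtual machines
        "Microsoft.Storage/storageAccounts/*",        # create/manage storage accounts
        "Microsoft.Network/virtualNetworks/*",        # create/manage virtual networks
    ],
    "NotActions": [],                                 # nothing explicitly excluded
    "AssignableScopes": [
        "/subscriptions/00000000-0000-0000-0000-000000000000"   # placeholder
    ],
}

# Assigning this single role to John gives him all of the combined permissions.
print(custom_role["Name"], "covers", len(custom_role["Actions"]), "groups of actions")
```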
How is role-based access control applied to resources? Role-based access control is applied at a scope, and there is a relationship between roles and scopes; let us look at the diagram. Say there is a user named John who is part of a department within the IT organization, and John has been given the Reader role at a scope, where the scope is a management group. Because John has the Reader role at the management group, he can view the complete hierarchy: a management group contains multiple subscriptions, so he can view all the subscriptions in the management group, the resource groups within each subscription, and the resources under each resource group; there can be multiple resource groups, and because access was granted at the management group scope, the permission flows from the top level all the way down to the resources. If the same user John were instead given access only to a specific resource, say a virtual machine, a web app, or a database, he could view only that resource's information; he would not be able to see anything above it, because access does not flow bottom-up. Access granted at a level always follows the hierarchy downwards: granted at the resource group level, John can view everything from the resource group down to its resources; granted at the subscription level, he can view everything from the subscription down to the resources. So, once again: John is assigned the Reader role, the role is associated with the management group scope, and since the management group is the top level, John can view everything created beneath it, every subscription, every resource group within those subscriptions, and every resource.

Now let us take the Owner role as an example. Suppose another IT admin is assigned the Owner role and that assignment is made at the management group scope. Since this admin is the Owner, they have full control: they can create, delete, or update anything from the management group down to the resources. If instead the Owner role were assigned at the scope of a single resource, the admin could create, delete, and update only at that resource; they could not act on, or even view, the resource group, subscription, or management group above it, they can only work at the resource scope. The scopes can be a management group, which is a collection of subscriptions, a single subscription, a resource group, or a single resource, which could be a web app, a virtual machine, a database, or any resource within a resource group. Azure RBAC is hierarchical: when you grant access at a parent scope, those permissions are inherited by all the child scopes, so when you assign the Owner role to a user at the management group scope, that user can manage everything from the management group down to the resources within that hierarchy.
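To make the inheritance idea concrete, here is a small conceptual sketch (not an Azure API; the scope names are made up) of how a role assigned at a parent scope flows down to every child scope beneath it:

```python
# A conceptual sketch (not an Azure API) of RBAC scope inheritance.
# A role granted at a parent scope applies to everything beneath it.
SCOPE_TREE = {
    "management-group": ["subscription-1", "subscription-2"],
    "subscription-1":   ["rg-app"],
    "subscription-2":   ["rg-data"],
    "rg-app":           ["vm-web01"],
    "rg-data":          ["sqldb-inventory"],
}

def scopes_covered(assigned_scope):
    """Return the assigned scope plus every child scope that inherits the role."""
    covered = [assigned_scope]
    for child in SCOPE_TREE.get(assigned_scope, []):
        covered.extend(scopes_covered(child))
    return covered

# Reader assigned at the management group covers everything below it:
print(scopes_covered("management-group"))
# Owner assigned only at a single resource covers just that resource:
print(scopes_covered("vm-web01"))
```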
Similarly, if you assign the Reader role to a group of users at the subscription scope, the members of that group can view information about the resource groups and the resources within that subscription. Azure RBAC is enforced on any action taken against an Azure resource, for example creating a virtual machine, reading a storage blob, or viewing a subscription, because every such action passes through Azure Resource Manager. Resource Manager is a management service that provides a way to organize and secure your cloud resources: it checks who is trying to perform the action and whether that user has access to the resource, and based on those checks the action is allowed or denied. You typically reach Resource Manager through the Azure portal, Cloud Shell, Azure PowerShell, or the Azure CLI. Note that Azure RBAC is not meant to enforce access permissions at the application or data level; application security must be handled by your application. Azure RBAC uses an allow model: if one role assignment grants you read permission and another role assignment grants you write permission on a resource group, you have both permissions, read and write, on that resource group. Let's say a user, John, is part of the compute admin group and also part of the network admin group. Through the compute admin group John has been given the Owner role, and through the network admin group he has been given the Reader role on the network. John now has the combined permissions of Owner plus Reader: with the Owner role from the compute admin group he can create and delete virtual machines, add disks, and create network security groups, and with the Reader role from the network admin group he can only view the network information, the virtual network, the subnets, and the IP addresses. So that was an overview of Azure RBAC.
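The "allow model" just described simply means permissions accumulate across role assignments. A tiny conceptual sketch (illustrative only, not an Azure API) of John's effective permissions:

```python
# A conceptual sketch of Azure RBAC's additive "allow" model: a user's effective
# permissions are the union of everything granted by all of their role assignments.
role_permissions = {
    "Owner (compute admin group)": {"read", "write", "delete"},
    "Reader (network admin group)": {"read"},
}

effective = set()
for role, permissions in role_permissions.items():
    effective |= permissions          # permissions only add up, they never subtract

print(sorted(effective))              # ['delete', 'read', 'write']
```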
In this topic we'll discuss the zero trust model. Zero trust is a security model that protects resources with a worst-case expectation: it assumes breach from the outset and verifies each request as though it originated from an uncontrolled network. Zero trust is a security strategy; it is not a service or a product you can create in the Azure cloud. Always remember that zero trust is a framework, a model that you apply to your organization's security policy, and Microsoft highly recommends using it. There are three important guiding principles of the zero trust model. First, verify explicitly: always authenticate and authorize based on all the available data points; unless you authenticate and are authorized, you are not given access. Second, use least privilege access: limit users' access with just-in-time and just-enough access, risk-based adaptive policies, and data protection. In the previous video we already discussed role-based access control, and this is the same idea: if I grant read access to a blob, I grant it only to that specific blob, not to the whole container and not at the storage account level; if I grant read access to a virtual machine, I do not grant read access to the whole resource group. We limit the user's access so that only the permissions required to perform the task are given, and nothing more. Third, assume breach: minimize the blast radius and segment access, verify end-to-end encryption, and use analytics to gain visibility, drive threat detection, and improve your defenses. Say we have services running in the Azure cloud inside a virtual network with several subnets, hosting a three-tier application with a first, second, and third tier. If a breach is detected, we need to contain the threat and minimize the blast radius: if we can contain it at the virtual network boundary, we prevent it from spreading through all the subnets. There are two types of encryption to consider: data in transit and data at rest. Data at rest is encrypted by default in Azure, so whenever you store data in Azure Storage or on an Azure virtual disk it is encrypted; when you transfer data over the wire, over the internet or across the network, you use secure protocols such as HTTPS for end-to-end encryption. Adjusting to zero trust: traditionally, corporate networks were restricted, protected, and generally assumed safe. Zero trust does not assume a device is safe just because it sits behind the perimeter, and just because managed computers have joined the network does not mean those computers are safe. The zero trust model flips the scenario: instead of assuming a device is safe because it is inside the corporate network, it requires everyone to authenticate. There are two diagrams, one for the classic approach and one for the zero trust approach. In the classic approach there is a secured network and all the resources sit behind it, but we should not assume those resources are secure. What if there is an open port behind that secured network and somebody plugs in a rogue laptop? If that laptop can scan the networks and the services hosted behind the firewall, including critical resources such as databases, we have a serious risk of a data breach. The zero trust model, by contrast,
expects authentication first: it does not matter whether you are behind the secured network or on an open network, you authenticate first, and access is granted based on that authentication. So that was an overview of the zero trust model.

In this topic we'll discuss defense in depth. Whether you deploy services on-premises or in the cloud, you need to protect the data, and to protect the data you need to protect every layer around it. The idea is to keep the data at the center and use all the surrounding layers to protect it; our goal is to protect the data, because without data there is no business. The layers are physical security, identity and access, perimeter, network, compute, application, and the data itself. There should be a strategy at the organization level to protect each layer, because the layers are interconnected and depend on one another: if one layer is compromised, the other layers are in trouble. Defense in depth is all about protecting your data: keep the data at the center and keep all the other layers functioning to protect that central data layer, and if any one layer is attacked, have an alerting mechanism in place so you are notified and can mitigate the risk of a breach within that layer. Let us go through the layers one by one. Physical security is the first layer of defense; it protects the hardware and the infrastructure. In its cloud data centers Microsoft uses multiple layers of security, from the security gate at the entry point all the way to the data center rooms; getting access to those rooms and assets is very difficult, and since hundreds of customers run business-critical applications in a cloud data center, it is a highly secured environment. Identity and access: ensure that only authorized users are given access, control access to the infrastructure through change control, and whenever access is granted, audit it; sign-in events, which user logged in, and what actions they performed on the infrastructure have to be logged and audited, so you can account for every event that caused a change. The perimeter layer protects against network-based attacks: secure your network, configure alerting, use DDoS protection, and use firewalls to limit the ports that are exposed. At the network layer, limit network connectivity: allow communication only between the resources that actually need to talk to each other, deny by default, restrict inbound access, limit outbound access where appropriate, and implement secure connectivity to on-premises networks. If you are using a virtual network, you do not open all the ports; TCP and UDP ports range from 0 to 65535, and if everything were open a malicious attacker could get into the network and reach any of the resources, so you open only the ports that are actually required.
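As a tiny conceptual sketch of that "deny by default" idea at the network layer (this is illustrative logic, not a network security group API, and the port choices are just examples):

```python
# A conceptual sketch of "deny by default" at the network layer: only explicitly
# allowed ports get through; everything else is denied. Port choices are examples.
ALLOWED_INBOUND_PORTS = {443}          # e.g. HTTPS only, for a web tier

def evaluate_inbound(port):
    """Default-deny: a port is allowed only if it appears in the allow list."""
    return "allow" if port in ALLOWED_INBOUND_PORTS else "deny"

for port in (443, 22, 3389):
    print(f"inbound port {port}: {evaluate_inbound(port)}")
```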
Let's say you have extended your virtual network in the Azure cloud to your on-premises network, whether over a site-to-site or point-to-site VPN or over ExpressRoute. We always ensure that whatever data is transferred, from on-premises to Azure or from Azure to on-premises, is encrypted; we can use the IPsec protocol or another mechanism so that data in transit cannot be intercepted or tampered with. At the compute layer, make sure the resources are secure so that even a malware attack has no impact: use endpoint security, which is your antivirus software, update your systems diligently, keep them patched with the service packs and security updates released by the operating system vendor, and enforce strict access control on your virtual machines. At the application layer, integrate security into the application development life cycle itself: ensure applications are secure and free of vulnerabilities, store sensitive application secrets in a secure storage medium, and make security a design requirement for all application development; always keep security in mind while developing the application. Above all of these layers, and above the media used to process it, sits the data itself: you need to ensure the confidentiality, integrity, and availability of the data. Attackers typically target data stored in databases, data stored on disks inside virtual machines, data stored in SaaS applications such as Office 365, and data stored in cloud storage. So that was an overview of defense in depth: we discussed the physical security layer, identity and access, perimeter, network, compute, application, and data layers, and the central goal of all of these layers is to protect the data you have stored in your infrastructure.

In this topic we'll discuss Microsoft Defender for Cloud. Microsoft Defender for Cloud is a monitoring tool for security posture management and threat protection. With it we can monitor the native cloud, which is Azure, as well as on-premises resources, hybrid environments, and multicloud environments such as Google Cloud or AWS. Deployment of Defender for Cloud is easy because it is already integrated with Azure services; it is an Azure-native service, so many services are monitored and protected without the need to deploy an agent. If you have an on-premises, hybrid, or multicloud model, you can use Microsoft Defender for Cloud to monitor those services as well. For Azure-native protection, it can monitor and detect threats for Azure PaaS services such as Azure App Service, Azure SQL, Azure Storage accounts, and many more data services. For Azure data services, Defender for Cloud includes capabilities that automatically classify your data in Azure SQL, assess potential vulnerabilities across Azure SQL and Storage services, and give recommendations to mitigate those vulnerabilities. For networks, it helps limit brute-force attacks by limiting the ports exposed on virtual machines, using just-in-time access for virtual machines, allowing only the source IPs you want in your networks, and restricting ports and IP ranges. For your hybrid resources, to protect non-Azure
servers, you get customized threat intelligence and alerts for your specific environment. Defender for Cloud can also defend resources running on other clouds, protecting resources in AWS and Google Cloud. This uses an agentless plan that assesses the cloud service provider's resources, produces recommendations, and includes the results in a secure score. The assessments for other cloud platforms such as AWS or GCP can be compliance assessments or built-in standards specific to AWS or Google. We can also use Defender for Cloud to detect threats on container services, for example Amazon EKS Linux clusters hosting containers, and Microsoft Defender for Servers brings threat detection and advanced defenses to your Windows and Linux EC2 instances; these protections are specific to the other cloud platforms, AWS or Google. Microsoft Defender for Cloud fills three vital needs: continuously assess, secure, and defend. Continuously assess means knowing your security status and identifying any issues; secure means hardening the resources and services against standard benchmarks; defend means detecting and resolving threats to the resources, workloads, and services hosted in the Azure cloud. There is an image of the Microsoft Defender for Cloud portal that shows the secure score, the recommendation status, resource health, and the assessments it has run across all the services; if you have mitigated a threat, it shows as completed. The first two areas, continuously assess and secure, are focused on assessing, monitoring, and maintaining your environment, while Defender for Cloud also helps you defend your environment by providing security alerts and threat protection features. Security alerts: when Defender for Cloud detects a threat in any area of your environment, it generates a security alert that describes the details of the affected resources, suggests remediation steps, and can also trigger an application, such as an Azure Function or any other application you have configured to run in response to an alert. Advanced threat protection: Defender for Cloud provides advanced threat protection features for virtual machines, SQL databases, containers, web applications, and your network. In summary, Microsoft Defender for Cloud is a monitoring tool for security posture management and protection from threats; it can monitor cloud resources, on-premises resources, and multicloud environments, we can use the tools it provides to onboard resources, and it is integrated into Azure with its basic features available as a free service.

We are now in the third module, management and governance. Let us look at the topic of factors that can affect cost in Azure. What does it cost when you consume a resource in the Azure cloud? We choose Azure resources, consume them, and pay for that consumption; usage is usually calculated on a per-hour basis and billed monthly. We know Azure uses the OpEx model: we do not make an upfront capital payment, we pay only for the services we consume, whereas capital expenditure, the CapEx model, belongs to the on-premises environment. The services we consume can be compute, storage, and networking, and there are multiple factors that influence the cost: the resource type, consumption, maintenance, the geographic location, the subscription type, and the resources you consume from the Azure Marketplace.
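Before going through these factors one by one, here is a small back-of-the-envelope sketch (the hourly rate is a made-up placeholder, not a real Azure price) of how consumption-based billing and a reservation discount change the monthly cost of a virtual machine:

```python
# A back-of-the-envelope sketch of consumption-based vs reserved pricing.
# The hourly rate and discount are made-up placeholders, not real Azure prices.
HOURLY_RATE = 0.20          # hypothetical pay-as-you-go price per hour (USD)
HOURS_PER_MONTH = 730       # average hours in a month
RESERVATION_DISCOUNT = 0.72 # "up to 72%" discount for a 1- or 3-year commitment

pay_as_you_go = HOURLY_RATE * HOURS_PER_MONTH
reserved      = pay_as_you_go * (1 - RESERVATION_DISCOUNT)

print(f"Pay-as-you-go per month: ${pay_as_you_go:.2f}")
print(f"Reserved (72% off):      ${reserved:.2f}")
```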
When it comes to resource type, the region where you host the resources and the type of resources you are hosting will influence the cost on the Azure cloud. If you choose a virtual machine of the same type, say DS v3, in the US East region and the same DS v3 in Singapore, these are two separate regions and there will be a cost difference; the same virtual machine in two different regions will not cost the same. So the cost is influenced by the type of resources you select, the settings for those resources, and the region where you host them. Whenever you create any resource in the Azure cloud, that resource has a metered instance, and the meter is used to calculate your bill. In this example, if you are going to create a storage account, you need to specify what kind of storage account you are creating; if you are creating a storage account for blobs, you need to select the performance tier, access tier, and redundancy settings, and also the region where the storage account will be created, and based on those settings the cost will differ. In terms of access tier, the hot tier costs more than the cool tier. With virtual machines, the factors that influence the cost are the license for the operating system, the processors and number of cores per processor, the disk you attach, whether it is a standard HDD, standard SSD, or premium SSD, the network interface for the virtual machine, and again the region: provisioning the same virtual machine type in a different region may differ in cost. When you start creating a virtual machine, you will be prompted to enter some details: you choose the resource group, enter a name for the virtual machine, select the region, choose the availability options, whether availability zones or availability sets, choose the security type, select the image, whether a Windows or Linux operating system, and also select the size of the virtual machine. The size you select, based on the CPU and memory, will influence the cost of your virtual machine, and the same virtual machine type in US East and in Singapore might have different costs altogether. In a consumption-based plan you pay more if you use more resources; it is a straightforward pricing mechanism that allows for maximum flexibility. Azure also offers the ability to reserve resources; we can reserve many services including compute, database, and storage. When we reserve, we commit to Microsoft that we will be requiring these resources, and since we commit we get up to 72% discount; we can commit typically for a 1-year or 3-year period. Maintaining the resources we have been using is also crucial. Say I want to delete a virtual machine I have been using for several months; there are related resources for this virtual machine, a disk attached, a network interface card attached, and also a public IP address attached, and whenever we delete the virtual machine we also need to ensure we delete all these related resources properly, otherwise they will keep adding to the cost.
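Here is a minimal sketch of that cleanup with the Azure CLI; the resource group and resource names below are placeholders, and in a real environment the related resources may have different names.

```bash
# Delete the VM first, then the orphaned resources that were attached to it.
# "rg-prod", "vm01", "vm01-osdisk", "vm01-nic" and "vm01-pip" are placeholder names.
az vm delete --resource-group rg-prod --name vm01 --yes
az disk delete --resource-group rg-prod --name vm01-osdisk --yes
az network nic delete --resource-group rg-prod --name vm01-nic
az network public-ip delete --resource-group rg-prod --name vm01-pip

# If the whole workload lives in one resource group, deleting the group removes everything in it.
az group delete --name rg-prod --yes
```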
So if we are cleaning up a virtual machine, we also need to ensure that all the resources related to it are deleted as well. Basically, to manage the resources we have been using, we should use resource groups, proper naming of resource groups and resources, and ensure the resources are properly tagged. Whenever we deploy any services we always deploy them in a specific region, and we usually ensure the resources are deployed near to our location. Each region has its own infrastructure cost, like power supply cost, labor cost, and government tax; that is the reason the same Azure services have different costs in different regions. Let us see how network traffic influences the cost. Bandwidth refers to the data moving in and out of the Azure data center. Any data coming into the Azure data center, which is inbound traffic or ingress, is always free; data in is always free in Azure. Data out, which is egress or outbound traffic, is charged, and the cost of the data out depends on the zone from which the data is leaving. Pricing for network traffic therefore depends on ingress, egress, and the amount of data transferred: ingress is inbound traffic to Azure, egress is outbound traffic from Azure, and the data is the actual amount you transfer; if it is 1 GB you are charged for 1 GB, if it is 10 GB you are charged for 10 GB, and if you do 1 TB of transfer then you pay for 1 TB of data transfer. Let us see the subscription type. Whenever we sign up for the Azure cloud we get $200 worth of credit; using this $200 we can create resources until we consume the $200 or until 30 days pass, and there are also some services which are free for 12 months that we can keep using during that period. Let us see what Azure Marketplace is. Azure Marketplace lets you purchase Azure-based solutions and services from third-party vendors. Let us say you want an Oracle database in a customized, configured Linux virtual machine according to your requirements; this is a customized solution, and you can refer to Azure Marketplace and use it from the third-party vendors. Or say you want customized firewall solutions from Barracuda or Cisco; you can get them from the Azure Marketplace. It is basically like the Google Play store or Apple App Store: a marketplace where many services and products targeting specific needs are offered, and we can choose from the marketplace and use them for our requirements. The billing structure for the solutions we use from the marketplace is set by the vendor. So in summary, we have seen the factors which influence the cost of Azure resources: resource type, consumption, maintenance, geography, subscription type, and Azure Marketplace. In this topic we'll see the pricing and TCO calculators, and we will also compare them. These two calculators are used for two different purposes; both are accessible from the internet, and both calculators can be used
to estimate the expenses that will be incurred in the Azure cloud. The pricing calculator provides the estimated cost of provisioning resources in Azure; you can use it to check how much you will spend when you deploy the services you have selected on the Azure cloud. It estimates the cost of provisioned resources in Azure, so check the price of compute, storage, and network costs using the pricing calculator beforehand if you are designing any kind of solution; we can also use it to estimate the storage type, access tier, and redundancy for a storage account we are going to create. The pricing calculator is a free tool accessible from the internet: select a popular category like compute, networking, or storage and estimate the cost for each of the resources you are going to deploy onto the Azure cloud, and there are also example scenarios; if you click on example scenarios, multiple solutions are given and you can estimate the cost for those resources as well. Let us see what the TCO calculator is. The TCO calculator is used to compare the cost between on-prem and the Azure cloud; the pricing calculator estimates the cost of Azure resources, whereas the TCO calculator compares the cost between your on-prem environment and the Azure cloud. With the TCO calculator you enter the infrastructure configuration you are running on-prem, which might include the servers, databases, storage, and also the outbound traffic; basically you are collecting all the information related to your on-prem, for example some number of servers, some number of databases, and the amount of data traffic, entering everything, and comparing it with the Azure cloud, and you can also add the power cost and IT labor cost as assumptions in the comparison. There is also an option to do a bulk upload, where you upload the infrastructure configuration, the servers, databases, storage, data traffic, power, and IT labor cost, using an Excel or CSV file, and compare the on-prem environment with the Azure cloud. If you don't want to do the bulk upload and want to enter the servers and their configuration one by one, you can name the configuration, which is the workload, select the operating system, the type of environment, whether it is a physical server or virtual machine, how many servers you have on-prem, the CPUs and cores per CPU, and the RAM per physical server or virtual machine, and then compare with the Azure cloud. So in the exam, if you are asked about the tool used to compare on-prem infrastructure and calculate IT labor and power cost, it will be the TCO calculator; if it is only about resources on the Azure cloud, then it will be the pricing calculator. Now we know the difference between the pricing calculator and the total cost of ownership calculator. In this topic we'll discuss tags. So what are tags? Basically a tag is a name-value key pair, and it is very important whenever we are dealing with resources on the Azure cloud: whenever we deploy resources in the Azure cloud we need to ensure those resources are properly tagged. Tags are metadata that provide extra information about the resources we deploy, and they are very useful for resource management, cost
management and optimization, operations management, security, governance and regulatory compliance, and workload optimization. Let us say I have created a virtual machine in East US. I can name this virtual machine based on the operating system, the region where it has been deployed, and the machine number itself; for example I can name it prod-eastus-vm01, and I can even encode the operating system in the name, say rhel for a Red Hat Linux machine, win for Windows, sles for SUSE, and ubu for Ubuntu, so the virtual machines we deploy are named in a way that identifies them. But what if I want additional attributes to identify the resources we are deploying? That is where tags become very useful, because they act as metadata. I can add a tag whose name is IP address and whose value is the actual address, which is the name-value key pair; instead of encoding the region in the name I can add a tag location with value East US; I can add environment with value production; I can specify the operating system type in full, for example Red Hat Enterprise Linux or Windows Server 2019; I can specify the status, whether it is live or in production; and I can keep defining tags, up to 50 name-value pairs per resource. I can also add an owner tag, for example the Linux team for a Linux machine, and other useful information like the application running on this virtual machine, say an inventory application. If somebody wants to pull up a monthly report based on any of these criteria, for example to find the owner of the Linux machines or to get the billing only for the Red Hat Linux machines, they can pull that up using the tags, which is why this is so important. We can also specify the criticality, very high, low, or medium, which is useful for governance and regulatory compliance, and we can do workload optimization and automation using tags. The maximum for any given resource is 50 tags per resource. How do we manage resource tags? We can manage tags for Azure resources using PowerShell, the Azure CLI, ARM templates, the API, and of course the Azure portal; we can create, modify, or update tags using any of these tools. We can also use Azure Policy to enforce tagging rules: say I want to deploy a resource, a virtual machine or a web application, to the Azure cloud, and an Azure policy is defined to enforce tagging; it might mandate three or five tags as per governance and regulatory compliance, and we have to adhere to those standards and tag the resources, otherwise the resources will not be deployed to the Azure cloud.
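Here is a minimal sketch of how such tags could be applied with the Azure CLI; the resource group, VM name, image alias, and tag values are illustrative placeholders rather than values from the course.

```bash
# Create a VM with several name=value tags at creation time (placeholder names and values).
az vm create \
  --resource-group rg-prod \
  --name prod-eastus-vm01 \
  --image Ubuntu2204 \
  --generate-ssh-keys \
  --tags environment=production location=eastus os=ubuntu owner=linux-team app=inventory criticality=high

# Add or update tags on an existing resource without replacing the ones already there.
az resource tag \
  --resource-group rg-prod \
  --name prod-eastus-vm01 \
  --resource-type "Microsoft.Compute/virtualMachines" \
  --tags status=live owner=linux-team \
  --is-incremental
```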
Also, resources don't inherit tags. Say I have defined some ten tags on a virtual machine, like location, environment, operating system, status, owner, application, and criticality; those tags are not going to be inherited by the public IP address, the storage, the network security group, or the virtual network. Whatever tags you define are defined on a per-resource basis, and you can define up to 50 tags per resource. Let us see some examples of tagging resources. We can use the application name; as I said, a tag is a name and a value, so we have the name and we provide the value. Likewise the IP address: the name is IP address and the value is the actual address. The region you have deployed to: if it is East US, the value is East US, if it is Singapore, the location is Singapore, if it is South India, you specify that region. The operating system and its version can also be tagged, again as name and value. We can define the cost center and the owner of the resource (a Linux virtual machine will be handled by the Linux team, a Windows virtual machine by the Windows team, and SQL Server, Active Directory, or backup resources by different teams), and the environment, whether it is used for production, development, or test. All of these we can define within name-value key pairs, within the 50-tags-per-resource limit. We can also define the impact, say mission critical, high impact, or low impact; resources tagged as mission critical will be treated as mission critical, and resources without an impact tag will be considered non-critical. So this was an overview of tags. With tags it is very easy to identify which resources we have deployed, where exactly we deployed them, and what they are used for, and we can have a proper lifecycle policy for these resources.
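And a quick sketch of pulling resources by tag, for the kind of per-team or per-environment report mentioned above; the tag names and values here are placeholders.

```bash
# List every resource carrying a given tag, in a readable table.
az resource list --tag owner=linux-team --output table

# Narrow the output with a JMESPath query, e.g. only name, location and type.
az resource list --tag environment=production \
  --query "[].{name:name, location:location, type:type}" --output table
```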
In this topic we'll discuss Microsoft Purview. What is Microsoft Purview? Microsoft Purview is a separate service used for data governance; Purview is used for data governance, risk, and compliance solutions. Basically, let's say the organization is very keen to know and understand what kind of data it is dealing with: the data can be in the Azure cloud, in another cloud environment like AWS or Google, on-prem, and also in software-as-a-service offerings like Office 365 or any other SaaS provider. The data is spread across multiple sources, and we don't know exactly what kind of data we are dealing with, so the organization wants to understand what data it has in its infrastructure and to classify it, whether it is highly critical, medium critical, or low critical; only if they understand what data they have in their infrastructure will they be able to classify it. So we have data scattered across multiple sources, and we want to analyze what kind of data we have; for that, the first step is data discovery. Discovering the data basically collects the metadata of the data, and metadata is nothing but information about the data itself. Then we classify the data, whether it is highly critical, medium critical, low critical, or sensitive. We also get an end-to-end map of the data, which is basically where the data resides, whether it is on-prem, in Azure, or with another cloud service provider, so we have a map and a view of the data, and once we have that view we can feed it into other tools like Power BI or insights. Let me explain with an example why classification of data is so important. Let us say we have a company dealing with tax filing, and this company runs a website called tax.com. This website holds very sensitive information about customers: identity, credit card information, PAN card information, bank accounts, phone numbers, addresses, all of it. Now unfortunately, one day a hacker randomly visiting web pages finds your website, tax.com, and this particular website is vulnerable; when the hacker uses a hacking tool and crawls the website, he can list the directories, directory 01, directory 02, and all the files inside, and these directories or folders contain all the sensitive information. Before a hacker learns what data you have, you can use Microsoft Purview to understand your data, classify it, and, if the data is sensitive, critical, or highly critical, harden the security of your infrastructure. So now you understand how critical the data is; let us understand how Microsoft Purview works. There can be multiple sources from which the data will be discovered: the Azure cloud, on-prem, multicloud like AWS or Google, and software-as-a-service providers like Office 365, Git, or any other SaaS provider, and on all this infrastructure we have some kind of data. The second thing is the data itself: on all these resources or infrastructure where the data resides, there can be multiple data types, user data, database data, HR data, marketing data, manufacturing data; a pharmaceutical company has chemical compositions and pharma-related data, the banking sector has financial data, an insurer has customer information about policies and illnesses, and for a hospital you know what kind of data we
are dealing with. So these are the data types residing in your environment, and we need to discover the data. What Microsoft Purview does is discover the data using connectors; there are default connectors used by Microsoft Purview to discover the data, and using a connector it collects the metadata information. Metadata is nothing but information about the data itself; we don't copy any data from the source, we just collect the metadata to understand what kind of data you have in your infrastructure. Once the metadata is collected, we map the data; mapping is understanding the data end to end, where exactly the data resides. If it is in Azure, where exactly: a storage account, SQL, or Cosmos DB? All that information is in your metadata, and because it gives you the end-to-end map, the end-to-end flow of the data, it has to tell you the source of the data, which can be a storage account, SQL Server, Cosmos DB, or even a virtual machine. So Microsoft Purview provides end-to-end information about the data, and once you have that end-to-end map you can query or search the data; this is how Microsoft Purview discovers and tracks the data from different sources. Let us go back to the notes. Microsoft 365 features as a core component of Microsoft Purview; Microsoft 365 has multiple services like Teams, OneDrive, Exchange, and the Office products as well. Microsoft Purview helps your organization by protecting sensitive data across clouds, applications, and devices; we can identify data risks and manage regulatory compliance requirements, and we can get started with regulatory compliance as well using Microsoft Purview. Microsoft Purview provides a unified view of the entire data estate, including data classification and end-to-end lineage; we can identify and classify sensitive data as we discover it from different sources, create a secure environment for data consumers to find valuable data, generate insights about how our data is stored, where exactly it is stored, and how it is being used, and also manage access to the data in our estate securely and at scale. So this was an overview of Microsoft Purview. In this topic we'll discuss Azure Policy. What is Azure Policy? Azure Policy is one of the important services used for rules enforcement; we can audit resources to see whether they are compliant or non-compliant with our organization's standards. Where can we apply a policy? We can apply a policy at the management group level, at the subscription level, at the resource group level, and also at the resource level. What does a policy contain? A policy contains rules; it can be a single rule or multiple rules, and you create the policy and enable it, either as an individual policy or as a group of related policies. When you enable a group of related policies, that group will be
called an initiative; initiatives are nothing but groups of related policies. The policy is a JSON template. Say I want to define a policy stating that if I create a resource group, I can create it only in Central India, West India, and South India; this is my policy for resource group creation. If I want to enforce this policy, I can enforce it at the management group level or the subscription level; there is no need to enforce it at the resource group level, because the policy itself is about resource group creation. Once I enforce it at the management group or subscription level, whenever I create any resource group from that subscription or management group, I can choose only Central India, West India, or South India; I will not be able to select any other region. You might ask: this is fine, but it is only one policy; what if I need multiple policies within the same template? For that we have initiatives, which are groups of related policies. For example, if I want to define something related to storage accounts and I have a set of 30 or 35 policies for storage accounts, all these policies are combined together and enforced at the management group, subscription, or resource group level, because the storage account will be created under the resource group; we take all 35 policies with their rules and apply them at that scope. For the storage account policy I could mandate, for example, that GRS must be enabled whenever a storage account is created, that the HTTPS protocol must be used and HTTP disabled, that the maximum blob size is 4 TB, and so on. Once the policy is defined, whenever you try to create a storage account with HTTP enabled instead of HTTPS, you will not be able to create that storage account; once the policy is applied at any of these levels, management group, subscription, or resource group, it comes into effect. You might then ask: I have an existing storage account, storage account one, which has the HTTP protocol open; you said HTTP must be disabled, so what about my existing storage account? The policy, when enforced, applies only to new storage accounts; it does not block the existing one. That existing storage account, which has the insecure HTTP protocol open, will simply be treated as non-compliant, and you will get a report in Azure Policy saying this storage account is non-compliant. There are multiple built-in initiatives available inside Azure Policy; we can choose the initiatives specific to our services and enforce them. The policy evaluates the resources you have deployed in your environment, and if a resource is non-compliant it will be reported in the Azure Policy compliance section.
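As a minimal sketch of the allowed-locations example above expressed as a JSON policy rule and assigned with the Azure CLI; the policy name and the subscription ID are placeholders, and in practice the built-in "Allowed locations" definition could be used instead of a custom one.

```bash
# Define a custom policy that denies resources created outside the three Indian regions.
az policy definition create \
  --name allowed-locations-india \
  --display-name "Allowed locations (India only)" \
  --mode All \
  --rules '{
    "if": {
      "not": {
        "field": "location",
        "in": ["centralindia", "westindia", "southindia"]
      }
    },
    "then": { "effect": "deny" }
  }'

# Assign it at the subscription scope (placeholder subscription ID).
az policy assignment create \
  --name allowed-locations-india \
  --policy allowed-locations-india \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000"
```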
Azure Policy can also prevent non-compliant resources from being created. Let us say I have virtual machines being created inside a resource group called the production resource group; there can be hundreds or thousands of virtual machines created inside it. If I define a policy saying that only Windows Server 2016 and Windows Server 2019 Datacenter editions may be deployed, then once I apply this policy to the resource group, whatever virtual machine you create, if the selected image is not one of those operating systems, the virtual machine will not be created, because I want to avoid operating system versions that have vulnerabilities or are outdated and should not be used in my production environment. I can set a different policy on the development resource group allowing Windows Server 2012, 2016, and 2019 editions. Azure Policy comes with built-in policy and initiative definitions; an initiative is nothing but a group of policies, and there are multiple built-in initiatives available for many services like storage, networking, compute, Security Center, and even monitoring. For example, if you define a policy saying that only a particular virtual machine size may be used in your environment, then when that policy is enforced and you try to create a new virtual machine or resize an existing one, the policy will be applied. In some cases we can use Azure Policy to automatically remediate non-compliant resources, and I also have the liberty not to remediate; I can allow or disallow remediation of non-compliant resources. In one example, any resource created in a resource group has to be tagged with the AppName tag and the value should be "special orders"; if this tag is missing, the policy will apply the AppName tag with the value "special orders" automatically. Azure Policy also integrates with DevOps, applying policies in the pre-deployment and post-deployment phases of your continuous integration and delivery pipelines. So what are Azure Policy initiatives? As I said, a policy is a JSON template, and an individual policy might be "monitor unencrypted SQL databases in Security Center"; another policy says "monitor operating system vulnerabilities in Security Center"; another says "monitor missing endpoint protection in Security Center". When you combine these three policies together, you can enforce them as one at different levels of the hierarchy, at the management group, the subscription, or the resource group level. When you combine multiple policies into one, it is called an initiative; an initiative is nothing but a group of policies.
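A sketch of grouping existing policy definitions into an initiative (a policy set definition) from the CLI; the definition IDs below are placeholder GUIDs, not the real built-in Security Center definition IDs, and the resource group name is made up.

```bash
# Group related policy definitions into one initiative (policy set definition).
# The two policyDefinitionId values are placeholders for real definition IDs.
az policy set-definition create \
  --name security-center-baseline \
  --display-name "Security Center monitoring baseline" \
  --definitions '[
    { "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/11111111-1111-1111-1111-111111111111" },
    { "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/22222222-2222-2222-2222-222222222222" }
  ]'

# Assign the initiative at a resource group scope, the same way as a single policy.
az policy assignment create \
  --name security-center-baseline \
  --policy-set-definition security-center-baseline \
  --resource-group rg-prod
```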
In this particular case, three individual policies have been combined into one initiative, and that initiative can be enforced at the management group, subscription, or resource group level. So this was an overview of Azure Policy. In this topic we'll discuss resource locks. A resource lock prevents resources from being accidentally deleted or changed. With the right level of access through Azure RBAC, which is nothing but role-based access control, we have the privileges to delete resources, and we could go on deleting critical cloud resources. Say this is my storage account and I am its owner; since I have owner privileges from Azure RBAC, and let us say this storage account holds critical data, for example streaming data for a TV streaming service such as Airtel TV with all the channel-related data in it, as the owner I could simply delete it. Once you put a lock on this storage account, even with owner privileges you will not be able to delete it. So the resource lock prevents resources from being deleted or updated, depending on the type of lock you select, and we can apply the lock on individual resources, at the resource group level, or even at the subscription level; whatever lock you apply is inherited downwards from the subscription to the resource groups and all the resources. If I place a lock at the subscription level, it applies to the entire subscription, the resource groups, and whatever resources I create in that hierarchy; if I place the lock on a resource group, every resource created under that resource group is locked and I cannot delete any of them; if I place the lock on a single resource, only that resource is protected, and any other resources under the resource group without a lock can still be deleted. There are two types of locks: delete, where we can modify the resource but not delete it, and read-only, where we can only read the resource and cannot make any modification or update. Let us see how we can manage resource locks. We can manage them using the Azure portal, PowerShell, the Azure CLI, and also Cloud Shell; in the Azure portal, if you select the storage account there is a Locks section where you can see the two lock types, read-only and delete.
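Here is a minimal sketch of creating and removing locks with the Azure CLI; the storage account and resource group names are placeholders.

```bash
# Put a delete lock on one storage account (it can still be modified, but not deleted).
az lock create \
  --name lock-storage001 \
  --lock-type CanNotDelete \
  --resource-group rg-prod \
  --resource storage001 \
  --resource-type "Microsoft.Storage/storageAccounts"

# Put a read-only lock on the whole resource group.
az lock create --name lock-rg-prod --lock-type ReadOnly --resource-group rg-prod

# The lock has to be removed before a delete will succeed.
az lock delete --name lock-storage001 --resource-group rg-prod \
  --resource storage001 --resource-type "Microsoft.Storage/storageAccounts"
```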
Let us say a storage account, storage001, has been locked and I want to delete it; what is the process? Since it is locked, you first have to remove the lock, and only if you also have the correct RBAC privileges or role assigned, like storage account owner, will you be able to delete the storage account. Even if the storage account is unlocked, if you don't have the right privileges, say you have only read permission, then you can only read it; you will not be able to delete it, because RBAC still comes into the picture. Let us take an example of why the resource lock is so useful and important in a production environment. In the Azure cloud it is very easy to create a resource and just as easy to delete it, and to prevent resources from being deleted we can simply use a lock. Say we have a production resource group, and this resource group has virtual machines, five virtual networks, and 50 network security groups with all their inbound and outbound rules defined, somewhere around 200 private IPs and five public IPs, 10 web applications, 150 managed disks because all these virtual machines need managed disks, and 100 virtual NIC cards. This resource group holds resources which are very critical to the business, and I am a user who owns this resource group with all the rights. If I don't place a lock at the resource group level and by mistake or accident I delete this production resource group, every resource created under it will be deleted. This is why the resource lock is very useful in a production environment. So this was an overview of resource locks and why they are important when dealing with Azure resources. Thank you for watching the video; if you like the content, please subscribe and share the video. In this topic we'll discuss the Microsoft Service Trust Portal. The Service Trust Portal is a portal that provides access to various content, tools, and other resources related to Microsoft's security, privacy, and compliance practices. There are three main components, security, privacy, and compliance, based on which many customers decide to host their services on the Azure cloud. For every country there are certain laws, and since the Azure cloud is a public cloud service, not everyone knows the inner workings of the Azure infrastructure; we don't know what exactly is inside an Azure data center, so we deploy our services onto the Microsoft cloud based on trust. Because of that trust, Microsoft has to provide the necessary documents related to security, privacy, and compliance. The Microsoft Service Trust Portal is not a service you deploy onto the Azure cloud; it is just a portal where all these artifacts, documents, and audited reports are stored. It is like an organization's own SharePoint portal, where they keep all the documents related to their infrastructure, their design documents, their governance model, and their
organization hierarchy; everything is stored in the SharePoint portal. Similarly, since Microsoft Azure is a public cloud service, Microsoft provides the documents and reports related to security, privacy, and compliance on a portal accessible by customers. All this information can be accessed via servicetrust.microsoft.com; through this portal we can access and download documents related to certifications, regulations, and standards. We can see which certifications Microsoft has been awarded, including certifications and standards specific to a country, like IRAP, which is specific to Australia, MTCS, which is specific to Singapore, ENS for Spain, PCI DSS for the payment card industry, GDPR for the European regulations, and the standard ISO/IEC compliance documents; we can refer to all these documents through the portal. We can also access reports, white papers, and artifacts related to artificial intelligence and to BCP and DR. We can even do penetration testing on the Azure cloud if Microsoft gives approval; every penetration test on the Azure cloud requires approval from Microsoft. We also get documents related to privacy and data protection, frequently asked questions, and white papers that we can download from the Service Trust Portal, and there are industry-specific resources we can download and refer to, with information related to financial services, healthcare and life sciences, media and entertainment, the United States government, and regional resources. All these documents, artifacts, and resources are available under servicetrust.microsoft.com. Let us open the portal itself and take a look.
On the homepage of servicetrust.microsoft.com I can see all the information related to certifications, regulations, and standards, and also reports, white papers, and artifacts related to BCP and DR, penetration test and security assessments, and privacy and data protection. If you want to understand more about the Microsoft Azure cloud, you are a new customer, and you want to deploy your services or move your on-prem environment to the Azure cloud, you can come to this portal, refer to the industry-specific documents, see how Microsoft operates their cloud infrastructure and what security, compliance, and privacy standards they have, and then decide how you will deploy your resources onto the Microsoft cloud. There is an "All documents" section where you can refer to the documents related to security, privacy, and compliance. Say I want to download one of the frequently asked questions and white papers: I can come here, refer to the documents related to data protection and security, see when each was last updated, and download it; I need to sign in to download the document, so I'll cancel that. If I want to download all the documents, I can select them and download, or save them to my library; if saved to the library, I can go to my library and download the documents saved there. I'll cancel this and go back to "All documents". I can also search by date, for example the documents published in the last 3 months, and filter by cloud service: Azure, Dynamics 365, GitHub, Microsoft general, Windows, and so on; all the Microsoft platforms are listed here. Since I want to host services on Azure, I select Azure, filter for the past 3 months, and I can see there are some 12 documents published in the last 3 months. For ISO, if I click through the same way, I can see there is an update from July 23rd, 2024, and if I download that document I can see the new updates for the ISO 9001 certificate. So in summary, Microsoft stores all their documents related to security, privacy, and compliance for their customers in the Service Trust Portal. In this topic we'll discuss the tools for managing the Azure cloud. There are multiple tools available for us to deploy resources onto the Azure cloud. We can use the portal itself; there is also a command-line tool called Azure PowerShell, which is basically Windows PowerShell with the Azure modules installed (if I want to run any Azure commands I have to install the Azure modules); and we also have the Azure command-line interface, which is called the Azure CLI. Apart from these we also have the SDKs, and we can integrate with the APIs as well. If I want to create any resource on the Azure cloud, I can use any of these tools. So let us look into the portal. What is the Azure portal? The Azure portal is a GUI-based tool; we can build, manage, and monitor everything from simple web apps to complex deployments.
We can create custom dashboards for an organized view of resources, and configure accessibility options for an optimal experience. The Azure portal is designed for resiliency and high availability; it is available in every Azure data center, so even if there is a data center failure the portal remains available, and when a user connects to the Azure portal he connects from the nearest location to avoid network slowdowns. The portal is updated continuously and requires no downtime for maintenance activities. Let us see what Azure Cloud Shell is. Azure Cloud Shell is a browser-based shell; you can access it from the portal itself. When you log into the Azure portal there is an option to open Cloud Shell, and you can use either PowerShell or the Azure CLI. Whichever you use, the commands go via the REST API to ARM, which is nothing but Azure Resource Manager; whatever commands you issue via PowerShell or the Azure CLI are converted, sent via the REST API protocol to ARM, and ARM is the one which deploys your resources based on the commands you issued. What are the features of Azure Cloud Shell? Since it is browser based, you can access it directly from the portal; no local installation is required, and since you have already authenticated to the portal, no additional authentication is required. You choose either PowerShell or the Azure CLI based on your experience: if you come from a Windows background you might use PowerShell, and if you come from a Linux background you can use the Azure CLI. Let us see what Azure PowerShell is. If we are running a Windows operating system on a server, we know that PowerShell comes built in as a default command-line utility, and regular commands like ping, cd, and dir are built into the Windows operating system. If I want to run any Azure-related commands, I have to install the Azure module; this module contains cmdlets, which are sets of commands grouped into modules, and once you install the Azure module into PowerShell you can run all the commands required to deploy resources onto the Azure cloud. Whenever you use Azure PowerShell, it calls the REST API to perform management tasks on the Azure cloud; you can create, update, and delete resources, and you can also deploy an entire infrastructure using PowerShell scripts. You can run single commands, but deploying an entire infrastructure that way takes time, so you can combine all your commands into a
script and deploy the entire infrastructure using that script. If you have already created PowerShell scripts and deployed resources on the Azure cloud, you can reuse the same process: for example, if you created the scripts for the development environment and want to repeat the same resources for production, you can use the same script and repeat the process of creating the infrastructure. Since PowerShell is a cross-platform utility, you can install it on Windows, Linux, and Mac as well. Let us see what the Azure CLI is. The Azure CLI is similar to Azure PowerShell, but the syntax of the commands is different: with Azure PowerShell we use PowerShell commands, and with the Azure CLI we use bash-style commands; that is why Windows users tend to use PowerShell while Linux users, who are familiar with bash, tend to use the Azure CLI. I have already logged into the Azure portal; to access the Cloud Shell utility I open it from the toolbar, and I can use either PowerShell or the Azure CLI. The PowerShell command line takes a moment to connect and give a prompt, or I can switch to bash, which uses the Azure CLI. Once we have the prompt, if I want to check how many resource groups are available in my subscription I can use the corresponding command; notice that the command auto-completes, and all I need to do is press the right arrow key to accept the suggested syntax, then press Enter. Since I have not created any virtual machines, this resource group is empty; there are no resources apart from the resource group itself. Switching to the Azure CLI, the commands start with az: az vm list shows there are no virtual machines, and az group list in table format shows the one resource group available in this subscription. I have also installed the Azure PowerShell module and the Azure CLI on my laptop. If I want to use the tools locally, I can check the installed Azure CLI version, and the PowerShell version command auto-completes the same way and tells me which version is installed on the laptop. If I run az vm list locally it fails at first, because although I have logged into the portal, I have not logged in through the Azure CLI; I need to run az login, which opens the browser, and then it says the login is successful.
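To summarize the commands used in this walkthrough, here is roughly the same sequence as a small script; the resource group name and location are placeholders rather than the exact ones used in the demo, and the table output formatting is optional.

```bash
# Log in (opens a browser for authentication), then look around the subscription.
az login

# List resource groups and virtual machines in table format.
az group list --output table
az vm list --output table

# Create a new resource group, then confirm it exists.
az group create --name demo-rg-002 --location eastus
az group list --output table
```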
Going back to Visual Studio Code, if I run the same commands we ran in Cloud Shell, az vm list shows no virtual machines, and az group list in table format shows the resource groups; once I have logged in with az login, I can run all the commands we ran in Cloud Shell. If I am using Azure PowerShell instead of az login, I have to use the connect command to sign in to the Azure account. Let us say I create a new resource group; since demo RG 001 is already created, I will create another one. I have logged in using the Azure CLI but not yet through Azure PowerShell, so I run the connect command; I misspelled "connect" at first, and then it says authentication is already completed because I had already logged in. Now when I run the command to create the resource group, it creates it, and listing the resource groups shows the new one we just created. So this was an overview of the Cloud Shell utility, PowerShell, and the Azure CLI. In this topic we'll discuss Azure Arc. What is Azure Arc? Azure Arc is a centralized management service. Let us say we have resources spread across multiple infrastructures: some in the Azure cloud, some in our on-prem environment, and some in AWS or Google Cloud. If I want a single view of all the resources deployed across these different infrastructures, a single unified view from which I can manage, monitor, and deploy resources, Azure Arc is the service that provides that centralized management. To provision resources inside the Azure cloud we already have tools to provision, configure, and monitor, namely the Azure portal and the CLI tools, but if I want to manage resources across multiple infrastructures I need Azure Arc. With Azure Arc we get simplified governance and management and a consistent multicloud and on-prem management platform. Let us understand Azure Arc with a scenario: I have resources spread across the Azure cloud, the on-prem environment, and Google Cloud, and I want a single point of view, a single unified portal, to manage all of them; for that I can use Azure Arc. Currently not all resource types are supported by Azure Arc, so check the latest Microsoft documentation for the resources that are supported. To manage these resources with Azure Arc, if I have a Windows or Linux server I need to install the Arc agent on each and every server, and once the agent is installed I onboard these resources into Azure Arc. Once onboarded, we can manage those resources, and for the user who is already managing resources inside the Azure cloud,
it doesn't matter whether the resources are on-prem or in AWS or GCP; all the resources are available in Azure Arc itself, so you can seamlessly manage resources deployed across multiple infrastructures. What Azure Arc does, basically, is provide a control plane; with the help of the control plane we can manage resources spread across multiple infrastructures. It also helps if you have applications or resources deployed in an on-prem environment or another cloud platform: you can integrate those services and use Arc to manage the integrated services deployed in different environments. Let us see what we can do using Azure Arc. Using Azure Arc we can manage the entire environment together by projecting our existing non-Azure resources into ARM, the Azure Resource Manager; non-Azure resources are the resources on-prem and on other cloud platforms. We can manage multicloud and hybrid virtual machines, including virtual machines deployed in the on-prem environment; we can even onboard vCenter itself, so the virtual machines managed by vCenter, across many data centers and clusters, can all be onboarded using Azure Arc. We can manage Kubernetes clusters and databases as if they are running in the Azure cloud: if I am running a Kubernetes cluster in the on-prem environment and another Kubernetes cluster in the Azure cloud, and I also have an Azure Container Registry from which I want to pull or push images from both clusters, then using Azure Arc I can easily manage and monitor these two clusters deployed in different environments and integrate both of them with the Azure Container Registry to access the stored images. We can also leverage ITOps with DevOps and CI/CD, and we can configure custom locations as an abstraction layer on top of Azure Arc-enabled Kubernetes clusters and cluster extensions. Let us see which resource types are currently supported by Azure Arc: Windows and Linux servers, Kubernetes clusters, Azure data services, SQL Server, and virtual machines managed using vCenter and System Center Virtual Machine Manager, that is, VMware and Hyper-V virtual machines; as per the documentation, up to around 9,500 virtual machines can be onboarded into Azure Arc. So this was an overview of Azure Arc: in summary, Azure Arc is a centralized management service with which we can manage resources deployed across multiple infrastructures or environments.
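A rough sketch of what onboarding and viewing Arc-enabled servers can look like from the command line; this assumes the Azure Connected Machine agent (azcmagent) is installed on the on-prem server and the connectedmachine CLI extension is available, and every name and ID below is a placeholder.

```bash
# On the on-prem or other-cloud server, after installing the Connected Machine agent,
# connect it to Azure Arc (placeholder subscription, tenant, resource group and region).
azcmagent connect \
  --subscription-id "00000000-0000-0000-0000-000000000000" \
  --tenant-id "11111111-1111-1111-1111-111111111111" \
  --resource-group rg-arc \
  --location eastus

# Back in the Azure CLI, list the Arc-enabled machines like any other resources.
az connectedmachine list --resource-group rg-arc --output table
```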
In this topic we will discuss Azure Resource Manager and ARM templates. ARM, or Azure Resource Manager, is the deployment and management service for Azure. Whenever we create, update or delete resources, whether the request comes via the portal, the Azure CLI or PowerShell, all of these requests are sent via the API through Azure Resource Manager, and the resources are provisioned inside the cloud. Let us see the benefits of Azure Resource Manager. We can manage the infrastructure through templates: a JSON template is used to deploy resources, and we can deploy, manage and monitor resources as a group. For example, if I want to create a three-tier application with a web tier, an app tier and a database tier, I can combine all the resources required for the three-tier application into a group, describe them in a file in JSON format, and deploy them via the ARM template. Once I have deployed the resources for my development environment, I can repeat the same process for my production environment: I can reuse the template with confidence, because I have already used that ARM template to create all the resources for the three-tier application, so I know the same template will create the resources for production as well. We can define the dependencies between resources so that they are deployed in the correct order. I can also apply RBAC, which is role-based access control: since I am deploying resources onto the Azure cloud, I can only create the resources if I have permission to create them; even if I have written the template, without that permission the deployment will not succeed. We can define tags and apply them to the resources we are going to create, and these tags are useful whenever we want to view the cost for a group of resources. The ARM template itself is infrastructure as code, written in JSON format: infrastructure as code means we manage the entire infrastructure using code, so we write the code in JSON format and deploy the resources onto the Azure cloud. If I want to run commands one by one I can use the management tools, the Azure CLI and Azure PowerShell, but these tools work differently: they create the resources in a serial fashion, one after another, so I need to define the resources in the correct order myself. If I want to create a virtual machine and there is no virtual network yet, I need to start with the resource group, then the virtual network, then the subnet, and then the virtual machine. Whereas when I use an ARM template there is no need for me to mention the resources in the correct order: the template will provision the resources in the correct order, and the resources are provisioned in parallel, unlike the CLI tools where the resources are provisioned serially. If I want to create 50 instances of the same resource, all 50 instances are created at the same time; a minimal deployment sketch follows.
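To make the idea of deploying a group of resources from a template concrete, here is a minimal sketch. The template file name azuredeploy.json, the environment parameter and the production resource group name are assumptions for illustration, not files from this course:

  # deploy an ARM (JSON) template into an existing resource group
  az deployment group create \
    --resource-group demo-rg-001 \
    --template-file azuredeploy.json \
    --parameters environment=dev

  # the same template can be reused for production by changing only the target and parameters
  az deployment group create \
    --resource-group prod-rg-001 \
    --template-file azuredeploy.json \
    --parameters environment=prod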
So this is the difference between serial and parallel: when you use infrastructure as code the resources are deployed in parallel. Let us see the benefits of ARM templates. You deploy Azure infrastructure declaratively: you declare what resources you want to deploy, and you can use the same code for the development and test environments in a consistent manner. Azure Resource Manager orchestrates the deployment of interdependent resources so they are created in the correct order, even if you have not listed them in the correct sequence. We can break the templates into smaller, understandable, easy-to-read pieces of code, so others do not have to struggle when looking at the code, and we can nest these modular templates; along with the code we can easily include PowerShell scripts or CLI scripts. Since an ARM template uses JSON, you need to define all your resources and variables in JSON format, and if you are not coming from a coding background the JSON format can be quite complex to understand, which is why Microsoft has developed another tool called Bicep. Let us see what Bicep is. Bicep is a domain-specific language, also called a DSL, which simplifies working with ARM deployments: it is not as complex as a JSON template, and if you understand the basics of Bicep you can deploy resources very easily, unlike with JSON. It is specific to the Azure cloud platform. Let us see the benefits of Bicep: Bicep supports all the preview and generally available versions of Azure services; Bicep uses a simpler syntax compared to the JSON template, and Bicep files are more concise and easier to read; we do not require a programming background to understand Bicep the way we do for ARM templates; we can use the same code for development and test environments in a repeatable manner; we can deploy all the resources using a single command instead of multiple commands; Resource Manager orchestrates the deployment of interdependent resources so they are created in the correct order; and we can break the Bicep code into manageable parts by using modules, where each module deploys a set of related resources. If I want to use Bicep there is a Bicep CLI which I need to install; it is available for Windows, Linux and also for the Mac operating system, and I also need to install the Azure CLI or Azure PowerShell. I can even add an extension if I am using Visual Studio Code: there is an extension for Bicep, and Bicep is open source, so we can install the Bicep extension in VS Code and then use VS Code to write the Bicep template. Bicep is not as complex as an ARM template; we just need to understand the basics of Bicep to deploy and manage resources on the Azure cloud, and a deployment sketch follows below.
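As a small illustration of the workflow, a Bicep file can be deployed directly or first transpiled into an ARM JSON template; main.bicep here is a hypothetical file name used only for the example:

  # transpile a Bicep file into an ARM JSON template (optional step)
  az bicep build --file main.bicep

  # or deploy the Bicep file directly; Resource Manager handles ordering and parallelism
  az deployment group create \
    --resource-group demo-rg-001 \
    --template-file main.bicep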
In this topic we will discuss Azure Advisor. In every organization there will be situations where the IT team needs to analyze whether the infrastructure has been provisioned in the right way. The IT team wants to know whether the provisioned servers are being backed up, and if there are business-critical applications, whether the applications they have configured have HA capability or DR capability. In some cases the IT team may have scaled the virtual machines from, let us say, 10 to 20 during some festival discount sale, over-provisioned them and never scaled back in; in this particular case there is a direct cost impact on the company. For all these checks there is a service in Azure called Azure Advisor. Azure Advisor is a service which gives a score based on cost, security, reliability, operational excellence and performance, and it is a free service. Let us see how Azure Advisor works. Let us say we have our environment with resources which we have already provisioned: virtual machines, applications, virtual machine scale sets, storage, network. Azure Advisor collects the configuration data and telemetry data from these resources, and based on that data it gives recommendations. For each recommendation you can remediate it, postpone it, or dismiss it: if you choose to dismiss, it will not be counted in the score, and if you choose to remediate, you are given the option to do the remediation straight away from Azure Advisor itself; you can also postpone whatever recommendation Azure Advisor gives. So Azure Advisor collects configuration and telemetry data in order to give you recommendations, and these recommendations are categorized into high, medium and low impact. Going to the notes: the recommendations are available via the Azure portal and the API, and you can set up alerts for any new recommendation provided by Azure Advisor. Azure Advisor displays personalized recommendations for all your subscriptions, and you can filter by subscription, resource group or even at the service level; the same recommendations can also be pulled from the command line, as sketched below.
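Besides the portal and the API, a minimal Azure CLI sketch for listing the recommendations; the category value follows the five categories described next:

  # list all Advisor recommendations for the current subscription
  az advisor recommendation list --output table

  # list only the cost recommendations
  az advisor recommendation list --category Cost --output table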
There are five categories on which Azure Advisor provides recommendations: reliability, security, performance, operational excellence and cost. In the scenario described earlier we over-provisioned the virtual machines during a sale and did not scale back in, so we are paying additional cost for 10 virtual machines, which is a direct cost implication. Let me tell you something: if you are taking up a next-level certification such as the Azure architect and design exams, these five categories are nothing but the five pillars of cloud architecture; any design you do on cloud infrastructure will be decided and validated against these five pillars. When Azure Advisor provides a recommendation for reliability it is used to ensure and improve the continuity of your business-critical applications; for example, if you have deployed virtual machines or services, a reliability recommendation may ask you to ensure that the VMs are deployed in availability zones. The security recommendations are used to detect threats and vulnerabilities that might lead to security breaches. The performance recommendations are used to improve the speed of your applications; for example, if you have deployed your web application in a basic tier, Azure Advisor may recommend moving it to a standard or premium tier to improve its performance. Operational excellence recommendations suggest things like using a newer version of an API, upgrading to the latest software and using analytics, and they can also recommend using Azure Monitor. The cost recommendations are used to optimize and reduce your overall Azure spending. Let us see the Azure Advisor dashboard. This is how the dashboard looks: we have the five categories, and based on the score and impact we can remediate. In this example there are 31 recommendations for security and four recommendations for reliability, and it also tells you that there are 122 impacted resources; if we remediate a recommendation, all of the impacted resources behind it are remediated. We can also see the impact category: there is a medium impact here, for operational excellence there is one recommendation, and for cost there are three recommendations. If I select one of the five categories, say reliability, I can see the high, medium and low impact recommendations together with a description of the affected resources, and if I click the details I can remediate the recommendation provided by Azure Advisor. Whenever Azure Advisor provides a recommendation you can either remediate it, postpone it or dismiss it. So this was an overview of Azure Advisor. In this topic we will discuss Azure Service Health. We know that the services we deploy are deployed in a particular region, and these services depend on the infrastructure provided by the Azure cloud. If there is any incident, outage or issue on the Azure cloud infrastructure, we want to know the status of that infrastructure, because we have deployed our resources onto the Azure cloud. There are three main categories: Azure status, service health and resource health. Azure status provides information about the services globally, the overall status of the Azure global infrastructure; it tells you the health of all services across all Azure regions, so for widespread impact you check Azure status. It could be something like the CrowdStrike incident, it could be a storage account issue, or there could be a problem with an undersea cable connection; basically Azure status tells you the status of the Azure global infrastructure, so if there is any outage or issue on the Azure cloud side you need to check Azure status. Service health provides a narrower view of Azure services and regions: it focuses on the Azure services and regions which you are using. Azure status provides information for all regions irrespective of where you have deployed your resources and services, whereas service health provides information about the regions specific to your resources.
For example, if you have deployed your resources in East US, say a virtual machine, a storage account and a web application, service health will tell you the status of that particular region where you have deployed the resources. The service health experience knows which services and resources you are currently using and in which regions, and you can even set up service health alerts to notify you when there is an issue. Service health is a good place to check whether any planned maintenance activities are happening, or whether an outage has caused your applications or services to go down; you can come and check service health if there is any issue with the services you have deployed or with the region itself. Let us see what resource health is. Resource health is a tailored view of your actual resources. To recap: Azure status tells us the status of the Azure global infrastructure; with service health we can check whether there are issues related to the services and regions where our services are deployed; and resource health is specific to the services you have deployed, whether that is a virtual machine, a web application or a storage account, for example a virtual machine that has gone into a powered-off state. By using Azure status, service health and resource health together, Azure Service Health gives you a complete view of your Azure environment, all the way from the global infrastructure to the services you have deployed, and additionally historical alerts are stored and accessible for later review. There is a picture of Azure status: we can see multiple geographic locations, one of which is the Americas, with multiple regions, and each region is given a colour code: green means everything is good, there can be an informational alert, a warning is shown in amber, and a critical issue for that particular region is shown in red. There is also a picture of resource health, which is specific to the resources you have deployed in a particular region, whether it is a virtual machine, a web application, a virtual network or any other resource; it tells you the status of those resources, and the status can be available, unavailable, unknown or degraded. In this example there seems to have been an issue with one of the virtual machines and it has been resolved: resource health watches your resources and tells you whether they are running as expected. In another picture of resource health we can see that the status of one of the virtual machines is unavailable because of the Azure compute infrastructure: there was a problem with the Azure compute infrastructure, because of which the virtual machine you deployed was not working as expected. In resource health you will see one of these statuses, available, unavailable, unknown or degraded, along with what exactly caused the issue and what the current status of your service is.
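The service health alerts mentioned above can also be created from the command line. A rough sketch, where the action group name and email address are made-up examples and the condition and action-group syntax should be checked against the current az monitor documentation (the action group may need to be referenced by its full resource ID):

  # create an action group that sends an email when triggered
  az monitor action-group create \
    --name ops-email \
    --resource-group demo-rg-001 \
    --action email admin admin@example.com

  # create an activity log alert that fires on service health events
  az monitor activity-log alert create \
    --name service-health-alert \
    --resource-group demo-rg-001 \
    --condition category=ServiceHealth \
    --action-group ops-email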
Now I am in the Azure status web portal itself. On the current impact page you can see whether there is any current impact or outage in any region, and you can set the page to refresh every two or five minutes. If I want to check a particular geography, say the Americas, I can see the multiple regions in the Americas, and I can check a specific service, for example whether Functions is available in East US, Central US or West US. If you have deployed some services and they are not working properly, you can come and check the status of those services from here: for example, if you have deployed a virtual machine in the East US location and there is an issue related to virtual machines in your region, you can come to this Azure status portal and see the information related to the Azure cloud infrastructure. Now let us say I have deployed my storage account in the East US region and I want to use service health to find out whether there is any issue for East US: I remove all the filters, filter for East US, and it tells me there are no active service issues; if it is shown in blue then it is good, it is normal. If I want to check a particular service, since I am using a storage account I select the storage account, so you can filter for a specific region and specific services, and you can also create an alert from here if you want one. If I go back to service health I can check my specific resources by selecting resource health: once you select resource health you select the subscription and the resource type. Since I have created only the storage account I select the storage account; if you have created other resources like a virtual machine, a virtual network, a VPN, SQL or Service Bus you can select those resource types. I select the storage account and it provides me the status of the storage account, and if I click on the storage account itself it tells me whether there were any issues and what the issue was. I created the storage account today, and since the time I created it there have been no issues; the status will be available, unavailable, unknown or degraded. So this is how you can use Azure status, Azure service health and Azure resource health. In summary, Azure Service Health has three main categories: Azure status, service health and resource health. Azure status tells you about the global infrastructure, service health tells you about the services and regions where you have deployed, and resource health provides information specific to the resources you have deployed.
In this topic we will discuss Azure Monitor. Monitoring is a critical process for any organization: monitoring collects data from different resources and tells whether the resources are normal or degraded, which is very important for the different IT teams who need to know the status of the resources they have deployed. So monitoring collects data, which can be telemetry data and log data. Different IT teams want to know the status of the resources deployed in their infrastructure, for example whether CPU is greater than 70% or memory is greater than 50%, so that they can take action; this is why monitoring is very important. In the on-premises environment we have monitoring tools like Splunk, Nagios and HP OpenView, which are very popular and robust tools used for on-premises infrastructure monitoring. With the help of Azure Monitor we collect the data, analyze the data, visualize the data we have collected, and take actions on the collected data. In the Azure Monitor diagram, on the left we see the sources from which the data is collected: applications, workloads, infrastructure, the Azure platform, and custom sources. Azure Monitor is not only for the Azure cloud: we can collect log and telemetry data from the Azure cloud, from the on-premises environment, and even from multicloud environments. From these data sources we collect log data and metric data at different layers: at the network layer there can be a link-down issue, a port-down issue or hardware issues; at the operating system layer a service may have stopped or the CPU of a virtual machine or physical server may be high; and there is the application layer as well. We need to store the collected data, so we use the data platform to store all the data collected from the different sources: we collect metrics, logs, traces, and also the activity log, which records the changes that have been made to the resources, and all of these are stored in a central location, which is your Log Analytics workspace. Once the logs are collected and stored we use different tools to analyze the data: Application Insights, workbooks, dashboards, Power BI and Grafana, as well as Metrics Explorer and Log Analytics to query the data we have collected. If any action needs to be taken based on an alert, we can take that action, and we can also integrate with different tools and services based on the collected logs: services like Functions, Logic Apps, export APIs and Azure DevOps, tools like GitHub and managed partner tools, and also ITSM tools like Remedy and ServiceNow. Once an alert is triggered the IT team takes the necessary action; the alert can be sent by SMS, by email, as a push notification to an application, or as a voice call, and we can also take automated action based on the alert: if the alert is based on a metric, for example the CPU usage of a virtual machine scale set going above 70%, then we can respond to the alert by automatically adding virtual machines and scaling the scale set out, as sketched below.
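The scale-out response described above can be expressed as an autoscale setting on the virtual machine scale set. A minimal sketch, where the scale set name, the instance counts and the rule threshold are illustrative values:

  # attach an autoscale profile to a VM scale set
  az monitor autoscale create \
    --resource-group demo-rg-001 \
    --resource my-vmss \
    --resource-type Microsoft.Compute/virtualMachineScaleSets \
    --name cpu-autoscale \
    --min-count 2 --max-count 10 --count 2

  # add a rule: when average CPU is above 70% for 5 minutes, add one instance
  az monitor autoscale rule create \
    --resource-group demo-rg-001 \
    --autoscale-name cpu-autoscale \
    --condition "Percentage CPU > 70 avg 5m" \
    --scale out 1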
Some monitoring data is collected automatically. Platform metrics, the metrics for CPU, memory, disk and network, are collected automatically; the activity log is also collected automatically, and it records the changes made to resources, so any action performed on a resource is logged there; and the logs related to Microsoft Entra ID, the tenant-level activities such as user and group creation and deletion, are logged automatically as well. Let us see what Azure Log Analytics is. Azure Log Analytics is the tool in Azure used to query the logs from the data collected by Azure Monitor. With Log Analytics we can run both simple and complex queries and do data analysis; we can analyze the logs to match criteria, identify trends, analyze patterns and provide various insights. To use the Log Analytics tool to query and analyze the collected data we need a Log Analytics workspace: this is the place where all your log data is stored, whether it comes from the Azure cloud, the on-premises environment or a multicloud environment. On that data we can run queries and analysis: we can query the logs for a particular time range or for different criteria, run simple or very complex queries, and write advanced queries to perform statistical analysis and visualize the results in a chart. So Log Analytics is basically the tool with which we query the logs we have collected. Let us see Azure Monitor alerts. Alerts are an automated way to stay informed: we set the alert condition and the notification actions, and Azure Monitor notifies us when an alert is triggered. The triggered alerts can be sent by email, SMS, push notification or as a voice call, and depending on the configuration we can also use Azure Monitor to attempt a corrective action. In the picture we see a subscription and a resource group have been filtered, with a time range of the past one hour, and there are 29 alerts. Smart groups are similar alerts which have been triggered multiple times and grouped together, so instead of sending multiple alerts they are grouped into one single alert within the smart group. There are 15 alert rules in total, we have set the severity on the alert rules, and 13 of the 15 rules are enabled. We can also trigger alerts based on certain log events. Azure Monitor, Azure Service Health and Azure Advisor all use action groups to notify you when an alert has been triggered, and a command-line sketch of creating such an alert rule follows below.
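A metric alert rule wired to an action group can likewise be created from the command line. A rough sketch, where the virtual machine resource ID is a placeholder and the ops-email action group is assumed to exist (for example the one created in the earlier service health sketch):

  # fire an alert when the VM's average CPU exceeds 70% over a 5-minute window
  az monitor metrics alert create \
    --name cpu-high \
    --resource-group demo-rg-001 \
    --scopes /subscriptions/<sub-id>/resourceGroups/demo-rg-001/providers/Microsoft.Compute/virtualMachines/<vm-name> \
    --condition "avg Percentage CPU > 70" \
    --window-size 5m \
    --evaluation-frequency 1m \
    --action ops-email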
Now let us see what Application Insights is. Application Insights is an Azure Monitor feature; earlier, Azure Monitor, Application Insights and Log Analytics were different services, and now they have been brought under one single umbrella called Azure Monitor. Using Application Insights we can monitor applications which are running in Azure, in the on-premises environment, or even in a different cloud environment. There are two ways to set up and configure Application Insights: we can install the SDK in our application, or we can install the Application Insights agent. The supported programming languages are C#/.NET, Visual Basic .NET, Java, JavaScript, Node.js and Python. Once the Application Insights agent has been installed for the applications running this code, we can monitor information such as request rates, response times and failure rates; dependency rates, response times and failure rates, which show whether external services are slowing down performance; page views and load performance reported by the users' browsers; AJAX calls from web pages, including rates, response times and failure rates; user and session counts; and performance counters such as CPU, memory and network usage on Windows or Linux machines. So this was an overview of Azure Monitor, in which we discussed Log Analytics, Azure Monitor alerts and Application Insights. This was the last topic for the Azure AZ-900 fundamentals course. I hope this was helpful and added value to your certification preparation. Thank you for watching the video; if you like the content please subscribe and share the video. Thank you, take care, and all the best.
