diff --git a/.openpublishing.redirection.json b/.openpublishing.redirection.json index 8fef1e0a5ea7f..d5e9eefc45ad7 100644 --- a/.openpublishing.redirection.json +++ b/.openpublishing.redirection.json @@ -5050,6 +5050,16 @@ "redirect_url": "/azure/machine-learning/team-data-science-process/apps-anomaly-detection-api", "redirect_document_id": false }, + { + "source_path": "articles/machine-learning/team-data-science-process/project-execution.md", + "redirect_url": "/azure/machine-learning/team-data-science-process/agile-development", + "redirect_document_id": false + }, + { + "source_path": "articles/machine-learning/team-data-science-process/move-data-to-azure-blob-using-python.md", + "redirect_url": "/azure/storage/blobs/storage-python-how-to-use-blob-storage", + "redirect_document_id": false + }, { "source_path": "articles/machine-learning/machine-learning-automated-data-pipeline-cheat-sheet.md", "redirect_url": "/azure/machine-learning/team-data-science-process/automated-data-pipeline-cheat-sheet", @@ -5690,6 +5700,11 @@ "redirect_url": "/azure/sql-database/saas-tenancy-tenant-analytics", "redirect_document_id": false }, + { + "source_path": "articles/sql-database/saas-dbpertenant-wingtip-app-guidance-tips.md", + "redirect_url": "/azure/sql-database/saas-tenancy-wingtip-app-guidance-tips", + "redirect_document_id": false + }, { "source_path": "articles/sql-database/sql-database-cloud-migrate-compatible-export-bacpac-sqlpackage.md", "redirect_url": "/azure/sql-database/sql-database-export", diff --git a/articles/active-directory-b2c/active-directory-b2c-faqs.md b/articles/active-directory-b2c/active-directory-b2c-faqs.md index 0e3779764ce0d..07e614574acf7 100644 --- a/articles/active-directory-b2c/active-directory-b2c-faqs.md +++ b/articles/active-directory-b2c/active-directory-b2c-faqs.md @@ -72,13 +72,13 @@ The email signature contains the B2C tenant's name that you provided when you fi Currently there is no way to change the "From:" field on the email. Vote on [feedback.azure.com](https://feedback.azure.com/forums/169401-azure-active-directory/suggestions/15334335-fully-customizable-verification-emails) you are interested in customizing the body of the verification email. ### How can I migrate my existing user names, passwords, and profiles from my database to Azure AD B2C? -You can use the Azure AD Graph API to write your migration tool. See the [Graph API sample](active-directory-b2c-devquickstarts-graph-dotnet.md) for details. +You can use the Azure AD Graph API to write your migration tool. See the [User migration guide](active-directory-b2c-user-migration.md) for details. ### What password policy is used for local accounts in Azure AD B2C? The Azure AD B2C password policy for local accounts is based on the policy for Azure AD. Azure AD B2C's sign-up, sign-up or sign-in and password reset policies uses the "strong" password strength and doesn't expire any passwords. Read the [Azure AD password policy](https://msdn.microsoft.com/library/azure/jj943764.aspx) for more details. ### Can I use Azure AD Connect to migrate consumer identities that are stored on my on-premises Active Directory to Azure AD B2C? -No, Azure AD Connect is not designed to work with Azure AD B2C. Consider using the [Graph API](active-directory-b2c-devquickstarts-graph-dotnet.md) for user migration. +No, Azure AD Connect is not designed to work with Azure AD B2C. Consider using the [Graph API](active-directory-b2c-devquickstarts-graph-dotnet.md) for user migration. 
See the [User migration guide](active-directory-b2c-user-migration.md) for details. ### Can my app open up Azure AD B2C pages within an iFrame? No, for security reasons, Azure AD B2C pages cannot be opened within an iFrame. Our service communicates with the browser to prohibit iFrames. The security community in general and the OAUTH2 specification, recommend against using iFrames for identity experiences due to the risk of click-jacking. diff --git a/articles/active-directory-domain-services/active-directory-ds-join-rhel-linux-vm.md b/articles/active-directory-domain-services/active-directory-ds-join-rhel-linux-vm.md index ebac03734418f..7828101835fcd 100644 --- a/articles/active-directory-domain-services/active-directory-ds-join-rhel-linux-vm.md +++ b/articles/active-directory-domain-services/active-directory-ds-join-rhel-linux-vm.md @@ -79,13 +79,13 @@ Now that the required packages are installed on the Linux virtual machine, the n sudo realm discover CONTOSO100.COM ``` - > [!NOTE] - > **Troubleshooting:** - > If *realm discover* is unable to find your managed domain: - * Ensure that the domain is reachable from the virtual machine (try ping). - * Check that the virtual machine has indeed been deployed to the same virtual network in which the managed domain is available. - * Check to see if you have updated the DNS server settings for the virtual network to point to the domain controllers of the managed domain. - > + > [!NOTE] + > **Troubleshooting:** + > If *realm discover* is unable to find your managed domain: + * Ensure that the domain is reachable from the virtual machine (try ping). + * Check that the virtual machine has indeed been deployed to the same virtual network in which the managed domain is available. + * Check to see if you have updated the DNS server settings for the virtual network to point to the domain controllers of the managed domain. + > 2. Initialize Kerberos. In your SSH terminal, type the following command: diff --git a/articles/active-directory-domain-services/active-directory-ds-join-ubuntu-linux-vm.md b/articles/active-directory-domain-services/active-directory-ds-join-ubuntu-linux-vm.md index 4d1beae5fbe67..0cd4ab99d49ab 100644 --- a/articles/active-directory-domain-services/active-directory-ds-join-ubuntu-linux-vm.md +++ b/articles/active-directory-domain-services/active-directory-ds-join-ubuntu-linux-vm.md @@ -117,13 +117,13 @@ Now that the required packages are installed on the Linux virtual machine, the n sudo realm discover CONTOSO100.COM ``` - > [!NOTE] - > **Troubleshooting:** - > If *realm discover* is unable to find your managed domain: - * Ensure that the domain is reachable from the virtual machine (try ping). - * Check that the virtual machine has indeed been deployed to the same virtual network in which the managed domain is available. - * Check to see if you have updated the DNS server settings for the virtual network to point to the domain controllers of the managed domain. - > + > [!NOTE] + > **Troubleshooting:** + > If *realm discover* is unable to find your managed domain: + * Ensure that the domain is reachable from the virtual machine (try ping). + * Check that the virtual machine has indeed been deployed to the same virtual network in which the managed domain is available. + * Check to see if you have updated the DNS server settings for the virtual network to point to the domain controllers of the managed domain. + > 2. Initialize Kerberos. 
In your SSH terminal, type the following command: diff --git a/articles/active-directory/active-directory-conditional-access-controls.md b/articles/active-directory/active-directory-conditional-access-controls.md index ad53c0b01a006..5655d8af2f93f 100644 --- a/articles/active-directory/active-directory-conditional-access-controls.md +++ b/articles/active-directory/active-directory-conditional-access-controls.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: na ms.workload: identity -ms.date: 11/03/2017 +ms.date: 11/29/2017 ms.author: markvi ms.reviewer: calebb @@ -109,7 +109,7 @@ Providers currently offering a compatible service include: - RSA -- Trusona +- [Trusona](https://www.trusona.com/docs/azure-ad-integration-guide) For more information on those services, contact the providers directly. diff --git a/articles/active-directory/active-directory-conditional-access-technical-reference.md b/articles/active-directory/active-directory-conditional-access-technical-reference.md index c7a43ac4dbdc3..df8cd4fc5cbde 100644 --- a/articles/active-directory/active-directory-conditional-access-technical-reference.md +++ b/articles/active-directory/active-directory-conditional-access-technical-reference.md @@ -12,16 +12,16 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: na ms.workload: identity -ms.date: 11/28/2017 +ms.date: 11/29/2017 ms.author: markvi ms.reviewer: spunukol --- # Azure Active Directory conditional access technical reference -You can use [Azure Active Directory (Azure AD) conditional access](active-directory-conditional-access-azure-portal.md) to fine-tune how authorized users can access your resources. +You can use [Azure Active Directory (Azure AD) conditional access](active-directory-conditional-access-azure-portal.md) to fine-tune how authorized users can access your resources. -This topic provides support information for the following configuration options for a conditional access policy: +This article provides you with support information for the following configuration options for a conditional access policy: - Cloud applications assignments @@ -35,7 +35,7 @@ This topic provides support information for the following configuration options ## Cloud apps assignments -When you configure a conditional access policy, you need to [select the cloud apps that use your policy](active-directory-conditional-access-azure-portal.md#who). +With conditional access policies, you control how your users access your [cloud apps](active-directory-conditional-access-azure-portal.md#who). When you configure a conditional access policy, you need to select at least one cloud app. ![Select the cloud apps for your policy](./media/active-directory-conditional-access-technical-reference/09.png) @@ -45,6 +45,7 @@ When you configure a conditional access policy, you need to [select the cloud ap You can assign a conditional access policy to the following cloud apps from Microsoft: - Azure Information Protection - [Learn more](https://docs.microsoft.com/information-protection/get-started/faqs#i-see-azure-information-protection-is-listed-as-an-available-cloud-app-for-conditional-accesshow-does-this-work) + - Azure RemoteApp - Microsoft Dynamics 365 @@ -100,7 +101,7 @@ In a conditional access policy, you can configure the device platform condition ## Client apps condition -When you configure a conditional access policy, you can [select client apps](active-directory-conditional-access-azure-portal.md#client-apps) for the client app condition. 
Set the client apps condition to grant or block access when an access attempt is made from the following types of client apps: +In your conditional access policy, you can configure the [client apps](active-directory-conditional-access-azure-portal.md#client-apps) condition to tie the policy to the client app that has initiated an access attempt. Set the client apps condition to grant or block access when an access attempt is made from the following types of client apps: - Browser - Mobile apps and desktop apps @@ -109,11 +110,11 @@ When you configure a conditional access policy, you can [select client apps](act ### Supported browsers -Control browser access by using the **Browser** option in your conditional access policy. Access is granted only when the access attempt is made by a supported browser. When an access attempt is made by an unsupported browser, the attempt is blocked. +In your conditional access policy, you can select **Browsers** as client app. ![Control access for supported browsers](./media/active-directory-conditional-access-technical-reference/05.png) -In your conditional access policy, the following browsers are supported: +This setting has an impact on access attempts made from the following browsers: | OS | Browsers | Support | @@ -137,11 +138,13 @@ In your conditional access policy, the following browsers are supported: ### Supported mobile applications and desktop clients -Control app and client access by using the **Mobile apps and desktop clients** option in your conditional access policy. Access is granted only when the access attempt is made by a supported mobile app or desktop client. When an access attempt is made by an unsupported app or client, the attempt is blocked. +In your conditional access policy, you can select **Mobile apps and desktop clients** as client app. + ![Control access for supported mobile apps or desktop clients](./media/active-directory-conditional-access-technical-reference/06.png) -The following mobile apps and desktop clients support conditional access for Office 365 and other Azure AD-connected service applications: + +This setting has an impact on access attempts made from the following mobile apps and desktop clients: |Client apps|Target Service|Platform| @@ -167,11 +170,11 @@ The following mobile apps and desktop clients support conditional access for Off ## Approved client app requirement -Control client connections by using the **Require approved client app** option in your conditional access policy. Access is granted only when a connection attempt is made by an approved client app. +In your conditional access policy, you can require that an access attempt to the selected cloud apps needs to be made from an approved client app. ![Control access for approved client apps](./media/active-directory-conditional-access-technical-reference/21.png) -The following client apps can be used with the approved client application requirement: +This setting applies to the following client apps: - Microsoft Azure Information Protection diff --git a/articles/active-directory/active-directory-how-subscriptions-associated-directory.md b/articles/active-directory/active-directory-how-subscriptions-associated-directory.md index 5c41e3507a983..95db394a4c0a8 100644 --- a/articles/active-directory/active-directory-how-subscriptions-associated-directory.md +++ b/articles/active-directory/active-directory-how-subscriptions-associated-directory.md @@ -37,7 +37,6 @@ You must sign in with an account that exists in both the current directory with 5. 
The recipient clicks the link and follows the instructions, including entering their payment information. When the recipient succeeds, the subscription is transferred. 6. The default directory of the subscription is changed to the directory that the user is in. -For more information, see [Transfer Azure subscription ownership to another account](../billing/billing-subscription-transfer.md). ## Next steps * To learn more about how to change administrators for an Azure subscription, see [Transfer ownership of an Azure subscription to another account](../billing/billing-subscription-transfer.md) diff --git a/articles/active-directory/active-directory-saas-docusign-provisioning-tutorial.md b/articles/active-directory/active-directory-saas-docusign-provisioning-tutorial.md index 61db11b4ac05f..30c888ce135cc 100644 --- a/articles/active-directory/active-directory-saas-docusign-provisioning-tutorial.md +++ b/articles/active-directory/active-directory-saas-docusign-provisioning-tutorial.md @@ -12,7 +12,7 @@ ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 07/10/2017 +ms.date: 11/28/2017 ms.author: jeedes --- @@ -38,10 +38,13 @@ Before configuring and enabling the provisioning service, you need to decide wha ### Important tips for assigning users to DocuSign -* It is recommended that a single Azure AD user is assigned to DocuSign to test the provisioning configuration. Additional users and/or groups may be assigned later. +* It is recommended that a single Azure AD user is assigned to DocuSign to test the provisioning configuration. Additional users may be assigned later. * When assigning a user to DocuSign, you must select a valid user role. The "Default Access" role does not work for provisioning. +> [!NOTE] +> Azure AD does not support group provisioning with the Docusign application, only users can be provisioned. + ## Enable User Provisioning This section guides you through connecting your Azure AD to DocuSign's user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in DocuSign based on user and group assignment in Azure AD. @@ -83,7 +86,7 @@ The objective of this section is to outline how to enable user provisioning of A 12. Click **Save.** -It starts the initial synchronization of any users and/or groups assigned to DocuSign in the Users and Groups section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 20 minutes as long as the service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity reports, which describe all actions performed by the provisioning service on your DocuSign app. +It starts the initial synchronization of any users assigned to DocuSign in the Users and Groups section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 20 minutes as long as the service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity reports, which describe all actions performed by the provisioning service on your DocuSign app. You can now create a test account. Wait for up to 20 minutes to verify that the account has been synchronized to DocuSign. 
diff --git a/articles/active-directory/active-directory-saas-filecloud-tutorial.md b/articles/active-directory/active-directory-saas-filecloud-tutorial.md index e27f44cd6dade..b92669dbab749 100644 --- a/articles/active-directory/active-directory-saas-filecloud-tutorial.md +++ b/articles/active-directory/active-directory-saas-filecloud-tutorial.md @@ -7,13 +7,13 @@ author: jeevansd manager: femila ms.reviewer: joflore -ms.assetid: f39f0ddd-b504-4562-971f-77b88d1e75fb +ms.assetid: 2263e583-3eb2-4a06-982d-33f5f54858f4 ms.service: active-directory ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 07/19/2017 +ms.date: 11/27/2017 ms.author: jeedes --- @@ -106,12 +106,13 @@ In this section, you enable Azure AD single sign-on in the Azure portal and conf ![FileCloud Domain and URLs single sign-on information](./media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_url.png) - a. In the **Sign-on URL** textbox, type a URL using the following pattern: `https://.filecloudhosted.com` + a. In the **Sign-on URL** textbox, type a URL using the following pattern: + `https://.filecloudonline.com` - b. In the **Identifier** textbox, type a URL using the following pattern: `https://.filecloudhosted.com/simplesaml/module.php/saml/sp/metadata.php/default-sp` + b. In the **Identifier** textbox, type a URL using the following pattern: `https://.filecloudonline.com/simplesaml/module.php/saml/sp/metadata.php/default-sp` > [!NOTE] - > These values are not real. Update these values with the actual Sign-On URL and Identifier. Contact [FileCloud Client support team](mailto:support@codelathe.com) to get these values. + > These values are not real. Update these values with the actual Sign-On URL and Identifier. Contact [FileCloud Client support team](mailto:support@codelathe.com) to get these values. 4. On the **SAML Signing Certificate** section, click **Metadata XML** and then save the metadata file on your computer. @@ -129,23 +130,23 @@ In this section, you enable Azure AD single sign-on in the Azure portal and conf 8. On the left navigation pane, click **Settings**. - ![Settings section On App side](./media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_000.png) + ![Configure Single Sign-On On App side](./media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_000.png) 9. Click **SSO** tab on Settings section. - ![Single Sign-On Tab On App side](./media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_001.png) + ![Configure Single Sign-On On App side](./media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_001.png) 10. Select **SAML** as **Default SSO Type** on **Single Sign On (SSO) Settings** panel. - ![Single Sign-On Settings Panel On App side](./media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_002.png) + ![Configure Single Sign-On On App side](./media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_002.png) -11. Paste **SAML Entity ID**, which you have copied from Azure portal into the **IdP End Point URL** textbox. +11. In the **IdP End Point URL** textbox, paste the value of **SAML Entity ID** which you have copied from Azure portal. - ![IDP End Point URL Textbox](./media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_003.png) + ![Configure Single Sign-On On App side](./media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_003.png) 12. 
Open your downloaded metadata file in notepad, copy the content of it into your clipboard, and then paste it to the **IdP Meta Data** textbox on **SAML Settings** panel. - ![IDP Meta Data Section on App side](./media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_004.png) + ![Configure Single Sign-On On App side](./media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_004.png) 13. Click **Save** button. @@ -190,7 +191,7 @@ The objective of this section is to create a test user in the Azure portal calle The objective of this section is to create a user called Britta Simon in FileCloud. FileCloud supports just-in-time provisioning, which is by default enabled. There is no action item for you in this section. A new user is created during an attempt to access FileCloud if it doesn't exist yet. >[!NOTE] ->If you need to create a user manually, you need to contact the [FileCloud Client support team](mailto:support@codelathe.com). +>If you need to create a user manually, you need to contact the [FileCloud Client support team](mailto:support@codelathe.com). ### Assign the Azure AD test user @@ -224,9 +225,10 @@ In this section, you enable Britta Simon to use Azure single sign-on by granting ### Test single sign-on -The objective of this section is to test your Azure AD SSO configuration using the Access Panel. +In this section, you test your Azure AD single sign-on configuration using the Access Panel. When you click the FileCloud tile in the Access Panel, you should get automatically signed-on to your FileCloud application. +For more information about the Access Panel, see [Introduction to the Access Panel](active-directory-saas-access-panel-introduction.md). ## Additional resources diff --git a/articles/active-directory/active-directory-saas-google-apps-tutorial.md b/articles/active-directory/active-directory-saas-google-apps-tutorial.md index 969604174c7f7..2a2249c75aaf5 100644 --- a/articles/active-directory/active-directory-saas-google-apps-tutorial.md +++ b/articles/active-directory/active-directory-saas-google-apps-tutorial.md @@ -13,7 +13,7 @@ ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 11/16/2017 +ms.date: 11/28/2017 ms.author: jeedes --- @@ -134,11 +134,11 @@ In this section, you enable Azure AD single sign-on in the Azure portal and conf | | |--| + | `google.com`| + | `http://google.com`| + | `google.com/`| | `http://google.com/a/`| - | `http://google.com`| - | `google.com/`| - | `google.com`| - + > [!NOTE] > These values are not real. Update these values with the actual Sign-On URL and Identifier. Contact [Google Apps Client support team](https://www.google.com/contact/) to get these values. diff --git a/articles/active-directory/connect/active-directory-aadconnect-dirsync-deprecated.md b/articles/active-directory/connect/active-directory-aadconnect-dirsync-deprecated.md index 0f2e3b1748fa8..3daa4f2392abe 100644 --- a/articles/active-directory/connect/active-directory-aadconnect-dirsync-deprecated.md +++ b/articles/active-directory/connect/active-directory-aadconnect-dirsync-deprecated.md @@ -21,7 +21,7 @@ ms.custom: H1Hack27Feb2017 # Upgrade Windows Azure Active Directory Sync and Azure Active Directory Sync Azure AD Connect is the best way to connect your on-premises directory with Azure AD and Office 365. This is a great time to upgrade to Azure AD Connect from Windows Azure Active Directory Sync (DirSync) or Azure AD Sync as these tools are now deprecated and are no longer supported as of April 13, 2017. 
-The two identity synchronization tools that are deprecated were offered for single forest customers (DirSync) and for multi-forest and other advanced customers (Azure AD Sync). These older tools have been replaced with a single solution that is available for all scenarios: Azure AD Connect. It offers new functionality, feature enhancements, and support for new scenarios. To be able to continue to synchronize your on-premises identity data to Azure AD and Office 365, we strongly recommend that you upgrade to Azure AD Connect. Microsoft does not guarantee these older versions to work after December 31st, 2017. +The two identity synchronization tools that are deprecated were offered for single forest customers (DirSync) and for multi-forest and other advanced customers (Azure AD Sync). These older tools have been replaced with a single solution that is available for all scenarios: Azure AD Connect. It offers new functionality, feature enhancements, and support for new scenarios. To be able to continue to synchronize your on-premises identity data to Azure AD and Office 365, we strongly recommend that you upgrade to Azure AD Connect. Microsoft does not guarantee these older versions to work after December 31, 2017. The last release of DirSync was released in July 2014 and the last release of Azure AD Sync was released in May 2015. @@ -38,6 +38,9 @@ Azure AD Connect is the successor to DirSync and Azure AD Sync. It combines all ## How to transition to Azure AD Connect If you are running DirSync, there are two ways you can upgrade: In-place upgrade and parallel deployment. An in-place upgrade is recommended for most customers and if you have a recent operating system and less than 50,000 objects. In other cases, it is recommended to do a parallel deployment where your DirSync configuration is moved to a new server running Azure AD Connect. +>[!NOTE] +>In place upgrade form DirSync to Azure AD Connect is no longer supported after December 31, 2017, and you may need to do a parallel deployment to upgrade. + If you use Azure AD Sync, then an in-place upgrade is recommended. If you want to, it is possible to install a new Azure AD Connect server in parallel and do a swing migration from your Azure AD Sync server to Azure AD Connect. | Solution | Scenario | @@ -56,7 +59,7 @@ If you want to see how to do an in-place upgrade from DirSync to Azure AD Connec The notification was also sent to customers using Azure AD Connect with a build number 1.0.\*.0 (using a pre-1.1 release). Microsoft recommends customers to stay current with Azure AD Connect releases. The [automatic upgrade](active-directory-aadconnect-feature-automatic-upgrade.md) feature introduced in 1.1 makes it easy to always have a recent version of Azure AD Connect installed. **Q: Will DirSync/Azure AD Sync stop working on April 13, 2017?** -DirSync/Azure AD Sync will continue to work on April 13th 2017. However, Azure AD may no longer accept communications from DirSync/Azure AD Sync after December 31st 2017. +DirSync/Azure AD Sync will continue to work on April 13, 2017. However, Azure AD will no longer accept communications from DirSync/Azure AD Sync after December 31, 2017. **Q: Which DirSync versions can I upgrade from?** It is supported to upgrade from any DirSync release currently being used. Note that in-place upgrade from DirSync to Azure AD Connect is not supported after December 31st 2017. 
Customers who are using DirSync after that date and want to move to Azure AD Connect may have to do a fresh installation of Azure AD Connect instead. diff --git a/articles/active-directory/connect/active-directory-aadconnect-existing-tenant.md b/articles/active-directory/connect/active-directory-aadconnect-existing-tenant.md index d8a428e2c80f6..e34300b674c8e 100644 --- a/articles/active-directory/connect/active-directory-aadconnect-existing-tenant.md +++ b/articles/active-directory/connect/active-directory-aadconnect-existing-tenant.md @@ -30,7 +30,7 @@ If you started to manage users in Azure AD that are also in on-premises AD and l ## Sync with existing users in Azure AD When you install Azure AD Connect and you start synchronizing, the Azure AD sync service (in Azure AD) does a check on every new object and try to find an existing object to match. There are three attributes used for this process: **userPrincipalName**, **proxyAddresses**, and **sourceAnchor**/**immutableID**. A match on **userPrincipalName** and **proxyAddresses** is known as a **soft match**. A match on **sourceAnchor** is known as **hard match**. For the **proxyAddresses** attribute only the value with **SMTP:**, that is the primary email address, is used for the evaluation. -The match is only evaluated for new objects coming from Connect. If you change an exiting object so it is matching any of these attributes, then you see an error instead. +The match is only evaluated for new objects coming from Connect. If you change an existing object so it is matching any of these attributes, then you see an error instead. If Azure AD finds an object where the attribute values are the same for an object coming from Connect and that it is already present in Azure AD, then the object in Azure AD is taken over by Connect. The previously cloud-managed object is flagged as on-premises managed. All attributes in Azure AD with a value in on-premises AD are overwritten with the on-premises value. The exception is when an attribute has a **NULL** value on-premises. In this case, the value in Azure AD remains, but you can still only change it on-premises to something else. diff --git a/articles/active-directory/connect/active-directory-aadconnect.md b/articles/active-directory/connect/active-directory-aadconnect.md index 665a6f28de5d9..34dbbc7f33793 100644 --- a/articles/active-directory/connect/active-directory-aadconnect.md +++ b/articles/active-directory/connect/active-directory-aadconnect.md @@ -21,7 +21,7 @@ ms.author: billmath Azure AD Connect will integrate your on-premises directories with Azure Active Directory. This allows you to provide a common identity for your users for Office 365, Azure, and SaaS applications integrated with Azure AD. This topic will guide you through the planning, deployment, and operation steps. It is a collection of links to the topics related to this area. > [!IMPORTANT] -> [Azure AD Connect is the best way to connect your on-premises directory with Azure AD and Office 365. This is a great time to upgrade to Azure AD Connect from Windows Azure Active Directory Sync (DirSync) or Azure AD Sync as these tools are now deprecated and will reach end of support on April 13, 2017.](active-directory-aadconnect-dirsync-deprecated.md) +> [Azure AD Connect is the best way to connect your on-premises directory with Azure AD and Office 365. 
This is a great time to upgrade to Azure AD Connect from Windows Azure Active Directory Sync (DirSync) or Azure AD Sync as these tools are now deprecated are no longer supported as of April 13, 2017.](active-directory-aadconnect-dirsync-deprecated.md) > > diff --git a/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_addfromgallery.png b/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_addfromgallery.png index df27ab846d15e..874a5d5fb7f03 100644 Binary files a/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_addfromgallery.png and b/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_addfromgallery.png differ diff --git a/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_app.png b/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_app.png index 4e98d06889090..843ea0a01a7d0 100644 Binary files a/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_app.png and b/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_app.png differ diff --git a/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_certificate.png b/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_certificate.png index 15796ab6a617a..6be0017805652 100644 Binary files a/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_certificate.png and b/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_certificate.png differ diff --git a/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_configure.png b/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_configure.png index b5f1919d924a4..d3b20aae9de5c 100644 Binary files a/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_configure.png and b/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_configure.png differ diff --git a/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_url.png b/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_url.png index 6add7718bc038..18cab24da2d77 100644 Binary files a/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_url.png and b/articles/active-directory/media/active-directory-saas-filecloud-tutorial/tutorial_filecloud_url.png differ diff --git a/articles/active-directory/media/active-directory-saas-googleapps-tutorial/create_aaduser_003.png b/articles/active-directory/media/active-directory-saas-googleapps-tutorial/create_aaduser_003.png index 091f5798cc612..2999b3ee22b8a 100644 Binary files a/articles/active-directory/media/active-directory-saas-googleapps-tutorial/create_aaduser_003.png and b/articles/active-directory/media/active-directory-saas-googleapps-tutorial/create_aaduser_003.png differ diff --git a/articles/active-directory/media/active-directory-saas-googleapps-tutorial/tutorial_googleapps_url.png b/articles/active-directory/media/active-directory-saas-googleapps-tutorial/tutorial_googleapps_url.png index f8823461ded19..171b56f967f4a 100644 Binary files 
a/articles/active-directory/media/active-directory-saas-googleapps-tutorial/tutorial_googleapps_url.png and b/articles/active-directory/media/active-directory-saas-googleapps-tutorial/tutorial_googleapps_url.png differ diff --git a/articles/active-directory/role-based-access-control-create-custom-roles-for-internal-external-users.md b/articles/active-directory/role-based-access-control-create-custom-roles-for-internal-external-users.md index 65cc634f5d95f..ee590cfce7b52 100644 --- a/articles/active-directory/role-based-access-control-create-custom-roles-for-internal-external-users.md +++ b/articles/active-directory/role-based-access-control-create-custom-roles-for-internal-external-users.md @@ -17,7 +17,7 @@ ms.date: 05/10/2017 ms.author: a-crradu --- -## Intro on role-based access control +# Intro on role-based access control Role-based access control is an Azure portal only feature allowing the owners of a subscription to assign granular roles to other users who can manage specific resource scopes in their environment. @@ -29,11 +29,10 @@ Using RBAC in the Azure environment requires: * Having a standalone Azure subscription assigned to the user as owner (subscription role) * Have the Owner role of the Azure subscription * Have access to the [Azure portal](https://portal.azure.com) -* Make sure to have the following Resource Providers registered for the user subscription: **Microsoft.Authorization**. For more information on how to register the resource providers, see [Resource Manager providers, regions, API versions and schemas](/azure-resource-manager/resource-manager-supported-services.md). - +* Make sure to have the following Resource Providers registered for the user subscription: **Microsoft.Authorization**. For more information on how to register the resource providers, see [Resource Manager providers, regions, API versions and schemas](../azure-resource-manager/resource-manager-supported-services.md). > [!NOTE] -> Office 365 subscriptions or Azure Active Directory licenses (for example: Access to Azure Active Directory) provisioned from the O365 portal don't quality for using RBAC. +> Office 365 subscriptions or Azure Active Directory licenses (for example: Access to Azure Active Directory) provisioned from the O365 portal don't qualify for using RBAC. ## How can RBAC be used RBAC can be applied at three different scopes in Azure. From the highest scope to the lowest one, they are as follows: @@ -74,8 +73,7 @@ After selecting the subscription, the admin user must click **Access Control (IA ![add new user in access control IAM feature in Azure portal](./media/role-based-access-control-create-custom-roles-for-internal-external-users/2.png) -The next step is to select the role to be assigned and the user whom the RBAC role will be assigned to. In the **Role** dropdown menu the admin user sees only the built-in RBAC roles which are available in Azure. For more detailed explanations of each role and their assignable scopes, see [Built-in roles for Azure Role-Based Access Control](/active-directory/role-based-access-built-in-roles.md). - +The next step is to select the role to be assigned and the user whom the RBAC role will be assigned to. In the **Role** dropdown menu the admin user sees only the built-in RBAC roles which are available in Azure. For more detailed explanations of each role and their assignable scopes, see [Built-in roles for Azure Role-Based Access Control](role-based-access-built-in-roles.md). The admin user then needs to add the email address of the external user. 
The expected behavior is for the external user to not show up in the existing tenant. After the external user has been invited, he will be visible under **Subscriptions > Access Control (IAM)** with all the current users which are currently assigned an RBAC role at the Subscription scope. @@ -121,8 +119,7 @@ In the **Users** view in both portals the external users can be recognized by: * The different icon type in the Azure portal * The different sourcing point in the classic portal -However, granting **Owner** or **Contributor** access to an external user at the **Subscription** scope, does not allow the access to the admin user's directory, unless the **Global Admin** allows it. In the user proprieties, the **User Type** which has two common parameters, **Member** and **Guest** can be identified. A member is a user which is registered in the directory while a guest is a user invited to the directory from an external source. For more information, see [How do Azure Active Directory admins add B2B collaboration users](/active-directory/active-directory-b2b-admin-add-users). - +However, granting **Owner** or **Contributor** access to an external user at the **Subscription** scope, does not allow the access to the admin user's directory, unless the **Global Admin** allows it. In the user proprieties, the **User Type** which has two common parameters, **Member** and **Guest** can be identified. A member is a user which is registered in the directory while a guest is a user invited to the directory from an external source. For more information, see [How do Azure Active Directory admins add B2B collaboration users](active-directory-b2b-admin-add-users.md). > [!NOTE] > Make sure that after entering the credentials in the portal, the external user selects the correct directory to sign-in to. The same user can have access to multiple directories and can select either one of them by clicking the username in the top right-hand side in the Azure portal and then choose the appropriate directory from the dropdown list. @@ -163,7 +160,7 @@ The normal behavior for this external user with this built-in role is to see and -![virtual machine contributor role overview in azure portal](./media/role-based-access-control-create-custom-roles-for-internal-external-users/12.png) +![virtual machine contributor role overview in Azure portal](./media/role-based-access-control-create-custom-roles-for-internal-external-users/12.png) ## Grant access at a subscription level for a user in the same directory The process flow is identical to adding an external user, both from the admin perspective granting the RBAC role as well as the user being granted access to the role. The difference here is that the invited user will not receive any email invitations as all the resource scopes within the subscription will be available in the dashboard after signing in. @@ -340,7 +337,7 @@ The new role is now available in the Azure portal and the assignation process is ![Azure portal screenshot of custom RBAC role created using CLI 1.0](./media/role-based-access-control-create-custom-roles-for-internal-external-users/26.png) -As of the latest Build 2017, the Azure Cloud Shell is generally available. Azure Cloud Shell is a complement to IDE and the Azure Portal. With this service, you get a browser-based shell that is authenticated and hosted within Azure and you can use it instead of CLI installed on your machine. +As of the latest Build 2017, the Azure Cloud Shell is generally available. 
Azure Cloud Shell is a complement to IDE and the Azure portal. With this service, you get a browser-based shell that is authenticated and hosted within Azure and you can use it instead of CLI installed on your machine. diff --git a/articles/active-directory/whats-new.md b/articles/active-directory/whats-new.md index 3429301586ba3..660968f8fb3e8 100644 --- a/articles/active-directory/whats-new.md +++ b/articles/active-directory/whats-new.md @@ -265,14 +265,14 @@ For more information, see [Integrate your existing NPS infrastructure with Azure --- +### Restore or permanently remove deleted users + **Type:** New feature **Service Category:** User Management **Product Capability:** Directory -**Restore or permanently remove deleted users** - In the Azure AD admin center, you can now: diff --git a/articles/aks/kubernetes-service-principal.md b/articles/aks/kubernetes-service-principal.md index 27a9def694e9e..5e80e81ad4799 100644 --- a/articles/aks/kubernetes-service-principal.md +++ b/articles/aks/kubernetes-service-principal.md @@ -41,7 +41,7 @@ When deploying an AKS cluster with the `az aks create` command, you have the opt In the following example, an AKS cluster is created, and because an existing service principal is not specified, a service principal is created for the cluster. In order to complete this operation, your account must have the proper rights for creating a service principal. ```azurecli -az aks create -n myClusterName -d myDNSPrefix -g myResourceGroup --generate-ssh-keys +az aks create --name myK8SCluster --resource-group myResourceGroup --generate-ssh-keys ``` ## Use an existing SP @@ -57,7 +57,6 @@ When using an existing service principal, it must meet the following requirement To create the service principal with the Azure CLI, use the [az ad sp create-for-rbac]() command. ```azurecli -id=$(az account show --query id --output tsv) az ad sp create-for-rbac --skip-assignment ``` @@ -78,7 +77,7 @@ Output is similar to the following. Take note of the `appId` and `password`. The When using a pre-created service principal, provide the `appId` and `password` as argument values to the `az aks create` command. ```azurecli-interactive -az aks create --resource-group myResourceGroup --name myK8SCluster --service-principal ----client-secret +az aks create --resource-group myResourceGroup --name myK8SCluster --service-principal --client-secret ``` If deploying an AKS cluster from the Azure portal, enter these values in the AKS cluster configuration form. diff --git a/articles/api-management/upgrade-and-scale.md b/articles/api-management/upgrade-and-scale.md index 1826ca489449e..62da718385d33 100644 --- a/articles/api-management/upgrade-and-scale.md +++ b/articles/api-management/upgrade-and-scale.md @@ -17,11 +17,11 @@ ms.author: apimpm # Upgrade and scale an API Management instance -Customers can scale an API Management (APIM) instance by adding and removing units. A **unit** is comprised of dedicated Azure resources and has a certain load-bearing capacity expressed as a number of API calls per month. Actual throughput and latency will vary broadly depending on on factors such as number and rate of concurrent connections, the kind and number of configured policies, request and response sizes and backend latency. +Customers can scale an API Management (APIM) instance by adding and removing units. A **unit** is composed of dedicated Azure resources and has a certain load-bearing capacity expressed as a number of API calls per month. 
This number does not represent a call limit, but rather a maximum throughput value to allow for rough capacity planning. Actual throughput and latency vary broadly depending on factors such as number and rate of concurrent connections, the kind and number of configured policies, request and response sizes, and backend latency. -Capacity and price of each unit depends on a **tier** in which the unit exists. You can choose between three tiers: **Developer**, **Standard**, **Premium**. If you need to increase capacity for a service within a tier, you should add a unit. If the tier that is currently selected in your APIM instance does not allow adding more units, you need to upgrade to a higher-level tier. +Capacity and price of each unit depends on the **tier** in which the unit exists. You can choose between three tiers: **Developer**, **Standard**, **Premium**. If you need to increase capacity for a service within a tier, you should add a unit. If the tier that is currently selected in your APIM instance does not allow adding more units, you need to upgrade to a higher-level tier. -The price of each unit, the ability to add/remove units, whether or not you have certain features (for example, multi-region deployment) depends on the tier that you chose for your APIM instance. The [pricing details](https://azure.microsoft.com/pricing/details/api-management/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) article, explains what price per unit and features you get in each tier. +The price of each unit and the available features (for example, multi-region deployment) depends on the tier that you chose for your APIM instance. The [pricing details](https://azure.microsoft.com/pricing/details/api-management/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) article, explains the price per unit and features you get in each tier. >[!NOTE] >The [pricing details](https://azure.microsoft.com/pricing/details/api-management/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) article shows approximate numbers of unit capacity in each tier. To get more accurate numbers, you need to look at a realistic scenario for your APIs. See the "How to plan for capacity" section that follows. @@ -40,7 +40,7 @@ To perform the steps described in this article, you must have: To find out if you have enough units to handle your traffic, test on workloads that you expect. -As mentioned above, the number of requests per second APIM can process depends on many variables. For example, the connection pattern, the size of the request and response, policies that are configured on each API, number of clients that are sending requests. +As mentioned above, the number of requests per second an APIM unit can process depends on many variables. For example, the connection pattern, the size of the request and response, policies that are configured on each API, number of clients that are sending requests. Use **Metrics** (uses Azure Monitor capabilities) to understand how much capacity is used at any given time. @@ -50,23 +50,23 @@ Use **Metrics** (uses Azure Monitor capabilities) to understand how much capacit 2. Select **Metrics**. 3. Select **Capacity** metric from **Available metrics**. - The capacity metric, gives you some idea of how much capacity is being used in your tenant. You can test by putting more and more load and see what is your pick load. 
You can set a metric alert to let you know when something that is not expected is happening. For example, your APIM instance accedes capacity for over 5 min. + The capacity metric gives you some idea of how much of the available compute capacity is being used in your tenant. Its value is derived from the compute resources used by your tenant such as memory, CPU, and network queue lengths. It is not a direct measure of the number of requests being processed. You can test by increasing the request load on your tenant and monitoring what value of the capacity metric corresponds to your peak load. You can set a metric alert to let you know when something unexpected is happening. For example, your APIM instance has exceeded its expected peak capacity for over 10 minutes. >[!TIP] - > You can configure alert to let you know when your service running low on capacity or make it call into a logic app that will automatically scale by adding a unit.. + > You can configure alerts to let you know when your service is running low on capacity or call into a logic app that automatically scale by adding a unit. ## Upgrade and scale -As mentioned previously, you can choose between three tiers: **Developer**, **Standard**, **Premium**. The **Developer** tier should be used to evaluate the service; it should not be used for production. The **Developer** tier does not have SLA and you cannot scale this tier (add/remove units). +As mentioned previously, you can choose between three tiers: **Developer**, **Standard**, and **Premium**. The **Developer** tier should be used to evaluate the service; it should not be used for production. The **Developer** tier does not have SLA and you cannot scale this tier (add/remove units). -**Standard** and **Premium** are production tiers have SLA and can be scaled. The **Standard** tier can be scaled to up to four units. You can add any number of units to the **Premium** tier. +**Standard** and **Premium** are production tiers that have SLA and can be scaled. The **Standard** tier can be scaled to up to four units. You can add any number of units to the **Premium** tier. -The **Premium** tier enables you to distribute a single API management instance across any number of desired Azure regions. When you initially create an API Management service, the instance contains only one unit and resides in a single Azure region. The initial region is designated as the **primary** region. Additional regions can be easily added. When adding a region, you specify the number of units you want to allocate. For example, you can have one unit in the **primary** region and five units in some other region. You can tailor to whatever traffic you have in each region. For more information, see [How to deploy an Azure API Management service instance to multiple Azure regions](api-management-howto-deploy-multi-region.md). +The **Premium** tier enables you to distribute a single API management instance across any number of desired Azure regions. When you initially create an API Management service, the instance contains only one unit and resides in a single Azure region. The initial region is designated as the **primary** region. Additional regions can be easily added. When adding a region, you specify the number of units you want to allocate. For example, you can have one unit in the **primary** region and five units in some other region. You can tailor the number of units to the traffic you have in each region. 
For more information, see [How to deploy an Azure API Management service instance to multiple Azure regions](api-management-howto-deploy-multi-region.md). -You can upgrade and downgrade to and from any tier. Please note that downgrading can remove some features, for example, VNETs or multi-region deployment, when downgrading to Standard from the Premium tier. +You can upgrade and downgrade to and from any tier. Note that upgrading or downgrading can remove some features - for example, VNETs or multi-region deployment, when downgrading to Standard from the Premium tier. >[!NOTE] ->The upgrade or scale process can take from 15 to 30 minutes to apply. You get notification when it is done. +>The upgrade or scale process can take from 15 to 45 minutes to apply. You get notification when it is done. ### Use the Azure portal to upgrade and scale diff --git a/articles/app-service/app-service-hybrid-connections.md b/articles/app-service/app-service-hybrid-connections.md index a281b96abf743..3b08c1cea3476 100644 --- a/articles/app-service/app-service-hybrid-connections.md +++ b/articles/app-service/app-service-hybrid-connections.md @@ -20,38 +20,36 @@ ms.author: ccompy # Azure App Service Hybrid Connections # -## Overview ## +Hybrid Connections is both a service in Azure and a feature in Azure App Service. As a service, it has uses and capabilities beyond those that are used in App Service. To learn more about Hybrid Connections and their usage outside App Service, see [Azure Relay Hybrid Connections][HCService]. -Hybrid Connections is both a service in Azure as well as a feature in the Azure App Service. As a service it has uses and capabilities beyond those that are leveraged in Azure App Service. To learn more about Hybrid Connections and their usage outside of the Azure App Service you can start here, [Azure Relay Hybrid Connections][HCService] - -Within Azure App Service, Hybrid Connections can be used to access application resources in other networks. It provides access FROM your app TO an application endpoint. It does not enable an alternate capability to access your application. As used in App Service, each Hybrid Connection correlates to a single TCP host and port combination. This means that the Hybrid Connection endpoint can be on any operating system and any application, provided you are hitting a TCP listening port. Hybrid Connections do not know or care what the application protocol is or what you are accessing. They are simply providing network access. +Within App Service, Hybrid Connections can be used to access application resources in other networks. It provides access from your app to an application endpoint. It does not enable an alternate capability to access your application. As used in App Service, each Hybrid Connection correlates to a single TCP host and port combination. This means that the Hybrid Connection endpoint can be on any operating system and any application, provided you are accessing a TCP listening port. The Hybrid Connections feature does not know or care what the application protocol is, or what you are accessing. It is simply providing network access. ## How it works ## -The Hybrid Connections feature consists of two outbound calls to Service Bus Relay. There is a connection from a library on the host where your app is running in App Service and there is a connection from the Hybrid Connection Manager (HCM) to Service Bus Relay. The HCM is a relay service that you deploy within the network hosting the resource you are trying to access. 
+The Hybrid Connections feature consists of two outbound calls to Azure Service Bus Relay. There is a connection from a library on the host where your app is running in App Service. There is also a connection from the Hybrid Connection Manager (HCM) to Service Bus Relay. The HCM is a relay service that you deploy within the network hosting the resource you are trying to access. -Through the two joined connections, your app has a TCP tunnel to a fixed host:port combination on the other side of the HCM. The connection uses TLS 1.2 for security and SAS keys for authentication/authorization. +Through the two joined connections, your app has a TCP tunnel to a fixed host:port combination on the other side of the HCM. The connection uses TLS 1.2 for security and shared access signature (SAS) keys for authentication and authorization. -![Hybrid connection high level flow][1] +![Diagram of Hybrid Connection high level flow][1] When your app makes a DNS request that matches a configured Hybrid Connection endpoint, the outbound TCP traffic will be redirected through the Hybrid Connection. > [!NOTE] -> This means that you should try to always use a DNS name for your Hybrid Connection. Some client software does not do a DNS lookup if the endpoint uses an IP address instead. +> This means that you should try to always use a DNS name for your Hybrid Connection. Some client software does not do a DNS lookup if the endpoint uses an IP address instead. > > -There are two types of Hybrid Connections: the new Hybrid Connections that are offered as a service under Azure Relay and the older BizTalk Hybrid Connections. The older BizTalk Hybrid Connections are referred to as Classic Hybrid Connections in the portal. There is more information later in this document about them. +The Hybrid Connections feature has two types: the Hybrid Connections that are offered as a service under Service Bus Relay, and the older Azure BizTalk Services Hybrid Connections. The latter are referred to as Classic Hybrid Connections in the portal. There is more information about them later in this article. ### App Service Hybrid Connection benefits ### There are a number of benefits to the Hybrid Connections capability, including: -- Apps can securely access on-premises systems and services securely. -- The feature does not require an Internet-accessible endpoint. +- Apps can access on-premises systems and services securely. +- The feature does not require an internet-accessible endpoint. - It is quick and easy to set up. -- Each Hybrid Connection matches to a single host:port combination, which is an excellent security aspect. -- It normally does not require firewall holes as the connections are all outbound over standard web ports. +- Each Hybrid Connection matches to a single host:port combination, helpful for security. +- It normally does not require firewall holes. The connections are all outbound over standard web ports. - Because the feature is network level, it is agnostic to the language used by your app and the technology used by the endpoint. - It can be used to provide access in multiple networks from a single app. @@ -59,138 +57,138 @@ There are a number of benefits to the Hybrid Connections capability, including: There are a few things you cannot do with Hybrid Connections, including: -- Mounting a drive -- Using UDP -- Accessing TCP based services that use dynamic ports, such as FTP Passive Mode or Extended Passive Mode -- LDAP support, as it sometimes requires UDP -- Active Directory support +- Mounting a drive. 
+- Using UDP. +- Accessing TCP-based services that use dynamic ports, such as FTP Passive Mode or Extended Passive Mode. +- Supporting LDAP, because it sometimes requires UDP. +- Supporting Active Directory. -## Adding and Creating a Hybrid Connection in your App ## +## Add and Create Hybrid Connections in your app ## -Hybrid Connections can be created through your App Service app in the Azure portal or from Azure Relay in the Azure portal. It is highly recommended that you create Hybrid Connections through the App Service app that you wish to use with the Hybrid Connection. To create a Hybrid Connection, go to the [Azure portal][portal] and select your app. Select **Networking > Configure your Hybrid Connection endpoints**. From here you can see the Hybrid Connections that are configured for your app. +You can create Hybrid Connections through your App Service app in the Azure portal, or from Azure Relay in the Azure portal. We recommend that you create Hybrid Connections through the App Service app that you want to use with the Hybrid Connection. To create a Hybrid Connection, go to the [Azure portal][portal] and select your app. Select **Networking** > **Configure your Hybrid Connection endpoints**. From here, you can see the Hybrid Connections that are configured for your app. -![Hybrid connection list][2] +![Screenshot of Hybrid Connection list][2] -To add a new Hybrid Connection, click Add Hybrid Connection. The UI that opens up lists the Hybrid Connections that you have already created. To add one or more of them to your app, click on the ones you want and hit **Add selected Hybrid Connection**. +To add a new Hybrid Connection, select **Add hybrid connection**. You'll see a list of the Hybrid Connections that you have already created. To add one or more of them to your app, select the ones you want, and then select **Add selected Hybrid Connection**. -![Hybrid connection portal][3] +![Screenshot of Hybrid Connection portal][3] -If you want to create a new Hybrid Connection, click **Create new Hybrid Connection**. From here, you specify the: +If you want to create a new Hybrid Connection, select **Create new hybrid connection**. Specify the: -- Endpoint name -- Endpoint hostname -- Endpoint port -- Service Bus namespace you wish to use +- Endpoint name. +- Endpoint hostname. +- Endpoint port. +- Service Bus namespace you want to use. -![Create a hybrid connection][4] +![Screenshot of Create new hybrid connection dialog box][4] -Every Hybrid Connection is tied to a Service Bus namespace, and each Service Bus namespace is in an Azure region. It is important to try and use a Service Bus namespace in the same region as your app so as to avoid network induced latency. +Every Hybrid Connection is tied to a Service Bus namespace, and each Service Bus namespace is in an Azure region. It's important to try to use a Service Bus namespace in the same region as your app, to avoid network induced latency. -If you want to remove your Hybrid Connection from your app, right click on it and select **Disconnect**. +If you want to remove your Hybrid Connection from your app, right-click it and select **Disconnect**. -Once a Hybrid Connection is added to your app, you can see details on it by simply clicking on it. +When a Hybrid Connection is added to your app, you can see details on it simply by selecting it. 
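
If you prefer to script the creation steps described above instead of using the portal, the Azure CLI can create the underlying Relay resources. The following is a minimal sketch, not the documented procedure: the resource names, region, and endpoint value are placeholders, and the `az relay` command group may require a recent Azure CLI version. The Hybrid Connection must also allow client authorization and carry an `endpoint` metadata entry, as described in the next section, before App Service can use it.

```azurecli
# Sketch only: create a Relay namespace and a Hybrid Connection that App Service can consume.
# All names are placeholders. The user-metadata entry named "endpoint" holds the host:port
# combination that the app should reach through the Hybrid Connection.
az relay namespace create \
  --resource-group myResourceGroup \
  --name myRelayNamespace \
  --location "West US"

az relay hyco create \
  --resource-group myResourceGroup \
  --namespace-name myRelayNamespace \
  --name myHybridConnection \
  --user-metadata '[{"key":"endpoint","value":"myonpremserver:1433"}]'
```

After the Hybrid Connection exists, it appears in the portal list shown earlier and can be added to your app in the usual way.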
-![Hybrid Connection details][5] +![Screenshot of Hybrid connections details][5] -### Creating a Hybrid Connection in the Azure Relay portal ### +### Create a Hybrid Connection in the Azure Relay portal ### -In addition to the portal experience from within your app, there is also an ability to create Hybrid Connections from within the Azure Relay portal. In order for a Hybrid Connection to be used by the Azure App Service it must satisfy two criteria. It must: +In addition to the portal experience from within your app, you can create Hybrid Connections from within the Azure Relay portal. For a Hybrid Connection to be used by App Service, it must: -* Require Client Authorization -* Have a metadata item named endpoint that contains a host:port combination as the value +* Require client authorization. +* Have a metadata item, named endpoint, that contains a host:port combination as the value. -## Hybrid Connections and App Service Plans ## +## Hybrid Connections and App Service plans ## -Hybrid Connections are only available in Basic, Standard, Premium, and Isolated pricing SKUs. There are limits tied to the pricing plan. +The Hybrid Connections feature is only available in Basic, Standard, Premium, and Isolated pricing SKUs. There are limits tied to the pricing plan. > [!NOTE] > You can only create new Hybrid Connections based on Azure Relay. You cannot create new BizTalk Hybrid Connections. > -| Pricing Plan | Number of Hybrid Connections usable in the plan | +| Pricing plan | Number of Hybrid Connections usable in the plan | |----|----| | Basic | 5 | | Standard | 25 | | Premium | 200 | | Isolated | 200 | -Since there are App Service Plan restrictions, there is also UI in the App Service Plan that shows you how many Hybrid Connections are being used and by what apps. +Note that the App Service plan shows you how many Hybrid Connections are being used and by what apps. -![App Servie plan level properties][6] +![Screenshot of App Service plan properties][6] -You can see details on your Hybrid Connection by clicking on it. In the properties shown here, you can see all the information you saw at the app view, and you can also see how many other apps in the same App Service Plan are using that Hybrid Connection. +Select the Hybrid Connection to see details. You can see all the information that you saw at the app view. You can also see how many other apps in the same plan are using that Hybrid Connection. -While there is a limit on the number of Hybrid Connection endpoints that can be used in an App Service Plan, each Hybrid Connection used can be used across any number of apps in that App Service Plan. In other words, a single Hybrid Connection that is used in 5 separate apps in an App Service Plan counts as 1 Hybrid Connection. +There is a limit on the number of Hybrid Connection endpoints that can be used in an App Service plan. Each Hybrid Connection used, however, can be used across any number of apps in that plan. For example, a single Hybrid Connection that is used in five separate apps in an App Service plan counts as one Hybrid Connection. -There is an additional cost to using Hybrid Connections. For details on Hybrid Connection pricing, please go here: [Service Bus pricing][sbpricing]. +There is an additional cost to using Hybrid Connections. For details, see [Service Bus pricing][sbpricing]. ## Hybrid Connection Manager ## -In order for Hybrid Connections to work, you need a relay agent in the network that hosts your Hybrid Connection endpoint. 
That relay agent is called the Hybrid Connection Manager (HCM). This tool can be downloaded from the **Networking > Configure your Hybrid Connection endpoints** UI available from your app in the [Azure portal][portal]. +The Hybrid Connections feature requires a relay agent in the network that hosts your Hybrid Connection endpoint. That relay agent is called the Hybrid Connection Manager (HCM). To download HCM, from your app in the [Azure portal][portal], select **Networking** > **Configure your Hybrid Connection endpoints**. -This tool runs on Windows server 2012 and later versions of Windows. Once installed the HCM runs as a service. This service connects to Azure Service Bus Relay based on the configured endpoints. The connections from the HCM are outbound to Azure over port 443. +This tool runs on Windows Server 2012 and later. When installed, HCM runs as a service that connects to Service Bus Relay, based on the configured endpoints. The connections from HCM are outbound to Azure over port 443. -The HCM has a UI to configure it. After the HCM is installed, you can bring up the UI by running the HybridConnectionManagerUi.exe that sits in the Hybrid Connection Manager installation directory. It is also easily reached on Windows 10 by typing *Hybrid Connection Manager UI* in your search box. +After installing HCM, you can run HybridConnectionManagerUi.exe to use the UI for the tool. This file is in the Hybrid Connection Manager installation directory. In Windows 10, you can also just search for *Hybrid Connection Manager UI* in your search box. -![Hybrid Connection portal][7] +![Screenshot of Hybrid Connection Manager][7] -When the HCM UI is started, the first thing you see is a table that lists all of the Hybrid Connections that are configured with this instance of the HCM. If you wish to make any changes, you will need to authenticate with Azure. +When you start the HCM UI, the first thing you see is a table that lists all the Hybrid Connections that are configured with this instance of the HCM. If you want to make any changes, first authenticate with Azure. To add one or more Hybrid Connections to your HCM: -1. Start the HCM UI -1. Click Configure another Hybrid Connection -![Add an HC in the HCM][8] +1. Start the HCM UI. +1. Select **Configure another Hybrid Connection**. +![Screenshot of Configure New Hybrid Connections][8] -1. Sign in with your Azure account -1. Choose a subscription -1. Click on the Hybrid Connections you want the HCM to relay -![Select an HC][9] +1. Sign in with your Azure account. +1. Choose a subscription. +1. Select the Hybrid Connections that you want the HCM to relay. +![Screenshot of Hybrid Connections][9] -1. Click Save +1. Select **Save**. -At this point, you will see the Hybrid Connections you added. You can also click on the configured Hybrid Connection and see details about the it. +You can now see the Hybrid Connections you added. You can also select the configured Hybrid Connection to see details. -![HC details][10] +![Screenshot of Hybrid Connection Details][10] -For your HCM to be able to support the Hybrid Connections it is configured with, it needs: +To support the Hybrid Connections it is configured with, HCM requires: -- TCP access to Azure over ports 80 and 443 -- TCP access to the Hybrid Connection endpoint -- Ability to do DNS look-ups on the endpoint host and the Azure Service Bus namespace +- TCP access to Azure over ports 80 and 443. +- TCP access to the Hybrid Connection endpoint. 
+- The ability to do DNS look-ups on the endpoint host and the Service Bus namespace. -The HCM supports both new Hybrid Connections, as well as the older BizTalk Hybrid Connections. +HCM supports both new Hybrid Connections and BizTalk Hybrid Connections. > [!NOTE] -> Azure Relay relies on Web Sockets for connectivity. This capability is only available on Windows Server 2012 or newer. Because of that the Hybrid Connection Manager is not supported on anything earlier than Windows Server 2012. +> Azure Relay relies on Web Sockets for connectivity. This capability is only available on Windows Server 2012 or later. Because of that, HCM is not supported on anything earlier than Windows Server 2012. > ### Redundancy ### -Each HCM can support multiple Hybrid Connections. Also, any given Hybrid Connection can be supported by multiple HCMs. The default behavior is to round robin traffic across the configured HCMs for any given endpoint. If you want high availability on your Hybrid Connections from your network, simply instantiate multiple HCMs on separate machines. +Each HCM can support multiple Hybrid Connections. Also, any given Hybrid Connection can be supported by multiple HCMs. The default behavior is to route traffic across the configured HCMs for any given endpoint. If you want high availability on your Hybrid Connections from your network, run multiple HCMs on separate machines. -### Manually adding a Hybrid Connection ### +### Manually add a Hybrid Connection ### -If you wish somebody outside of your subscription to host an HCM instance for a given Hybrid Connection, you can share the gateway connection string for the Hybrid Connection with them. You can see this in the properties for a Hybrid Connection in the [Azure portal][portal]. To use that string, click the **Enter Manually** button in the HCM and paste in the gateway connection string. +To enable someone outside your subscription to host an HCM instance for a given Hybrid Connection, share the gateway connection string for the Hybrid Connection with them. You can see this in the properties for a Hybrid Connection in the [Azure portal][portal]. To use that string, select **Enter Manually** in the HCM, and paste in the gateway connection string. ## Troubleshooting ## -The connection status for a Hybrid Connection means that at least one HCM is configured with that Hybrid Conneciton and is able to reach Azure. If the status for your Hybrid Connection does not say **Connected**, then your Hybrid Connection is not configured on any HCM that has access to Azure. +The status of "Connected" means that at least one HCM is configured with that Hybrid Connection, and is able to reach Azure. If the status for your Hybrid Connection does not say **Connected**, your Hybrid Connection is not configured on any HCM that has access to Azure. -The primary reason that clients cannot connect to their endpoint is because the endpoint was specified using an IP address instead of a DNS name. If your app cannot reach the desired endpoint and you used an IP address, switch to using a DNS name that is valid on the host where the HCM is running. Other things to check are that the DNS name resolves properly on the host where the HCM is running and that there is connectivity from the host where the HCM is running to the Hybrid Connection endpoint. +The primary reason that clients cannot connect to their endpoint is because the endpoint was specified by using an IP address instead of a DNS name. 
If your app cannot reach the desired endpoint and you used an IP address, switch to using a DNS name that is valid on the host where the HCM is running. Also check that the DNS name resolves properly on the host where the HCM is running. Confirm that there is connectivity from the host where the HCM is running to the Hybrid Connection endpoint. -There is a tool in App Service that can be invoked from the Advanced Tools (Kudu) console called tcpping. This tool can tell you if you have access to a TCP endpoint, but it does not tell you if you have access to a Hybrid Connection endpoint. When used in the console against a Hybrid Connection endpoint, a successful ping will only tell you that you have a Hybrid Connection configured for your app that uses that host:port combination. +In App Service, the tcpping tool can be invoked from the Advanced Tools (Kudu) console. This tool can tell you if you have access to a TCP endpoint, but it does not tell you if you have access to a Hybrid Connection endpoint. When you use the tool in the console against a Hybrid Connection endpoint, you are only confirming that it uses a host:port combination. ## BizTalk Hybrid Connections ## -The older BizTalk Hybrid Connections capability has been closed off to new BizTalk Hybrid Connections. You can continue using your existing BizTalk Hybrid Connections with your apps, but should migrate to the new Hybrid Connections that use Azure Relay. Among the benefits in the new service over the BizTalk version are: +The older BizTalk Hybrid Connections capability has been closed to new BizTalk Hybrid Connections. You can continue using your existing BizTalk Hybrid Connections with your apps, but you should migrate to the new Hybrid Connections that use Azure Relay. Among the benefits in the new service over the BizTalk version are: - No additional BizTalk account is required. -- TLS is 1.2 instead of 1.0 as in BizTalk Hybrid Connections. -- Communication is over ports 80 and 443 using a DNS name to reach Azure instead of IP addresses and a range of additional other ports. +- TLS is version 1.2 instead of version 1.0. +- Communication is over ports 80 and 443, and uses a DNS name to reach Azure, instead of IP addresses and a range of additional ports. -To add an existing BizTalk Hybrid Connection to your app, go to your app in the [Azure portal][portal] and click **Networking > Configure your Hybrid Connection endpoints**. In the Classic Hybrid Connections table, click **Add Classic Hybrid Connection**. From here, you will see a list of your BizTalk Hybrid Connections. +To add an existing BizTalk Hybrid Connection to your app, go to your app in the [Azure portal][portal], and select **Networking** > **Configure your Hybrid Connection endpoints**. In the Classic Hybrid Connections table, select **Add Classic Hybrid Connection**. You can then see a list of your BizTalk Hybrid Connections. diff --git a/articles/app-service/containers/app-service-linux-faq.md b/articles/app-service/containers/app-service-linux-faq.md index 00ab91bab6ec6..6a83e644b0609 100644 --- a/articles/app-service/containers/app-service-linux-faq.md +++ b/articles/app-service/containers/app-service-linux-faq.md @@ -61,6 +61,20 @@ Yes. Yes, you need to set an app setting called `WEBSITE_WEBDEPLOY_USE_SCM` to *false*. +**Git deployment of my application fails when using Linux web app. 
How can I workaround the issue?** + +If Git deployment fails to your Linux web app, you can choose the following alternate options to deploy your application code: + +- Use the Continuous Delivery (Preview) feature: You can store your app’s source code in a Team Services Git repo or GitHub repo to use Azure Continuous Delivery. For more details, see [How to configure Continuous Delivery for Linux web app](https://blogs.msdn.microsoft.com/devops/2017/05/10/use-azure-portal-to-setup-continuous-delivery-for-web-app-on-linux/). + +- Use the [ZIP deploy API](https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file): To use this API, [SSH into your web app](https://docs.microsoft.com/en-us/azure/app-service/containers/app-service-linux-ssh-support#making-a-client-connection) and go to the folder where you want to deploy your code. Run the following: + + ``` + curl -X POST -u --data-binary @ https://{your-sitename}.scm.azurewebsites.net/api/zipdeploy + ``` + + If you get an error that the `curl` command is not found, make sure you install curl by using `apt-get install curl` before you run the previous `curl` command. + ## Language support **I want to use websockets in my Node.js application, any special settings or configurations to set?** diff --git a/articles/app-service/containers/tutorial-docker-python-postgresql-app.md b/articles/app-service/containers/tutorial-docker-python-postgresql-app.md index 4ed83ce389823..1246c4f211b3b 100644 --- a/articles/app-service/containers/tutorial-docker-python-postgresql-app.md +++ b/articles/app-service/containers/tutorial-docker-python-postgresql-app.md @@ -9,7 +9,7 @@ ms.service: app-service-web ms.workload: web ms.devlang: python ms.topic: tutorial -ms.date: 05/03/2017 +ms.date: 11/29/2017 ms.author: beverst ms.custom: mvc --- @@ -34,7 +34,7 @@ To complete this tutorial: [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)] -If you choose to install and use the CLI locally, this topic requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI 2.0]( /cli/azure/install-azure-cli). +If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI 2.0]( /cli/azure/install-azure-cli). ## Test local PostgreSQL installation and create a database @@ -116,7 +116,7 @@ In this step, you create a PostgreSQL database in Azure. When your app is deploy ### Log in to Azure -You are now going to use the Azure CLI 2.0 to create the resources needed to host your Python application in Web App for Containers. Log in to your Azure subscription with the [az login](/cli/azure/#login) command and follow the on-screen directions. +You are now going to use the Azure CLI 2.0 to create the resources needed to host your Python application in Web App for Containers. Log in to your Azure subscription with the [az login](/cli/azure/#az_login) command and follow the on-screen directions. ```azurecli az login @@ -124,7 +124,7 @@ az login ### Create a resource group -Create a [resource group](../../azure-resource-manager/resource-group-overview.md) with the [az group create](/cli/azure/group#create). +Create a [resource group](../../azure-resource-manager/resource-group-overview.md) with the [az group create](/cli/azure/group#az_group_create). 
[!INCLUDE [Resource group intro](../../../includes/resource-group.md)] @@ -134,7 +134,7 @@ The following example creates a resource group in the West US region: az group create --name myResourceGroup --location "West US" ``` -Use the [az appservice list-locations](/cli/azure/appservice#list-locations) Azure CLI command to list available locations. +Use the [az appservice list-locations](/cli/azure/appservice#az_appservice_list_locations) Azure CLI command to list available locations. ### Create an Azure Database for PostgreSQL server @@ -255,7 +255,7 @@ Docker displays a confirmation that it successfully created the container. Successfully built 7548f983a36b ``` -Add database environment variables to an environment variable file *db.env*. The app will connect to the PostgreSQL production database in Azure. +Add database environment variables to an environment variable file *db.env*. The app connects to the Azure Database for PostgreSQL production database. ```text DBHOST=".postgres.database.azure.com" @@ -359,7 +359,7 @@ In this step, you deploy your Docker container-based Python Flask application to ### Create an App Service plan -Create an App Service plan with the [az appservice plan create](/cli/azure/appservice/plan#create) command. +Create an App Service plan with the [az appservice plan create](/cli/azure/appservice/plan#az_appservice_plan_create) command. [!INCLUDE [app-service-plan](../../../includes/app-service-plan-linux.md)] @@ -409,7 +409,7 @@ When the App Service plan is created, the Azure CLI shows information similar to ### Create a web app -Create a web app in the *myAppServicePlan* App Service plan with the [az webapp create](/cli/azure/webapp#create) command. +Create a web app in the *myAppServicePlan* App Service plan with the [az webapp create](/cli/azure/webapp#az_webapp_create) command. The web app gives you a hosting space to deploy your code and provides a URL for you to view the deployed application. Use to create the web app. @@ -440,7 +440,7 @@ When the web app has been created, the Azure CLI shows information similar to th Earlier in the tutorial, you defined environment variables to connect to your PostgreSQL database. -In App Service, you set environment variables as _app settings_ by using the [az webapp config appsettings set](/cli/azure/webapp/config#set) command. +In App Service, you set environment variables as _app settings_ by using the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) command. The following example specifies the database connection details as app settings. It also uses the *PORT* variable to map PORT 5000 from your Docker Container to receive HTTP traffic on PORT 80. diff --git a/articles/app-service/containers/tutorial-php-mysql-app.md b/articles/app-service/containers/tutorial-php-mysql-app.md index c0cc54f2251ad..db2ec76b4b616 100644 --- a/articles/app-service/containers/tutorial-php-mysql-app.md +++ b/articles/app-service/containers/tutorial-php-mysql-app.md @@ -7,10 +7,9 @@ author: cephalin manager: erikre ms.service: app-service-web ms.workload: web -ms.tgt_pltfrm: na ms.devlang: nodejs ms.topic: tutorial -ms.date: 07/21/2017 +ms.date: 11/28/2017 ms.author: cephalin ms.custom: mvc --- @@ -151,7 +150,7 @@ In this step, you create a MySQL database in [Azure Database for MySQL (Preview) ### Create a MySQL server -Create a server in Azure Database for MySQL (Preview) with the [az mysql server create](/cli/azure/mysql/server#create) command. 
+Create a server in Azure Database for MySQL (Preview) with the [az mysql server create](/cli/azure/mysql/server#az_mysql_server_create) command. In the following command, substitute your MySQL server name where you see the _<mysql_server_name>_ placeholder (valid characters are `a-z`, `0-9`, and `-`). This name is part of the MySQL server's hostname (`.database.windows.net`), it needs to be globally unique. @@ -176,7 +175,7 @@ When the MySQL server is created, the Azure CLI shows information similar to the ### Configure server firewall -Create a firewall rule for your MySQL server to allow client connections by using the [az mysql server firewall-rule create](/cli/azure/mysql/server/firewall-rule#create) command. +Create a firewall rule for your MySQL server to allow client connections by using the [az mysql server firewall-rule create](/cli/azure/mysql/server/firewall-rule#az_mysql_server_firewall_rule_create) command. ```azurecli-interactive az mysql server firewall-rule create --name allIPs --server --resource-group myResourceGroup --start-ip-address 0.0.0.0 --end-ip-address 255.255.255.255 @@ -327,7 +326,7 @@ In this step, you deploy the MySQL-connected PHP application to Azure App Servic ### Configure database settings -In App Service, you set environment variables as _app settings_ by using the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#set) command. +In App Service, you set environment variables as _app settings_ by using the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) command. The following command configures the app settings `DB_HOST`, `DB_DATABASE`, `DB_USERNAME`, and `DB_PASSWORD`. Replace the placeholders _<appname>_ and _<mysql_server_name>_. @@ -359,7 +358,7 @@ Use `php artisan` to generate a new application key without saving it to _.env_. php artisan key:generate --show ``` -Set the application key in the App Service web app by using the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#set) command. Replace the placeholders _<appname>_ and _<outputofphpartisankey:generate>_. +Set the application key in the App Service web app by using the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) command. Replace the placeholders _<appname>_ and _<outputofphpartisankey:generate>_. ```azurecli-interactive az webapp config appsettings set --name --resource-group myResourceGroup --settings APP_KEY="" APP_DEBUG="true" @@ -371,7 +370,7 @@ az webapp config appsettings set --name --resource-group myResourceGr Set the virtual application path for the web app. This step is required because the [Laravel application lifecycle](https://laravel.com/docs/5.4/lifecycle) begins in the _public_ directory instead of the application's root directory. Other PHP frameworks whose lifecycle start in the root directory can work without manual configuration of the virtual application path. -Set the virtual application path by using the [az resource update](/cli/azure/resource#update) command. Replace the _<appname>_ placeholder. +Set the virtual application path by using the [az resource update](/cli/azure/resource#az_resource_update) command. Replace the _<appname>_ placeholder. 
```azurecli-interactive az resource update --name web --resource-group myResourceGroup --namespace Microsoft.Web --resource-type config --parent sites/ --set properties.virtualApplications[0].physicalPath="site\wwwroot\public" --api-version 2015-06-01 diff --git a/articles/app-service/environment/forced-tunnel-support.md b/articles/app-service/environment/forced-tunnel-support.md index 45d5f415ed62e..14a62957fe87f 100644 --- a/articles/app-service/environment/forced-tunnel-support.md +++ b/articles/app-service/environment/forced-tunnel-support.md @@ -1,6 +1,6 @@ --- title: Configure your Azure App Service Environment to be force tunneled -description: Enable your ASE to work when outbound traffic is force tunneled +description: Enable your App Service Environment to work when outbound traffic is force tunneled services: app-service documentationcenter: na author: ccompy @@ -18,82 +18,85 @@ ms.author: ccompy # Configure your App Service Environment with forced tunneling -The App Service Environment (ASE) is a deployment of the Azure App Service in a customer's Azure Virtual Network (VNet). Many customers configure their VNets to be extensions of their on-premises networks with VPNs or ExpressRoute connections. Due to corporate policies or other security constraints, they configure routes to send all outbound traffic on-premises before it can go out to the internet. Changing the routing of the VNet so that the outbound traffic from the VNet flows through the VPN or ExpressRoute connection to on-premises is called forced tunneling. +The App Service Environment is a deployment of Azure App Service in a customer's instance of Azure Virtual Network. Many customers configure their virtual networks to be extensions of their on-premises networks with VPNs or Azure ExpressRoute connections. Due to corporate policies or other security constraints, they configure routes to send all outbound traffic on-premises before it can go out to the internet. Changing the routing of the virtual network so that the outbound traffic from the virtual network flows through the VPN or ExpressRoute connection to on-premises is called forced tunneling. -Forced tunneling can cause problems for an ASE. The ASE has a number of external dependencies, which are enumerated in this [ASE Network Architecture][network] document. The ASE, by default, requires that all outbound communication goes through the VIP that is provisioned with the ASE. +Forced tunneling can cause problems for an App Service Environment. The App Service Environment has a number of external dependencies, which are enumerated in the [App Service Environment network architecture][network] document. The App Service Environment, by default, requires that all outbound communication goes through the VIP that is provisioned with the App Service Environment. -Routes are a critical aspect what forced tunneling is and how to deal with it. In an Azure Virtual Network, routing is done based on Longest Prefix Match (LPM). If there is more than one route with the same LPM match, then a route is selected based on its origin in the following order: +Routes are a critical aspect of what forced tunneling is and how to deal with it. In an Azure virtual network, routing is done based on the longest prefix match (LPM). If there is more than one route with the same LPM match, a route is selected based on its origin in the following order: -1. User defined route -1. BGP route (when ExpressRoute is used) -1. 
System route +* User-defined route (UDR) +* BGP route (when ExpressRoute is used) +* System route -To learn more about routing in a VNet, read [User defined routes and IP forwarding][routes]. +To learn more about routing in a virtual network, read [User-defined routes and IP forwarding][routes]. -If you want your ASE to operate in a forced tunnel VNet, you have two choices: +If you want your App Service Environment to operate in a forced tunnel virtual network, you have two choices: -1. Enable your ASE to have direct internet access -1. Change the egress endpoint for your ASE +* Enable your App Service Environment to have direct internet access. +* Change the egress endpoint for your App Service Environment. -## Enable your ASE to have direct internet access +## Enable your App Service Environment to have direct internet access -For your ASE to work while your VNet is configured with an ExpressRoute, you can: +For your App Service Environment to work while your virtual network is configured with an ExpressRoute connection, you can: * Configure ExpressRoute to advertise 0.0.0.0/0. By default, it force tunnels all outbound traffic on-premises. -* Create a UDR. Apply it to the subnet that contains the ASE with an address prefix of 0.0.0.0/0 and a next hop type of Internet. +* Create a UDR. Apply it to the subnet that contains the App Service Environment with an address prefix of 0.0.0.0/0 and a next hop type of Internet. -If you make these two changes, internet-destined traffic that originates from the ASE subnet isn't forced down the ExpressRoute and the ASE works. +If you make these two changes, internet-destined traffic that originates from the App Service Environment subnet isn't forced down the ExpressRoute connection, and the App Service Environment works. > [!IMPORTANT] > The routes defined in a UDR must be specific enough to take precedence over any routes advertised by the ExpressRoute configuration. The preceding example uses the broad 0.0.0.0/0 address range. It can potentially be accidentally overridden by route advertisements that use more specific address ranges. > -> ASEs aren't supported with ExpressRoute configurations that cross-advertise routes from the public-peering path to the private-peering path. ExpressRoute configurations with public peering configured receive route advertisements from Microsoft. The advertisements contain a large set of Microsoft Azure IP address ranges. If the address ranges are cross-advertised on the private-peering path, all outbound network packets from the ASE's subnet are force tunneled to a customer's on-premises network infrastructure. This network flow is currently not supported by default with ASEs. One solution to this problem is to stop cross-advertising routes from the public-peering path to the private-peering path. The other solution is to enable your ASE to work in a forced tunnel configuration. +> App Service Environments aren't supported with ExpressRoute configurations that cross-advertise routes from the public-peering path to the private-peering path. ExpressRoute configurations with public peering configured receive route advertisements from Microsoft. The advertisements contain a large set of Microsoft Azure IP address ranges. If the address ranges are cross-advertised on the private-peering path, all outbound network packets from the App Service Environment's subnet are force tunneled to a customer's on-premises network infrastructure. This network flow is currently not supported by default with App Service Environments. 
One solution to this problem is to stop cross-advertising routes from the public-peering path to the private-peering path. Another solution is to enable your App Service Environment to work in a forced tunnel configuration. -## Change the egress endpoint for your ASE ## +## Change the egress endpoint for your App Service Environment ## -This section describes how to enable an ASE to operate in a forced tunnel configuration by changing the egress endpoint used by the ASE. If the outbound traffic from the ASE is forced tunneled to an on-premises network, then you need to allow that traffic to source from IP addresses other than the ASE VIP address. +This section describes how to enable an App Service Environment to operate in a forced tunnel configuration by changing the egress endpoint used by the App Service Environment. If the outbound traffic from the App Service Environment is force tunneled to an on-premises network, you need to allow that traffic to source from IP addresses other than the App Service Environment VIP address. -An ASE not only has external dependencies but it also must listen for inbound traffic to manage the ASE. The ASE must be able to respond to such traffic and the replies cannot be sent back from another address as that breaks TCP. There are thus three steps required to change the egress endpoint for the ASE. +An App Service Environment not only has external dependencies, but also must listen for inbound traffic and respond to such traffic. The replies can't be sent back from another address because that breaks TCP. There are three steps required to change the egress endpoint for the App Service Environment: -1. Set a route table to ensure that inbound management traffic can go back out from the same IP address -1. Add your IP addresses that to be used for egress to the ASE firewall -1. Set the routes to outbound traffic from the ASE to be tunneled +1. Set a route table to ensure that inbound management traffic can go back out from the same IP address. -![Forced tunnel network flow][1] +2. Add your IP addresses that are to be used for egress to the App Service Environment firewall. -You can configure the ASE with different egress addresses after the ASE is already up and operational or they can be set during ASE deployment. +3. Set the routes to outbound traffic from the App Service Environment to be tunneled. -### Changing the egress address after the ASE is operational ### -1. Get the IP addresses you want to use as egress IPs for your ASE. If you are doing forced tunneling, then this would be your NATs or gateway IPs. If you want to route the ASE outbound traffic through an NVA, then the egress address would be the public IP of the NVA. -2. Set the egress addresses in your ASE configuration information. Go to resource.azure.com and navigate to: Subscription//resourceGroups//providers/Microsoft.Web/hostingEnvironments/ then you can see the json that describes your ASE. Make sure it says read/write at the top. Click Edit Scroll down to the bottom and change userWhitelistedIpRanges from + ![Forced tunnel network flow][1] - "userWhitelistedIpRanges": null - - to something like the following. Use the addresses you want to set as the egress address range. +You can configure the App Service Environment with different egress addresses after the App Service Environment is already up and operational, or they can be set during App Service Environment deployment. 
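
As a rough illustration of the first step only (letting replies to inbound management traffic leave directly rather than follow the forced tunnel), the following Azure CLI sketch creates a route table with an internet route for a management address range and applies it to the App Service Environment subnet. The resource names are placeholders, and `<management-address-range>` must be replaced with the published management addresses for your region (repeat the route for each range). This sketch does not replace steps 2 and 3.

```azurecli
# Sketch only: route replies to App Service management traffic directly to the internet.
# Names are placeholders. Add one route per management address range published for your
# App Service Environment's region.
az network route-table create \
  --resource-group myResourceGroup \
  --name ase-management-routes

az network route-table route create \
  --resource-group myResourceGroup \
  --route-table-name ase-management-routes \
  --name ase-management-route-1 \
  --address-prefix <management-address-range> \
  --next-hop-type Internet

# Apply the route table to the subnet that contains the App Service Environment.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVirtualNetwork \
  --name ase-subnet \
  --route-table ase-management-routes
```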
- "userWhitelistedIpRanges": ["11.22.33.44/32", "55.66.77.0/24"] +### Change the egress address after the App Service Environment is operational ### +1. Get the IP addresses that you want to use as egress IPs for your App Service Environment. If you're doing forced tunneling, these addresses come from your NATs or gateway IPs. If you want to route the App Service Environment outbound traffic through an NVA, the egress address is the public IP of the NVA. - Click PUT at the top. This triggers a scale operation on your ASE and adjust the firewall. - -3. Create or edit a route table and populate the rules to allow access to/from the management addresses that map to your ASE location. The management addresses are here, [App Service Environment management addresses][management] +2. Set the egress addresses in your App Service Environment configuration information. Go to resource.azure.com, and go to Subscription//resourceGroups//providers/Microsoft.Web/hostingEnvironments/. Then you can see the JSON that describes your App Service Environment. Make sure it says **read/write** at the top. Select **Edit**. Scroll down to the bottom, and change the **userWhitelistedIpRanges** value from **null** to something like the following. Use the addresses you want to set as the egress address range. -4. Adjust the routes applied to the ASE subnet with a route table or BGP routes. + "userWhitelistedIpRanges": ["11.22.33.44/32", "55.66.77.0/24"] -If the ASE goes unresponsive from the portal, then there is a problem with your changes. It can be that your list of egress addresses was incomplete, the traffic was lost, or the traffic was blocked. + Select **PUT** at the top. This option triggers a scale operation on your App Service Environment and adjusts the firewall. + +3. Create or edit a route table, and populate the rules to allow access to/from the management addresses that map to your App Service Environment location. To find the management addresses, see [App Service Environment management addresses][management]. -### Create a new ASE with a different egress address ### +4. Adjust the routes applied to the App Service Environment subnet with a route table or BGP routes. -In the event that your VNet is already configured to force tunnel all the traffic, you will need to take some extra steps to create your ASE such that it can come up successfully. This means you need to enable use of another egress endpoint during the ASE creation. To do this, you need to create the ASE with a template that specifies the permitted egress addresses. +If the App Service Environment goes unresponsive from the portal, there is a problem with your changes. The problem might be that your list of egress addresses was incomplete, the traffic was lost, or the traffic was blocked. -1. Get the IP addresses to be used as the egress addresses for your ASE. -1. Pre-create the subnet to be used by the ASE. This is needed to let you set routes and also because the template requires it. -1. Create a route table with the management IPs that map to your ASE location and assign it to your ASE -1. Follow the directions here, [Creating an ASE with a template][template], and pull down the appropriate template -1. Edit the azuredeploy.json file "resources" section. 
Include a line for **userWhitelistedIpRanges** with your values like: +### Create a new App Service Environment with a different egress address ### + +If your virtual network is already configured to force tunnel all the traffic, you need to take extra steps to create your App Service Environment so that it can come up successfully. You need to enable the use of another egress endpoint during the App Service Environment creation. To do this, you need to create the App Service Environment with a template that specifies the permitted egress addresses. + +1. Get the IP addresses to be used as the egress addresses for your App Service Environment. + +2. Pre-create the subnet to be used by the App Service Environment. You need it so that you can set routes, and also because the template requires it. + +3. Create a route table with the management IPs that map to your App Service Environment location. Assign it to your App Service Environment. + +4. Follow the directions in [Create an App Service Environment with a template][template]. Pull down the appropriate template. + +5. Edit the "resources" section in the azuredeploy.json file. Include a line for **userWhitelistedIpRanges** with your values like this: "userWhitelistedIpRanges": ["11.22.33.44/32", "55.66.77.0/30"] -If this is configured properly, then the ASE should start with no issues. +If this section is configured properly, the App Service Environment should start with no problems. diff --git a/articles/app-service/media/web-sites-monitor/quotas.png b/articles/app-service/media/web-sites-monitor/quotas.png index edf547115a9c6..3beca73838bde 100644 Binary files a/articles/app-service/media/web-sites-monitor/quotas.png and b/articles/app-service/media/web-sites-monitor/quotas.png differ diff --git a/articles/app-service/web-sites-monitor.md b/articles/app-service/web-sites-monitor.md index c30d39a94a5e5..432beb82e86fa 100644 --- a/articles/app-service/web-sites-monitor.md +++ b/articles/app-service/web-sites-monitor.md @@ -13,14 +13,14 @@ ms.workload: na ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 09/07/2016 +ms.date: 11/28/2017 ms.author: byvinyal --- # How to: Monitor Apps in Azure App Service [App Service](http://go.microsoft.com/fwlink/?LinkId=529714) provides built in monitoring functionality in the [Azure portal](https://portal.azure.com). -This includes the ability to review **quotas** and **metrics** for an app as +The Azure portal includes the ability to review **quotas** and **metrics** for an app as well as the App Service plan, setting up **alerts** and even **scaling** automatically based on these metrics. @@ -43,7 +43,7 @@ Medium, Large) and **instance count** (1, 2, 3, ...) of the **App Service plan** * **CPU(Short)** * Amount of CPU allowed for this application in a 5-minute interval. This - quota resets every 5 minutes. + quota resets every five minutes. * **CPU(Day)** * Total amount of CPU allowed for this application in a day. This quota resets every 24 hours at midnight UTC. @@ -55,7 +55,7 @@ Medium, Large) and **instance count** (1, 2, 3, ...) of the **App Service plan** * **Filesystem** * Total amount of storage allowed. -The only quota applicable to apps hosted on **Basic**, **Standard** and +The only quota applicable to apps hosted on **Basic**, **Standard**, and **Premium** plans is **Filesystem**. 
More information about the specific quotas, limits, and features available to @@ -63,7 +63,7 @@ the different App Service SKUs can be found here: [Azure Subscription Service Limits](../azure-subscription-service-limits.md#app-service-limits) #### Quota Enforcement -If an application in its usage exceeds the **CPU (short)**, **CPU (Day)**, or +If an application exceeds the **CPU (short)**, **CPU (Day)**, or **bandwidth** quota then the application is stopped until the quota resets. During this time, all incoming requests result in an **HTTP 403**. ![][http403] @@ -146,17 +146,17 @@ There are two metrics that reflect CPU usage. **CPU time** and **CPU percentage* one of their quotas is defined in CPU minutes used by the app. **CPU percentage** is useful for apps hosted in -**basic**, **standard** and **premium** plans since they can be -scaled out and this metric is a good indication of the overall usage across +**basic**, **standard**, and **premium** plans since they can be +scaled out. CPU percentage is a good indication of the overall usage across all instances. ## Metrics Granularity and Retention Policy Metrics for an application and app service plan are logged and aggregated by the service with the following granularities and retention policies: -* **Minute** granularity metrics are retained for **48 hours** +* **Minute** granularity metrics are retained for **30 hours** * **Hour** granularity metrics are retained for **30 days** -* **Day** granularity metrics are retained for **90 days** +* **Day** granularity metrics are retained for **30 days** ## Monitoring Quotas and Metrics in the Azure portal. You can review the status of the different **quotas** and **metrics** @@ -180,10 +180,10 @@ Metrics for an App or App Service plan can be hooked up to alerts. To learn more about it, see [Receive alert notifications](../monitoring-and-diagnostics/insights-alerts-portal.md). App Service apps hosted in basic, standard, or premium App Service plans -support **autoscale**. This allows you to configure rules that monitor the -App Service plan metrics and can increase or decrease the instance count -providing additional resources as needed, or saving money when the application -is over-provision. You can learn more about auto scale here: [How to Scale](../monitoring-and-diagnostics/insights-how-to-scale.md) and here [Best practices for Azure Monitor autoscaling](../monitoring-and-diagnostics/insights-autoscale-best-practices.md) +support **autoscale**. Autoscale allows you to configure rules that monitor the +App Service plan metrics. Rules can increase or decrease the instance count +providing additional resources as needed. Rules can also help you save money when the application +is over-provisioned. You can learn more about auto scale here: [How to Scale](../monitoring-and-diagnostics/insights-how-to-scale.md) and here [Best practices for Azure Monitor autoscaling](../monitoring-and-diagnostics/insights-autoscale-best-practices.md) > [!NOTE] > If you want to get started with Azure App Service before signing up for an Azure account, go to [Try App Service](https://azure.microsoft.com/try/app-service/), where you can immediately create a short-lived starter web app in App Service. No credit cards required; no commitments. 
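
The autoscale behavior described above can also be scripted. The following is a hedged sketch only: the resource names and thresholds are placeholders, the `az monitor autoscale` commands require a recent Azure CLI version, and the `CpuPercentage` metric name and condition syntax should be verified against the current CLI reference before use.

```azurecli
# Sketch only: scale an App Service plan out on high CPU and back in on low CPU.
# Names, thresholds, and the metric name are placeholders to adjust for your plan.
az monitor autoscale create \
  --resource-group myResourceGroup \
  --resource myAppServicePlan \
  --resource-type Microsoft.Web/serverfarms \
  --name myAutoscaleSetting \
  --min-count 1 --max-count 3 --count 1

az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name myAutoscaleSetting \
  --condition "CpuPercentage > 70 avg 10m" \
  --scale out 1

az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name myAutoscaleSetting \
  --condition "CpuPercentage < 30 avg 10m" \
  --scale in 1
```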
diff --git a/articles/app-service/web-sites-traffic-manager.md b/articles/app-service/web-sites-traffic-manager.md index 2e08858f6c308..6088102442a61 100644 --- a/articles/app-service/web-sites-traffic-manager.md +++ b/articles/app-service/web-sites-traffic-manager.md @@ -1,6 +1,6 @@ --- -title: Controlling Azure web app traffic with Azure Traffic Manager -description: This article provides summary information for Azure Traffic Manager as it relates to Azure web apps. +title: Controlling Azure App Service traffic with Azure Traffic Manager +description: This article provides summary information for Azure Traffic Manager as it relates to Azure App Service. services: app-service\web documentationcenter: '' author: cephalin @@ -18,41 +18,41 @@ ms.date: 02/25/2016 ms.author: cephalin --- -# Controlling Azure web app traffic with Azure Traffic Manager +# Controlling Azure App Service traffic with Azure Traffic Manager > [!NOTE] -> This article provides summary information for Microsoft Azure Traffic Manager as it relates to Azure Web Apps. More information about Azure Traffic Manager itself can be found by visiting the links at the end of this article. +> This article provides summary information for Microsoft Azure Traffic Manager as it relates to Azure App Service. More information about Azure Traffic Manager itself can be found by visiting the links at the end of this article. > > ## Introduction -You can use Azure Traffic Manager to control how requests from web clients are distributed to web apps in Azure App Service. When web app endpoints are added to an Azure Traffic Manager profile, Azure Traffic Manager keeps track of the status of your web apps (running, stopped, or deleted) so that it can decide which of those endpoints should receive traffic. +You can use Azure Traffic Manager to control how requests from web clients are distributed to apps in Azure App Service. When App Service endpoints are added to an Azure Traffic Manager profile, Azure Traffic Manager keeps track of the status of your App Service apps (running, stopped, or deleted) so that it can decide which of those endpoints should receive traffic. ## Routing methods -Azure Traffic Manager uses three different routing methods. These methods are described in the following list as they pertain to Azure web apps. +Azure Traffic Manager uses four different routing methods. These methods are described in the following list as they pertain to Azure App Service. -* **[Priority](#priority):** use a primary web app for all traffic, and provide backups in case the primary or the backup web apps are unavailable. -* **[Weighted](#weighted):** distribute traffic across a set of web apps, either evenly or according to weights, which you define. -* **[Performance](#performance):** when you have web apps in different geographic locations, use the "closest" web app in terms of the lowest network latency. -* **[Geographic](#geographic):** direct users to specific web apps based on which geographic location their DNS query originates from. +* **[Priority](#priority):** use a primary app for all traffic, and provide backups in case the primary or the backup apps are unavailable. +* **[Weighted](#weighted):** distribute traffic across a set of apps, either evenly or according to weights, which you define. +* **[Performance](#performance):** when you have apps in different geographic locations, use the "closest" app in terms of the lowest network latency. 
+* **[Geographic](#geographic):** direct users to specific apps based on which geographic location their DNS query originates from. For more information, see [Traffic Manager routing methods](../traffic-manager/traffic-manager-routing-methods.md). -## Web Apps and Traffic Manager Profiles -To configure the control of web app traffic, you create a profile in Azure Traffic Manager that uses one of the three load balancing methods described previously, and then add the endpoints (in this case, web apps) for which you want to control traffic to the profile. Your web app status (running, stopped, or deleted) is regularly communicated to the profile so that Azure Traffic Manager can direct traffic accordingly. +## App Service and Traffic Manager Profiles +To configure the control of App Service app traffic, you create a profile in Azure Traffic Manager that uses one of the three load balancing methods described previously, and then add the endpoints (in this case, App Service) for which you want to control traffic to the profile. Your app status (running, stopped, or deleted) is regularly communicated to the profile so that Azure Traffic Manager can direct traffic accordingly. When using Azure Traffic Manager with Azure, keep in mind the following points: -* For web app only deployments within the same region, Web Apps already provides failover and round-robin functionality without regard to web app mode. -* For deployments in the same region that use Web Apps in conjunction with another Azure cloud service, you can combine both types of endpoints to enable hybrid scenarios. -* You can only specify one web app endpoint per region in a profile. When you select a web app as an endpoint for one region, the remaining web apps in that region become unavailable for selection for that profile. -* The web app endpoints that you specify in an Azure Traffic Manager profile appears under the **Domain Names** section on the Configure page for the web app in the profile, but is not configurable there. -* After you add a web app to a profile, the **Site URL** on the Dashboard of the web app's portal page displays the custom domain URL of the web app if you have set one up. Otherwise, it displays the Traffic Manager profile URL (for example, `contoso.trafficmanager.net`). Both the direct domain name of the web app and the Traffic Manager URL are visible on the web app's Configure page under the **Domain Names** section. -* Your custom domain names work as expected, but in addition to adding them to your web apps, you must also configure your DNS map to point to the Traffic Manager URL. For information on how to set up a custom domain for an Azure web app, see [Configuring a custom domain name for an Azure web site](app-service-web-tutorial-custom-domain.md). -* You can only add web apps that are in standard or premium mode to an Azure Traffic Manager profile. +* For app only deployments within the same region, App Service already provides failover and round-robin functionality without regard to app mode. +* For deployments in the same region that use App Service in conjunction with another Azure cloud service, you can combine both types of endpoints to enable hybrid scenarios. +* You can only specify one App Service endpoint per region in a profile. When you select an app as an endpoint for one region, the remaining apps in that region become unavailable for selection for that profile. 
+* The App Service endpoints that you specify in an Azure Traffic Manager profile appears under the **Domain Names** section on the Configure page for the app in the profile, but is not configurable there. +* After you add an app to a profile, the **Site URL** on the Dashboard of the app's portal page displays the custom domain URL of the app if you have set one up. Otherwise, it displays the Traffic Manager profile URL (for example, `contoso.trafficmanager.net`). Both the direct domain name of the app and the Traffic Manager URL are visible on the app's Configure page under the **Domain Names** section. +* Your custom domain names work as expected, but in addition to adding them to your apps, you must also configure your DNS map to point to the Traffic Manager URL. For information on how to set up a custom domain for an App Service app, see [Map an existing custom DNS name to Azure Web Apps](app-service-web-tutorial-custom-domain.md). +* You can only add apps that are in standard or premium mode to an Azure Traffic Manager profile. ## Next Steps For a conceptual and technical overview of Azure Traffic Manager, see [Traffic Manager Overview](../traffic-manager/traffic-manager-overview.md). -For more information about using Traffic Manager with Web Apps, see the blog posts +For more information about using Traffic Manager with App Service, see the blog posts [Using Azure Traffic Manager with Azure Web Sites](http://blogs.msdn.com/b/waws/archive/2014/03/18/using-windows-azure-traffic-manager-with-waws.aspx) and [Azure Traffic Manager can now integrate with Azure Web Sites](https://azure.microsoft.com/blog/2014/03/27/azure-traffic-manager-can-now-integrate-with-azure-web-sites/). diff --git a/articles/application-gateway/application-gateway-configure-redirect-powershell.md b/articles/application-gateway/application-gateway-configure-redirect-powershell.md index a0d1d1e7f143a..0e58ccced5157 100644 --- a/articles/application-gateway/application-gateway-configure-redirect-powershell.md +++ b/articles/application-gateway/application-gateway-configure-redirect-powershell.md @@ -111,8 +111,8 @@ $poolSetting = Get-AzureRmApplicationGatewayBackendHttpSettings -Name "appGatewa # Retrieve an existing backend pool $pool = Get-AzureRmApplicationGatewayBackendAddressPool -Name appGatewayBackendPool -ApplicationGateway $gw -# Create a new path based rule -$pathRule = New-AzureRmApplicationGatewayPathRuleConfig -Name "pathrule6" -Paths "/image/*" -BackendAddressPool $pool -BackendHttpSettings $poolSetting +# Create a new path rule for the path map configuration +$pathRule = New-AzureRmApplicationGatewayPathRuleConfig -Name "pathrule6" -Paths "/image/*" -RedirectConfiguration $redirectconfig # Create a path map to add to the rule Add-AzureRmApplicationGatewayUrlPathMapConfig -Name "urlpathmap" -PathRules $pathRule -DefaultBackendAddressPool $pool -DefaultBackendHttpSettings $poolSetting -ApplicationGateway $gw @@ -121,7 +121,7 @@ Add-AzureRmApplicationGatewayUrlPathMapConfig -Name "urlpathmap" -PathRules $pat $urlPathMap = Get-AzureRmApplicationGatewayUrlPathMapConfig -Name "urlpathmap" -ApplicationGateway $gw # Add a new rule to handle the redirect and use the new listener -Add-AzureRmApplicationGatewayRequestRoutingRule -Name "rule6" -RuleType PathBasedRouting -HttpListener $listener -UrlPathMap $urlPathMap -RedirectConfiguration $redirectconfig -ApplicationGateway $gw +Add-AzureRmApplicationGatewayRequestRoutingRule -Name "rule6" -RuleType PathBasedRouting -HttpListener $listener -UrlPathMap $urlPathMap 
-ApplicationGateway $gw # Update the application gateway Set-AzureRmApplicationGateway -ApplicationGateway $gw diff --git a/articles/application-gateway/application-gateway-create-gateway-arm-template.md b/articles/application-gateway/application-gateway-create-gateway-arm-template.md index d93d9425a57d1..15da721de4dac 100644 --- a/articles/application-gateway/application-gateway-create-gateway-arm-template.md +++ b/articles/application-gateway/application-gateway-create-gateway-arm-template.md @@ -27,9 +27,9 @@ ms.author: davidmu Azure Application Gateway is a layer-7 load balancer. It provides failover and performance-routing HTTP requests between different servers, whether they are on the cloud or on-premises. Application Gateway provides many application delivery controller (ADC) features including HTTP load balancing, cookie-based session affinity, Secure Sockets Layer (SSL) offload, custom health probes, support for multi-site, and many others. To find a complete list of supported features, visit [Application Gateway overview](application-gateway-introduction.md) -This article walks you through downloading and modifying an existing Azure Resource Manager template from GitHub and deploying the template from GitHub, PowerShell, and the Azure CLI. +This article walks you through downloading and modifying an existing [Azure Resource Manager template](../azure-resource-manager/resource-group-authoring-templates.md) from GitHub and deploying the template from GitHub, PowerShell, and the Azure CLI. -If you are simply deploying the Azure Resource Manager template directly from GitHub without any changes, skip to deploy a template from GitHub. +If you are simply deploying the template directly from GitHub without any changes, skip to deploy a template from GitHub. ## Scenario @@ -73,9 +73,6 @@ You can download the existing Azure Resource Manager template to create a virtua * **name**. Name for the resource. Notice the use of `[parameters('applicationGatewayName')]`, which means that the name is provided as input by you or by a parameter file during deployment. * **properties**. List of properties for the resource. This template uses the virtual network and public IP address during application gateway creation. - > [!NOTE] - > For more information on templates visit: [Resource Manager templates reference](/templates/) - 1. Navigate back to [https://github.com/Azure/azure-quickstart-templates/blob/master/101-application-gateway-waf/](https://github.com/Azure/azure-quickstart-templates/blob/master/101-application-gateway-waf). 1. Click **azuredeploy-parameters.json**, and then click **RAW**. 1. Save the file to a local folder on your computer. diff --git a/articles/application-gateway/application-gateway-faq.md b/articles/application-gateway/application-gateway-faq.md index 0c37bd3052244..6ace1f3550af8 100644 --- a/articles/application-gateway/application-gateway-faq.md +++ b/articles/application-gateway/application-gateway-faq.md @@ -286,7 +286,7 @@ WAF currently supports CRS [2.2.9](application-gateway-crs-rulegroups-rules.md#o * Prevention against bots, crawlers, and scanners -* Detection of common application misconfigurations (that is, Apache, IIS, etc.) + * Detection of common application misconfigurations (that is, Apache, IIS, etc.) **Q. 
Does WAF also support DDoS prevention?** diff --git a/articles/application-insights/app-insights-troubleshoot-snapshot-debugger.md b/articles/application-insights/app-insights-troubleshoot-snapshot-debugger.md new file mode 100644 index 0000000000000..3ebd52ae44eb1 --- /dev/null +++ b/articles/application-insights/app-insights-troubleshoot-snapshot-debugger.md @@ -0,0 +1,109 @@ +--- +title: Azure Application Insights Snapshot Debugger Troubleshooting Guide | Microsoft Docs +description: Frequently asked questions about Azure Application Insights Snapshot Debugger. +services: application-insights +documentationcenter: .net +author: mrbullwinkle +manager: carmonm +ms.assetid: +ms.service: application-insights +ms.workload: +ms.tgt_pltfrm: ibiza +ms.devlang: na +ms.topic: article +ms.date: 11/15/2017 +ms.author: mbullwin + +--- +# Snapshot Debugger: Troubleshooting Guide + +Application Insights Snapshot Debugger allows you to automatically collect a debug snapshot from live web applications. The snapshot shows the state of source code and variables at the moment an exception was thrown. If you are having difficulty getting the Application Insights Snapshot Debugger up and running, this article walks you through how the debugger works, along with solutions to common troubleshooting scenarios. + +## How does Application Insights Snapshot Debugger work + +Application Insights Snapshot Debugger is part of the Application Insights telemetry pipeline (an instance of ITelemetryProcessor). The snapshot collector monitors both the exceptions thrown in your code (AppDomain.FirstChanceException) and the exceptions that are tracked by the Application Insights exception telemetry pipeline. After you have successfully added the snapshot collector to your project and it has detected one exception in the Application Insights telemetry pipeline, an Application Insights custom event with the name 'AppInsightsSnapshotCollectorLogs' and 'SnapshotCollectorEnabled' in the custom data is sent. At the same time, the collector starts a process named 'MinidumpUploader.exe' to upload the collected snapshot data files to Application Insights. When the 'MinidumpUploader.exe' process starts, a custom event with the name 'UploaderStart' is sent. After these steps, the snapshot collector enters its normal monitoring behavior. + +While the snapshot collector is monitoring Application Insights exception telemetry, it uses the parameters defined in the configuration (for example, ThresholdForSnapshotting, MaximumSnapshotsRequired, MaximumCollectionPlanSize, ProblemCounterResetInterval) to determine when to collect a snapshot. When all the rules are met, the collector requests a snapshot for the next exception thrown at the same place. Simultaneously, an Application Insights custom event with the name 'AppInsightsSnapshotCollectorLogs' and 'RequestSnapshots' is sent. Because the compiler optimizes 'Release' code, local variables may not be visible in the collected snapshot. When it requests snapshots, the snapshot collector tries to deoptimize the method that threw the exception. During this time, an Application Insights custom event with the name 'AppInsightsSnapshotCollectorLogs' and 'ProductionBreakpointsDeOptimizeMethod' in the custom data is sent. When the snapshot of the next exception is collected, the local variables are available. After the snapshot is collected, the collector reoptimizes the code to preserve performance. + +> [!NOTE] +> Deoptimization requires the Application Insights site extension to be installed.
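Because the snapshot collector is a telemetry processor, it only runs if it is actually present in your application's telemetry processor chain. Installing the NuGet package wires this up through *ApplicationInsights.config* for classic ASP.NET applications; for other application types you may need to add it in code. The following is a minimal sketch only, not the article's own setup: it assumes the `SnapshotCollectorTelemetryProcessor` constructor accepts the next processor in the chain plus an optional `SnapshotCollectorConfiguration`, and that the configuration class exposes the parameters named above as properties. Verify both against the package version you install.

```csharp
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.SnapshotCollector;

public static class SnapshotCollectorSetup
{
    // Call once during application startup, before telemetry is sent.
    public static void Configure()
    {
        // Assumed property names; they mirror the configuration parameters described above.
        var snapshotConfig = new SnapshotCollectorConfiguration
        {
            ThresholdForSnapshotting = 5,   // the same exception must occur this many times before a snapshot is requested
            MaximumSnapshotsRequired = 3    // stop collecting after this many snapshots
        };

        // Append the snapshot collector to the default telemetry processor chain.
        var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;
        builder.Use(next => new SnapshotCollectorTelemetryProcessor(next, configuration: snapshotConfig));
        builder.Build();
    }
}
```

If the collector is registered this way, the 'SnapshotCollectorEnabled' and 'UploaderStart' events described above should start appearing in your telemetry shortly after the first tracked exception.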
+ +When a snapshot is requested for a specific exception, the snapshot collector starts monitoring your application's exception handling pipeline (AppDomain.FirstChanceException). When the exception happens again, the collector starts a snapshot (an Application Insights custom event with the name 'AppInsightsSnapshotCollectorLogs' and 'SnapshotStart' in the custom data is sent). Then a shadow copy of the running process is made (the page table is duplicated). This normally takes 10 to 20 milliseconds. After this, an Application Insights custom event with the name 'AppInsightsSnapshotCollectorLogs' and 'SnapshotStop' in the custom data is sent. When the forked process is created, the total paged memory increases by the same amount as the paged memory of your running application (the working set is much smaller). While your application process continues to run normally, the shadow-copied process's memory is dumped to disk and uploaded to Application Insights. After the snapshot is uploaded, an Application Insights custom event with the name 'UploadSnapshotFinish' is sent. + +## Is the snapshot collector working properly? + +### How to find Snapshot Collector logs +Snapshot Collector logs are sent to your Application Insights resource if version 1.1.0 or later of the [Snapshot Collector NuGet package](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) is installed. Make sure *ProvideAnonymousTelemetry* is not set to false (the value is true by default). + +* Navigate to your Application Insights resource in the Azure portal. +* Click *Search* in the Overview section. +* Enter the following string into the search box: + ``` + AppInsightsSnapshotCollectorLogs OR AppInsightsSnapshotUploaderLogs OR UploadSnapshotFinish OR UploaderStart OR UploaderStop + ``` +* Note: change the *Time range* if needed. + +![Screenshot of Search Snapshot Collector logs](./media/app-insights-troubleshoot-snapshot-debugger/001.png) + + +### Examine Snapshot Collector logs +When searching for Snapshot Collector logs, there should be 'UploadSnapshotFinish' events in the targeted time range. If you still don't see the 'Open Debug Snapshot' button to open the snapshot, please send email to snapshothelp@microsoft.com with your Application Insights' Instrumentation Key. + +![Screenshot of Examine snapshot collector logs](./media/app-insights-troubleshoot-snapshot-debugger/005.png) + +## I cannot find a snapshot to open +If the following steps don't help you solve the issue, please send email to snapshothelp@microsoft.com with your Application Insights' Instrumentation Key. + +### Step 1: Make sure your application is sending telemetry data and exception data to Application Insights +Navigate to your Application Insights resource and check that data is being sent from your application. + +### Step 2: Make sure the Snapshot Collector is added correctly to your application's Application Insights telemetry pipeline +If you found logs in the 'How to find Snapshot Collector logs' step, the snapshot collector is correctly added to your project and you can skip this step. + +If there are no Snapshot Collector logs, verify the following: +* For classic ASP.NET applications, check that the *SnapshotCollectorTelemetryProcessor* is registered under the **TelemetryProcessors** element in the *ApplicationInsights.config* file (the NuGet package adds this entry during installation).
+ +* For ASP.NET Core applications, make sure an *ITelemetryProcessorFactory* that creates the *SnapshotCollectorTelemetryProcessor* is added to the *IServiceCollection* services. + +* Also check that you're using the correct instrumentation key in your published application. + +* The Snapshot Collector doesn't support multiple instrumentation keys within one application; it sends snapshots to the instrumentation key of the first exception it observes. + +* If you set the *InstrumentationKey* manually in your code, also update the *InstrumentationKey* element in *ApplicationInsights.config*. + +### Step 3: Make sure the minidump uploader is started +In the Snapshot Collector logs, search for *UploaderStart* (type UploaderStart in the search text box). There should be an event from when the snapshot collector monitored the first exception. If this event doesn't exist, check the other logs for details. One possible way to solve this issue is to restart your application. + +### Step 4: Make sure the Snapshot Collector expressed its intent to collect snapshots +In the Snapshot Collector logs, search for *RequestSnapshots* (type ```RequestSnapshots``` in the search text box). If there aren't any, check your configuration, for example *ThresholdForSnapshotting*, which indicates the number of times a specific exception can occur before snapshots are collected. + +### Step 5: Make sure snapshots are not disabled due to memory protection +To protect your application's performance, a snapshot is only captured when there is a sufficient memory buffer. In the Snapshot Collector logs, search for 'CannotSnapshotDueToMemoryUsage'. The event's custom data gives a detailed reason. If your application is running in an Azure Web App, the restriction may be strict, because Azure Web Apps restarts your app when certain memory rules are met. You can try scaling up your service plan to machines with more memory to solve this issue. + +### Step 6: Make sure snapshots were captured +In the Snapshot Collector logs, search for ```RequestSnapshots```. If none are present, check your configuration, for example ```ThresholdForSnapshotting```, which indicates the number of times a specific exception can occur before a snapshot is collected. + +### Step 7: Make sure snapshots are uploaded correctly +In the Snapshot Collector logs, search for ```UploadSnapshotFinish```. If this event is not present, please send email to snapshothelp@microsoft.com with your Application Insights' Instrumentation Key. If this event exists, open one of the logs and copy the 'SnapshotId' value from the custom data. Then search for that value to find the exception corresponding to the snapshot. Click the exception and open the debug snapshot. (If there is no corresponding exception, the exception telemetry may have been sampled due to high volume; try another SnapshotId.) + +![Screenshot of Find SnapshotId](./media/app-insights-troubleshoot-snapshot-debugger/002.png) + +![Screenshot of Open Exception](./media/app-insights-troubleshoot-snapshot-debugger/004b.png) + +![Screenshot of Open debug snapshot](./media/app-insights-troubleshoot-snapshot-debugger/003.png) + +## Snapshot View: Local variables are not complete + +If some local variables are missing and your application is running release code, the compiler may have optimized those variables away. For example: + +```csharp + const int a = 1; // 'a' is discarded by the compiler and the value 1 is inlined.
+ Random rand = new Random(); + int b = rand.Next() % 300; // b will be discarded and the value will be directly put to the 'FindNthPrimeNumber' call stack. + long primeNumber = FindNthPrimeNumber(b); +``` + +If your case is different, it could be a bug. Please send email to snapshothelp@microsoft.com with your Application Insights' Instrumentation Key along with the code snippet. + +## Snapshot View: Cannot obtain value of the local variable or argument +Please make sure the [Application Insights site extension](https://www.siteextensions.net/packages/Microsoft.ApplicationInsights.AzureWebSites/) is installed. If the issue persists, please send email to snapshothelp@microsoft.com with your Application Insights' Instrumentation Key. diff --git a/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/001.png b/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/001.png new file mode 100644 index 0000000000000..446788777a370 Binary files /dev/null and b/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/001.png differ diff --git a/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/002.png b/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/002.png new file mode 100644 index 0000000000000..9719e6132ecaf Binary files /dev/null and b/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/002.png differ diff --git a/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/003.png b/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/003.png new file mode 100644 index 0000000000000..ad221a72c5da6 Binary files /dev/null and b/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/003.png differ diff --git a/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/004b.png b/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/004b.png new file mode 100644 index 0000000000000..d266321c4495d Binary files /dev/null and b/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/004b.png differ diff --git a/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/005.png b/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/005.png new file mode 100644 index 0000000000000..eac8b98897352 Binary files /dev/null and b/articles/application-insights/media/app-insights-troubleshoot-snapshot-debugger/005.png differ diff --git a/articles/application-insights/toc.yml b/articles/application-insights/toc.yml index 664d29564e6b5..d8d237f4dc044 100644 --- a/articles/application-insights/toc.yml +++ b/articles/application-insights/toc.yml @@ -247,6 +247,8 @@ items: - name: No data for .NET href: app-insights-asp-net-troubleshoot-no-data.md + - name: Snapshot debugger + href: app-insights-troubleshoot-snapshot-debugger.md - name: Analytics href: app-insights-analytics-troubleshooting.md - name: Java diff --git a/articles/azure-databricks/TOC.yml b/articles/azure-databricks/TOC.yml index 5758015ad015d..c1c096a1c65f0 100644 --- a/articles/azure-databricks/TOC.yml +++ b/articles/azure-databricks/TOC.yml @@ -13,6 +13,8 @@ items: - name: Common questions href: frequently-asked-questions-databricks.md + - name: Release notes + href: https://docs.azuredatabricks.net/release-notes/index.html - name: Databricks documentation href: 
https://docs.azuredatabricks.net/user-guide/index.html - name: Azure Roadmap diff --git a/articles/azure-functions/durable-functions-bindings.md b/articles/azure-functions/durable-functions-bindings.md index 3d5c56c71d44c..e1d8141a9e8ec 100644 --- a/articles/azure-functions/durable-functions-bindings.md +++ b/articles/azure-functions/durable-functions-bindings.md @@ -51,7 +51,7 @@ Internally this trigger binding polls a series of queues in the default storage Here are some notes about the orchestration trigger: * **Single-threading** - A single dispatcher thread is used for all orchestrator function execution on a single host instance. For this reason, it is important to ensure that orchestrator function code is efficient and doesn't perform any I/O. It is also important to ensure that this thread does not do any async work except when awaiting on Durable Functions-specific task types. -* **Poising-message handling** - There is no poison message support in orchestration triggers. +* **Poison-message handling** - There is no poison message support in orchestration triggers. * **Message visibility** - Orchestration trigger messages are dequeued and kept invisible for a configurable duration. The visibility of these messages is renewed automatically as long as the function app is running and healthy. * **Return values** - Return values are serialized to JSON and persisted to the orchestration history table in Azure Table storage. These return values can be queried by the orchestration client binding, described later. diff --git a/articles/azure-functions/durable-functions-instance-management.md b/articles/azure-functions/durable-functions-instance-management.md index 88636f1813d4c..766498f14514b 100644 --- a/articles/azure-functions/durable-functions-instance-management.md +++ b/articles/azure-functions/durable-functions-instance-management.md @@ -87,7 +87,7 @@ public static async Task Run( [OrchestrationClient] DurableOrchestrationClient client, [ManualTrigger] string instanceId) { - var status = await checker.GetStatusAsync(instanceId); + var status = await client.GetStatusAsync(instanceId); // do something based on the current status. } ``` diff --git a/articles/azure-functions/functions-bindings-documentdb.md b/articles/azure-functions/functions-bindings-documentdb.md index cf190b07af551..64675f57f941a 100644 --- a/articles/azure-functions/functions-bindings-documentdb.md +++ b/articles/azure-functions/functions-bindings-documentdb.md @@ -1,66 +1,64 @@ --- -title: Azure Cosmos DB bindings for Functions | Microsoft Docs +title: Azure Cosmos DB bindings for Functions description: Understand how to use Azure Cosmos DB triggers and bindings in Azure Functions. services: functions documentationcenter: na -author: christopheranderson +author: ggailey777 manager: cfowler editor: '' tags: '' keywords: azure functions, functions, event processing, dynamic compute, serverless architecture -ms.assetid: 3d8497f0-21f3-437d-ba24-5ece8c90ac85 ms.service: functions; cosmos-db ms.devlang: multiple ms.topic: reference ms.tgt_pltfrm: multiple ms.workload: na -ms.date: 09/19/2017 +ms.date: 11/21/2017 ms.author: glenga - --- -# Azure Cosmos DB bindings for Functions -[!INCLUDE [functions-selector-bindings](../../includes/functions-selector-bindings.md)] - -This article explains how to configure and code Azure Cosmos DB bindings in Azure Functions. Functions supports trigger, input, and output bindings for Azure Cosmos DB. 
-[!INCLUDE [intro](../../includes/functions-bindings-intro.md)] +# Azure Cosmos DB bindings for Azure Functions -For more information on serverless computing with Azure Cosmos DB, see [Azure Cosmos DB: Serverless database computing using Azure Functions](..\cosmos-db\serverless-computing-database.md). +This article explains how to work with [Azure Cosmos DB](..\cosmos-db\serverless-computing-database.md) bindings in Azure Functions. Azure Functions supports trigger, input, and output bindings for Azure Cosmos DB. - - +[!INCLUDE [intro](../../includes/functions-bindings-intro.md)] -## Azure Cosmos DB trigger +## Trigger The Azure Cosmos DB Trigger uses the [Azure Cosmos DB Change Feed](../cosmos-db/change-feed.md) to listen for changes across partitions. The trigger requires a second collection that it uses to store _leases_ over the partitions. Both the collection being monitored and the collection that contains the leases must be available for the trigger to work. -The Azure Cosmos DB trigger supports the following properties: - -|Property |Description | -|---------|---------| -|**type** | Must be set to `cosmosDBTrigger`. | -|**name** | The variable name used in function code that represents the list of documents with changes. | -|**direction** | Must be set to `in`. This parameter is set automatically when you create the trigger in the Azure portal. | -|**connectionStringSetting** | The name of an app setting that contains the connection string used to connect to the Azure Cosmos DB account being monitored. | -|**databaseName** | The name of the Azure Cosmos DB database with the collection being monitored. | -|**collectionName** | The name of the collection being monitored. | -| **leaseConnectionStringSetting** | (Optional) The name of an app setting that contains the connection string to the service which holds the lease collection. When not set, the `connectionStringSetting` value is used. This parameter is automatically set when the binding is created in the portal. | -| **leaseDatabaseName** | (Optional) The name of the database that holds the collection used to store leases. When not set, the value of the `databaseName` setting is used. This parameter is automatically set when the binding is created in the portal. | -| **leaseCollectionName** | (Optional) The name of the collection used to store leases. When not set, the value `leases` is used. | -| **createLeaseCollectionIfNotExists** | (Optional) When set to `true`, the leases collection is automatically created when it doesn't already exist. The default value is `false`. | -| **leaseCollectionThroughput** | (Optional) Defines the amount of Request Units to assign when the leases collection is created. This setting is only used When `createLeaseCollectionIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. +## Trigger - example ->[!NOTE] ->The connection string used to connect to the leases collection must have write permissions. +See the language-specific example: + +* [Precompiled C#](#trigger---c-example) +* [C# script](#trigger---c-script-example) +* [JavaScript](#trigger---javascript-example) + +### Trigger - C# example -These properties can be set in the Integrate tab for the function in the Azure portal or by editing the `function.json` project file. +The following example shows a [precompiled C# function](functions-dotnet-class-library.md) that triggers from a specific database and collection. 
-## Using an Azure Cosmos DB trigger +```cs +[FunctionName("DocumentUpdates")] +public static void Run( + [CosmosDBTrigger("database", "collection", ConnectionStringSetting = "myCosmosDB")] +IReadOnlyList documents, + TraceWriter log) +{ + log.Info("Documents modified " + documents.Count); + log.Info("First document Id " + documents[0].Id); +} +``` -This section contains examples of how to use the Azure Cosmos DB trigger. The examples assume a trigger metadata that looks like the following: +### Trigger - C# script + +The following example shows a Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Cosmos DB records are modified. + +Here's the binding data in the *function.json* file: ```json { @@ -74,10 +72,9 @@ This section contains examples of how to use the Azure Cosmos DB trigger. The ex "createLeaseCollectionIfNotExists": true } ``` - -For an example of how to create a Azure Cosmos DB trigger from a function app in the portal, see [Create a function triggered by Azure Cosmos DB](functions-create-cosmos-db-triggered-function.md). -### Trigger sample in C# # +Here's the C# script code: + ```cs #r "Microsoft.Azure.Documents.Client" using Microsoft.Azure.Documents; @@ -90,8 +87,27 @@ For an example of how to create a Azure Cosmos DB trigger from a function app in } ``` +### Trigger - JavaScript + +The following example shows a Cosmos DB trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function writes log messages when Cosmos DB records are modified. + +Here's the binding data in the *function.json* file: + +```json +{ + "type": "cosmosDBTrigger", + "name": "documents", + "direction": "in", + "leaseCollectionName": "leases", + "connectionStringSetting": "", + "databaseName": "Tasks", + "collectionName": "Items", + "createLeaseCollectionIfNotExists": true +} +``` + +Here's the JavaScript code: -### Trigger sample in JavaScript ```javascript module.exports = function (context, documents) { context.log('First document Id modified : ', documents[0].id); @@ -100,35 +116,66 @@ For an example of how to create a Azure Cosmos DB trigger from a function app in } ``` - +## Trigger - attributes + +For [precompiled C#](functions-dotnet-class-library.md) functions, use the [CosmosDBTrigger](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.DocumentDB/Trigger/CosmosDBTriggerAttribute.cs) attribute, which is defined in NuGet package [Microsoft.Azure.WebJobs.Extensions.DocumentDB](http://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DocumentDB). + +The attribute's constructor takes the database name and collection name. For information about those settings and other properties that you can configure, see [Trigger - configuration](#trigger---configuration). Here's a `CosmosDBTrigger` attribute example in a method signature: + +```csharp +[FunctionName("DocumentUpdates")] +public static void Run( + [CosmosDBTrigger("database", "collection", ConnectionStringSetting = "myCosmosDB")] +IReadOnlyList documents, + TraceWriter log) +{ + ... +} +``` + +For a complete example, see [Trigger - precompiled C# example](#trigger---c-example). + +## Trigger - configuration + +The following table explains the binding configuration properties that you set in the *function.json* file and the `CosmosDBTrigger` attribute. 
+ +|function.json property | Attribute property |Description| +|---------|---------|----------------------| +|**type** || Must be set to `cosmosDBTrigger`. | +|**direction** || Must be set to `in`. This parameter is set automatically when you create the trigger in the Azure portal. | +|**name** || The variable name used in function code that represents the list of documents with changes. | +|**connectionStringSetting**|**ConnectionStringSetting** | The name of an app setting that contains the connection string used to connect to the Azure Cosmos DB account being monitored. | +|**databaseName**|**DatabaseName** | The name of the Azure Cosmos DB database with the collection being monitored. | +|**collectionName** |**CollectionName** | The name of the collection being monitored. | +| **leaseConnectionStringSetting** | **LeaseConnectionStringSetting** | (Optional) The name of an app setting that contains the connection string to the service which holds the lease collection. When not set, the `connectionStringSetting` value is used. This parameter is automatically set when the binding is created in the portal. | +| **leaseDatabaseName** |**LeaseDatabaseName** | (Optional) The name of the database that holds the collection used to store leases. When not set, the value of the `databaseName` setting is used. This parameter is automatically set when the binding is created in the portal. | +| **leaseCollectionName** | **LeaseCollectionName** | (Optional) The name of the collection used to store leases. When not set, the value `leases` is used. | +| **createLeaseCollectionIfNotExists** | **CreateLeaseCollectionIfNotExists** | (Optional) When set to `true`, the leases collection is automatically created when it doesn't already exist. The default value is `false`. | +| **leaseCollectionThroughput**| | (Optional) Defines the amount of Request Units to assign when the leases collection is created. This setting is only used When `createLeaseCollectionIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. +| |**LeaseOptions** | Configure lease options by setting properties in an instance of the [Change​Feed​Host​Options](https://docs.microsoft.com/dotnet/api/microsoft.azure.documents.changefeedprocessor.changefeedhostoptions) class. -## DocumentDB API input binding -The DocumentDB API input binding retrieves an Azure Cosmos DB document and passes it to the named input parameter of the function. The document ID can be determined based on the trigger that invokes the function. +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] -The DocumentDB API input binding has the following properties in *function.json*: +>[!NOTE] +>The connection string for the leases collection must have write permissions. + +## Input -|Property |Description | -|---------|---------| -|**name** | Name of the binding parameter that represents the document in the function. | -|**type** | Must be set to `documentdb`. | -|**databaseName** | The database containing the document. | -|**collectionName** | The name of the collection that contains the document. | -|**id** | The ID of the document to retrieve. This property supports bindings parameters. To learn more, see [Bind to custom input properties in a binding expression](functions-triggers-bindings.md#bind-to-custom-input-properties-in-a-binding-expression). | -|**sqlQuery** | An Azure Cosmos DB SQL query used for retrieving multiple documents. 
The query supports runtime bindings, such in the example: `SELECT * FROM c where c.departmentId = {departmentId}`. | -|**connection** |The name of the app setting containing your Azure Cosmos DB connection string. | -|**direction** | Must be set to `in`. | +The DocumentDB API input binding retrieves one or more Azure Cosmos DB documents and passes them to the input parameter of the function. The document ID or query parameters can be determined based on the trigger that invokes the function. -You cannot set both the **id** and **sqlQuery** properties. If neither are set, the entire collection is retrieved. +## Input - example 1 -## Using a DocumentDB API input binding +See the language-specific example that reads a single document: -* In C# and F# functions, when the function exits successfully, any changes made to the input document via named input parameters are automatically persisted. -* In JavaScript functions, updates are not made automatically upon function exit. Instead, use `context.bindings.In` and `context.bindings.Out` to make updates. See the [JavaScript sample](#injavascript). +* [C# script](#input---c-script-example) +* [F#](#input---f-example) +* [JavaScript](#input---javascript-example) - +### Input - C# script example -## Input sample for single document -Suppose you have the following DocumentDB API input binding in the `bindings` array of function.json: +The following example shows a Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads a single document and updates the document's text value. + +Here's the binding data in the *function.json* file: ```json { @@ -141,15 +188,9 @@ Suppose you have the following DocumentDB API input binding in the `bindings` ar "direction": "in" } ``` +The [configuration](#input---configuration) section explains these properties. -See the language-specific sample that uses this input binding to update the document's text value. - -* [C#](#incsharp) -* [F#](#infsharp) -* [JavaScript](#injavascript) - - -### Input sample in C# # +Here's the C# script code: ```cs // Change input document contents using DocumentDB API input binding @@ -158,9 +199,30 @@ public static void Run(string myQueueItem, dynamic inputDocument) inputDocument.text = "This has changed."; } ``` + -### Input sample in F# # +### Input - F# example + +The following example shows a Cosmos DB input binding in a *function.json* file and a [F# function](functions-reference-fsharp.md) that uses the binding. The function reads a single document and updates the document's text value. + +Here's the binding data in the *function.json* file: + +```json +{ + "name": "inputDocument", + "type": "documentDB", + "databaseName": "MyDatabase", + "collectionName": "MyCollection", + "id" : "{queueTrigger}", + "connection": "MyAccount_COSMOSDB", + "direction": "in" +} +``` + +The [configuration](#input---configuration) section explains these properties. + +Here's the F# code: ```fsharp (* Change input document contents using DocumentDB API input binding *) @@ -169,7 +231,7 @@ let Run(myQueueItem: string, inputDocument: obj) = inputDocument?text <- "This has changed." 
``` -This sample requires a `project.json` file that specifies the `FSharp.Interop.Dynamic` and `Dynamitey` NuGet +This example requires a `project.json` file that specifies the `FSharp.Interop.Dynamic` and `Dynamitey` NuGet dependencies: ```json @@ -187,9 +249,26 @@ dependencies: To add a `project.json` file, see [F# package management](functions-reference-fsharp.md#package). - +### Input - JavaScript example + +The following example shows a Cosmos DB input binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function reads a single document and updates the document's text value. + +Here's the binding data in the *function.json* file: + +```json +{ + "name": "inputDocument", + "type": "documentDB", + "databaseName": "MyDatabase", + "collectionName": "MyCollection", + "id" : "{queueTrigger}", + "connection": "MyAccount_COSMOSDB", + "direction": "in" +} +``` +The [configuration](#input---configuration) section explains these properties. -### Input sample in JavaScript +Here's the JavaScript code: ```javascript // Change input document contents using DocumentDB API input binding, using context.bindings.inputDocumentOut @@ -200,11 +279,35 @@ module.exports = function (context) { }; ``` -## Input sample with multiple documents +## Input - example 2 + +See the language-specific example that reads multiple documents: + +* [Precompiled C#](#input---c-example-2) +* [C# script](#input---c-script-example-2) +* [JavaScript](#input---javascript-example-2) + +### Input - C# example 2 + +The following example shows a [precompiled C# function](functions-dotnet-class-library.md) that executes a SQL query. + +```csharp +[FunctionName("CosmosDBSample")] +public static HttpResponseMessage Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestMessage req, + [DocumentDB("test", "test", ConnectionStringSetting = "CosmosDB", sqlQuery = "SELECT top 2 * FROM c order by c._ts desc")] IEnumerable documents) +{ + return req.CreateResponse(HttpStatusCode.OK, documents); +} +``` + +### Input - C# script example 2 -Suppose that you wish to retrieve multiple documents specified by a SQL query, using a queue trigger to customize the query parameters. +The following example shows a DocumentDB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters. -In this example, the queue trigger provides a parameter `departmentId`.A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department. Use the following in *function.json*: +The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department. + +Here's the binding data in the *function.json* file: ``` { @@ -213,12 +316,14 @@ In this example, the queue trigger provides a parameter `departmentId`.A queue m "direction": "in", "databaseName": "MyDb", "collectionName": "MyCollection", - "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}", + "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}" "connection": "CosmosDBConnection" } ``` -### Input sample with multiple documents in C# +The [configuration](#input---configuration) section explains these properties. 
+ +Here's the C# script code: ```csharp public static void Run(QueuePayload myQueueItem, IEnumerable documents) @@ -235,7 +340,29 @@ public class QueuePayload } ``` -### Input sample with multiple documents in JavaScript +### Input - JavaScript example 2 + +The following example shows a DocumentDB input binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters. + +The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department. + +Here's the binding data in the *function.json* file: + +``` +{ + "name": "documents", + "type": "documentdb", + "direction": "in", + "databaseName": "MyDb", + "collectionName": "MyCollection", + "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}" + "connection": "CosmosDBConnection" +} +``` + +The [configuration](#input---configuration) section explains these properties. + +Here's the JavaScript code: ```javascript module.exports = function (context, input) { @@ -248,48 +375,66 @@ module.exports = function (context, input) { }; ``` -## DocumentDB API output binding -The DocumentDB API output binding lets you write a new document to an Azure Cosmos DB database. -It has the following properties in *function.json*: +## Input - attributes -|Property |Description | -|---------|---------| -|**name** | Name of the binding parameter that represents the document in the function. | -|**type** | Must be set to `documentdb`. | -|**databaseName** | The database containing the collection where the document is created. | -|**collectionName** | The name of the collection where the document is created. | -|**createIfNotExists** | A boolean value to indicate whether the collection is created when it doesn't exist. The default is *false*. This is because new collections are created with reserved throughput, which has cost implications. For more details, please visit the [pricing page](https://azure.microsoft.com/pricing/details/documentdb/). | -|**connection** |The name of the app setting containing your Azure Cosmos DB connection string. | -|**direction** | Must be set to `out`. | +For [precompiled C#](functions-dotnet-class-library.md) functions, use the [DocumentDB](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.DocumentDB/DocumentDBAttribute.cs) attribute, which is defined in NuGet package [Microsoft.Azure.WebJobs.Extensions.DocumentDB](http://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DocumentDB). -## Using a DocumentDB API output binding -This section shows you how to use your DocumentDB API output binding in your function code. +The attribute's constructor takes the database name and collection name. For information about those settings and other properties that you can configure, see [the following configuration section](#input---configuration). -By default, when you write to the output parameter in your function, a document is created in your database. This document has an automatically generated GUID as the document ID. You can specify the document ID of output document by specifying the `id` property in the JSON object passed to the output parameter. +## Input - configuration ->[!Note] ->When you specify the ID of an existing document, it gets overwritten by the new output document. 
+The following table explains the binding configuration properties that you set in the *function.json* file and the `DocumentDB` attribute. -To output multiple documents, you can also bind to `ICollector` or `IAsyncCollector` where `T` is one of the supported types. +|function.json property | Attribute property |Description| +|---------|---------|----------------------| +|**type** || Must be set to `documentdb`. | +|**direction** || Must be set to `in`. | +|**name** || Name of the binding parameter that represents the document in the function. | +|**databaseName** |**DatabaseName** |The database containing the document. | +|**collectionName** |**CollectionName** | The name of the collection that contains the document. | +|**id** | **Id** | The ID of the document to retrieve. This property supports bindings parameters. To learn more, see [Bind to custom input properties in a binding expression](functions-triggers-bindings.md#bind-to-custom-input-properties-in-a-binding-expression). Don't set both the **id** and **sqlQuery** properties. If you don't set either one, the entire collection is retrieved. | +|**sqlQuery** |**SqlQuery** | An Azure Cosmos DB SQL query used for retrieving multiple documents. The property supports runtime bindings, as in this example: `SELECT * FROM c where c.departmentId = {departmentId}`. Don't set both the **id** and **sqlQuery** properties. If you don't set either one, the entire collection is retrieved.| +|**connection** |**ConnectionStringSetting**|The name of the app setting containing your Azure Cosmos DB connection string. | +||**PartitionKey**|Specifies the partition key value for the lookup. May include binding parameters.| - +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] -## DocumentDB API output binding sample -Suppose you have the following DocumentDB API output binding in the `bindings` array of function.json: +## Input - usage -```json +In C# and F# functions, when the function exits successfully, any changes made to the input document via named input parameters are automatically persisted. + +In JavaScript functions, updates are not made automatically upon function exit. Instead, use `context.bindings.In` and `context.bindings.Out` to make updates. See the [JavaScript example](#input---javascript-example). + +## Output + +The DocumentDB API output binding lets you write a new document to an Azure Cosmos DB database. + +## Output - example + +See the language-specific example: + +* [Precompiled C#](#trigger---c-example) +* [C# script](#trigger---c-script-example) +* [F#](#trigger---f-example) +* [JavaScript](#trigger---javascript-example) + +### Output - C# example + +The following example shows a [precompiled C# function](functions-dotnet-class-library.md) that adds a document to a database, using data provided in message from Queue storage. 
+ +```cs +[FunctionName("QueueToDocDB")] +public static void Run( + [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem, + [DocumentDB("ToDoList", "Items", Id = "id", ConnectionStringSetting = "myCosmosDB")] out dynamic document) { - "name": "employeeDocument", - "type": "documentDB", - "databaseName": "MyDatabase", - "collectionName": "MyCollection", - "createIfNotExists": true, - "connection": "MyAccount_COSMOSDB", - "direction": "out" + document = new { Text = myQueueItem, id = Guid.NewGuid() }; } ``` -And you have a queue input binding for a queue that receives JSON in the following format: +### Output - C# script example + +The following example shows a DocumentDB output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format: ```json { @@ -299,7 +444,7 @@ And you have a queue input binding for a queue that receives JSON in the followi } ``` -And you want to create Azure Cosmos DB documents in the following format for each record: +The function creates Azure Cosmos DB documents in the following format for each record: ```json { @@ -310,15 +455,23 @@ And you want to create Azure Cosmos DB documents in the following format for eac } ``` -See the language-specific sample that uses this output binding to add documents to your database. +Here's the binding data in the *function.json* file: -* [C#](#outcsharp) -* [F#](#outfsharp) -* [JavaScript](#outjavascript) +```json +{ + "name": "employeeDocument", + "type": "documentDB", + "databaseName": "MyDatabase", + "collectionName": "MyCollection", + "createIfNotExists": true, + "connection": "MyAccount_COSMOSDB", + "direction": "out" +} +``` - +The [configuration](#output---configuration) section explains these properties. -### Output sample in C# # +Here's the C# script code: ```cs #r "Newtonsoft.Json" @@ -342,9 +495,47 @@ public static void Run(string myQueueItem, out object employeeDocument, TraceWri } ``` - +To create multiple documents, you can bind to `ICollector` or `IAsyncCollector` where `T` is one of the supported types. + +### Output - F# example + +The following example shows a DocumentDB output binding in a *function.json* file and an [F# function](functions-reference-fsharp.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format: + +```json +{ + "name": "John Henry", + "employeeId": "123456", + "address": "A town nearby" +} +``` + +The function creates Azure Cosmos DB documents in the following format for each record: + +```json +{ + "id": "John Henry-123456", + "name": "John Henry", + "employeeId": "123456", + "address": "A town nearby" +} +``` + +Here's the binding data in the *function.json* file: + +```json +{ + "name": "employeeDocument", + "type": "documentDB", + "databaseName": "MyDatabase", + "collectionName": "MyCollection", + "createIfNotExists": true, + "connection": "MyAccount_COSMOSDB", + "direction": "out" +} +``` +The [configuration](#output---configuration) section explains these properties. 
-### Output sample in F# # +Here's the F# code: ```fsharp open FSharp.Interop.Dynamic @@ -367,7 +558,7 @@ let Run(myQueueItem: string, employeeDocument: byref, log: TraceWriter) = address = employee?address } ``` -This sample requires a `project.json` file that specifies the `FSharp.Interop.Dynamic` and `Dynamitey` NuGet +This example requires a `project.json` file that specifies the `FSharp.Interop.Dynamic` and `Dynamitey` NuGet dependencies: ```json @@ -385,9 +576,46 @@ dependencies: To add a `project.json` file, see [F# package management](functions-reference-fsharp.md#package). - +### Output - JavaScript example + +The following example shows a DocumentDB output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format: + +```json +{ + "name": "John Henry", + "employeeId": "123456", + "address": "A town nearby" +} +``` + +The function creates Azure Cosmos DB documents in the following format for each record: + +```json +{ + "id": "John Henry-123456", + "name": "John Henry", + "employeeId": "123456", + "address": "A town nearby" +} +``` + +Here's the binding data in the *function.json* file: + +```json +{ + "name": "employeeDocument", + "type": "documentDB", + "databaseName": "MyDatabase", + "collectionName": "MyCollection", + "createIfNotExists": true, + "connection": "MyAccount_COSMOSDB", + "direction": "out" +} +``` + +The [configuration](#output---configuration) section explains these properties. -### Output sample in JavaScript +Here's the JavaScript code: ```javascript module.exports = function (context) { @@ -402,3 +630,57 @@ module.exports = function (context) { context.done(); }; ``` + +## Output - attributes + +For [precompiled C#](functions-dotnet-class-library.md) functions, use the [DocumentDB](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.DocumentDB/DocumentDBAttribute.cs) attribute, which is defined in NuGet package [Microsoft.Azure.WebJobs.Extensions.DocumentDB](http://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DocumentDB). + +The attribute's constructor takes the database name and collection name. For information about those settings and other properties that you can configure, see [Output - configuration](#output---configuration). Here's a `DocumentDB` attribute example in a method signature: + +```csharp +[FunctionName("QueueToDocDB")] +public static void Run( + [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem, + [DocumentDB("ToDoList", "Items", Id = "id", ConnectionStringSetting = "myCosmosDB")] out dynamic document) +{ + ... +} +``` + +For a complete example, see [Output - precompiled C# example](#output---c-example). + +## Output - configuration + +The following table explains the binding configuration properties that you set in the *function.json* file and the `DocumentDB` attribute. + +|function.json property | Attribute property |Description| +|---------|---------|----------------------| +|**type** || Must be set to `documentdb`. | +|**direction** || Must be set to `out`. | +|**name** || Name of the binding parameter that represents the document in the function. | +|**databaseName** | **DatabaseName**|The database containing the collection where the document is created. | +|**collectionName** |**CollectionName** | The name of the collection where the document is created. 
| +|**createIfNotExists** |**CreateIfNotExists** | A boolean value to indicate whether the collection is created when it doesn't exist. The default is *false* because new collections are created with reserved throughput, which has cost implications. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/documentdb/). | +||**PartitionKey** |When `CreateIfNotExists` is true, defines the partition key path for the created collection.| +||**CollectionThroughput**| When `CreateIfNotExists` is true, defines the [throughput](../cosmos-db/set-throughput.md) of the created collection.| +|**connection** |**ConnectionStringSetting** |The name of the app setting containing your Azure Cosmos DB connection string. | + +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] + +## Output - usage + +By default, when you write to the output parameter in your function, a document is created in your database. This document has an automatically generated GUID as the document ID. You can specify the document ID of the output document by specifying the `id` property in the JSON object passed to the output parameter. + +> [!Note] +> When you specify the ID of an existing document, it gets overwritten by the new output document. + +## Next steps + +> [!div class="nextstepaction"] +> [Go to a quickstart that uses a Cosmos DB trigger](functions-create-cosmos-db-triggered-function.md) + +> [!div class="nextstepaction"] +> [Learn more about serverless database computing with Cosmos DB](..\cosmos-db\serverless-computing-database.md) + +> [!div class="nextstepaction"] +> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md) diff --git a/articles/azure-functions/functions-bindings-event-hubs.md b/articles/azure-functions/functions-bindings-event-hubs.md index 45f2b72d05981..1bc35bb0d3f7b 100644 --- a/articles/azure-functions/functions-bindings-event-hubs.md +++ b/articles/azure-functions/functions-bindings-event-hubs.md @@ -1,5 +1,5 @@ --- -title: Azure Functions Event Hubs bindings +title: Azure Event Hubs bindings for Azure Functions description: Understand how to use Azure Event Hubs bindings in Azure Functions. services: functions documentationcenter: na @@ -19,13 +19,13 @@ ms.date: 11/08/2017 ms.author: wesmc --- -# Azure Functions Event Hubs bindings +# Azure Event Hubs bindings for Azure Functions This article explains how to work with [Azure Event Hubs](../event-hubs/event-hubs-what-is-event-hubs.md) bindings for Azure Functions. Azure Functions supports trigger and output bindings for Event Hubs. [!INCLUDE [intro](../../includes/functions-bindings-intro.md)] -## Event Hubs trigger +## Trigger Use the Event Hubs trigger to respond to an event sent to an event hub event stream. You must have read access to the event hub to set up the trigger. @@ -173,7 +173,7 @@ module.exports = function (context, myEventHubMessage) { }; ``` -## Trigger - Attributes for precompiled C# +## Trigger - attributes For [precompiled C#](functions-dotnet-class-library.md) functions, use the [EventHubTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.ServiceBus/EventHubs/EventHubTriggerAttribute.cs) attribute, which is defined in NuGet package [Microsoft.Azure.WebJobs.ServiceBus](http://www.nuget.org/packages/Microsoft.Azure.WebJobs.ServiceBus). 
@@ -182,8 +182,13 @@ The attribute's constructor takes the name of the event hub, the name of the con ```csharp [FunctionName("EventHubTriggerCSharp")] public static void Run([EventHubTrigger("samples-workitems", Connection = "EventHubConnection")] string myEventHubMessage, TraceWriter log) +{ + ... +} ``` +For a complete example, see [Trigger - precompiled C# example](#trigger---c-example). + ## Trigger - configuration The following table explains the binding configuration properties that you set in the *function.json* file and the `EventHubTrigger` attribute. @@ -195,7 +200,9 @@ The following table explains the binding configuration properties that you set i |**name** | n/a | The name of the variable that represents the event item in function code. | |**path** |**EventHubName** | The name of the event hub. | |**consumerGroup** |**ConsumerGroup** | An optional property that sets the [consumer group](../event-hubs/event-hubs-features.md#event-consumers) used to subscribe to events in the hub. If omitted, the `$Default` consumer group is used. | -|**connection** |**Connection** | The name of an app setting that contains the connection string to the event hub's namespace. Copy this connection string by clicking the **Connection Information** button for the *namespace*, not the event hub itself. This connection string must have at least read permissions to activate the trigger.
When you're developing locally, app settings go into the values of the [local.settings.json file](functions-run-local.md#local-settings-file).| +|**connection** |**Connection** | The name of an app setting that contains the connection string to the event hub's namespace. Copy this connection string by clicking the **Connection Information** button for the *namespace*, not the event hub itself. This connection string must have at least read permissions to activate the trigger.| + +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ## Trigger - host.json properties @@ -203,7 +210,7 @@ The [host.json](functions-host-json.md#eventhub) file contains settings that con [!INCLUDE [functions-host-json-event-hubs](../../includes/functions-host-json-event-hubs.md)] -## Event Hubs output binding +## Output Use the Event Hubs output binding to write events to an event stream. You must have send permission to an event hub to write events to it. @@ -338,7 +345,7 @@ module.exports = function(context) { }; ``` -## Output - Attributes for precompiled C# +## Output - attributes For [precompiled C#](functions-dotnet-class-library.md) functions, use the [EventHubAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.ServiceBus/EventHubs/EventHubAttribute.cs) attribute, which is defined in NuGet package [Microsoft.Azure.WebJobs.ServiceBus](http://www.nuget.org/packages/Microsoft.Azure.WebJobs.ServiceBus). @@ -348,8 +355,13 @@ The attribute's constructor takes the name of the event hub and the name of an a [FunctionName("EventHubOutput")] [return: EventHub("outputEventHubMessage", Connection = "EventHubConnection")] public static string Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, TraceWriter log) +{ + ... +} ``` +For a complete example, see [Output - precompiled C# example](#output---c-example). + ## Output - configuration The following table explains the binding configuration properties that you set in the *function.json* file and the `EventHub` attribute. @@ -360,7 +372,9 @@ The following table explains the binding configuration properties that you set i |**direction** | n/a | Must be set to "out". This parameter is set automatically when you create the binding in the Azure portal. | |**name** | n/a | The variable name used in function code that represents the event. | |**path** |**EventHubName** | The name of the event hub. | -|**connection** |**Connection** | The name of an app setting that contains the connection string to the event hub's namespace. Copy this connection string by clicking the **Connection Information** button for the *namespace*, not the event hub itself. This connection string must have send permissions to send the message to the event stream.
When you're developing locally, app settings go into the values of the [local.settings.json file](functions-run-local.md#local-settings-file).| +|**connection** |**Connection** | The name of an app setting that contains the connection string to the event hub's namespace. Copy this connection string by clicking the **Connection Information** button for the *namespace*, not the event hub itself. This connection string must have send permissions to send the message to the event stream.| + +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ## Output - usage diff --git a/articles/azure-functions/functions-bindings-external-file.md b/articles/azure-functions/functions-bindings-external-file.md index 1511d5a12ea92..829b7e613f6f4 100644 --- a/articles/azure-functions/functions-bindings-external-file.md +++ b/articles/azure-functions/functions-bindings-external-file.md @@ -13,7 +13,7 @@ ms.workload: na ms.tgt_pltfrm: na ms.devlang: multiple ms.topic: article -ms.date: 04/12/2017 +ms.date: 11/27/2017 ms.author: alkarche --- @@ -378,4 +378,6 @@ module.exports = function(context) { ``` ## Next steps -[!INCLUDE [next steps](../../includes/functions-bindings-next-steps.md)] + +> [!div class="nextstepaction"] +> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md) diff --git a/articles/azure-functions/functions-bindings-external-table.md b/articles/azure-functions/functions-bindings-external-table.md index 355fae69ac7f2..5f160eb334f09 100644 --- a/articles/azure-functions/functions-bindings-external-table.md +++ b/articles/azure-functions/functions-bindings-external-table.md @@ -199,4 +199,6 @@ Add the column names `Id`, `LastName`, `FirstName` to the first row, then popula dataSetName is “default.” ## Next steps -[!INCLUDE [next steps](../../includes/functions-bindings-next-steps.md)] + +> [!div class="nextstepaction"] +> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md) diff --git a/articles/azure-functions/functions-bindings-http-webhook.md b/articles/azure-functions/functions-bindings-http-webhook.md index c8872d2a896dd..a96d4bfaaa742 100644 --- a/articles/azure-functions/functions-bindings-http-webhook.md +++ b/articles/azure-functions/functions-bindings-http-webhook.md @@ -1,5 +1,5 @@ --- -title: Azure Functions HTTP and webhook bindings | Microsoft Docs +title: Azure Functions HTTP and webhook bindings description: Understand how to use HTTP and webhook triggers and bindings in Azure Functions. services: functions documentationcenter: na @@ -9,236 +9,74 @@ editor: '' tags: '' keywords: azure functions, functions, event processing, webhooks, dynamic compute, serverless architecture, HTTP, API, REST -ms.assetid: 2b12200d-63d8-4ec1-9da8-39831d5a51b1 ms.service: functions ms.devlang: multiple ms.topic: reference ms.tgt_pltfrm: multiple ms.workload: na -ms.date: 08/26/2017 +ms.date: 11/21/2017 ms.author: mahender - --- + # Azure Functions HTTP and webhook bindings -[!INCLUDE [functions-selector-bindings](../../includes/functions-selector-bindings.md)] -This article explains how to configure and work with HTTP triggers and bindings in Azure Functions. -With these, you can use Azure Functions to build serverless APIs and respond to webhooks. +This article explains how to work with HTTP bindings in Azure Functions. Azure Functions supports HTTP triggers and output bindings. 
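+
+As a quick orientation, a function that uses both bindings declares them in its *function.json* file roughly as shown in the following sketch. This snippet is not part of the original article; the `req` and `res` names are only example values, and the individual properties are explained in the trigger and output configuration sections later in this article.
+
+```json
+{
+    "bindings": [
+        {
+            "type": "httpTrigger",
+            "direction": "in",
+            "name": "req",
+            "authLevel": "function"
+        },
+        {
+            "type": "http",
+            "direction": "out",
+            "name": "res"
+        }
+    ]
+}
+```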
-Azure Functions provides the following bindings: -- An [HTTP trigger](#httptrigger) lets you invoke a function with an HTTP request. This can be customized to respond to [webhooks](#hooktrigger). -- An [HTTP output binding](#output) allows you to respond to the request. +An HTTP trigger can be customized to respond to [webhooks](https://en.wikipedia.org/wiki/Webhook). A webhook trigger accepts only a JSON payload and validates the JSON. There are special versions of the webhook trigger that make it easier to handle webhooks from certain providers, such as GitHub and Slack. [!INCLUDE [intro](../../includes/functions-bindings-intro.md)] [!INCLUDE [HTTP client best practices](../../includes/functions-http-client-best-practices.md)] - - -## HTTP trigger -The HTTP trigger executes your function in response to an HTTP request. You can customize it to respond to a particular URL or set of HTTP methods. An HTTP trigger can also be configured to respond to webhooks. - -If using the Functions portal, you can also get started right away using a pre-made template. Select **New function** and choose "API & Webhooks" from the **Scenario** dropdown. Select one of the templates and click **Create**. - -By default, an HTTP trigger responds to the request with an HTTP 200 OK status code and an empty body. To modify the response, configure an [HTTP output binding](#output) - -### Configuring an HTTP trigger -An HTTP trigger is defined by a JSON object in the `bindings` array of function.json, as shown in the following example: - -```json -{ - "name": "req", - "type": "httpTrigger", - "direction": "in", - "authLevel": "function", - "methods": [ "get" ], - "route": "values/{id}" -}, -``` -The binding supports the following properties: - -|Property |Description | -|---------|---------| -| **name** | Required - the variable name used in function code for the request or request body. See [Working with an HTTP trigger from code](#httptriggerusage). | -| **type** | Required - must be set to `httpTrigger`. | -| **direction** | Required - must be set to `in`. | -| **authLevel** | Determines what keys, if any, need to be present on the request in order to invoke the function. The value can be one of the following values:
  • anonymous—No API key is required.
  • function—A function-specific API key is required. This is the default value if none is provided.
  • admin—The master key is required.
For more information, see [Working with keys](#keys). | -| **methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [Customizing the HTTP endpoint](#url). | -| **route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is ``. For more information, see [Customizing the HTTP endpoint](#url). | -| **webHookType** | Configures the HTTP trigger to act as a webhook receiver for the specified provider. Do not use the _methods_ property when using this setting. The value can be one of the following values:
  • genericJson—A general-purpose webhook endpoint without logic for a specific provider.
  • github—The function responds to GitHub webhooks. Do not use the _authLevel_ property when using this value.
  • slack—The function responds to Slack webhooks. Do not use the _authLevel_ property when using this value.
For more information, see [Responding to webhooks](#hooktrigger). | - - -### Working with an HTTP trigger from code -For C# and F# functions, you can declare the type of your trigger input to be either `HttpRequestMessage` or a custom .NET type. If you choose `HttpRequestMessage`, you get full access to the request object. For a custom .NET type, Functions tries to parse the JSON request body to set the object properties. - -For Node.js functions, the Functions runtime provides the request body instead of the request object. For more information, see [HTTP trigger samples](#httptriggersample). - - - -## HTTP response output binding -Use the HTTP output binding to respond to the HTTP request sender. This binding requires an HTTP trigger and allows you to customize the response associated with the trigger's request. If an HTTP output binding is not provided, an HTTP trigger returns HTTP 200 OK with an empty body. - -### Configuring an HTTP output binding -An HTTP output binding is defined a JSON object in the `bindings` array of function.json, as shown in the following example: - -```json -{ - "name": "res", - "type": "http", - "direction": "out" -} -``` -The binding supports the following required properties: - -|Property |Description | -|---------|---------| -|**name** | The variable name used in function code for the response. See [Working with an HTTP output binding from code](#outputusage). | -| **type** |Must be set to `http`. | -| **direction** | Must be set to `out`. | - - -### Working with an HTTP output binding from code -You can use the output parameter to respond to the http or webhook caller. You can also use the language-standard response patterns. For example responses, see [HTTP trigger samples](#httptriggersample) and [Webhook trigger samples](#hooktriggersample). - - - -## Responding to webhooks -An HTTP trigger with the _webHookType_ property is configured to respond to [webhooks](https://en.wikipedia.org/wiki/Webhook). The basic configuration uses the "genericJson" setting. This restricts requests to only those using HTTP POST and with the `application/json` content type. - -The trigger can additionally be tailored to a specific webhook provider, such as [GitHub](https://developer.github.com/webhooks/) or [Slack](https://api.slack.com/outgoing-webhooks). When a provider is specified, the Functions runtime can handle the provider validation logic for you. - -### Configuring GitHub as a webhook provider -To respond to GitHub webhooks, first create your function with an HTTP Trigger, and set the **webHookType** property to `github`. Then copy its [URL](#url) and [API key](#keys) into the **Add webhook** page of your GitHub repository. - -![](./media/functions-bindings-http-webhook/github-add-webhook.png) - -For an example, see [Create a function triggered by a GitHub webhook](functions-create-github-webhook-triggered-function.md). - -### Configuring Slack as a webhook provider -The Slack webhook generates a token for you instead of letting you specify it, so you must configure -a function-specific key with the token from Slack. See [Working with keys](#keys). +## Trigger - -## Customizing the HTTP endpoint -By default when you create a function for an HTTP trigger, or WebHook, the function is addressable with a route of the form: +The HTTP trigger lets you invoke a function with an HTTP request. You can use an HTTP trigger to build serverless APIs and respond to webhooks. 
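+
+Once an HTTP-triggered function is deployed, you can call it with any HTTP client. The following command is an illustrative sketch rather than text from the original article: the function app name `myfunctionapp`, the function name `HttpTriggerCSharp`, and the key value are placeholders, and the `code` query string parameter is described under [authorization keys](#authorization-keys) later in this article.
+
+```bash
+# Invoke the function, passing a "name" value in the query string
+curl "https://myfunctionapp.azurewebsites.net/api/HttpTriggerCSharp?code=<your-function-key>&name=Azure"
+```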
- http://.azurewebsites.net/api/ +By default, an HTTP trigger responds to the request with an HTTP 200 OK status code and an empty body. To modify the response, configure an [HTTP output binding](#http-output-binding). -You can customize this route using the optional `route` property on the HTTP trigger's input binding. As an example, the following *function.json* file defines a `route` property for an HTTP trigger: +## Trigger - example -```json -{ - "bindings": [ - { - "type": "httpTrigger", - "name": "req", - "direction": "in", - "methods": [ "get" ], - "route": "products/{category:alpha}/{id:int?}" - }, - { - "type": "http", - "name": "res", - "direction": "out" - } - ] -} -``` +See the language-specific example: -Using this configuration, the function is now addressable with the following route instead of the original route. +* [Precompiled C#](#trigger---c-example) +* [C# script](#trigger---c-script-example) +* [F#](#trigger---f-example) +* [JavaScript](#trigger---javascript-example) - http://.azurewebsites.net/api/products/electronics/357 +### Trigger - C# example -This allows the function code to support two parameters in the address, _category_ and _id_. You can use any [Web API Route Constraint](https://www.asp.net/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2#constraints) with your parameters. The following C# function code makes use of both parameters. +The following example shows a [precompiled C# function](functions-dotnet-class-library.md) that looks for a `name` parameter either in the query string or the body of the HTTP request. -```csharp -public static Task Run(HttpRequestMessage req, string category, int? id, - TraceWriter log) +```cs +[FunctionName("HttpTriggerCSharp")] +public static async Task Run( + [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequestMessage req, + TraceWriter log) { - if (id == null) - return req.CreateResponse(HttpStatusCode.OK, $"All {category} items were requested."); - else - return req.CreateResponse(HttpStatusCode.OK, $"{category} item with id = {id} has been requested."); -} -``` + log.Info("C# HTTP trigger function processed a request."); -Here is Node.js function code to use the same route parameters. - -```javascript -module.exports = function (context, req) { - - var category = context.bindingData.category; - var id = context.bindingData.id; - - if (!id) { - context.res = { - // status: 200, /* Defaults to 200 */ - body: "All " + category + " items were requested." - }; - } - else { - context.res = { - // status: 200, /* Defaults to 200 */ - body: category + " item with id = " + id + " was requested." - }; - } + // parse query parameter + string name = req.GetQueryNameValuePairs() + .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0) + .Value; - context.done(); -} -``` + // Get request body + dynamic data = await req.Content.ReadAsAsync(); -By default, all function routes are prefixed with *api*. You can also customize or remove the prefix using the `http.routePrefix` property in your *host.json* file. The following example removes the *api* route prefix by using an empty string for the prefix in the *host.json* file. + // Set name to query string or body data + name = name ?? data?.name; -```json -{ - "http": { - "routePrefix": "" - } + return name == null + ? 
req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body") + : req.CreateResponse(HttpStatusCode.OK, "Hello " + name); } ``` -For detailed information on how to update the *host.json* file for your function, See, [How to update function app files](functions-reference.md#fileupdate). - -For information on other properties you can configure in your *host.json* file, see [host.json reference](functions-host-json.md). - - - -## Working with keys -HTTP triggers let you use keys for added security. A standard HTTP trigger can use these as an API key, requiring the key to be present on the request. Webhooks can use keys to authorize requests in a variety of ways, depending on what the provider supports. - -Keys are stored as part of your function app in Azure and are encrypted at rest. To view your keys, create new ones, or roll keys to new values, navigate to one of your functions within the portal and select "Manage." +### Trigger - C# script example -There are two types of keys: -- **Host keys**: These keys are shared by all functions within the function app. When used as an API key, these allow access to any function within the function app. -- **Function keys**: These keys apply only to the specific functions under which they are defined. When used as an API key, these only allow access to that function. +The following example shows a trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function looks for a `name` parameter either in the query string or the body of the HTTP request. -Each key is named for reference, and there is a default key (named "default") at the function and host level. The **master key** is a default host key named "_master" that is defined for each function app. This key cannot be revoked. It provides administrative access to the runtime APIs. Using `"authLevel": "admin"` in the binding JSON requires this key to be presented on the request; any other key results in authorization failure. - -> [!IMPORTANT] -> Due to the elevated permissions granted by the master key, you should not share this key with third parties or distribute it in native client applications. Use caution when choosing the admin authorization level. - -### API key authorization -By default, an HTTP trigger requires an API key in the HTTP request. So your HTTP request normally looks like the following: - - https://.azurewebsites.net/api/?code= - -The key can be included in a query string variable named `code`, as above, or it can be included in an `x-functions-key` HTTP header. The value of the key can be any function key defined for the function, or any host key. - -You can allow anonymous requests, which do not require keys. You can also require that the master key be used. You change the default authorization level by using the `authLevel` property in the binding JSON. For more information, see [HTTP trigger](#httptrigger). - -### Keys and webhooks -Webhook authorization is handled by the webhook receiver component, part of the HTTP trigger, and the mechanism varies based on the webhook type. Each mechanism does, however rely on a key. By default, the function key named "default" is used. To use a different key, configure the webhook provider to send the key name with the request in one of the following ways: - -- **Query string**: The provider passes the key name in the `clientid` query string parameter, such as `https://.azurewebsites.net/api/?clientid=`. 
-- **Request header**: The provider passes the key name in the `x-functions-clientid` header. - -> [!NOTE] -> Function keys take precedence over host keys. When two keys are defined with the same name, the function key is always used. - - - -## HTTP trigger samples -Suppose you have the following HTTP trigger in the `bindings` array of function.json: +Here's the binding data in the *function.json* file: ```json { @@ -249,15 +87,10 @@ Suppose you have the following HTTP trigger in the `bindings` array of function. }, ``` -See the language-specific sample that looks for a `name` parameter either in the query string or the body of the HTTP request. +The [configuration](#trigger---configuration) section explains these properties. -* [C#](#httptriggercsharp) -* [F#](#httptriggerfsharp) -* [Node.js](#httptriggernodejs) +Here's C# script code that binds to `HttpRequestMessage`: - - -### HTTP trigger sample in C# # ```csharp using System.Net; using System.Threading.Tasks; @@ -283,7 +116,7 @@ public static async Task Run(HttpRequestMessage req, TraceW } ``` -You can also bind to a .NET object instead of **HttpRequestMessage**. This object is created from the body of the request, parsed as JSON. Similarly, a type can be passed to the HTTP response output binding and returned as the response body, along with a 200 status code. +You can bind to a custom object instead of `HttpRequestMessage`. This object is created from the body of the request, parsed as JSON. Similarly, a type can be passed to the HTTP response output binding and returned as the response body, along with a 200 status code. ```csharp using System.Net; @@ -300,8 +133,25 @@ public class CustomObject { } ``` - -### HTTP trigger sample in F# # +### Trigger - F# example + +The following example shows a trigger binding in a *function.json* file and an [F# function](functions-reference-fsharp.md) that uses the binding. The function looks for a `name` parameter either in the query string or the body of the HTTP request. + +Here's the binding data in the *function.json* file: + +```json +{ + "name": "req", + "type": "httpTrigger", + "direction": "in", + "authLevel": "function" +}, +``` + +The [configuration](#trigger---configuration) section explains these properties. + +Here's the F# code: + ```fsharp open System.Net open System.Net.Http @@ -324,7 +174,7 @@ let Run(req: HttpRequestMessage) = } |> Async.StartAsTask ``` -You need a `project.json` file that uses NuGet to reference the `FSharp.Interop.Dynamic` and `Dynamitey` assemblies, like the following: +You need a `project.json` file that uses NuGet to reference the `FSharp.Interop.Dynamic` and `Dynamitey` assemblies, as shown in the following example: ```json { @@ -339,10 +189,25 @@ You need a `project.json` file that uses NuGet to reference the `FSharp.Interop. } ``` -This uses NuGet to fetch your dependencies and references them in your script. +### Trigger - JavaScript example + +The following example shows a trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function looks for a `name` parameter either in the query string or the body of the HTTP request. + +Here's the binding data in the *function.json* file: + +```json +{ + "name": "req", + "type": "httpTrigger", + "direction": "in", + "authLevel": "function" +}, +``` + +The [configuration](#trigger---configuration) section explains these properties. 
+ +Here's the JavaScript code: - -### HTTP trigger sample in Node.JS ```javascript module.exports = function(context, req) { context.log('Node.js HTTP trigger function processed a request. RequestUri=%s', req.originalUrl); @@ -363,9 +228,32 @@ module.exports = function(context, req) { }; ``` - -## Webhook samples -Suppose you have the following webhook trigger in the `bindings` array of function.json: +## Trigger - webhook example + +See the language-specific example: + +* [Precompiled C#](#webhook---c-example) +* [C# script](#webhook---c-script-example) +* [F#](#webhook---f-example) +* [JavaScript](#webhook---javascript-example) + +### Webhook - C# example + +The following example shows a [precompiled C# function](functions-dotnet-class-library.md) that sends an HTTP 200 in response to a generic JSON request. + +```cs +[FunctionName("HttpTriggerCSharp")] +public static HttpResponseMessage Run([HttpTrigger(AuthorizationLevel.Anonymous, WebHookType = "genericJson")] HttpRequestMessage req) +{ + return req.CreateResponse(HttpStatusCode.OK); +} +``` + +### Webhook - C# script example + +The following example shows a webhook trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function logs GitHub issue comments. + +Here's the binding data in the *function.json* file: ```json { @@ -376,15 +264,10 @@ Suppose you have the following webhook trigger in the `bindings` array of functi }, ``` -See the language-specific sample that logs GitHub issue comments. +The [configuration](#trigger---configuration) section explains these properties. -* [C#](#hooktriggercsharp) -* [F#](#hooktriggerfsharp) -* [Node.js](#hooktriggernodejs) +Here's the C# script code: - - -### Webhook sample in C# # ```csharp #r "Newtonsoft.Json" @@ -406,9 +289,25 @@ public static async Task Run(HttpRequestMessage req, TraceWriter log) } ``` - +### Webhook - F# example + +The following example shows a webhook trigger binding in a *function.json* file and an [F# function](functions-reference-fsharp.md) that uses the binding. The function logs GitHub issue comments. + +Here's the binding data in the *function.json* file: + +```json +{ + "webHookType": "github", + "name": "req", + "type": "httpTrigger", + "direction": "in", +}, +``` + +The [configuration](#trigger---configuration) section explains these properties. + +Here's the F# code: -### Webhook sample in F# # ```fsharp open System.Net open System.Net.Http @@ -430,9 +329,28 @@ let Run(req: HttpRequestMessage, log: TraceWriter) = } |> Async.StartAsTask ``` - +### Webhook - JavaScript example + +The following example shows a webhook trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function logs GitHub issue comments. + +Here's the binding data in the *function.json* file: + +```json +{ + "webHookType": "github", + "name": "req", + "type": "httpTrigger", + "direction": "in", +}, +``` + +The [configuration](#trigger---configuration) section explains these properties. 
+ +Here's the JavaScript code: + +```javascript +``` -### Webhook sample in Node.JS ```javascript module.exports = function (context, data) { context.log('GitHub WebHook triggered!', data.comment.body); @@ -441,7 +359,208 @@ module.exports = function (context, data) { }; ``` +## Trigger - attributes + +For [precompiled C#](functions-dotnet-class-library.md) functions, use the [HttpTrigger](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/dev/src/WebJobs.Extensions.Http/HttpTriggerAttribute.cs) attribute, defined in NuGet package [Microsoft.Azure.WebJobs.Extensions.Http](http://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Http). + +You can set the authorization level and allowable HTTP methods in attribute constructor parameters, and there are properties for webhook type and route template. For more information about these settings, see [Trigger - configuration](#trigger---configuration). Here's an `HttpTrigger` attribute in a method signature: + +```csharp +[FunctionName("HttpTriggerCSharp")] +public static HttpResponseMessage Run( + [HttpTrigger(AuthorizationLevel.Anonymous, WebHookType = "genericJson")] HttpRequestMessage req) +{ + ... +} + ``` + +For a complete example, see [Trigger - precompiled C# example](#trigger---c-example). + +## Trigger - configuration + +The following table explains the binding configuration properties that you set in the *function.json* file and the `HttpTrigger` attribute. + +|function.json property | Attribute property |Description| +|---------|---------|----------------------| +| **type** | n/a| Required - must be set to `httpTrigger`. | +| **direction** | n/a| Required - must be set to `in`. | +| **name** | n/a| Required - the variable name used in function code for the request or request body. | +| **authLevel** | **AuthLevel** |Determines what keys, if any, need to be present on the request in order to invoke the function. The authorization level can be one of the following values:
  • anonymous—No API key is required.
  • function—A function-specific API key is required. This is the default value if none is provided.
  • admin—The master key is required.
For more information, see the section about [authorization keys](#authorization-keys). |
+| **methods** |**Methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). |
+| **route** | **Route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |
+| **webHookType** | **WebHookType** |Configures the HTTP trigger to act as a [webhook](https://en.wikipedia.org/wiki/Webhook) receiver for the specified provider. Don't set the `methods` property if you set this property. The webhook type can be one of the following values:
  • genericJson—A general-purpose webhook endpoint without logic for a specific provider. This setting restricts requests to only those using HTTP POST and with the `application/json` content type.
  • github—The function responds to [GitHub webhooks](https://developer.github.com/webhooks/). Do not use the _authLevel_ property with GitHub webhooks. For more information, see the GitHub webhooks section later in this article.
  • slack—The function responds to [Slack webhooks](https://api.slack.com/outgoing-webhooks). Do not use the _authLevel_ property with Slack webhooks. For more information, see the Slack webhooks section later in this article.
| + +## Trigger - usage + +For C# and F# functions, you can declare the type of your trigger input to be either `HttpRequestMessage` or a custom type. If you choose `HttpRequestMessage`, you get full access to the request object. For a custom type, Functions tries to parse the JSON request body to set the object properties. + +For JavaScript functions, the Functions runtime provides the request body instead of the request object. For more information, see the [JavaScript trigger example](#trigger---javascript-example). + +### GitHub webhooks + +To respond to GitHub webhooks, first create your function with an HTTP Trigger, and set the **webHookType** property to `github`. Then copy its URL and API key into the **Add webhook** page of your GitHub repository. + +![](./media/functions-bindings-http-webhook/github-add-webhook.png) + +For an example, see [Create a function triggered by a GitHub webhook](functions-create-github-webhook-triggered-function.md). + +### Slack webhooks + +The Slack webhook generates a token for you instead of letting you specify it, so you must configure a function-specific key with the token from Slack. See [Authorization keys](#authorization-keys). + +### Customize the HTTP endpoint + +By default when you create a function for an HTTP trigger, or WebHook, the function is addressable with a route of the form: + + http://.azurewebsites.net/api/ + +You can customize this route using the optional `route` property on the HTTP trigger's input binding. As an example, the following *function.json* file defines a `route` property for an HTTP trigger: + +```json +{ + "bindings": [ + { + "type": "httpTrigger", + "name": "req", + "direction": "in", + "methods": [ "get" ], + "route": "products/{category:alpha}/{id:int?}" + }, + { + "type": "http", + "name": "res", + "direction": "out" + } + ] +} +``` + +Using this configuration, the function is now addressable with the following route instead of the original route. + +``` +http://.azurewebsites.net/api/products/electronics/357 +``` + +This allows the function code to support two parameters in the address, _category_ and _id_. You can use any [Web API Route Constraint](https://www.asp.net/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2#constraints) with your parameters. The following C# function code makes use of both parameters. + +```csharp +public static Task Run(HttpRequestMessage req, string category, int? id, + TraceWriter log) +{ + if (id == null) + return req.CreateResponse(HttpStatusCode.OK, $"All {category} items were requested."); + else + return req.CreateResponse(HttpStatusCode.OK, $"{category} item with id = {id} has been requested."); +} +``` + +Here is Node.js function code that uses the same route parameters. + +```javascript +module.exports = function (context, req) { + + var category = context.bindingData.category; + var id = context.bindingData.id; + + if (!id) { + context.res = { + // status: 200, /* Defaults to 200 */ + body: "All " + category + " items were requested." + }; + } + else { + context.res = { + // status: 200, /* Defaults to 200 */ + body: category + " item with id = " + id + " was requested." + }; + } + + context.done(); +} +``` + +By default, all function routes are prefixed with *api*. You can also customize or remove the prefix using the `http.routePrefix` property in your [host.json](functions-host-json.md) file. The following example removes the *api* route prefix by using an empty string for the prefix in the *host.json* file. 
+ +```json +{ + "http": { + "routePrefix": "" + } +} +``` + +### Authorization keys + +HTTP triggers let you use keys for added security. A standard HTTP trigger can use these as an API key, requiring the key to be present on the request. Webhooks can use keys to authorize requests in a variety of ways, depending on what the provider supports. + +Keys are stored as part of your function app in Azure and are encrypted at rest. To view your keys, create new ones, or roll keys to new values, navigate to one of your functions in the portal and select "Manage." + +There are two types of keys: + +- **Host keys**: These keys are shared by all functions within the function app. When used as an API key, these allow access to any function within the function app. +- **Function keys**: These keys apply only to the specific functions under which they are defined. When used as an API key, these only allow access to that function. + +Each key is named for reference, and there is a default key (named "default") at the function and host level. Function keys take precedence over host keys. When two keys are defined with the same name, the function key is always used. + +The **master key** is a default host key named "_master" that is defined for each function app. This key cannot be revoked. It provides administrative access to the runtime APIs. Using `"authLevel": "admin"` in the binding JSON requires this key to be presented on the request; any other key results in authorization failure. + +> [!IMPORTANT] +> Due to the elevated permissions granted by the master key, you should not share this key with third parties or distribute it in native client applications. Use caution when choosing the admin authorization level. + +### API key authorization + +By default, an HTTP trigger requires an API key in the HTTP request. So your HTTP request normally looks like the following: + + https://.azurewebsites.net/api/?code= + +The key can be included in a query string variable named `code`, as above, or it can be included in an `x-functions-key` HTTP header. The value of the key can be any function key defined for the function, or any host key. + +You can allow anonymous requests, which do not require keys. You can also require that the master key be used. You change the default authorization level by using the `authLevel` property in the binding JSON. For more information, see [Trigger - configuration](#trigger---configuration). + +### Keys and webhooks + +Webhook authorization is handled by the webhook receiver component, part of the HTTP trigger, and the mechanism varies based on the webhook type. Each mechanism does, however rely on a key. By default, the function key named "default" is used. To use a different key, configure the webhook provider to send the key name with the request in one of the following ways: + +- **Query string**: The provider passes the key name in the `clientid` query string parameter, such as `https://.azurewebsites.net/api/?clientid=`. +- **Request header**: The provider passes the key name in the `x-functions-clientid` header. + +## Trigger - host.json properties + +The [host.json](functions-host-json.md) file contains settings that control HTTP trigger behavior. + +[!INCLUDE [functions-host-json-http](../../includes/functions-host-json-http.md)] + +## Output + +Use the HTTP output binding to respond to the HTTP request sender. This binding requires an HTTP trigger and allows you to customize the response associated with the trigger's request. 
If an HTTP output binding is not provided, an HTTP trigger returns HTTP 200 OK with an empty body. + +## Output - configuration + +For precompiled C#, there are no output-specific binding configuration properties. To send an HTTP response, make the function return type `HttpResponseMessage` or `Task`. + +For other languages, an HTTP output binding is defined as a JSON object in the `bindings` array of function.json, as shown in the following example: + +```json +{ + "name": "res", + "type": "http", + "direction": "out" +} +``` + +The following table explains the binding configuration properties that you set in the *function.json* file. + +|Property |Description | +|---------|---------| +| **type** |Must be set to `http`. | +| **direction** | Must be set to `out`. | +|**name** | The variable name used in function code for the response. | + +## Output - usage + +You can use the output parameter to respond to the HTTP or webhook caller. You can also use the language-standard response patterns. For example responses, see the [trigger example](#trigger---example) and the [webhook example](#trigger---webhook-example). ## Next steps -[!INCLUDE [next steps](../../includes/functions-bindings-next-steps.md)] +> [!div class="nextstepaction"] +> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md) diff --git a/articles/azure-functions/functions-bindings-microsoft-graph.md b/articles/azure-functions/functions-bindings-microsoft-graph.md index dc0a568b66b0b..43ac296db1c04 100644 --- a/articles/azure-functions/functions-bindings-microsoft-graph.md +++ b/articles/azure-functions/functions-bindings-microsoft-graph.md @@ -16,7 +16,6 @@ ms.author: mahender --- # Azure Functions Microsoft Graph bindings -[!INCLUDE [functions-selector-bindings](../../includes/functions-selector-bindings.md)] This article explains how to configure and work with Microsoft Graph triggers and bindings in Azure Functions. With these, you can use Azure Functions to work with data, insights, and events from the [Microsoft Graph](https://graph.microsoft.io). @@ -1060,3 +1059,8 @@ public class UserSubscription { [HTTP trigger]: functions-bindings-http-webhook.md [Working with webhooks in Microsoft Graph]: https://developer.microsoft.com/graph/docs/api-reference/v1.0/resources/webhooks + +## Next steps + +> [!div class="nextstepaction"] +> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md) diff --git a/articles/azure-functions/functions-bindings-mobile-apps.md b/articles/azure-functions/functions-bindings-mobile-apps.md index 1035605cf0266..e7eef6dbae8f8 100644 --- a/articles/azure-functions/functions-bindings-mobile-apps.md +++ b/articles/azure-functions/functions-bindings-mobile-apps.md @@ -1,5 +1,5 @@ --- -title: Azure Functions Mobile Apps bindings | Microsoft Docs +title: Mobile Apps bindings for Azure Functions description: Understand how to use Azure Mobile Apps bindings in Azure Functions. 
services: functions documentationcenter: na @@ -9,86 +9,40 @@ editor: '' tags: '' keywords: azure functions, functions, event processing, dynamic compute, serverless architecture -ms.assetid: faad1263-0fa5-41a9-964f-aecbc0be706a ms.service: functions ms.devlang: multiple ms.topic: reference ms.tgt_pltfrm: multiple ms.workload: na -ms.date: 10/31/2016 +ms.date: 11/21/2017 ms.author: glenga - --- -# Azure Functions Mobile Apps bindings -[!INCLUDE [functions-selector-bindings](../../includes/functions-selector-bindings.md)] -This article explains how to configure and code [Azure Mobile Apps](../app-service-mobile/app-service-mobile-value-prop.md) bindings in Azure Functions. -Azure Functions supports input and output bindings for Mobile Apps. +# Mobile Apps bindings for Azure Functions + +This article explains how to work with [Azure Mobile Apps](../app-service-mobile/app-service-mobile-value-prop.md) bindings in Azure Functions. Azure Functions supports input and output bindings for Mobile Apps. -The Mobile Apps input and output bindings let you [read from and write to data tables](../app-service-mobile/app-service-mobile-node-backend-how-to-use-server-sdk.md#TableOperations) -in your mobile app. +The Mobile Apps bindings let you read and update data tables in mobile apps. [!INCLUDE [intro](../../includes/functions-bindings-intro.md)] - +## Input -## Mobile Apps input binding -The Mobile Apps input binding loads a record from a mobile table endpoint and passes it into your function. -In a C# and F# functions, any changes made to the record are automatically sent back to the table when the function exits successfully. +The Mobile Apps input binding loads a record from a mobile table endpoint and passes it into your function. In C# and F# functions, any changes made to the record are automatically sent back to the table when the function exits successfully. -The Mobile Apps input to a function uses the following JSON object in the `bindings` array of function.json: +## Input - example -```json -{ - "name": "", - "type": "mobileTable", - "tableName": "", - "id" : "", - "connection": "", - "apiKey": "", - "direction": "in" -} -``` +See the language-specific example: + + +* [C# script](#input---c-script-example) +* [JavaScript](#input---javascript-example) + +### Input - C# script example -Note the following: - -* `id` can be static, or it can be based on the trigger that invokes the function. For example, if you use a [queue trigger]() for your function, then - `"id": "{queueTrigger}"` uses the string value of the queue message as the record ID to retrieve. -* `connection` should contain the name of an app setting in your function app, which in turn contains the URL of your mobile app. The function - uses this URL to construct the required REST operations against your mobile app. You [create an app setting in your function app]() that contains - your mobile app's URL (which looks like `http://.azurewebsites.net`), then specify the name of the app setting in the `connection` property - in your input binding. -* You need to specify `apiKey` if you [implement an API key in your Node.js mobile app backend](https://github.com/Azure/azure-mobile-apps-node/tree/master/samples/api-key), - or [implement an API key in your .NET mobile app backend](https://github.com/Azure/azure-mobile-apps-net-server/wiki/Implementing-Application-Key). 
To do this, - you [create an app setting in your function app]() that contains the API key, then add the `apiKey` property in your input binding with the name of the - app setting. - - > [!IMPORTANT] - > This API key must not be shared with your mobile app clients. It should only be distributed securely to service-side clients, like Azure Functions. - > - > [!NOTE] - > Azure Functions stores your connection information and API keys as app settings so that they are not checked into your - > source control repository. This safeguards your sensitive information. - > - > - - - -## Input usage -This section shows you how to use your Mobile Apps input binding in your function code. - -When the record with the specified table and record ID is found, it is passed into the named -[JObject](http://www.newtonsoft.com/json/help/html/t_newtonsoft_json_linq_jobject.htm) parameter (or, in Node.js, -it is passed into the `context.bindings.` object). When the record is not found, the parameter is `null`. - -In C# and F# functions, any changes you make to the input record (input parameter) is automatically sent back to the -Mobile Apps table when the function exits successfully. -In Node.js functions, use `context.bindings.` to access the input record. You cannot modify a record in Node.js. - - - -## Input sample -Suppose you have the following function.json, that retrieves a Mobile App table record with the id of the queue trigger message: +The following example shows a Mobile Apps input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function is triggered by a queue message that has a record identifier. The function reads the specified record and modifies its `Text` property. + +Here's the binding data in the *function.json* file: ```json { @@ -113,15 +67,9 @@ Suppose you have the following function.json, that retrieves a Mobile App table "disabled": false } ``` +The [configuration](#input---configuration) section explains these properties. -See the language-specific sample that uses the input record from the binding. The C# and F# samples also modify the record's `text` property. - -* [C#](#inputcsharp) -* [Node.js](#inputnodejs) - - - -### Input sample in C# # +Here's the C# script code: ```cs #r "Newtonsoft.Json" @@ -136,21 +84,38 @@ public static void Run(string myQueueItem, JObject record) } ``` - +The following example shows a Mobile Apps input binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function is triggered by a queue message that has a record identifier. The function reads the specified record and modifies its `Text` property. + +Here's the binding data in the *function.json* file: - +```json +{ +"bindings": [ + { + "name": "myQueueItem", + "queueName": "myqueue-items", + "connection":"", + "type": "queueTrigger", + "direction": "in" + }, + { + "name": "record", + "type": "mobileTable", + "tableName": "MyTable", + "id" : "{queueTrigger}", + "connection": "My_MobileApp_Url", + "apiKey": "My_MobileApp_Key", + "direction": "in" + } +], +"disabled": false +} +``` +The [configuration](#input---configuration) section explains these properties. 
-### Input sample in Node.js +Here's the JavaScript code: ```javascript module.exports = function (context, myQueueItem) { @@ -159,56 +124,72 @@ module.exports = function (context, myQueueItem) { }; ``` - +## Input - attributes -## Mobile Apps output binding -Use the Mobile Apps output binding to write a new record to a Mobile Apps table endpoint. +For [precompiled C#](functions-dotnet-class-library.md) functions, use the [MobileTable](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.MobileApps/MobileTableAttribute.cs) attribute, which is defined in NuGet package [Microsoft.Azure.WebJobs.Extensions.MobileApps](http://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.MobileApps). -The Mobile Apps output for a function uses the following JSON object in the `bindings` array of function.json: +For information about attribute properties that you can configure, see [the following configuration section](#input---configuration). -```json -{ - "name": "", - "type": "mobileTable", - "tableName": "", - "connection": "", - "apiKey": "", - "direction": "out" -} -``` +## Input - configuration + +The following table explains the binding configuration properties that you set in the *function.json* file and the `MobileTable` attribute. + +|function.json property | Attribute property |Description| +|---------|---------|----------------------| +| **type**|| Must be set to "mobileTable"| +| **direction**||Must be set to "in"| +| **name**|| Name of input parameter in function signature.| +|**tableName** |**TableName**|Name of the mobile app's data table| +| **id**| **Id** | The identifier of the record to retrieve. Can be static or based on the trigger that invokes the function. For example, if you use a queue trigger for your function, then `"id": "{queueTrigger}"` uses the string value of the queue message as the record ID to retrieve.| +|**connection**|**Connection**|The name of an app setting that has the mobile app's URL. The function uses this URL to construct the required REST operations against your mobile app. Create an app setting in your function app that contains the mobile app's URL, then specify the name of the app setting in the `connection` property in your input binding. The URL looks like `http://.azurewebsites.net`. +|**apiKey**|**ApiKey**|The name of an app setting that has your mobile app's API key. Provide the API key if you [implement an API key in your Node.js mobile app](https://github.com/Azure/azure-mobile-apps-node/tree/master/samples/api-key), or [implement an API key in your .NET mobile app](https://github.com/Azure/azure-mobile-apps-net-server/wiki/Implementing-Application-Key). To provide the key, create an app setting in your function app that contains the API key, then add the `apiKey` property in your input binding with the name of the app setting. | + +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] + +> [!IMPORTANT] +> Don't share the API key with your mobile app clients. It should only be distributed securely to service-side clients, like Azure Functions. Azure Functions stores your connection information and API keys as app settings so that they are not checked into your source control repository. This safeguards your sensitive information. + +## Input - usage + +In C# functions, when the record with the specified ID is found, it is passed into the named +[JObject](http://www.newtonsoft.com/json/help/html/t_newtonsoft_json_linq_jobject.htm) parameter. 
When the record is not found, the parameter value is `null`. -Note the following: +In JavaScript functions, the record is passed into the `context.bindings.` object. When the record is not found, the parameter value is `null`. -* `connection` should contain the name of an app setting in your function app, which in turn contains the URL of your mobile app. The function - uses this URL to construct the required REST operations against your mobile app. You [create an app setting in your function app]() that contains - your mobile app's URL (which looks like `http://.azurewebsites.net`), then specify the name of the app setting in the `connection` property - in your input binding. -* You need to specify `apiKey` if you [implement an API key in your Node.js mobile app backend](https://github.com/Azure/azure-mobile-apps-node/tree/master/samples/api-key), - or [implement an API key in your .NET mobile app backend](https://github.com/Azure/azure-mobile-apps-net-server/wiki/Implementing-Application-Key). To do this, - you [create an app setting in your function app]() that contains the API key, then add the `apiKey` property in your input binding with the name of the - app setting. - - > [!IMPORTANT] - > This API key must not be shared with your mobile app clients. It should only be distributed securely to service-side clients, like Azure Functions. - > - > [!NOTE] - > Azure Functions stores your connection information and API keys as app settings so that they are not checked into your - > source control repository. This safeguards your sensitive information. - > - > +In C# and F# functions, any changes you make to the input record (input parameter) are automatically sent back to the table when the function exits successfully. You can't modify a record in JavaScript functions. - +## Output -## Output usage -This section shows you how to use your Mobile Apps output binding in your function code. +Use the Mobile Apps output binding to write a new record to a Mobile Apps table. -In C# functions, use a named output parameter of type `out object` to access the output record. In Node.js functions, use -`context.bindings.` to access the output record. +## Output - example - +See the language-specific example: -## Output sample -Suppose you have the following function.json, that defines a queue trigger and a Mobile Apps output: +* [Precompiled C#](#output---c-example) +* [C# script](#output---c-script-example) +* [JavaScript](#output---javascript-example) + +### Output - C# example + +The following example shows a [precompiled C# function](functions-dotnet-class-library.md) that is triggered by a queue message and creates a record in a mobile app table. + +```csharp +[FunctionName("MobileAppsOutput")] +[return: MobileTable(ApiKeySetting = "MyMobileAppKey", TableName = "MyTable", MobileAppUriSetting = "MyMobileAppUri")] +public static object Run( + [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem, + TraceWriter log) +{ + return new { Text = $"I'm running in a C# function! {myQueueItem}" }; +} +``` + +### Output - C# script example + +The following example shows a Mobile Apps output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function is triggered by a queue message and creates a new record with hard-coded value for the `Text` property. 
+ +Here's the binding data in the *function.json* file: ```json { @@ -233,14 +214,9 @@ Suppose you have the following function.json, that defines a queue trigger and a } ``` -See the language-specific sample that creates a record in the Mobile Apps table endpoint with the content of the queue message. - -* [C#](#outcsharp) -* [Node.js](#outnodejs) +The [configuration](#output---configuration) section explains these properties. - - -### Output sample in C# # +Here's the C# script code: ```cs public static void Run(string myQueueItem, out object record) @@ -251,16 +227,38 @@ public static void Run(string myQueueItem, out object record) } ``` - - -### Output sample in Node.js +The [configuration](#output---configuration) section explains these properties. + +Here's the JavaScript code: ```javascript module.exports = function (context, myQueueItem) { @@ -273,6 +271,54 @@ module.exports = function (context, myQueueItem) { }; ``` +## Output - attributes + +For [precompiled C#](functions-dotnet-class-library.md) functions, use the [MobileTable](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.MobileApps/MobileTableAttribute.cs) attribute, which is defined in NuGet package [Microsoft.Azure.WebJobs.Extensions.MobileApps](http://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.MobileApps). + +For information about attribute properties that you can configure, see [Output - configuration](#output---configuration). Here's a `MobileTable` attribute example in a method signature: + +```csharp +[FunctionName("MobileAppsOutput")] +[return: MobileTable(ApiKeySetting = "MyMobileAppKey", TableName = "MyTable", MobileAppUriSetting = "MyMobileAppUri")] +public static object Run( + [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem, + TraceWriter log) +{ + ... +} +``` + +For a complete example, see [Output - precompiled C# example](#output---c-example). + +## Output - configuration + +The following table explains the binding configuration properties that you set in the *function.json* file and the `MobileTable` attribute. + +|function.json property | Attribute property |Description| +|---------|---------|----------------------| +| **type**|| Must be set to "mobileTable"| +| **direction**||Must be set to "out"| +| **name**|| Name of output parameter in function signature.| +|**tableName** |**TableName**|Name of the mobile app's data table| +|**connection**|**MobileAppUriSetting**|The name of an app setting that has the mobile app's URL. The function uses this URL to construct the required REST operations against your mobile app. Create an app setting in your function app that contains the mobile app's URL, then specify the name of the app setting in the `connection` property in your input binding. The URL looks like `http://.azurewebsites.net`. +|**apiKey**|**ApiKeySetting**|The name of an app setting that has your mobile app's API key. Provide the API key if you [implement an API key in your Node.js mobile app backend](https://github.com/Azure/azure-mobile-apps-node/tree/master/samples/api-key), or [implement an API key in your .NET mobile app backend](https://github.com/Azure/azure-mobile-apps-net-server/wiki/Implementing-Application-Key). To provide the key, create an app setting in your function app that contains the API key, then add the `apiKey` property in your input binding with the name of the app setting. 
| + +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] + +> [!IMPORTANT] +> Don't share the API key with your mobile app clients. It should only be distributed securely to service-side clients, like Azure Functions. Azure Functions stores your connection information and API keys as app settings so that they are not checked into your source control repository. This safeguards your sensitive information. + +## Output - usage + +In C# script functions, use a named output parameter of type `out object` to access the output record. In precompiled C# functions, the `MobileTable` attribute can be used with any of the following types: + +* `ICollector` or `IAsyncCollector`, where `T` is either `JObject` or any type with a `public string Id` property. +* `out JObject` +* `out T` or `out T[]`, where `T` is any Type with a `public string Id` property. + +In Node.js functions, use `context.bindings.` to access the output record. + ## Next steps -[!INCLUDE [next steps](../../includes/functions-bindings-next-steps.md)] +> [!div class="nextstepaction"] +> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md) diff --git a/articles/azure-functions/functions-bindings-notification-hubs.md b/articles/azure-functions/functions-bindings-notification-hubs.md index a31c03f048eab..169809daf4073 100644 --- a/articles/azure-functions/functions-bindings-notification-hubs.md +++ b/articles/azure-functions/functions-bindings-notification-hubs.md @@ -1,5 +1,5 @@ --- -title: Azure Functions Notification Hub binding | Microsoft Docs +title: Notification Hubs bindings for Azure Functions description: Understand how to use Azure Notification Hub binding in Azure Functions. services: functions documentationcenter: na @@ -9,73 +9,156 @@ editor: '' tags: '' keywords: azure functions, functions, event processing, dynamic compute, serverless architecture -ms.assetid: 0ff0c949-20bf-430b-8dd5-d72b7b6ee6f7 ms.service: functions ms.devlang: multiple ms.topic: reference ms.tgt_pltfrm: multiple ms.workload: na -ms.date: 08/26/2017 +ms.date: 11/21/2017 ms.author: glenga - --- -# Azure Functions Notification Hub output binding -[!INCLUDE [functions-selector-bindings](../../includes/functions-selector-bindings.md)] -This article explains how to configure and code Azure Notification Hub bindings in Azure Functions. +# Notification Hubs output binding for Azure Functions + +This article explains how to send push notifications by using [Azure Notification Hubs](../notification-hubs/notification-hubs-push-notification-overview.md) bindings in Azure Functions. Azure Functions supports output bindings for Notification Hubs. + +Azure Notification Hubs must be configured for the Platform Notifications Service (PNS) you want to use. To learn how to get push notifications in your client app from Notification Hubs, see [Getting started with Notification Hubs](../notification-hubs/notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md) and select your target client platform from the drop-down list near the top of the page. [!INCLUDE [intro](../../includes/functions-bindings-intro.md)] -Your functions can send push notifications using a configured Azure Notification Hub with a few lines of code. However, the Azure Notification Hub must be configured for the Platform Notifications Services (PNS) you want to use. 
For more information on configuring an Azure Notification Hub and developing a client applications that register to receive notifications, see [Getting started with Notification Hubs](../notification-hubs/notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md) and click your target client platform at the top. +## Example - template -The notifications you send can be native notifications or template notifications. Native notifications target a specific notification platform as configured in the `platform` property of the output binding. A template notification can be used to target multiple platforms. +The notifications you send can be native notifications or [template notifications](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md). Native notifications target a specific client platform as configured in the `platform` property of the output binding. A template notification can be used to target multiple platforms. -## Notification hub output binding properties -The function.json file provides the following properties: +See the language-specific example: +* [C# script - out parameter](#c-script-template-example---out-parameter) +* [C# script - asynchronous](#c-script-template-example---asynchronous) +* [C# script - JSON](#c-script-template-example---json) +* [C# script - library types](#c-script-template-example---library-types) +* [F#](#f-template-example) +* [JavaScript](#javascript-template-example) -|Property |Description | -|---------|---------| -|**name** | Variable name used in function code for the notification hub message. | -|**type** | Must be set to `notificationHub`. | -|**tagExpression** | Tag expressions allow you to specify that notifications be delivered to a set of devices who have registered to receive notifications that match the tag expression. For more information, see [Routing and tag expressions](../notification-hubs/notification-hubs-tags-segment-push-message.md). | -|**hubName** | Name of the notification hub resource in the Azure portal. | -|**connection** | This connection string must be an **Application Setting** connection string set to the *DefaultFullSharedAccessSignature* value for your notification hub. | -|**direction** | Must be set to `out`. | -|**platform** | The platform property indicates the notification platform your notification targets. By default, if the platform property is omitted from the output binding, template notifications can be used to target any platform configured on the Azure Notification Hub. For more information on using templates in general to send cross platform notifications with an Azure Notification Hub, see [Templates](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md). When set, _platform_ must be one of the following values:
  • apns—Apple Push Notification Service. For more information on configuring the notification hub for APNS and receiving the notification in a client app, see [Sending push notifications to iOS with Azure Notification Hubs](../notification-hubs/notification-hubs-ios-apple-push-notification-apns-get-started.md).
  • adm—[Amazon Device Messaging](https://developer.amazon.com/device-messaging). For more information on configuring the notification hub for ADM and receiving the notification in a Kindle app, see [Getting Started with Notification Hubs for Kindle apps](../notification-hubs/notification-hubs-kindle-amazon-adm-push-notification.md).
  • gcm—[Google Cloud Messaging](https://developers.google.com/cloud-messaging/). Firebase Cloud Messaging, which is the new version of GCM, is also supported. For more information, see [Sending push notifications to Android with Azure Notification Hubs](../notification-hubs/notification-hubs-android-push-notification-google-fcm-get-started.md).
  • wns—[Windows Push Notification Services](https://msdn.microsoft.com/en-us/windows/uwp/controls-and-patterns/tiles-and-notifications-windows-push-notification-services--wns--overview) targeting Windows platforms. Windows Phone 8.1 and later is also supported by WNS. For more information, see [Getting started with Notification Hubs for Windows Universal Platform Apps](../notification-hubs/notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md).
  • mpns—[Microsoft Push Notification Service](https://msdn.microsoft.com/en-us/library/windows/apps/ff402558.aspx). This platform supports Windows Phone 8 and earlier Windows Phone platforms. For more information, see [Sending push notifications with Azure Notification Hubs on Windows Phone](../notification-hubs/notification-hubs-windows-mobile-push-notifications-mpns.md).
| +### C# script template example - out parameter -Example function.json: +This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains a `message` placeholder in the template. -```json +```cs +using System; +using System.Threading.Tasks; +using System.Collections.Generic; + +public static void Run(string myQueueItem, out IDictionary notification, TraceWriter log) { - "bindings": [ - { - "name": "notification", - "type": "notificationHub", - "tagExpression": "", - "hubName": "my-notification-hub", - "connection": "MyHubConnectionString", - "platform": "gcm", - "direction": "out" - } - ], - "disabled": false + log.Info($"C# Queue trigger function processed: {myQueueItem}"); + notification = GetTemplateProperties(myQueueItem); +} + +private static IDictionary GetTemplateProperties(string message) +{ + Dictionary templateProperties = new Dictionary(); + templateProperties["message"] = message; + return templateProperties; } ``` -## Notification hub connection string setup -To use a Notification hub output binding, you must configure the connection string for the hub. You can select an existing notification hub or create a new one right from the *Integrate* tab in your function. You can also configure the connection string manually. +### C# script template example - asynchronous -To configure the connection string to an existing notification hub: +If you are using asynchronous code, out parameters are not allowed. In this case use `IAsyncCollector` to return your template notification. The following code is an asynchronous example of the code above. -1. Navigate to your notification hub in the [Azure portal](https://portal.azure.com), choose **Access policies**, and select the copy button next to the **DefaultFullSharedAccessSignature** policy. This copies the connection string for the *DefaultFullSharedAccessSignature* policy to your notification hub. This connection string provides your function access permission to send notification messages. - ![Copy the notification hub connection string](./media/functions-bindings-notification-hubs/get-notification-hub-connection.png) -1. Navigate to your function app in the Azure portal, choose **Application settings**, add a key such as `MyHubConnectionString`, paste the copied *DefaultFullSharedAccessSignature* for your notification hub as the value, and then click **Save**. +```cs +using System; +using System.Threading.Tasks; +using System.Collections.Generic; + +public static async Task Run(string myQueueItem, IAsyncCollector> notification, TraceWriter log) +{ + log.Info($"C# Queue trigger function processed: {myQueueItem}"); + + log.Info($"Sending Template Notification to Notification Hub"); + await notification.AddAsync(GetTemplateProperties(myQueueItem)); +} + +private static IDictionary GetTemplateProperties(string message) +{ + Dictionary templateProperties = new Dictionary(); + templateProperties["user"] = "A new user wants to be added : " + message; + return templateProperties; +} +``` + +### C# script template example - JSON + +This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains a `message` placeholder in the template using a valid JSON string. 
+ +```cs +using System; + +public static void Run(string myQueueItem, out string notification, TraceWriter log) +{ + log.Info($"C# Queue trigger function processed: {myQueueItem}"); + notification = "{\"message\":\"Hello from C#. Processed a queue item!\"}"; +} +``` + +### C# script template example - library types + +This example shows how to use types defined in the [Microsoft Azure Notification Hubs Library](https://www.nuget.org/packages/Microsoft.Azure.NotificationHubs/). + +```cs +#r "Microsoft.Azure.NotificationHubs" + +using System; +using System.Threading.Tasks; +using Microsoft.Azure.NotificationHubs; + +public static void Run(string myQueueItem, out Notification notification, TraceWriter log) +{ + log.Info($"C# Queue trigger function processed: {myQueueItem}"); + notification = GetTemplateNotification(myQueueItem); +} + +private static TemplateNotification GetTemplateNotification(string message) +{ + Dictionary templateProperties = new Dictionary(); + templateProperties["message"] = message; + return new TemplateNotification(templateProperties); +} +``` + +### F# template example + +This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains `location` and `message`. + +```fsharp +let Run(myTimer: TimerInfo, notification: byref>) = + notification = dict [("location", "Redmond"); ("message", "Hello from F#!")] +``` + +### JavaScript template example + +This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains `location` and `message`. + +```javascript +module.exports = function (context, myTimer) { + var timeStamp = new Date().toISOString(); + + if(myTimer.isPastDue) + { + context.log('Node.js is running late!'); + } + context.log('Node.js timer trigger function ran!', timeStamp); + context.bindings.notification = { + location: "Redmond", + message: "Hello from Node!" + }; + context.done(); +}; +``` -You can now use this named application setting that defines the notification hub connection in the output binding. +## Example - APNS native -## APNS native notifications with C# queue triggers -This example shows how to use types defined in the [Microsoft Azure Notification Hubs Library](https://www.nuget.org/packages/Microsoft.Azure.NotificationHubs/) to send a native APNS notification. +This C# script example shows how to send a native APNS notification. ```cs #r "Microsoft.Azure.NotificationHubs" @@ -104,8 +187,9 @@ public static async Task Run(string myQueueItem, IAsyncCollector n } ``` -## GCM native notifications with C# queue triggers -This example shows how to use types defined in the [Microsoft Azure Notification Hubs Library](https://www.nuget.org/packages/Microsoft.Azure.NotificationHubs/) to send a native GCM notification. +## Example - GCM native + +This C# script example shows how to send a native GCM notification. ```cs #r "Microsoft.Azure.NotificationHubs" @@ -134,8 +218,9 @@ public static async Task Run(string myQueueItem, IAsyncCollector n } ``` -## WNS native notifications with C# queue triggers -This example shows how to use types defined in the [Microsoft Azure Notification Hubs Library](https://www.nuget.org/packages/Microsoft.Azure.NotificationHubs/) to send a native WNS toast notification. 
+## Example - WNS native + +This C# script example shows how to use types defined in the [Microsoft Azure Notification Hubs Library](https://www.nuget.org/packages/Microsoft.Azure.NotificationHubs/) to send a native WNS toast notification. ```cs #r "Microsoft.Azure.NotificationHubs" @@ -176,117 +261,65 @@ public static async Task Run(string myQueueItem, IAsyncCollector n } ``` -## Template example for Node.js timer triggers -This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains `location` and `message`. - -```javascript -module.exports = function (context, myTimer) { - var timeStamp = new Date().toISOString(); - - if(myTimer.isPastDue) - { - context.log('Node.js is running late!'); - } - context.log('Node.js timer trigger function ran!', timeStamp); - context.bindings.notification = { - location: "Redmond", - message: "Hello from Node!" - }; - context.done(); -}; -``` - -## Template example for F# timer triggers -This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains `location` and `message`. +## Attributes -```fsharp -let Run(myTimer: TimerInfo, notification: byref>) = - notification = dict [("location", "Redmond"); ("message", "Hello from F#!")] -``` +For [precompiled C#](functions-dotnet-class-library.md) functions, use the [NotificationHub](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.NotificationHubs/NotificationHubAttribute.cs) attribute, which is defined in NuGet package [Microsoft.Azure.WebJobs.Extensions.NotificationHubs](http://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.NotificationHubs). -## Template example using an out parameter -This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains a `message` place holder in the template. +The attribute's constructor parameters and properties are described in the [configuration](#configuration) section. -```cs -using System; -using System.Threading.Tasks; -using System.Collections.Generic; +## Configuration -public static void Run(string myQueueItem, out IDictionary notification, TraceWriter log) -{ - log.Info($"C# Queue trigger function processed: {myQueueItem}"); - notification = GetTemplateProperties(myQueueItem); -} +The following table explains the binding configuration properties that you set in the *function.json* file and the `NotificationHub` attribute: -private static IDictionary GetTemplateProperties(string message) -{ - Dictionary templateProperties = new Dictionary(); - templateProperties["message"] = message; - return templateProperties; -} -``` - -## Template example with asynchronous function -If you are using asynchronous code, out parameters are not allowed. In this case use `IAsyncCollector` to return your template notification. The following code is an asynchronous example of the code above. +|function.json property | Attribute property |Description| +|---------|---------|----------------------| +|**type** |n/a| Must be set to "notificationHub". | +|**direction** |n/a| Must be set to "out". | +|**name** |n/a| Variable name used in function code for the notification hub message. 
| +|**tagExpression** |**TagExpression** | Tag expressions allow you to specify that notifications be delivered to a set of devices that have registered to receive notifications that match the tag expression. For more information, see [Routing and tag expressions](../notification-hubs/notification-hubs-tags-segment-push-message.md). | +|**hubName** | **HubName** | Name of the notification hub resource in the Azure portal. | +|**connection** | **ConnectionStringSetting** | The name of an app setting that contains a Notification Hubs connection string. The connection string must be set to the *DefaultFullSharedAccessSignature* value for your notification hub. See [Connection string setup](#connection-string-setup) later in this article.| +|**platform** | **Platform** | The platform property indicates the client platform your notification targets. By default, if the platform property is omitted from the output binding, template notifications can be used to target any platform configured on the Azure Notification Hub. For more information on using templates in general to send cross platform notifications with an Azure Notification Hub, see [Templates](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md). When set, **platform** must be one of the following values:
  • apns—Apple Push Notification Service. For more information on configuring the notification hub for APNS and receiving the notification in a client app, see [Sending push notifications to iOS with Azure Notification Hubs](../notification-hubs/notification-hubs-ios-apple-push-notification-apns-get-started.md).
  • adm—[Amazon Device Messaging](https://developer.amazon.com/device-messaging). For more information on configuring the notification hub for ADM and receiving the notification in a Kindle app, see [Getting Started with Notification Hubs for Kindle apps](../notification-hubs/notification-hubs-kindle-amazon-adm-push-notification.md).
  • gcm—[Google Cloud Messaging](https://developers.google.com/cloud-messaging/). Firebase Cloud Messaging, which is the new version of GCM, is also supported. For more information, see [Sending push notifications to Android with Azure Notification Hubs](../notification-hubs/notification-hubs-android-push-notification-google-fcm-get-started.md).
  • wns—[Windows Push Notification Services](https://msdn.microsoft.com/windows/uwp/controls-and-patterns/tiles-and-notifications-windows-push-notification-services--wns--overview) targeting Windows platforms. WNS also supports Windows Phone 8.1 and later. For more information, see [Getting started with Notification Hubs for Windows Universal Platform Apps](../notification-hubs/notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md).<br>
  • mpns—[Microsoft Push Notification Service](https://msdn.microsoft.com/library/windows/apps/ff402558.aspx). This platform supports Windows Phone 8 and earlier Windows Phone platforms. For more information, see [Sending push notifications with Azure Notification Hubs on Windows Phone](../notification-hubs/notification-hubs-windows-mobile-push-notifications-mpns.md).
| -```cs -using System; -using System.Threading.Tasks; -using System.Collections.Generic; +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] -public static async Task Run(string myQueueItem, IAsyncCollector> notification, TraceWriter log) -{ - log.Info($"C# Queue trigger function processed: {myQueueItem}"); +### function.json file example - log.Info($"Sending Template Notification to Notification Hub"); - await notification.AddAsync(GetTemplateProperties(myQueueItem)); -} +Here's an example of a Notification Hubs binding in a *function.json* file. -private static IDictionary GetTemplateProperties(string message) +```json { - Dictionary templateProperties = new Dictionary(); - templateProperties["user"] = "A new user wants to be added : " + message; - return templateProperties; + "bindings": [ + { + "type": "notificationHub", + "direction": "out", + "name": "notification", + "tagExpression": "", + "hubName": "my-notification-hub", + "connection": "MyHubConnectionString", + "platform": "gcm" + } + ], + "disabled": false } ``` -## Template example using JSON -This example sends a notification for a [template registration](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) that contains a `message` place holder in the template using a valid JSON string. - -```cs -using System; - -public static void Run(string myQueueItem, out string notification, TraceWriter log) -{ - log.Info($"C# Queue trigger function processed: {myQueueItem}"); - notification = "{\"message\":\"Hello from C#. Processed a queue item!\"}"; -} -``` +### Connection string setup -## Template example using Notification Hubs library types -This example shows how to use types defined in the [Microsoft Azure Notification Hubs Library](https://www.nuget.org/packages/Microsoft.Azure.NotificationHubs/). +To use a notification hub output binding, you must configure the connection string for the hub. You can select an existing notification hub or create a new one right from the *Integrate* tab in the Azure portal. You can also configure the connection string manually. -```cs -#r "Microsoft.Azure.NotificationHubs" +To configure the connection string to an existing notification hub: -using System; -using System.Threading.Tasks; -using Microsoft.Azure.NotificationHubs; +1. Navigate to your notification hub in the [Azure portal](https://portal.azure.com), choose **Access policies**, and select the copy button next to the **DefaultFullSharedAccessSignature** policy. This copies the connection string for the *DefaultFullSharedAccessSignature* policy to your notification hub. This connection string lets your function send notification messages to the hub. + ![Copy the notification hub connection string](./media/functions-bindings-notification-hubs/get-notification-hub-connection.png) +1. Navigate to your function app in the Azure portal, choose **Application settings**, add a key such as **MyHubConnectionString**, paste the copied *DefaultFullSharedAccessSignature* for your notification hub as the value, and then click **Save**. -public static void Run(string myQueueItem, out Notification notification, TraceWriter log) -{ - log.Info($"C# Queue trigger function processed: {myQueueItem}"); - notification = GetTemplateNotification(myQueueItem); -} +The name of this application setting is what goes in the output binding connection setting in *function.json* or the .NET attribute. See the [Configuration section](#configuration) earlier in this article. 
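+
+For illustration, here is a minimal sketch (not part of the original article) of a precompiled C# function that references that app setting through the attribute's `ConnectionStringSetting` property. The function name, queue name, and hub name are hypothetical, and the sketch assumes the binding accepts an `out IDictionary<string, string>` template-properties parameter, as the C# script examples earlier in this article do.
+
+```csharp
+[FunctionName("QueueToNotification")]
+public static void Run(
+    [QueueTrigger("myqueue-items")] string myQueueItem,
+    [NotificationHub(HubName = "my-notification-hub", ConnectionStringSetting = "MyHubConnectionString")] out IDictionary<string, string> notification,
+    TraceWriter log)
+{
+    // Send a template notification whose "message" placeholder is the queue item text.
+    log.Info($"Sending template notification for: {myQueueItem}");
+    notification = new Dictionary<string, string> { ["message"] = myQueueItem };
+}
+```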
-private static TemplateNotification GetTemplateNotification(string message) -{ - Dictionary templateProperties = new Dictionary(); - templateProperties["message"] = message; - return new TemplateNotification(templateProperties); -} -``` +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ## Next steps -[!INCLUDE [next steps](../../includes/functions-bindings-next-steps.md)] + +> [!div class="nextstepaction"] +> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md) diff --git a/articles/azure-functions/functions-bindings-sendgrid.md b/articles/azure-functions/functions-bindings-sendgrid.md index f0a545f41275b..c20e22b2999d6 100644 --- a/articles/azure-functions/functions-bindings-sendgrid.md +++ b/articles/azure-functions/functions-bindings-sendgrid.md @@ -1,9 +1,9 @@ --- -title: Azure Functions SendGrid bindings | Microsoft Docs -description: Azure Functions SendGrid bindings reference +title: Azure Functions SendGrid bindings +description: Azure Functions SendGrid bindings reference. services: functions documentationcenter: na -author: rachelappel +author: tdykstra manager: cfowler ms.service: functions @@ -11,43 +11,64 @@ ms.devlang: multiple ms.topic: article ms.tgt_pltfrm: multiple ms.workload: na -ms.date: 08/26/2017 -ms.author: rachelap +ms.date: 11/29/2017 +ms.author: tdykstra --- + # Azure Functions SendGrid bindings -This article explains how to configure and work with SendGrid bindings in Azure Functions. With SendGrid, you can use Azure Functions to send customized email programmatically. +This article explains how to send email by using [SendGrid](https://sendgrid.com/docs/User_Guide/index.html) bindings in Azure Functions. Azure Functions supports an output binding for SendGrid. + +[!INCLUDE [intro](../../includes/functions-bindings-intro.md)] + +## Example + +See the language-specific example: + +* [Precompiled C#](#c-example) +* [C# script](#c-script-example) +* [JavaScript](#javascript-example) + +### C# example -This article is reference information for Azure Functions developers. If you're new to Azure Functions, start with the following resources: +The following example shows a [precompiled C# function](functions-dotnet-class-library.md) that uses a Service Bus queue trigger and a SendGrid output binding. -[Create your first Azure Function](functions-create-first-azure-function.md). -[C#](functions-reference-csharp.md), [F#](functions-reference-fsharp.md), or [Node](functions-reference-node.md) developer references. +```cs +[FunctionName("SendEmail")] +public static void Run( + [ServiceBusTrigger("myqueue", AccessRights.Manage, Connection = "ServiceBusConnection")] OutgoingEmail email, + [SendGrid(ApiKey = "CustomSendGridKeyAppSettingName")] out SendGridMessage message) +{ + message = new SendGridMessage(); + message.AddTo(email.To); + message.AddContent("text/html", email.Body); + message.SetFrom(new EmailAddress(email.From)); + message.SetSubject(email.Subject); +} -## function.json for SendGrid bindings +public class OutgoingEmail +{ + public string To { get; set; } + public string From { get; set; } + public string Subject { get; set; } + public string Body { get; set; } +} +``` -Azure Functions provides an output binding for SendGrid. The SendGrid output binding enables you to create and send email programmatically. +You can omit setting the attribute's `ApiKey` property if you have your API key in an app setting named "AzureWebJobsSendGridApiKey". 
-The SendGrid binding supports the following properties: +### C# script example -|Property |Description | -|---------|---------| -|**name**| Required - the variable name used in function code for the request or request body. This value is ```$return``` when there is only one return value. | -|**type**| Required - must be set to `sendGrid`.| -|**direction**| Required - must be set to `out`.| -|**apiKey**| Required - must be set to the name of your API key stored in the Function App's app settings. | -|**to**| the recipient's email address. | -|**from**| the sender's email address. | -|**subject**| the subject of the email. | -|**text**| the email content. | +The following example shows a SendGrid output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. -Example of **function.json**: +Here's the binding data in the *function.json* file: ```json { "bindings": [ { - "name": "$return", + "name": "message", "type": "sendGrid", "direction": "out", "apiKey" : "MySendGridKey", @@ -59,19 +80,16 @@ Example of **function.json**: } ``` -> [!NOTE] -> Azure Functions stores your connection information and API keys as app settings so that they are not checked into your source control repository. This action safeguards your sensitive information. -> -> +The [configuration](#configuration) section explains these properties. -## C# example of the SendGrid output binding +Here's the C# script code: ```csharp #r "SendGrid" using System; using SendGrid.Helpers.Mail; -public static Mail Run(TraceWriter log, string input, out Mail message) +public static void Run(TraceWriter log, string input, out Mail message) { message = new Mail { @@ -91,7 +109,31 @@ public static Mail Run(TraceWriter log, string input, out Mail message) } ``` -## Node example of the SendGrid output binding +### JavaScript example + +The following example shows a SendGrid output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. + +Here's the binding data in the *function.json* file: + +```json +{ + "bindings": [ + { + "name": "$return", + "type": "sendGrid", + "direction": "out", + "apiKey" : "MySendGridKey", + "to": "{ToEmail}", + "from": "{FromEmail}", + "subject": "SendGrid output bindings" + } + ] +} +``` + +The [configuration](#configuration) section explains these properties. + +Here's the JavaScript code: ```javascript module.exports = function (context, input) { @@ -107,15 +149,44 @@ module.exports = function (context, input) { context.done(null, message); }; +``` + +## Attributes +For [precompiled C#](functions-dotnet-class-library.md) functions, use the [SendGrid](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.SendGrid/SendGridAttribute.cs) attribute, which is defined in NuGet package [Microsoft.Azure.WebJobs.Extensions.SendGrid](http://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.SendGrid). + +For information about attribute properties that you can configure, see [Configuration](#configuration). Here's a `SendGrid` attribute example in a method signature: + +```csharp +[FunctionName("SendEmail")] +public static void Run( + [ServiceBusTrigger("myqueue", AccessRights.Manage, Connection = "ServiceBusConnection")] OutgoingEmail email, + [SendGrid(ApiKey = "CustomSendGridKeyAppSettingName")] out SendGridMessage message) +{ + ... 
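+    // (Hypothetical body) Construct and assign the SendGridMessage here, as shown in
+    // the complete precompiled C# example earlier in this article.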
+} ``` -## Next steps -For information about other bindings and triggers for Azure Functions, see -- [Azure Functions triggers and bindings developer reference](functions-triggers-bindings.md) +For a complete example, see [Precompiled C# example](#c-example). -- [Best practices for Azure Functions](functions-best-practices.md) -Lists some best practices to use when creating Azure Functions. +## Configuration + +The following table explains the binding configuration properties that you set in the *function.json* file and the `SendGrid` attribute. + +|function.json property | Attribute property |Description| +|---------|---------|----------------------| +|**type**|| Required - must be set to `sendGrid`.| +|**direction**|| Required - must be set to `out`.| +|**name**|| Required - the variable name used in function code for the request or request body. This value is ```$return``` when there is only one return value. | +|**apiKey**|**ApiKey**| The name of an app setting that contains your API key. If not set, the default app setting name is "AzureWebJobsSendGridApiKey".| +|**to**|**To**| the recipient's email address. | +|**from**|**From**| the sender's email address. | +|**subject**|**Subject**| the subject of the email. | +|**text**|**Text**| the email content. | + +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] + +## Next steps -- [Azure Functions developer reference](functions-reference.md) -Programmer reference for coding functions and defining triggers and bindings. +> [!div class="nextstepaction"] +> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md) \ No newline at end of file diff --git a/articles/azure-functions/functions-bindings-service-bus.md b/articles/azure-functions/functions-bindings-service-bus.md index b6ae51c440182..4916427c7a5df 100644 --- a/articles/azure-functions/functions-bindings-service-bus.md +++ b/articles/azure-functions/functions-bindings-service-bus.md @@ -1,9 +1,9 @@ --- -title: Azure Functions Service Bus triggers and bindings +title: Azure Service Bus bindings for Azure Functions description: Understand how to use Azure Service Bus triggers and bindings in Azure Functions. services: functions documentationcenter: na -author: christopheranderson +author: tdykstra manager: cfowler editor: '' tags: '' @@ -16,16 +16,16 @@ ms.topic: reference ms.tgt_pltfrm: multiple ms.workload: na ms.date: 04/01/2017 -ms.author: glenga +ms.author: tdykstra --- -# Azure Functions Service Bus bindings +# Azure Service Bus bindings for Azure Functions This article explains how to work with Azure Service Bus bindings in Azure Functions. Azure Functions supports trigger and output bindings for Service Bus queues and topics. [!INCLUDE [intro](../../includes/functions-bindings-intro.md)] -## Service Bus trigger +## Trigger Use the Service Bus trigger to respond to messages from a Service Bus queue or topic. @@ -141,7 +141,7 @@ module.exports = function(context, myQueueItem) { }; ``` -## Trigger - Attributes for precompiled C# +## Trigger - attributes For [precompiled C#](functions-dotnet-class-library.md) functions, use the following attributes to configure a Service Bus trigger: @@ -153,6 +153,9 @@ For [precompiled C#](functions-dotnet-class-library.md) functions, use the follo [FunctionName("ServiceBusQueueTriggerCSharp")] public static void Run( [ServiceBusTrigger("myqueue")] string myQueueItem, TraceWriter log) + { + ... 
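+    // (Hypothetical body) Process the queue message here, for example by logging it
+    // with the TraceWriter parameter.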
+ } ``` You can set the `Connection` property to specify the Service Bus account to use, as shown in the following example: @@ -162,8 +165,13 @@ For [precompiled C#](functions-dotnet-class-library.md) functions, use the follo public static void Run( [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string myQueueItem, TraceWriter log) + { + ... + } ``` + For a complete example, see [Trigger - precompiled C# example](#trigger---c-example). + * [ServiceBusAccountAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.ServiceBus/ServiceBusAccountAttribute.cs), defined in NuGet package [Microsoft.Azure.WebJobs.ServiceBus](http://www.nuget.org/packages/Microsoft.Azure.WebJobs.ServiceBus) Provides another way to specify the Service Bus account to use. The constructor takes the name of an app setting that contains a Service Bus connection string. The attribute can be applied at the parameter, method, or class level. The following example shows class level and method level: @@ -177,6 +185,9 @@ For [precompiled C#](functions-dotnet-class-library.md) functions, use the follo public static void Run( [ServiceBusTrigger("myqueue", AccessRights.Manage)] string myQueueItem, TraceWriter log) + { + ... + } ``` The Service Bus account to use is determined in the following order: @@ -199,9 +210,11 @@ The following table explains the binding configuration properties that you set i |**queueName**|**QueueName**|Name of the queue to monitor. Set only if monitoring a queue, not for a topic. |**topicName**|**TopicName**|Name of the topic to monitor. Set only if monitoring a topic, not for a queue.| |**subscriptionName**|**SubscriptionName**|Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.| -|**connection**|**Connection**|The name of an app setting that contains the Service Bus connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name. For example, if you set `connection` to "MyServiceBus", the Functions runtime looks for an app setting that is named "AzureWebJobsMyServiceBus." If you leave `connection` empty, the Functions runtime uses the default Service Bus connection string in the app setting that is named "AzureWebJobsServiceBus".

To obtain a connection string, follow the steps shown at [Obtain the management credentials](../service-bus-messaging/service-bus-dotnet-get-started-with-queues.md#obtain-the-management-credentials). The connection string must be for a Service Bus namespace, not limited to a specific queue or topic.
When you're developing locally, app settings go into the values of the [local.settings.json file](functions-run-local.md#local-settings-file).| +|**connection**|**Connection**|The name of an app setting that contains the Service Bus connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name. For example, if you set `connection` to "MyServiceBus", the Functions runtime looks for an app setting that is named "AzureWebJobsMyServiceBus." If you leave `connection` empty, the Functions runtime uses the default Service Bus connection string in the app setting that is named "AzureWebJobsServiceBus".

To obtain a connection string, follow the steps shown at [Obtain the management credentials](../service-bus-messaging/service-bus-dotnet-get-started-with-queues.md#obtain-the-management-credentials). The connection string must be for a Service Bus namespace, not limited to a specific queue or topic. | |**accessRights**|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights.| +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] + ## Trigger - usage In C# and C# script, access the queue or topic message by using a method parameter such as `string paramName`. In C# script, `paramName` is the value specified in the `name` property of *function.json*. You can use any of the following types instead of `string`: @@ -227,7 +240,7 @@ The [host.json](functions-host-json.md#servicebus) file contains settings that c [!INCLUDE [functions-host-json-event-hubs](../../includes/functions-host-json-service-bus.md)] -## Service Bus output binding +## Output Use Azure Service Bus output binding to send queue or topic messages. @@ -394,27 +407,35 @@ module.exports = function (context, myTimer) { }; ``` -## Output - Attributes for precompiled C# +## Output - attributes For [precompiled C#](functions-dotnet-class-library.md) functions, use the [ServiceBusAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.ServiceBus/ServiceBusAttribute.cs), which is defined in NuGet package [Microsoft.Azure.WebJobs.ServiceBus](http://www.nuget.org/packages/Microsoft.Azure.WebJobs.ServiceBus). - The attribute's constructor takes the name of the queue or the topic and subscription. You can also specify the connection's access rights. How to choose the access rights setting is explained in the [Output - configuration](#output---configuration) section. Here's an example that shows the attribute applied to the return value of the function: +The attribute's constructor takes the name of the queue or the topic and subscription. You can also specify the connection's access rights. How to choose the access rights setting is explained in the [Output - configuration](#output---configuration) section. Here's an example that shows the attribute applied to the return value of the function: - ```csharp - [FunctionName("ServiceBusOutput")] - [return: ServiceBus("myqueue")] - public static string Run([HttpTrigger] dynamic input, TraceWriter log) - ``` +```csharp +[FunctionName("ServiceBusOutput")] +[return: ServiceBus("myqueue")] +public static string Run([HttpTrigger] dynamic input, TraceWriter log) +{ + ... 
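+    // (Hypothetical body) Build and return the string to send as the Service Bus queue message.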
+} +``` - You can set the `Connection` property to specify the Service Bus account to use, as shown in the following example: +You can set the `Connection` property to specify the Service Bus account to use, as shown in the following example: - ```csharp - [FunctionName("ServiceBusOutput")] - [return: ServiceBus("myqueue", Connection = "ServiceBusConnection")] - public static string Run([HttpTrigger] dynamic input, TraceWriter log) - ``` +```csharp +[FunctionName("ServiceBusOutput")] +[return: ServiceBus("myqueue", Connection = "ServiceBusConnection")] +public static string Run([HttpTrigger] dynamic input, TraceWriter log) +{ + ... +} +``` -You can use the `ServiceBusAccount` attribute to specify the Service Bus account to use at class, method, or parameter level. For more information, see [Trigger - Attributes for precompiled C#](#trigger---attributes-for-precompiled-c). +For a complete example, see [Output - precompiled C# example](#output---c-example). + +You can use the `ServiceBusAccount` attribute to specify the Service Bus account to use at class, method, or parameter level. For more information, see [Trigger - attributes](#trigger---attributes-for-precompiled-c). ## Output - configuration @@ -428,9 +449,11 @@ The following table explains the binding configuration properties that you set i |**queueName**|**QueueName**|Name of the queue. Set only if sending queue messages, not for a topic. |**topicName**|**TopicName**|Name of the topic to monitor. Set only if sending topic messages, not for a queue.| |**subscriptionName**|**SubscriptionName**|Name of the subscription to monitor. Set only if sending topic messages, not for a queue.| -|**connection**|**Connection**|The name of an app setting that contains the Service Bus connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name. For example, if you set `connection` to "MyServiceBus", the Functions runtime looks for an app setting that is named "AzureWebJobsMyServiceBus." If you leave `connection` empty, the Functions runtime uses the default Service Bus connection string in the app setting that is named "AzureWebJobsServiceBus".

To obtain a connection string, follow the steps shown at [Obtain the management credentials](../service-bus-messaging/service-bus-dotnet-get-started-with-queues.md#obtain-the-management-credentials). The connection string must be for a Service Bus namespace, not limited to a specific queue or topic.
When you're developing locally, app settings go into the values of the [local.settings.json file](functions-run-local.md#local-settings-file).| +|**connection**|**Connection**|The name of an app setting that contains the Service Bus connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name. For example, if you set `connection` to "MyServiceBus", the Functions runtime looks for an app setting that is named "AzureWebJobsMyServiceBus." If you leave `connection` empty, the Functions runtime uses the default Service Bus connection string in the app setting that is named "AzureWebJobsServiceBus".

To obtain a connection string, follow the steps shown at [Obtain the management credentials](../service-bus-messaging/service-bus-dotnet-get-started-with-queues.md#obtain-the-management-credentials). The connection string must be for a Service Bus namespace, not limited to a specific queue or topic.| |**accessRights**|**Access** |Access rights for the connection string. Available values are "manage" and "listen". The default is "manage", which indicates that the connection has **Manage** permissions. If you use a connection string that does not have **Manage** permissions, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights.| +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] + ## Output - usage In C# and C# script, access the queue or topic by using a method parameter such as `out string paramName`. In C# script, `paramName` is the value specified in the `name` property of *function.json*. You can use any of the following parameter types: diff --git a/articles/azure-functions/functions-bindings-storage-blob.md b/articles/azure-functions/functions-bindings-storage-blob.md index ff9b04ea1cd42..eb8170d10ade1 100644 --- a/articles/azure-functions/functions-bindings-storage-blob.md +++ b/articles/azure-functions/functions-bindings-storage-blob.md @@ -1,5 +1,5 @@ --- -title: Azure Functions Blob storage bindings +title: Azure Blob storage bindings for Azure Functions description: Understand how to use Azure Blob storage triggers and bindings in Azure Functions. services: functions documentationcenter: na @@ -18,7 +18,7 @@ ms.date: 10/27/2017 ms.author: glenga --- -# Azure Functions Blob storage bindings +# Azure Blob storage bindings for Azure Functions This article explains how to work with Azure Blob storage bindings in Azure Functions. Azure Functions supports trigger, input, and output bindings for blobs. @@ -27,7 +27,7 @@ This article explains how to work with Azure Blob storage bindings in Azure Func > [!NOTE] > [Blob-only storage accounts](../storage/common/storage-create-storage-account.md#blob-storage-accounts) are not supported. Blob storage triggers and bindings require a general-purpose storage account. -## Blob storage trigger +## Trigger Use a Blob storage trigger to start a function when a new or updated blob is detected. The blob contents are provided as input to the function. @@ -56,7 +56,7 @@ public static void Run([BlobTrigger("samples-workitems/{name}")] Stream myBlob, } ``` -For more information about the `BlobTrigger` attribute, see [Trigger - Attributes for precompiled C#](#trigger---attributes-for-precompiled-c). +For more information about the `BlobTrigger` attribute, see [Trigger - attributes](#trigger---attributes-for-precompiled-c). ### Trigger - C# script example @@ -135,7 +135,7 @@ module.exports = function(context) { }; ``` -## Trigger - Attributes for precompiled C# +## Trigger - attributes For [precompiled C#](functions-dotnet-class-library.md) functions, use the following attributes to configure a blob trigger: @@ -148,6 +148,9 @@ For [precompiled C#](functions-dotnet-class-library.md) functions, use the follo public static void Run( [BlobTrigger("sample-images/{name}")] Stream image, [Blob("sample-images-md/{name}", FileAccess.Write)] Stream imageSmall) + { + .... 
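+    // (Hypothetical body) Read the triggering blob from the 'image' stream and write a
+    // resized copy to the 'imageSmall' output blob.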
+ } ``` You can set the `Connection` property to specify the storage account to use, as shown in the following example: @@ -157,8 +160,13 @@ For [precompiled C#](functions-dotnet-class-library.md) functions, use the follo public static void Run( [BlobTrigger("sample-images/{name}", Connection = "StorageConnectionAppSetting")] Stream image, [Blob("sample-images-md/{name}", FileAccess.Write)] Stream imageSmall) + { + .... + } ``` + For a complete example, see [Trigger - precompiled C# example](#trigger---c-example). + * [StorageAccountAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs), defined in NuGet package [Microsoft.Azure.WebJobs](http://www.nuget.org/packages/Microsoft.Azure.WebJobs) Provides another way to specify the storage account to use. The constructor takes the name of an app setting that contains a storage connection string. The attribute can be applied at the parameter, method, or class level. The following example shows class level and method level: @@ -170,6 +178,9 @@ For [precompiled C#](functions-dotnet-class-library.md) functions, use the follo [FunctionName("BlobTrigger")] [StorageAccount("FunctionLevelStorageAppSetting")] public static void Run( //... + { + .... + } ``` The storage account to use is determined in the following order: @@ -190,7 +201,9 @@ The following table explains the binding configuration properties that you set i |**direction** | n/a | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. Exceptions are noted in the [usage](#trigger---usage) section. | |**name** | n/a | The name of the variable that represents the blob in function code. | |**path** | **BlobPath** |The container to monitor. May be a [blob name pattern](#trigger-blob-name-patterns). | -|**connection** | **Connection** | The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.

The connection string must be for a general-purpose storage account, not a [blob-only storage account](../storage/common/storage-create-storage-account.md#blob-storage-accounts).
When you're developing locally, app settings go into the values of the [local.settings.json file](functions-run-local.md#local-settings-file).| +|**connection** | **Connection** | The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.

The connection string must be for a general-purpose storage account, not a [blob-only storage account](../storage/common/storage-create-storage-account.md#blob-storage-accounts).| + +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ## Trigger - usage @@ -296,7 +309,7 @@ after the blob is created. In addition, [storage logs are created on a "best eff basis. There's no guarantee that all events are captured. Under some conditions, logs may be missed. If you require faster or more reliable blob processing, consider creating a [queue message](../storage/queues/storage-dotnet-how-to-use-queues.md) when you create the blob. Then use a [queue trigger](functions-bindings-storage-queue.md) instead of a blob trigger to process the blob. Another option is to use Event Grid; see the tutorial [Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md). -## Blob storage input & output bindings +## Input & output Use Blob storage input and output bindings to read and write blobs. @@ -435,7 +448,7 @@ module.exports = function(context) { }; ``` -## Input & output - Attributes for precompiled C# +## Input & output - attributes For [precompiled C#](functions-dotnet-class-library.md) functions, use the [BlobAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/BlobAttribute.cs), which is defined in NuGet package [Microsoft.Azure.WebJobs](http://www.nuget.org/packages/Microsoft.Azure.WebJobs). @@ -446,6 +459,9 @@ The attribute's constructor takes the path to the blob and a `FileAccess` parame public static void Run( [BlobTrigger("sample-images/{name}")] Stream image, [Blob("sample-images-md/{name}", FileAccess.Write)] Stream imageSmall) +{ + ... +} ``` You can set the `Connection` property to specify the storage account to use, as shown in the following example: @@ -455,9 +471,14 @@ You can set the `Connection` property to specify the storage account to use, as public static void Run( [BlobTrigger("sample-images/{name}")] Stream image, [Blob("sample-images-md/{name}", FileAccess.Write, Connection = "StorageConnectionAppSetting")] Stream imageSmall) +{ + ... +} ``` -You can use the `StorageAccount` attribute to specify the storage account at class, method, or parameter level. For more information, see [Trigger - Attributes for precompiled C#](#trigger---attributes-for-precompiled-c). +For a complete example, see [Input & output - precompiled C# example](#input--output---c-example). + +You can use the `StorageAccount` attribute to specify the storage account at class, method, or parameter level. For more information, see [Trigger - attributes](#trigger---attributes-for-precompiled-c). ## Input & output - configuration @@ -469,9 +490,11 @@ The following table explains the binding configuration properties that you set i |**direction** | n/a | Must be set to `in` for an input binding or out for an output binding. Exceptions are noted in the [usage](#input--output---usage) section. | |**name** | n/a | The name of the variable that represents the blob in function code. Set to `$return` to reference the function return value.| |**path** |**BlobPath** | The path to the blob. | -|**connection** |**Connection**| The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. 
For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.

The connection string must be for a general-purpose storage account, not a [blob-only storage account](../storage/common/storage-create-storage-account.md#blob-storage-accounts).
When you're developing locally, app settings go into the values of the [local.settings.json file](functions-run-local.md#local-settings-file).| +|**connection** |**Connection**| The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.

The connection string must be for a general-purpose storage account, not a [blob-only storage account](../storage/common/storage-create-storage-account.md#blob-storage-accounts).| |n/a | **Access** | Indicates whether you will be reading or writing. | +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] + ## Input & output - usage In precompiled C# and C# script, access the blob by using a method parameter such as `Stream paramName`. In C# script, `paramName` is the value specified in the `name` property of *function.json*. You can bind to any of the following types: diff --git a/articles/azure-functions/functions-bindings-storage-queue.md b/articles/azure-functions/functions-bindings-storage-queue.md index 81ffd0f48d880..6b6929f33686f 100644 --- a/articles/azure-functions/functions-bindings-storage-queue.md +++ b/articles/azure-functions/functions-bindings-storage-queue.md @@ -1,5 +1,5 @@ --- -title: Azure Functions Queue storage bindings +title: Azure Queue storage bindings for Azure Functions description: Understand how to use the Azure Queue storage trigger and output binding in Azure Functions. services: functions documentationcenter: na @@ -18,13 +18,13 @@ ms.date: 10/23/2017 ms.author: glenga --- -# Azure Functions Queue storage bindings +# Azure Queue storage bindings for Azure Functions This article explains how to work with Azure Queue storage bindings in Azure Functions. Azure Functions supports trigger and output bindings for queues. [!INCLUDE [intro](../../includes/functions-bindings-intro.md)] -## Queue storage trigger +## Trigger Use the queue trigger to start a function when a new item is received on a queue. The queue message is provided as input to the function. @@ -148,7 +148,7 @@ module.exports = function (context) { The [usage](#trigger---usage) section explains `myQueueItem`, which is named by the `name` property in function.json. The [message metadata section](#trigger---message-metadata) explains all of the other variables shown. -## Trigger - Attributes for precompiled C# +## Trigger - attributes For [precompiled C#](functions-dotnet-class-library.md) functions, use the following attributes to configure a queue trigger: @@ -161,6 +161,9 @@ For [precompiled C#](functions-dotnet-class-library.md) functions, use the follo public static void Run( [QueueTrigger("myqueue-items")] string myQueueItem, TraceWriter log) + { + ... + } ``` You can set the `Connection` property to specify the storage account to use, as shown in the following example: @@ -170,8 +173,13 @@ For [precompiled C#](functions-dotnet-class-library.md) functions, use the follo public static void Run( [QueueTrigger("myqueue-items", Connection = "StorageConnectionAppSetting")] string myQueueItem, TraceWriter log) + { + .... + } ``` + For a complete example, see [Trigger - precompiled C# example](#trigger---c-example). + * [StorageAccountAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs), defined in NuGet package [Microsoft.Azure.WebJobs](http://www.nuget.org/packages/Microsoft.Azure.WebJobs) Provides another way to specify the storage account to use. The constructor takes the name of an app setting that contains a storage connection string. The attribute can be applied at the parameter, method, or class level. 
The following example shows class level and method level: @@ -183,6 +191,9 @@ For [precompiled C#](functions-dotnet-class-library.md) functions, use the follo [FunctionName("QueueTrigger")] [StorageAccount("FunctionLevelStorageAppSetting")] public static void Run( //... + { + ... + } ``` The storage account to use is determined in the following order: @@ -203,7 +214,9 @@ The following table explains the binding configuration properties that you set i |**direction**| n/a | In the *function.json* file only. Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. | |**name** | n/a |The name of the variable that represents the queue in function code. | |**queueName** | **QueueName**| The name of the queue to poll. | -|**connection** | **Connection** |The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.
When you're developing locally, app settings go into the values of the [local.settings.json file](functions-run-local.md#local-settings-file).| +|**connection** | **Connection** |The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.| + +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ## Trigger - usage @@ -242,7 +255,7 @@ The [host.json](functions-host-json.md#queues) file contains settings that contr [!INCLUDE [functions-host-json-queues](../../includes/functions-host-json-queues.md)] -## Queue storage output binding +## Output Use the Azure Queue storage output binding to write messages to a queue. @@ -383,7 +396,7 @@ module.exports = function(context) { }; ``` -## Output - Attributes for precompiled C# +## Output - attributes For [precompiled C#](functions-dotnet-class-library.md) functions, use the [QueueAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/QueueAttribute.cs), which is defined in NuGet package [Microsoft.Azure.WebJobs](http://www.nuget.org/packages/Microsoft.Azure.WebJobs). @@ -393,6 +406,9 @@ The attribute applies to an `out` parameter or the return value of the function. [FunctionName("QueueOutput")] [return: Queue("myqueue-items")] public static string Run([HttpTrigger] dynamic input, TraceWriter log) +{ + ... +} ``` You can set the `Connection` property to specify the storage account to use, as shown in the following example: @@ -401,9 +417,14 @@ You can set the `Connection` property to specify the storage account to use, as [FunctionName("QueueOutput")] [return: Queue("myqueue-items, Connection = "StorageConnectionAppSetting")] public static string Run([HttpTrigger] dynamic input, TraceWriter log) +{ + ... +} ``` -You can use the `StorageAccount` attribute to specify the storage account at class, method, or parameter level. For more information, see [Trigger - Attributes for precompiled C#](#trigger---attributes-for-precompiled-c). +For a complete example, see [Output - precompiled C# example](#output---c-example). + +You can use the `StorageAccount` attribute to specify the storage account at class, method, or parameter level. For more information, see [Trigger - attributes](#trigger---attributes-for-precompiled-c). ## Output - configuration @@ -415,7 +436,9 @@ The following table explains the binding configuration properties that you set i |**direction** | n/a | Must be set to `out`. This property is set automatically when you create the trigger in the Azure portal. | |**name** | n/a | The name of the variable that represents the queue in function code. Set to `$return` to reference the function return value.| |**queueName** |**QueueName** | The name of the queue. | -|**connection** | **Connection** |The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." 
If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.
When you're developing locally, app settings go into the values of the [local.settings.json file](functions-run-local.md#local-settings-file).| +|**connection** | **Connection** |The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.| + +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ## Output - usage diff --git a/articles/azure-functions/functions-bindings-storage-table.md b/articles/azure-functions/functions-bindings-storage-table.md index 15d1f64115d49..13a9f09c77fe0 100644 --- a/articles/azure-functions/functions-bindings-storage-table.md +++ b/articles/azure-functions/functions-bindings-storage-table.md @@ -1,9 +1,9 @@ --- -title: Azure Functions Table storage bindings +title: Azure Table storage bindings for Azure Functions description: Understand how to use Azure Table storage bindings in Azure Functions. services: functions documentationcenter: na -author: christopheranderson +author: tdykstra manager: cfowler editor: '' tags: '' @@ -15,15 +15,15 @@ ms.topic: reference ms.tgt_pltfrm: multiple ms.workload: na ms.date: 11/08/2017 -ms.author: chrande +ms.author: tdykstra --- -# Azure Functions Table storage bindings +# Azure Table storage bindings for Azure Functions This article explains how to work with Azure Table storage bindings in Azure Functions. Azure Functions supports input and output bindings for Azure Table storage. [!INCLUDE [intro](../../includes/functions-bindings-intro.md)] -## Table storage input binding +## Input Use the Azure Table storage input binding to read a table in an Azure Storage account. @@ -280,7 +280,7 @@ module.exports = function (context, myQueueItem) { }; ``` -## Input - Attributes for precompiled C# +## Input - attributes For [precompiled C#](functions-dotnet-class-library.md) functions, use the following attributes to configure a table input binding: @@ -294,6 +294,9 @@ For [precompiled C#](functions-dotnet-class-library.md) functions, use the follo [QueueTrigger("table-items")] string input, [Table("MyTable", "Http", "{queueTrigger}")] MyPoco poco, TraceWriter log) + { + ... + } ``` You can set the `Connection` property to specify the storage account to use, as shown in the following example: @@ -304,8 +307,13 @@ For [precompiled C#](functions-dotnet-class-library.md) functions, use the follo [QueueTrigger("table-items")] string input, [Table("MyTable", "Http", "{queueTrigger}", Connection = "StorageConnectionAppSetting")] MyPoco poco, TraceWriter log) + { + ... + } ``` + For a complete example, see [Input - precompiled C# example](#input---c-example). + * [StorageAccountAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs), defined in NuGet package [Microsoft.Azure.WebJobs](http://www.nuget.org/packages/Microsoft.Azure.WebJobs) Provides another way to specify the storage account to use. The constructor takes the name of an app setting that contains a storage connection string. The attribute can be applied at the parameter, method, or class level. 
The following example shows class level and method level: @@ -317,6 +325,9 @@ For [precompiled C#](functions-dotnet-class-library.md) functions, use the follo [FunctionName("TableInput")] [StorageAccount("FunctionLevelStorageAppSetting")] public static void Run( //... + { + ... + } ``` The storage account to use is determined in the following order: @@ -341,7 +352,9 @@ The following table explains the binding configuration properties that you set i |**rowKey** |**RowKey** | Optional. The row key of the table entity to read. See the [usage](#input---usage) section for guidance on how to use this property.| |**take** |**Take** | Optional. The maximum number of entities to read in JavaScript. See the [usage](#input---usage) section for guidance on how to use this property.| |**filter** |**Filter** | Optional. An OData filter expression for table input in JavaScript. See the [usage](#input---usage) section for guidance on how to use this property.| -|**connection** |**Connection** | The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.
When you're developing locally, app settings go into the values of the [local.settings.json file](functions-run-local.md#local-settings-file).| +|**connection** |**Connection** | The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.| + +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ## Input - usage @@ -364,7 +377,7 @@ The Table storage input binding supports the following scenarios: Set the `filter` and `take` properties. Don't set `partitionKey` or `rowKey`. Access the input table entity (or entities) using `context.bindings.`. The deserialized objects have `RowKey` and `PartitionKey` properties. -## Table storage output binding +## Output Use an Azure Table storage output binding to write entities to a table in an Azure Storage account. @@ -550,9 +563,9 @@ module.exports = function (context) { }; ``` -## Output - Attributes for precompiled C# +## Output - attributes - For [precompiled C#](functions-dotnet-class-library.md) functions, use the [TableAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/TableAttribute.cs), which is defined in NuGet package [Microsoft.Azure.WebJobs](http://www.nuget.org/packages/Microsoft.Azure.WebJobs). +For [precompiled C#](functions-dotnet-class-library.md) functions, use the [TableAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/TableAttribute.cs), which is defined in NuGet package [Microsoft.Azure.WebJobs](http://www.nuget.org/packages/Microsoft.Azure.WebJobs). The attribute's constructor takes the table name. It can be used on an `out` parameter or on the return value of the function, as shown in the following example: @@ -562,6 +575,9 @@ The attribute's constructor takes the table name. It can be used on an `out` par public static MyPoco TableOutput( [HttpTrigger] dynamic input, TraceWriter log) +{ + ... +} ``` You can set the `Connection` property to specify the storage account to use, as shown in the following example: @@ -572,9 +588,14 @@ You can set the `Connection` property to specify the storage account to use, as public static MyPoco TableOutput( [HttpTrigger] dynamic input, TraceWriter log) +{ + ... +} ``` -You can use the `StorageAccount` attribute to specify the storage account at class, method, or parameter level. For more information, see [Input - Attributes for precompiled C#](#input---attributes-for-precompiled-c). +For a complete example, see [Output - precompiled C# example](#output---c-example). + +You can use the `StorageAccount` attribute to specify the storage account at class, method, or parameter level. For more information, see [Input - attributes](#input---attributes-for-precompiled-c). ## Output - configuration @@ -588,7 +609,9 @@ The following table explains the binding configuration properties that you set i |**tableName** |**TableName** | The name of the table.| |**partitionKey** |**PartitionKey** | The partition key of the table entity to write. 
See the [usage section](#output---usage) for guidance on how to use this property.| |**rowKey** |**RowKey** | The row key of the table entity to write. See the [usage section](#output---usage) for guidance on how to use this property.| -|**connection** |**Connection** | The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.
When you're developing locally, app settings go into the values of the [local.settings.json file](functions-run-local.md#local-settings-file).| +|**connection** |**Connection** | The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.| + +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ## Output - usage diff --git a/articles/azure-functions/functions-bindings-timer.md b/articles/azure-functions/functions-bindings-timer.md index a47ca5d282efc..b57343b71fd8a 100644 --- a/articles/azure-functions/functions-bindings-timer.md +++ b/articles/azure-functions/functions-bindings-timer.md @@ -1,9 +1,9 @@ --- -title: Azure Functions timer trigger +title: Timer trigger for Azure Functions description: Understand how to use timer triggers in Azure Functions. services: functions documentationcenter: na -author: christopheranderson +author: tdykstra manager: cfowler editor: '' tags: '' @@ -16,12 +16,12 @@ ms.topic: reference ms.tgt_pltfrm: multiple ms.workload: na ms.date: 02/27/2017 -ms.author: glenga +ms.author: tdykstra ms.custom: --- -# Azure Functions timer trigger +# Timer trigger for Azure Functions This article explains how to work with timer triggers in Azure Functions. A timer trigger lets you run a function on a schedule. @@ -133,7 +133,7 @@ module.exports = function (context, myTimer) { }; ``` -## Attributes for precompiled C# +## Attributes For [precompiled C#](functions-dotnet-class-library.md) functions, use the [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs), defined in NuGet package [Microsoft.Azure.WebJobs.Extensions](http://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions). @@ -142,10 +142,15 @@ The attribute's constructor takes a CRON expression, as shown in the following e ```csharp [FunctionName("TimerTriggerCSharp")] public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, TraceWriter log) +{ + ... +} ``` You can specify a `TimeSpan` instead of a CRON expression if your function app runs on an App Service plan (not a Consumption plan). +For a complete example, see [Precompiled C# example](#c-example). + ## Configuration The following table explains the binding configuration properties that you set in the *function.json* file and the `TimerTrigger` attribute. @@ -155,7 +160,9 @@ The following table explains the binding configuration properties that you set i |**type** | n/a | Must be set to "timerTrigger". This property is set automatically when you create the trigger in the Azure portal.| |**direction** | n/a | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. | |**name** | n/a | The name of the variable that represents the timer object in function code. | -|**schedule**|**ScheduleExpression**|On the Consumption plan, you can define schedules with a CRON expression. If you're using an App Service Plan, you can also use a `TimeSpan` string. The following sections explain CRON expressions. 
You can put the schedule expression in an app setting and set this property to a value wrapped in **%** signs, as in this example: "%NameOfAppSettingWithCRONExpression%". When you're developing locally, app settings go into the values of the [local.settings.json file](functions-run-local.md#local-settings-file).| +|**schedule**|**ScheduleExpression**|On the Consumption plan, you can define schedules with a CRON expression. If you're using an App Service Plan, you can also use a `TimeSpan` string. The following sections explain CRON expressions. You can put the schedule expression in an app setting and set this property to a value wrapped in **%** signs, as in this example: "%NameOfAppSettingWithCRONExpression%". | + +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ### CRON format diff --git a/articles/azure-functions/functions-bindings-twilio.md b/articles/azure-functions/functions-bindings-twilio.md index f2717b26dcfbc..f3a191f85a79a 100644 --- a/articles/azure-functions/functions-bindings-twilio.md +++ b/articles/azure-functions/functions-bindings-twilio.md @@ -1,5 +1,5 @@ --- -title: Azure Functions Twilio binding | Microsoft Docs +title: Azure Functions Twilio binding description: Understand how to use Twilio bindings with Azure Functions. services: functions documentationcenter: na @@ -9,40 +9,60 @@ editor: '' tags: '' keywords: azure functions, functions, event processing, dynamic compute, serverless architecture -ms.assetid: a60263aa-3de9-4e1b-a2bb-0b52e70d559b ms.service: functions ms.devlang: multiple ms.topic: reference ms.tgt_pltfrm: multiple ms.workload: na -ms.date: 10/20/2016 +ms.date: 11/21/2017 ms.author: wesmc - ms.custom: H1Hack27Feb2017 - --- -# Send SMS messages from Azure Functions using the Twilio output binding -[!INCLUDE [functions-selector-bindings](../../includes/functions-selector-bindings.md)] -This article explains how to configure and use Twilio bindings with Azure Functions. +# Twilio binding for Azure Functions + +This article explains how to send text messages by using [Twilio](https://www.twilio.com/) bindings in Azure Functions. Azure Functions supports output bindings for Twilio. [!INCLUDE [intro](../../includes/functions-bindings-intro.md)] -Azure Functions supports Twilio output bindings to enable your functions to send SMS text messages with a few lines of code and a [Twilio](https://www.twilio.com/) account. +## Example + +See the language-specific example: -## function.json for the Twilio output binding -The function.json file provides the following properties: +* [Precompiled C#](#c-example) +* [C# script](#c-script-example) +* [JavaScript](#javascript-example) -|Property |Description | -|---------|---------| -|**name**| Variable name used in function code for the Twilio SMS text message. | -|**type**| must be set to `twilioSms`.| -|**accountSid**| This value must be set to the name of an App Setting that holds your Twilio Account Sid.| -|**authToken**| This value must be set to the name of an App Setting that holds your Twilio authentication token.| -|**to**| This value is set to the phone number that the SMS text is sent to.| -|**from**| This value is set to the phone number that the SMS text is sent from.| -|**direction**| must be set to `out`.| -|**body**| This value can be used to hard code the SMS text message if you don't need to set it dynamically in the code for your function. 
| +### C# example + +The following example shows a [precompiled C# function](functions-dotnet-class-library.md) that sends a text message when triggered by a queue message. + +```cs +[FunctionName("QueueTwilio")] +[return: TwilioSms(AccountSidSetting = "TwilioAccountSid", AuthTokenSetting = "TwilioAuthToken", From = "+1425XXXXXXX" )] +public static SMSMessage Run( + [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] JObject order, + TraceWriter log) +{ + log.Info($"C# Queue trigger function processed: {order}"); + + var message = new SMSMessage() + { + Body = $"Hello {order["name"]}, thanks for your order!", + To = order["mobileNumber"].ToString() + }; + + return message; +} +``` + +This example uses the `TwilioSms` attribute with the method return value. An alternative is to use the attribute with an `out SMSMessage` parameter or an `ICollector` or `IAsyncCollector` parameter. + +### C# script example + +The following example shows a Twilio output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses an `out` parameter to send a text message. + +Here's binding data in the *function.json* file: Example function.json: @@ -59,10 +79,7 @@ Example function.json: } ``` - -## Example C# queue trigger with Twilio output binding -#### Synchronous -This synchronous example code for an Azure Storage queue trigger uses an out parameter to send a text message to a customer who placed an order. +Here's C# script code: ```cs #r "Newtonsoft.Json" @@ -93,8 +110,7 @@ public static void Run(string myQueueItem, out SMSMessage message, TraceWriter } ``` -#### Asynchronous -This asynchronous example code for an Azure Storage queue trigger sends a text message to a customer who placed an order. +You can't use out parameters in asynchronous code. Here's an asynchronous C# script code example: ```cs #r "Newtonsoft.Json" @@ -127,8 +143,28 @@ public static async Task Run(string myQueueItem, IAsyncCollector mes } ``` -## Example Node.js queue trigger with Twilio output binding -This Node.js example sends a text message to a customer who placed an order. +### JavaScript example + +The following example shows a Twilio output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. + +Here's binding data in the *function.json* file: + +Example function.json: + +```json +{ + "type": "twilioSms", + "name": "message", + "accountSid": "TwilioAccountSid", + "authToken": "TwilioAuthToken", + "to": "+1704XXXXXXX", + "from": "+1425XXXXXXX", + "direction": "out", + "body": "Azure Functions Testing" +} +``` + +Here's the JavaScript code: ```javascript module.exports = function (context, myQueueItem) { @@ -154,6 +190,48 @@ module.exports = function (context, myQueueItem) { }; ``` +## Attributes + +For [precompiled C#](functions-dotnet-class-library.md) functions, use the [TwilioSms](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.Twilio/TwilioSMSAttribute.cs) attribute, which is defined in NuGet package [Microsoft.Azure.WebJobs.Extensions.Twilio](http://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Twilio). + +For information about attribute properties that you can configure, see [Configuration](#configuration). 
Here's a `TwilioSms` attribute example in a method signature: + +```csharp +[FunctionName("QueueTwilio")] +[return: TwilioSms( + AccountSidSetting = "TwilioAccountSid", + AuthTokenSetting = "TwilioAuthToken", + From = "+1425XXXXXXX" )] +public static SMSMessage Run( + [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] JObject order, + TraceWriter log) +{ + ... +} + ``` + +For a complete example, see [Precompiled C# example](#c-example). + +## Configuration + +The following table explains the binding configuration properties that you set in the *function.json* file and the `TwilioSms` attribute. + +|function.json property | Attribute property |Description| +|---------|---------|----------------------| +|**type**|| must be set to `twilioSms`.| +|**direction**|| must be set to `out`.| +|**name**|| Variable name used in function code for the Twilio SMS text message. | +|**accountSid**|**AccountSid**| This value must be set to the name of an app setting that holds your Twilio Account Sid.| +|**authToken**|**AuthToken**| This value must be set to the name of an app setting that holds your Twilio authentication token.| +|**to**|**To**| This value is set to the phone number that the SMS text is sent to.| +|**from**|**From**| This value is set to the phone number that the SMS text is sent from.| +|**body**|**Body**| This value can be used to hard code the SMS text message if you don't need to set it dynamically in the code for your function. | + +[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] + ## Next steps -[!INCLUDE [next steps](../../includes/functions-bindings-next-steps.md)] + +> [!div class="nextstepaction"] +> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md) + diff --git a/articles/azure-functions/functions-create-your-first-function-visual-studio.md b/articles/azure-functions/functions-create-your-first-function-visual-studio.md index c6f63ad32633e..24d7ddd934304 100644 --- a/articles/azure-functions/functions-create-your-first-function-visual-studio.md +++ b/articles/azure-functions/functions-create-your-first-function-visual-studio.md @@ -3,7 +3,7 @@ title: Create your first function in Azure using Visual Studio | Microsoft Docs description: Create and publish a simple HTTP triggered function to Azure by using Azure Functions Tools for Visual Studio. services: functions documentationcenter: na -author: rachelappel +author: ggailey777 manager: cfowler editor: '' tags: '' diff --git a/articles/azure-functions/functions-dotnet-class-library.md b/articles/azure-functions/functions-dotnet-class-library.md index 4ad6a944d8fb2..72d094955a1a9 100644 --- a/articles/azure-functions/functions-dotnet-class-library.md +++ b/articles/azure-functions/functions-dotnet-class-library.md @@ -426,10 +426,8 @@ public static SMSMessage Run([QueueTrigger("myqueue-items", Connection = "AzureW ## Next steps -For more information on using Azure Functions in C# scripting, see [Azure Functions C\# script developer reference](functions-reference-csharp.md). 
- -[!INCLUDE [next steps](../../includes/functions-bindings-next-steps.md)] - +> [!div class="nextstepaction"] +> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md) [Microsoft.Azure.WebJobs]: http://www.nuget.org/packages/Microsoft.Azure.WebJobs/2.1.0-beta1 diff --git a/articles/azure-functions/functions-host-json.md b/articles/azure-functions/functions-host-json.md index 2d9e617d9f73e..fb489e5c7e32d 100644 --- a/articles/azure-functions/functions-host-json.md +++ b/articles/azure-functions/functions-host-json.md @@ -161,23 +161,7 @@ Indicates the timeout duration for all functions. In Consumption plans, the vali Configuration settings for [http triggers and bindings](functions-bindings-http-webhook.md). -```json -{ - "http": { - "routePrefix": "api", - "maxOutstandingRequests": 20, - "maxConcurrentRequests": - "dynamicThrottlesEnabled": false - } -} -``` - -|Property |Default | Description | -|---------|---------|---------| -|routePrefix|api|The route prefix that applies to all routes. Use an empty string to remove the default prefix. | -|maxOutstandingRequests|-1|The maximum number of outstanding requests that will be held at any given time (-1 means unbounded). The limit includes requests that are queued but have not started executing, as well as any in-progress executions. Any incoming requests over this limit are rejected with a 429 "Too Busy" response. Callers can use that response to employ time-based retry strategies. This setting controls only queuing that occurs within the job host execution path. Other queues, such as the ASP.NET request queue, are unaffected by this setting. | -|maxConcurrentRequests|-1|The maximum number of HTTP functions that will be executed in parallel (-1 means unbounded). For example, you could set a limit if your HTTP functions use too many system resources when concurrency is high. Or if your functions make outbound requests to a third-party service, those calls might need to be rate-limited.| -|dynamicThrottlesEnabled|false|Causes the request processing pipeline to periodically check system performance counters. Counters include connections, threads, processes, memory, and cpu. If any of the counters are over a built-in threshold (80%), requests are rejected with a 429 "Too Busy" response until the counter(s) return to normal levels.| +[!INCLUDE [functions-host-json-http](../../includes/functions-host-json-http.md)] ## id diff --git a/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md b/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md index d3124c33d252d..30b9482b1afbd 100644 --- a/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md +++ b/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md @@ -3,7 +3,7 @@ title: Configure Azure Function App Settings | Microsoft Docs description: Learn how to configure Azure function app settings. 
services: '' documentationcenter: .net -author: rachelappel +author: ggailey777 manager: cfowler editor: '' diff --git a/articles/azure-functions/functions-how-to-use-sendgrid.md b/articles/azure-functions/functions-how-to-use-sendgrid.md index 14f9a12e58d96..212f16e8f407a 100644 --- a/articles/azure-functions/functions-how-to-use-sendgrid.md +++ b/articles/azure-functions/functions-how-to-use-sendgrid.md @@ -3,7 +3,7 @@ title: How to use SendGrid in Azure Functions | Microsoft Docs description: Shows how to use SendGrid in Azure Functions services: functions documentationcenter: na -author: rachelappel +author: ggailey777 manager: cfowler ms.service: functions @@ -12,7 +12,7 @@ ms.topic: article ms.tgt_pltfrm: multiple ms.workload: na ms.date: 01/31/2017 -ms.author: rachelap +ms.author: glenga --- # How to use SendGrid in Azure Functions diff --git a/articles/azure-functions/functions-reference-csharp.md b/articles/azure-functions/functions-reference-csharp.md index 1b44889608981..da96831763958 100644 --- a/articles/azure-functions/functions-reference-csharp.md +++ b/articles/azure-functions/functions-reference-csharp.md @@ -378,7 +378,7 @@ Mobile Apps table output binding supports but you can only use [ICollector](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/ICollector.cs) or [IAsyncCollector](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/IAsyncCollector.cs) for `T`. -The following example code creates a [Storage blob output binding](functions-bindings-storage-blob.md#blob-storage-input--output-bindings) +The following example code creates a [Storage blob output binding](functions-bindings-storage-blob.md#input--output) with blob path that's defined at run time, then writes a string to the blob. ```cs diff --git a/articles/azure-functions/functions-reference-node.md b/articles/azure-functions/functions-reference-node.md index 7bcabe30cc3da..000ce0363477b 100644 --- a/articles/azure-functions/functions-reference-node.md +++ b/articles/azure-functions/functions-reference-node.md @@ -3,7 +3,7 @@ title: JavaScript developer reference for Azure Functions | Microsoft Docs description: Understand how to develop functions by using JavaScript. services: functions documentationcenter: na -author: christopheranderson +author: tdykstra manager: cfowler editor: '' tags: '' @@ -16,7 +16,7 @@ ms.topic: reference ms.tgt_pltfrm: multiple ms.workload: na ms.date: 05/25/2017 -ms.author: glenga +ms.author: tdykstra --- # Azure Functions JavaScript developer guide diff --git a/articles/azure-functions/functions-reference.md b/articles/azure-functions/functions-reference.md index 27d9268968d17..5fb066ad63da1 100644 --- a/articles/azure-functions/functions-reference.md +++ b/articles/azure-functions/functions-reference.md @@ -3,7 +3,7 @@ title: Guidance for developing Azure Functions | Microsoft Docs description: Learn the Azure Functions concepts and techniques that you need to develop functions in Azure, across all programming languages and bindings. 
services: functions documentationcenter: na -author: christopheranderson +author: tdykstra manager: cfowler editor: '' tags: '' @@ -16,7 +16,7 @@ ms.topic: reference ms.tgt_pltfrm: multiple ms.workload: na ms.date: 10/12/2017 -ms.author: chrande +ms.author: tdykstra --- # Azure Functions developers guide diff --git a/articles/azure-functions/functions-scenario-database-table-cleanup.md b/articles/azure-functions/functions-scenario-database-table-cleanup.md index 522c589c6cf88..5e834ec8e525c 100644 --- a/articles/azure-functions/functions-scenario-database-table-cleanup.md +++ b/articles/azure-functions/functions-scenario-database-table-cleanup.md @@ -19,7 +19,9 @@ ms.author: glenga --- # Use Azure Functions to connect to an Azure SQL Database -This topic shows you how to use Azure Functions to create a scheduled job that cleans up rows in a table in an Azure SQL Database. The new C# function is created based on a pre-defined timer trigger template in the Azure portal. To support this scenario, you must also set a database connection string as a setting in the function app. This scenario uses a bulk operation against the database. To have your function process individual CRUD operations in a Mobile Apps table, you should instead use [Mobile Apps bindings](functions-bindings-mobile-apps.md). +This topic shows you how to use Azure Functions to create a scheduled job that cleans up rows in a table in an Azure SQL Database. The new C# function is created based on a pre-defined timer trigger template in the Azure portal. To support this scenario, you must also set a database connection string as an app setting in the function app. This scenario uses a bulk operation against the database. + +To have your function process individual create, read, update, and delete (CRUD) operations in a Mobile Apps table, you should instead use [Mobile Apps bindings](functions-bindings-mobile-apps.md). ## Prerequisites @@ -56,7 +58,7 @@ A function app hosts the execution of your functions in Azure. It is a best prac | Setting       | Suggested value | Description             | | ------------ | ------------------ | --------------------- | | **Name** |  sqldb_connection  | Used to access the stored connection string in your function code.    | - | **Value** | Copied string | Past the connection string you copied in the previous section. | + | **Value** | Copied string | Paste the connection string you copied in the previous section and replace `{your_username}` and `{your_password}` placeholders with real values. | | **Type** | SQL Database | Use the default SQL Database connection. | 3. Click **Save**. @@ -81,7 +83,7 @@ Now, you can add the C# function code that connects to your SQL Database. using System.Threading.Tasks; ``` -4. Replace the existing **Run** function with the following code: +4. Replace the existing `Run` function with the following code: ```cs public static async Task Run(TimerInfo myTimer, TraceWriter log) { @@ -102,7 +104,7 @@ Now, you can add the C# function code that connects to your SQL Database. } ``` - This sample command updates the **Status** column based on the ship date. It should update 32 rows of data. + This sample command updates the `Status` column based on the ship date. It should update 32 rows of data. 5. Click **Save**, watch the **Logs** windows for the next function execution, then note the number of rows updated in the **SalesOrderHeader** table. 
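The database-cleanup walkthrough above configures a `sqldb_connection` connection string and then replaces the timer-triggered `Run` function with code that bulk-updates the **SalesOrderHeader** table. As a rough sketch of what such a function could look like in portal C# script — assuming the `sqldb_connection` setting added earlier and an AdventureWorksLT-style `SalesLT.SalesOrderHeader` table; the status value and the exact UPDATE statement are illustrative, not the article's own sample:

```csharp
#r "System.Configuration"
#r "System.Data"

using System.Configuration;
using System.Data.SqlClient;
using System.Threading.Tasks;

public static async Task Run(TimerInfo myTimer, TraceWriter log)
{
    // Read the connection string from the "sqldb_connection" setting added in the portal.
    var str = ConfigurationManager.ConnectionStrings["sqldb_connection"].ConnectionString;

    using (var conn = new SqlConnection(str))
    {
        conn.Open();

        // Illustrative bulk cleanup: mark already-shipped orders with an assumed status value.
        var text = "UPDATE SalesLT.SalesOrderHeader SET [Status] = 5 WHERE ShipDate < GetDate();";

        using (var cmd = new SqlCommand(text, conn))
        {
            // Run the set-based UPDATE and log how many rows were affected.
            var rows = await cmd.ExecuteNonQueryAsync();
            log.Info($"{rows} rows were updated.");
        }
    }
}
```

Because the work is a single set-based UPDATE, a plain ADO.NET command is enough here; as the scenario notes, per-row create, read, update, and delete operations are better handled with the Mobile Apps bindings.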
diff --git a/articles/azure-functions/functions-triggers-bindings.md b/articles/azure-functions/functions-triggers-bindings.md index 3006d42a8d73b..e0854f1517bf4 100644 --- a/articles/azure-functions/functions-triggers-bindings.md +++ b/articles/azure-functions/functions-triggers-bindings.md @@ -1,5 +1,5 @@ --- -title: Work with triggers and bindings in Azure Functions | Microsoft Docs +title: Work with triggers and bindings in Azure Functions description: Learn how to use triggers and bindings in Azure Functions to connect your code execution to online events and cloud-based services. services: functions documentationcenter: na @@ -9,15 +9,13 @@ editor: '' tags: '' keywords: azure functions, functions, event processing, webhooks, dynamic compute, serverless architecture -ms.assetid: cbc7460a-4d8a-423f-a63e-1cd33fef7252 ms.service: functions ms.devlang: multiple ms.topic: reference ms.tgt_pltfrm: multiple ms.workload: na -ms.date: 05/30/2017 +ms.date: 11/21/2017 ms.author: glenga - --- # Azure Functions triggers and bindings concepts @@ -25,7 +23,7 @@ Azure Functions allows you to write code in response to events in Azure and othe ## Overview -Triggers and bindings are a declarative way to define how a function is invoked and what data it works with. A *trigger* defines how a function is invoked. A function must have exactly one trigger. Triggers have associated data, which is usually the payload that triggered the function. +Triggers and bindings are a declarative way to define how a function is invoked and what data it works with. A *trigger* defines how a function is invoked. A function must have exactly one trigger. Triggers have associated data, which is usually the payload that triggered the function. Input and output *bindings* provide a declarative way to connect to data from within your code. Similar to triggers, you specify connection strings and other properties in your function configuration. Bindings are optional and a function can have multiple input and output bindings. @@ -33,11 +31,13 @@ Using triggers and bindings, you can write code that is more generic and does no You can configure triggers and bindings in the **Integrate** tab in the Azure Functions portal. Under the covers, the UI modifies a file called *function.json* file in the function directory. You can edit this file by changing to the **Advanced editor**. -The following table shows the triggers and bindings that are supported with Azure Functions. +## Supported bindings [!INCLUDE [Full bindings table](../../includes/functions-bindings.md)] -### Example: queue trigger and table output binding +For information about which bindings are in preview or are approved for production use, see [Supported languages](supported-languages.md). + +## Example: queue trigger and table output binding Suppose you want to write a new row to Azure Table Storage whenever a new message appears in Azure Queue Storage. This scenario can be implemented using an Azure Queue trigger and an Azure Table Storage output binding. @@ -124,9 +124,9 @@ To view and edit the contents of *function.json* in the Azure portal, click the For more code examples and details on integrating with Azure Storage, see [Azure Functions triggers and bindings for Azure Storage](functions-bindings-storage.md). 
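To make the queue-to-table scenario described above concrete, here is a minimal precompiled C# sketch that pairs the `QueueTrigger` and `Table` attributes covered elsewhere in this change. The `outTable` table name, the `Person` type, and the key values are illustrative assumptions rather than the article's own sample, which expresses the same bindings through *function.json*.

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

// Illustrative entity type; PartitionKey and RowKey are required for Table storage rows.
public class Person
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Name { get; set; }
}

public static class QueueToTable
{
    [FunctionName("QueueToTable")]
    public static void Run(
        [QueueTrigger("myqueue-items")] string myQueueItem,  // trigger: runs for each new queue message
        [Table("outTable")] ICollector<Person> outputTable,   // output binding: rows added here are written to the table
        TraceWriter log)
    {
        // Write one table row per queue message; the key values are placeholders.
        outputTable.Add(new Person
        {
            PartitionKey = "QueueMessages",
            RowKey = Guid.NewGuid().ToString(),
            Name = myQueueItem
        });

        log.Info($"Added a table row for queue message: {myQueueItem}");
    }
}
```

The function code only works with `myQueueItem` and `outputTable`; which queue and table those names point to stays entirely in configuration, which is the point the overview makes about keeping code generic.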
-### Binding direction +## Binding direction -All triggers and bindings have a `direction` property: +All triggers and bindings have a `direction` property in the *function.json* file: - For triggers, the direction is always `in` - Input and output bindings use `in` and `out` @@ -241,7 +241,7 @@ For example, an Azure Storage Queue trigger supports the following properties: Details of metadata properties for each trigger are described in the corresponding reference topic. Documentation is also available in the **Integrate** tab of the portal, in the **Documentation** section below the binding configuration area. -For example, since blob triggers have some delays, you can use a queue trigger to run your function (see [Blob Storage Trigger](functions-bindings-storage-blob.md#blob-storage-trigger)). The queue message would contain the blob filename to trigger on. Using the `queueTrigger` metadata property, you can specify this behavior all in your configuration, rather than your code. +For example, since blob triggers have some delays, you can use a queue trigger to run your function (see [Blob Storage Trigger](functions-bindings-storage-blob.md#trigger)). The queue message would contain the blob filename to trigger on. Using the `queueTrigger` metadata property, you can specify this behavior all in your configuration, rather than your code. ```json "bindings": [ diff --git a/articles/azure-functions/functions-twitter-email.md b/articles/azure-functions/functions-twitter-email.md index 416476d1725fd..4edb1a65c2dcd 100644 --- a/articles/azure-functions/functions-twitter-email.md +++ b/articles/azure-functions/functions-twitter-email.md @@ -30,7 +30,7 @@ This tutorial shows you how to use Functions with Logic Apps and Microsoft Cogni In this tutorial, you learn how to: > [!div class="checklist"] -> * Create a Cognitive Services account. +> * Create a Cognitive Services API Resource. > * Create a function that categorizes tweet sentiment. > * Create a logic app that connects to Twitter. > * Add sentiment detection to the logic app. @@ -44,29 +44,28 @@ In this tutorial, you learn how to: + This topic uses as its starting point the resources created in [Create your first function from the Azure portal](functions-create-first-azure-function.md). If you haven't already done so, complete these steps now to create your function app. -## Create a Cognitive Services account +## Create a Cognitive Services resource -A Cognitive Services account is required to detect the sentiment of tweets being monitored. +The Cognitive Services APIs are available in Azure as individual resources. Use the Text Analytics API to detect the sentiment of the tweets being monitored. 1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Click the **New** button found on the upper left-hand corner of the Azure portal. -3. Click **Data + Analytics** > **Cognitive Services**. Then, use the settings as specified in the table, accept the terms, and check **Pin to dashboard**. +3. Click **AI + Analytics** > **Text Analytics API**. Then, use the settings as specified in the table, accept the terms, and check **Pin to dashboard**. - ![Create Cognitive account page](media/functions-twitter-email/cog_svcs_account.png) + ![Create Cognitive resource page](media/functions-twitter-email/cog_svcs_resource.png) | Setting | Suggested value | Description | | --- | --- | --- | | **Name** | MyCognitiveServicesAccnt | Choose a unique account name. | - | **API type** | Text Analytics API | API used to analyze text. 
| - | **Location** | West US | Currently, only **West US** is available for text analytics. | + | **Location** | West US | Use the location nearest you. | | **Pricing tier** | F0 | Start with the lowest tier. If you run out of calls, scale to a higher tier.| | **Resource group** | myResourceGroup | Use the same resource group for all services in this tutorial.| -4. Click **Create** to create your account. After the account is created, click your new Cognitive Services account pinned to the dashboard. +4. Click **Create** to create your resource. After it is created, select your new Cognitive Services resource pinned to the dashboard. -5. In the account, click **Keys**, and then copy the value of **Key 1** and save it. You use this key to connect the logic app to your Cognitive Services account. +5. In the left navigation column, click **Keys**, and then copy the value of **Key 1** and save it. You use this key to connect the logic app to your Cognitive Services API. ![Keys](media/functions-twitter-email/keys.png) @@ -74,13 +73,26 @@ A Cognitive Services account is required to detect the sentiment of tweets being Functions provides a great way to offload processing tasks in a logic apps workflow. This tutorial uses an HTTP triggered function to process tweet sentiment scores from Cognitive Services and return a category value. -1. Expand your function app, click the **+** button next to **Functions**, click the **HTTPTrigger** template. Type `CategorizeSentiment` for the function **Name** and click **Create**. +1. Click the **New** button and select **Compute** > **Function App**. Then, use the settings as specified in the table below. Accept the terms, then select **Pin to dashboard**. + + ![Create Azure Function App](media/functions-twitter-email/create_fun.png) + + | Setting | Suggested value | Description | + | --- | --- | --- | + | **Name** | MyFunctionApp | Choose a unique account name. | + | **Resource group** | myResourceGroup | Use the same resource group for all services in this tutorial.| + | **Hosting plan** | Consumption Plan | This defines your cost and usage allocations. + | **Location** | West US | Use the location nearest you. | + | **Storage** | Create New | Automatically generates a new storage account.| + | **Pricing tier** | F0 | Start with the lowest tier. If you run out of calls, scale to a higher tier.| + +2. Select your functions app from your dashboard and expand your function, click the **+** button next to **Functions**, click the **Webhook + API**, **CSharp**, then **Create This Function**. This will create a function using the HTTPTrigger C# template. Your code will appear in a new window as `run.csx` ![Function Apps blade, Functions +](media/functions-twitter-email/add_fun.png) -2. Replace the contents of the run.csx file with the following code, then click **Save**: +3. Replace the contents of the `run.csx` file with the following code, then click **Save**: - ```c# + ```csharp using System.Net; public static async Task Run(HttpRequestMessage req, TraceWriter log) @@ -107,11 +119,11 @@ Functions provides a great way to offload processing tasks in a logic apps workf ``` This function code returns a color category based on the sentiment score received in the request. -3. To test the function, click **Test** at the far right to expand the Test tab. Type a value of `0.2` for the **Request body**, and then click **Run**. A value of **RED** is returned in the body of the response. +4. To test the function, click **Test** at the far right to expand the Test tab. 
Type a value of `0.2` for the **Request body**, and then click **Run**. A value of **RED** is returned in the body of the response. ![Test the function in the Azure portal](./media/functions-twitter-email/test.png) -Now you have a function that categorizes sentiment scores. Next, you create a logic app that integrates your function with your Twitter and Cognitive Services accounts. +Now you have a function that categorizes sentiment scores. Next, you create a logic app that integrates your function with your Twitter and Cognitive Services API. ## Create a logic app @@ -121,7 +133,7 @@ Now you have a function that categorizes sentiment scores. Next, you create a lo 4. Then, type a **Name** like `TweetSentiment`, use the settings as specified in the table, accept the terms, and check **Pin to dashboard**. - ![Create logic app in the Azure portal](./media/functions-twitter-email/new_logicApp.png) + ![Create logic app in the Azure portal](./media/functions-twitter-email/new_logic_app.png) | Setting | Suggested value | Description | | ----------------- | ------------ | ------------- | @@ -149,7 +161,7 @@ First, create a connection to your Twitter account. The logic app polls for twee | Setting | Suggested value | Description | | ----------------- | ------------ | ------------- | - | **Search text** | #Azure | Use a hashtag that is popular enough to generate new tweets in the chosen interval. When using the Free tier and your hashtag is too popular, you can quickly use up the transactions in your Cognitive Services account. | + | **Search text** | #Azure | Use a hashtag that is popular enough to generate new tweets in the chosen interval. When using the Free tier and your hashtag is too popular, you can quickly use up the transaction quota in your Cognitive Services API. | | **Frequency** | Minute | The frequency unit used for polling Twitter. | | **Interval** | 15 | The time elapsed between Twitter requests, in frequency units. | @@ -167,7 +179,7 @@ Now your app is connected to Twitter. Next, you connect to text analytics to det ![Detect Sentiment](media/functions-twitter-email/detect_sent.png) -3. Type a connection name such as `MyCognitiveServicesConnection`, paste the key for your Cognitive Services account that you saved, and click **Create**. +3. Type a connection name such as `MyCognitiveServicesConnection`, paste the key for your Cognitive Services API that you saved, and click **Create**. 4. Click **Text to analyze** > **Tweet text**, and then click **Save**. @@ -199,7 +211,7 @@ The last part of the workflow is to trigger an email when the sentiment is score ![Add a condition to the logic app.](media/functions-twitter-email/condition.png) -3. In **IF YES, DO NOTHING**, click **Add an action**, search for `outlook.com`, click **Send an email**, and sign in to your Outlook.com account. +3. In **IF TRUE**, click **Add an action**, search for `outlook.com`, click **Send an email**, and sign in to your Outlook.com account. ![Choose an action for the condition.](media/functions-twitter-email/outlook.png) @@ -208,7 +220,7 @@ The last part of the workflow is to trigger an email when the sentiment is score 4. In the **Send an email** action, use the email settings as specified in the table. 
- ![Configure the email for the send an email action.](media/functions-twitter-email/sendEmail.png) + ![Configure the email for the send an email action.](media/functions-twitter-email/send_email.png) | Setting | Suggested value | Description | | ----------------- | ------------ | ------------- | @@ -243,7 +255,7 @@ Now that the workflow is complete, you can enable the logic app and see the func return req.CreateResponse(HttpStatusCode.OK, category); > [!IMPORTANT] - > After you have completed this tutorial, you should disable the logic app. By disabling the app, you avoid being charged for executions and using up the transactions in your Cognitive Services account. + > After you have completed this tutorial, you should disable the logic app. By disabling the app, you avoid being charged for executions and using up the transactions in your Cognitive Services API. Now you have seen how easy it is to integrate Functions into a Logic Apps workflow. @@ -258,7 +270,7 @@ To disable the logic app, click **Overview** and then click **Disable** at the t In this tutorial, you learned how to: > [!div class="checklist"] -> * Create a Cognitive Services account. +> * Create a Cognitive Services API Resource. > * Create a function that categorizes tweet sentiment. > * Create a logic app that connects to Twitter. > * Add sentiment detection to the logic app. diff --git a/articles/azure-functions/functions-versions.md b/articles/azure-functions/functions-versions.md index 72ce63d798964..f970a2f843e56 100644 --- a/articles/azure-functions/functions-versions.md +++ b/articles/azure-functions/functions-versions.md @@ -1,6 +1,6 @@ --- title: How to target Azure Functions runtime versions -description: Azure Functions supports multiple versions of the runtime. Learn how to specify the runtime version of an Azure hosted function app. +description: Azure Functions supports multiple versions of the runtime. Learn how to specify the runtime version of a function app hosted in Azure. services: functions documentationcenter: author: ggailey777 @@ -11,7 +11,7 @@ ms.service: functions ms.workload: na ms.devlang: na ms.topic: article -ms.date: 11/07/2017 +ms.date: 11/21/2017 ms.author: glenga --- @@ -41,10 +41,12 @@ For more information, see [Supported languages](supported-languages.md). ### Bindings -The experimental bindings that runtime 1.x supports are not available in 2.x. For information about bindings support and other functional gaps in 2.x, see [Runtime 2.0 known issues](https://github.com/Azure/azure-webjobs-sdk-script/wiki/Azure-Functions-runtime-2.0-known-issues). - Runtime 2.x lets you create custom [binding extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/wiki/Binding-Extensions-Overview). Built-in bindings that use this extensibility model are only available in 2.x; among the first of these is the [Microsoft Graph bindings](functions-bindings-microsoft-graph.md). +[!INCLUDE [Full bindings table](../../includes/functions-bindings.md)] + +For more information about bindings support and other functional gaps in 2.x, see [Runtime 2.0 known issues](https://github.com/Azure/azure-webjobs-sdk-script/wiki/Azure-Functions-runtime-2.0-known-issues). + ### Cross-platform development Runtime 1.x supports function development only in the portal or on Windows; with 2.x you can develop and run Azure Functions on Linux or macOS. 
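Looking back at the Twitter sentiment tutorial above, the `run.csx` replacement is described as returning a color category for the sentiment score posted in the request body. The following is a minimal sketch of that kind of categorization function; the 0.3 and 0.6 thresholds are assumptions (the tutorial only shows that a score of 0.2 returns RED), so the article's actual cut-offs may differ.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // The logic app posts the raw sentiment score (for example "0.2") as the request body.
    string value = await req.Content.ReadAsStringAsync();
    log.Info($"Sentiment score received: {value}");

    // Assumed thresholds: below 0.3 is RED, below 0.6 is YELLOW, otherwise GREEN.
    double score = double.Parse(value);
    string category = score < 0.3 ? "RED" : score < 0.6 ? "YELLOW" : "GREEN";

    // Return the category so the logic app condition can branch on it.
    return req.CreateResponse(HttpStatusCode.OK, category);
}
```

A production version would also guard against a missing or non-numeric request body (for example by returning `HttpStatusCode.BadRequest`), a case the tutorial doesn't cover.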
diff --git a/articles/azure-functions/index.yml b/articles/azure-functions/index.yml index e1bf3ca0564e6..95d58d26f3653 100644 --- a/articles/azure-functions/index.yml +++ b/articles/azure-functions/index.yml @@ -34,7 +34,7 @@ sections: style: icon48 items: - image: - src: media/index/http.svg + src: https://docs.microsoft.com/en-us/media/common/i_http.svg text: HTTP href: /azure/azure-functions/functions-create-first-azure-function - image: diff --git a/articles/azure-functions/media/functions-twitter-email/add_fun.png b/articles/azure-functions/media/functions-twitter-email/add_fun.png index 331162c1de7e5..4024e0abe74d6 100644 Binary files a/articles/azure-functions/media/functions-twitter-email/add_fun.png and b/articles/azure-functions/media/functions-twitter-email/add_fun.png differ diff --git a/articles/azure-functions/media/functions-twitter-email/cog_svcs_account.png b/articles/azure-functions/media/functions-twitter-email/cog_svcs_account.png deleted file mode 100644 index d7dd6d2432796..0000000000000 Binary files a/articles/azure-functions/media/functions-twitter-email/cog_svcs_account.png and /dev/null differ diff --git a/articles/azure-functions/media/functions-twitter-email/cog_svcs_resource.png b/articles/azure-functions/media/functions-twitter-email/cog_svcs_resource.png new file mode 100644 index 0000000000000..86a8074fbf487 Binary files /dev/null and b/articles/azure-functions/media/functions-twitter-email/cog_svcs_resource.png differ diff --git a/articles/azure-functions/media/functions-twitter-email/create_fun.png b/articles/azure-functions/media/functions-twitter-email/create_fun.png new file mode 100644 index 0000000000000..c5a62fd42be5a Binary files /dev/null and b/articles/azure-functions/media/functions-twitter-email/create_fun.png differ diff --git a/articles/azure-functions/media/functions-twitter-email/designer1.png b/articles/azure-functions/media/functions-twitter-email/designer1.png index 06e64dac8167f..b9689cc1fa796 100644 Binary files a/articles/azure-functions/media/functions-twitter-email/designer1.png and b/articles/azure-functions/media/functions-twitter-email/designer1.png differ diff --git a/articles/azure-functions/media/functions-twitter-email/disable-logic-app.png b/articles/azure-functions/media/functions-twitter-email/disable-logic-app.png index 7c073634fefa9..568152990cadd 100644 Binary files a/articles/azure-functions/media/functions-twitter-email/disable-logic-app.png and b/articles/azure-functions/media/functions-twitter-email/disable-logic-app.png differ diff --git a/articles/azure-functions/media/functions-twitter-email/keys.png b/articles/azure-functions/media/functions-twitter-email/keys.png index 221d33f9cf142..cf112b1f2d1f6 100644 Binary files a/articles/azure-functions/media/functions-twitter-email/keys.png and b/articles/azure-functions/media/functions-twitter-email/keys.png differ diff --git a/articles/azure-functions/media/functions-twitter-email/new_logicApp.png b/articles/azure-functions/media/functions-twitter-email/new_logicApp.png deleted file mode 100644 index e65afca0cd76f..0000000000000 Binary files a/articles/azure-functions/media/functions-twitter-email/new_logicApp.png and /dev/null differ diff --git a/articles/azure-functions/media/functions-twitter-email/new_logic_app.png b/articles/azure-functions/media/functions-twitter-email/new_logic_app.png new file mode 100644 index 0000000000000..283747be1bd9c Binary files /dev/null and b/articles/azure-functions/media/functions-twitter-email/new_logic_app.png differ diff --git 
a/articles/azure-functions/media/functions-twitter-email/outlook.png b/articles/azure-functions/media/functions-twitter-email/outlook.png index 90b0cd4dcf9ed..3bc930d7d9e29 100644 Binary files a/articles/azure-functions/media/functions-twitter-email/outlook.png and b/articles/azure-functions/media/functions-twitter-email/outlook.png differ diff --git a/articles/azure-functions/media/functions-twitter-email/over1.png b/articles/azure-functions/media/functions-twitter-email/over1.png index 1d25e3e08f6c6..0cf064c814913 100644 Binary files a/articles/azure-functions/media/functions-twitter-email/over1.png and b/articles/azure-functions/media/functions-twitter-email/over1.png differ diff --git a/articles/azure-functions/media/functions-twitter-email/sendEmail.png b/articles/azure-functions/media/functions-twitter-email/sendEmail.png deleted file mode 100644 index 6de576c4922dc..0000000000000 Binary files a/articles/azure-functions/media/functions-twitter-email/sendEmail.png and /dev/null differ diff --git a/articles/azure-functions/media/functions-twitter-email/send_email.png b/articles/azure-functions/media/functions-twitter-email/send_email.png new file mode 100644 index 0000000000000..0618c9edb7964 Binary files /dev/null and b/articles/azure-functions/media/functions-twitter-email/send_email.png differ diff --git a/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-cosmos-db.md b/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-cosmos-db.md index 8ad5d437f955c..b7befa24755cb 100644 --- a/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-cosmos-db.md +++ b/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-cosmos-db.md @@ -3,7 +3,7 @@ title: Create an Azure Function that connects to an Azure Cosmos DB | Microsoft description: Azure CLI Script Sample - Create an Azure Function that connects to an Azure Cosmos DB services: functions documentationcenter: functions -author: rachelappel +author: ggailey777 manager: cfowler editor: tags: functions @@ -14,7 +14,7 @@ ms.topic: sample ms.tgt_pltfrm: na ms.workload: ms.date: 04/20/2017 -ms.author: rachelap +ms.author: glenga ms.custom: mvc --- # Create an Azure Function that connects to an Azure Cosmos DB @@ -25,7 +25,7 @@ This sample script creates an Azure Function App and connects to an Azure Cosmos [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)] -If you choose to install and use the CLI locally, this topic requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI 2.0]( /cli/azure/install-azure-cli). +If you use the CLI locally, make sure that you are running the Azure CLI version 2.0 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install Azure CLI 2.0](/cli/azure/install-azure-cli). ## Sample script @@ -41,13 +41,13 @@ After the script sample has been run, the follow command can be used to remove t ## Script explanation -This script uses the following commands. Each command in the table links to command specific documentation. +This script uses the following commands: Each command in the table links to command specific documentation. | Command | Notes | |---|---| -| [az login](https://docs.microsoft.com/cli/azure/#login) | Login to Azure. | +| [az login](https://docs.microsoft.com/cli/azure/#login) | Log in to Azure. 
| | [az group create](https://docs.microsoft.com/cli/azure/group#az_group_create) | Create a resource group with location | -| [az storage account create](https://docs.microsoft.com/cli/azure/storage/account) | Create a storage account | +| [az storage accounts create](https://docs.microsoft.com/cli/azure/storage/account) | Create a storage account | | [az functionapp create](https://docs.microsoft.com/cli/azure/functionapp#az_functionapp_create) | Create a new function app | | [az cosmosdb create](https://docs.microsoft.com/cli/azure/cosmosdb#az_cosmosdb_create) | Create cosmosdb database | | [az group delete](https://docs.microsoft.com/cli/azure/group#az_group_delete) | Clean up | diff --git a/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-storage-account.md b/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-storage-account.md index 2e55483926648..acc53320db7d0 100644 --- a/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-storage-account.md +++ b/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-storage-account.md @@ -3,7 +3,7 @@ title: Create an Azure Function that connects to an Azure Storage | Microsoft Do description: Azure CLI Script Sample - Create an Azure Function that connects to an Azure Storage services: functions documentationcenter: functions -author: rachelappel +author: ggailey777 manager: cfowler editor: tags: functions @@ -14,7 +14,7 @@ ms.topic: sample ms.tgt_pltfrm: na ms.workload: ms.date: 04/20/2017 -ms.author: rachelap +ms.author: glenga ms.custom: mvc --- # Integrate Function App into Azure Storage Account @@ -25,7 +25,7 @@ This sample script creates a Function App and Storage Account. [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)] -If you choose to install and use the CLI locally, this topic requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI 2.0]( /cli/azure/install-azure-cli). +If you use the CLI locally, make sure that you are running the Azure CLI version 2.0 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install Azure CLI 2.0](/cli/azure/install-azure-cli). ## Sample script @@ -36,7 +36,7 @@ This sample creates an Azure Function app and adds the storage connection string ## Clean up deployment -After the script sample has been run, the following command can be used to remove the resource group, App Service app, and all related resources: +After the script sample has been run, run the following command to remove the resource group and all related resources: [!INCLUDE [cli-script-clean-up](../../../includes/cli-script-clean-up.md)] @@ -46,7 +46,7 @@ This script uses the following commands. Each command in the table links to comm | Command | Notes | |---|---| -| [az login](https://docs.microsoft.com/cli/azure/#login) | Login to Azure. | +| [az login](https://docs.microsoft.com/cli/azure/#login) | Log in to Azure. 
| | [az group create](https://docs.microsoft.com/cli/azure/group#az_group_create) | Create a resource group with location | | [az storage account create](https://docs.microsoft.com/cli/azure/storage/account) | Create a storage account | | [az functionapp create](https://docs.microsoft.com/cli/azure/functionapp#az_functionapp_create) | Create a new function app | diff --git a/articles/azure-government/documentation-government-functions.md b/articles/azure-government/documentation-government-functions.md index dd322e3aa7e59..c899c537838f5 100644 --- a/articles/azure-government/documentation-government-functions.md +++ b/articles/azure-government/documentation-government-functions.md @@ -177,18 +177,18 @@ Use cURL to test the deployed function on a Mac or Linux computer or using Bash Execute the following cURL command, replacing the `` placeholder with the name of your function app. Append the query string `&name=` to the URL. ```bash -curl http://.azurewebsites.net/api/HttpTriggerJS1?name= +curl http://.azurewebsites.us/api/HttpTriggerJS1?name= ```   ![Function response shown in a browser](./media/documentation-government-function2.png)   If you don't have cURL available in your command line, enter the same URL in the address of your web browser. Again, replace the `` placeholder with the name of your function app, and append the query string `&name=` to the URL and execute the request. -    http://.azurewebsites.net/api/HttpTriggerJS1?name= +    http://.azurewebsites.us/api/HttpTriggerJS1?name= ![Function response shown in a browser.](./media/documentation-government-function3.png)   -## Create function using Visual Studio +## Create function - Visual Studio Before starting, first check to make sure that your Visual Studio is [connected to the Azure Government environment](documentation-government-get-started-connect-with-vs.md). diff --git a/articles/azure-policy/azure-policy-introduction.md b/articles/azure-policy/azure-policy-introduction.md index fe5167d7feafa..2e272f2573ed4 100644 --- a/articles/azure-policy/azure-policy-introduction.md +++ b/articles/azure-policy/azure-policy-introduction.md @@ -20,7 +20,7 @@ Azure Policy is a service in Azure that you use to create, assign and, manage po ## How is it different from RBAC? -There are a few key differences between policy and role-based access control (RABC). RBAC focuses on user actions at different scopes. For example, you might be added to the contributor role for a resource group at the desired scope. The role allows you to make changes to that resource group. Policy focuses on resource properties during deployment and for already existing resources. For example, through policies, you can control the types of resources that can be provisioned. Or, you can restrict the locations in which the resources can be provisioned. Unlike RBAC, policy is a default allow and explicit deny system. +There are a few key differences between policy and role-based access control (RBAC). RBAC focuses on user actions at different scopes. For example, you might be added to the contributor role for a resource group at the desired scope. The role allows you to make changes to that resource group. Policy focuses on resource properties during deployment and for already existing resources. For example, through policies, you can control the types of resources that can be provisioned. Or, you can restrict the locations in which the resources can be provisioned. Unlike RBAC, policy is a default allow and explicit deny system. 
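The "default allow, explicit deny" model is easiest to see with a location restriction. The following is a minimal sketch only — the policy name, assignment name, and scope are placeholders rather than values from this article — using the AzureRM PowerShell cmdlets to define a rule that denies resources outside two approved regions and assign it at resource group scope:

```powershell
# Minimal sketch - hypothetical names; assumes the AzureRM module is installed and you are signed in.
# Deny any resource whose location is not in the approved list; everything else stays allowed by default.
$rule = @"
{
  "if": {
    "not": {
      "field": "location",
      "in": [ "eastus", "westus" ]
    }
  },
  "then": { "effect": "deny" }
}
"@

$definition = New-AzureRmPolicyDefinition -Name "allowed-locations-example" `
    -Description "Deny resources outside approved regions" -Policy $rule

# Assign the definition at resource group scope; replace the placeholders with real values.
New-AzureRmPolicyAssignment -Name "allowed-locations-assignment" `
    -PolicyDefinition $definition `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```

With an assignment like this in place, a deployment that targets any other region under that resource group is rejected, while requests the rule does not match continue to be allowed.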
To use policies, you must be authenticated through RBAC. Specifically, your account needs the: diff --git a/articles/azure-resource-manager/TOC.md b/articles/azure-resource-manager/TOC.md index d602395d43ca2..081d057e80d69 100644 --- a/articles/azure-resource-manager/TOC.md +++ b/articles/azure-resource-manager/TOC.md @@ -67,26 +67,20 @@ ## Troubleshoot ### [Common deployment errors](resource-manager-common-deployment-errors.md) -### [Understand deployment errors](resource-manager-troubleshoot-tips.md) -### Resolve errors #### [AccountNameInvalid](resource-manager-storage-account-name-errors.md) #### [InvalidTemplate](resource-manager-invalid-template-errors.md) +#### [Linux deployment issues](../virtual-machines/linux/troubleshoot-deploy-vm.md) #### [NoRegisteredProviderFound](resource-manager-register-provider-errors.md) #### [NotFound](resource-manager-not-found-errors.md) #### [ParentResourceNotFound](resource-manager-parent-resource-errors.md) +#### [Provisioning and allocation issues for Linux](../virtual-machines/linux/troubleshoot-deployment-new-vm.md) +#### [Provisioning and allocation issues for Windows](../virtual-machines/windows/troubleshoot-deployment-new-vm.md) #### [RequestDisallowedByPolicy](resource-manager-policy-requestdisallowedbypolicy-error.md) #### [ReservedResourceName](resource-manager-reserved-resource-name.md) #### [ResourceQuotaExceeded](resource-manager-quota-errors.md) #### [SkuNotAvailable](resource-manager-sku-not-available-errors.md) -### Virtual Machine deployment errors -#### Linux -##### [Deployment issues](../virtual-machines/linux/troubleshoot-deploy-vm.md) -##### [Provisioning and allocation issues](../virtual-machines/linux/troubleshoot-deployment-new-vm.md) -##### [Common error messages](../virtual-machines/linux/error-messages.md) -#### Windows -##### [Deployment issues](../virtual-machines/windows/troubleshoot-deploy-vm.md) -##### [Provisioning and allocation issues](../virtual-machines/windows/troubleshoot-deployment-new-vm.md) -##### [Common error messages](../virtual-machines/windows/error-messages.md) +#### [Windows deployment issues](../virtual-machines/windows/troubleshoot-deploy-vm.md) +### [Understand deployment errors](resource-manager-troubleshoot-tips.md) # Reference ## [Template format](/azure/templates/) diff --git a/articles/azure-resource-manager/media/resource-group-linked-templates/deployment-history.png b/articles/azure-resource-manager/media/resource-group-linked-templates/deployment-history.png new file mode 100644 index 0000000000000..ee9a038a9b52c Binary files /dev/null and b/articles/azure-resource-manager/media/resource-group-linked-templates/deployment-history.png differ diff --git a/articles/azure-resource-manager/media/resource-group-linked-templates/linked-deployment-history.png b/articles/azure-resource-manager/media/resource-group-linked-templates/linked-deployment-history.png deleted file mode 100644 index 3cd94ecd16441..0000000000000 Binary files a/articles/azure-resource-manager/media/resource-group-linked-templates/linked-deployment-history.png and /dev/null differ diff --git a/articles/azure-resource-manager/media/resource-group-linked-templates/nestedTemplateDesign.png b/articles/azure-resource-manager/media/resource-group-linked-templates/nestedTemplateDesign.png new file mode 100644 index 0000000000000..cd7f922cb0c6d Binary files /dev/null and b/articles/azure-resource-manager/media/resource-group-linked-templates/nestedTemplateDesign.png differ diff --git 
a/articles/azure-resource-manager/resource-group-linked-templates.md b/articles/azure-resource-manager/resource-group-linked-templates.md index 4b8f987956c73..d1a39559a97de 100644 --- a/articles/azure-resource-manager/resource-group-linked-templates.md +++ b/articles/azure-resource-manager/resource-group-linked-templates.md @@ -13,113 +13,126 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: na ms.workload: na -ms.date: 05/31/2017 +ms.date: 11/28/2017 ms.author: tomfitz - --- # Using linked templates when deploying Azure resources -From within one Azure Resource Manager template, you can link to another template, which enables you to decompose your deployment into a set of targeted, purpose-specific templates. As with decomposing an application into several code classes, decomposition provides benefits in terms of testing, reuse, and readability. - -You can pass parameters from a main template to a linked template, and those parameters can directly map to parameters or variables exposed by the calling template. The linked template can also pass an output variable back to the source template, enabling a two-way data exchange between templates. -## Linking to a template -You create a link between two templates by adding a deployment resource within the main template that points to the linked template. You set the **templateLink** property to the URI of the linked template. You can provide parameter values for the linked template directly in your template or in a parameter file. The following example uses the **parameters** property to specify a parameter value directly. +To deploy your solution, you can use either a single template or a main template with multiple linked templates. For small to medium solutions, a single template is easier to understand and maintain. You are able to see all the resources and values in a single file. For advanced scenarios, linked templates enable you to break down the solution into targeted components, and reuse templates. -```json -"resources": [ - { - "apiVersion": "2017-05-10", - "name": "linkedTemplate", - "type": "Microsoft.Resources/deployments", - "properties": { - "mode": "incremental", - "templateLink": { - "uri": "https://www.contoso.com/AzureTemplates/newStorageAccount.json", - "contentVersion": "1.0.0.0" - }, - "parameters": { - "StorageAccountName":{"value": "[parameters('StorageAccountName')]"} - } - } - } -] -``` +When using linked template, you create a main template that receives the parameter values during deployment. The main template contains all the linked templates and passes values to those templates as needed. -Like other resource types, you can set dependencies between the linked template and other resources. Therefore, when other resources require an output value from the linked template, you can make sure the linked template is deployed before them. Or, when the linked template relies on other resources, you can make sure other resources are deployed before the linked template. You can retrieve a value from a linked template with the following syntax: +![linked templates](./media/resource-group-linked-templates/nestedTemplateDesign.png) -```json -"[reference('linkedTemplate').outputs.exampleProperty.value]" -``` +## Link to a template -The Resource Manager service must be able to access the linked template. You cannot specify a local file or a file that is only available on your local network for the linked template. You can only provide a URI value that includes either **http** or **https**. 
One option is to place your linked template in a storage account, and use the URI for that item, such as shown in the following example: +To link to another template, add a **deployments** resource to your main template. ```json -"templateLink": { - "uri": "http://mystorageaccount.blob.core.windows.net/templates/template.json", - "contentVersion": "1.0.0.0", -} +"resources": [ + { + "apiVersion": "2017-05-10", + "name": "linkedTemplate", + "type": "Microsoft.Resources/deployments", + "properties": { + "mode": "Incremental", + + } + } +] ``` -Although the linked template must be externally available, it does not need to be generally available to the public. You can add your template to a private storage account that is accessible to only the storage account owner. Then, you create a shared access signature (SAS) token to enable access during deployment. You add that SAS token to the URI for the linked template. For steps on setting up a template in a storage account and generating a SAS token, see [Deploy resources with Resource Manager templates and Azure PowerShell](resource-group-template-deploy.md) or [Deploy resources with Resource Manager templates and Azure CLI](resource-group-template-deploy-cli.md). +The properties you provide for the deployment resource vary based on whether you are linking to an external template or embedding an inline template in the main template. -The following example shows a parent template that links to another template. The linked template is accessed with a SAS token that is passed in as a parameter. +### Inline template + +To embed the linked template, use the **template** property and include the template. ```json -"parameters": { - "sasToken": { "type": "securestring" } -}, "resources": [ - { - "apiVersion": "2017-05-10", - "name": "linkedTemplate", - "type": "Microsoft.Resources/deployments", - "properties": { - "mode": "incremental", - "templateLink": { - "uri": "[concat('https://storagecontosotemplates.blob.core.windows.net/templates/helloworld.json', parameters('sasToken'))]", - "contentVersion": "1.0.0.0" + { + "apiVersion": "2017-05-10", + "name": "nestedTemplate", + "type": "Microsoft.Resources/deployments", + "properties": { + "mode": "Incremental", + "template": { + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "variables": {}, + "resources": [ + { + "type": "Microsoft.Storage/storageAccounts", + "name": "[variables('storageName')]", + "apiVersion": "2015-06-15", + "location": "West US", + "properties": { + "accountType": "Standard_LRS" + } } - } + ] + }, + "parameters": {} } -], + } +] ``` -Even though the token is passed in as a secure string, the URI of the linked template, including the SAS token, is logged in the deployment operations. To limit exposure, set an expiration for the token. - -Resource Manager handles each linked template as a separate deployment. In the deployment history for the resource group, you see separate deployments for the parent and nested templates. +### External template and external parameters -![deployment history](./media/resource-group-linked-templates/linked-deployment-history.png) - -## Linking to a parameter file -The next example uses the **parametersLink** property to link to a parameter file. +To link to an external template and parameter file, use **templateLink** and **parametersLink**. When linking to a template, the Resource Manager service must be able to access it. 
You cannot specify a local file or a file that is only available on your local network. You can only provide a URI value that includes either **http** or **https**. One option is to place your linked template in a storage account, and use the URI for that item. ```json -"resources": [ - { - "apiVersion": "2017-05-10", - "name": "linkedTemplate", - "type": "Microsoft.Resources/deployments", - "properties": { - "mode": "incremental", +"resources": [ + { + "apiVersion": "2017-05-10", + "name": "linkedTemplate", + "type": "Microsoft.Resources/deployments", + "properties": { + "mode": "incremental", "templateLink": { - "uri":"https://www.contoso.com/AzureTemplates/newStorageAccount.json", + "uri":"https://mystorageaccount.blob.core.windows.net/AzureTemplates/newStorageAccount.json", "contentVersion":"1.0.0.0" - }, - "parametersLink": { - "uri":"https://www.contoso.com/AzureTemplates/parameters.json", + }, + "parametersLink": { + "uri":"https://mystorageaccount.blob.core.windows.net/AzureTemplates/newStorageAccount.parameters.json", "contentVersion":"1.0.0.0" - } - } - } -] + } + } + } +] ``` -The URI value for the linked parameter file cannot be a local file, and must include either **http** or **https**. The parameter file can also be limited to access through a SAS token. +### External template and inline parameters + +Or, you can provide the parameter inline. To pass a value from the main template to the linked template, use **parameters**. + +```json +"resources": [ + { + "apiVersion": "2017-05-10", + "name": "linkedTemplate", + "type": "Microsoft.Resources/deployments", + "properties": { + "mode": "incremental", + "templateLink": { + "uri":"https://mystorageaccount.blob.core.windows.net/AzureTemplates/newStorageAccount.json", + "contentVersion":"1.0.0.0" + }, + "parameters": { + "StorageAccountName":{"value": "[parameters('StorageAccountName')]"} + } + } + } +] +``` ## Using variables to link templates + The previous examples showed hard-coded URL values for the template links. This approach might work for a simple template but it does not work well when working with a large set of modular templates. Instead, you can create a static variable that stores a base URL for the main template and then dynamically create URLs for the linked templates from that base URL. The benefit of this approach is you can easily move or fork the template because you only need to change the static variable in the main template. The main template passes the correct URIs throughout the decomposed template. -The following example shows how to use a base URL to create two URLs for linked templates (**sharedTemplateUrl** and **vmTemplate**). +The following example shows how to use a base URL to create two URLs for linked templates (**sharedTemplateUrl** and **vmTemplate**). ```json "variables": { @@ -129,7 +142,7 @@ The following example shows how to use a base URL to create two URLs for linked } ``` -You can also use [deployment()](resource-group-template-functions-deployment.md#deployment) to get the base URL for the current template, and use that to get the URL for other templates in the same location. This approach is useful if your template location changes (maybe due to versioning) or you want to avoid hard coding URLs in the template file. +You can also use [deployment()](resource-group-template-functions-deployment.md#deployment) to get the base URL for the current template, and use that to get the URL for other templates in the same location. 
This approach is useful if your template location changes (maybe due to versioning) or you want to avoid hard coding URLs in the template file. ```json "variables": { @@ -137,10 +150,269 @@ You can also use [deployment()](resource-group-template-functions-deployment.md# } ``` -## Complete example -The following example templates show a simplified arrangement of linked templates to illustrate several of the concepts in this article. It assumes the templates have been added to the same container in a storage account with public access turned off. The linked template passes a value back to the main template in the **outputs** section. +## Get values from linked template + +To get an output value from a linked template, retrieve the property value with syntax like: `"[reference('').outputs..value]"`. + +The following examples demonstrate how to reference a linked template and retrieve an output value. The linked template returns a simple message. + +```json +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "variables": {}, + "resources": [], + "outputs": { + "greetingMessage": { + "value": "Hello World", + "type" : "string" + } + } +} +``` + +The parent template deploys the linked template and gets the returned value. Notice that it references the deployment resource by name, and it uses the name of the property returned by the linked template. + +```json +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "variables": {}, + "resources": [ + { + "apiVersion": "2017-05-10", + "name": "linkedTemplate", + "type": "Microsoft.Resources/deployments", + "properties": { + "mode": "incremental", + "templateLink": { + "uri": "[uri(deployment().properties.templateLink.uri, 'helloworld.json')]", + "contentVersion": "1.0.0.0" + } + } + } + ], + "outputs": { + "messageFromLinkedTemplate": { + "type": "string", + "value": "[reference('linkedTemplate').outputs.greetingMessage.value]" + } + } +} +``` + +Like other resource types, you can set dependencies between the linked template and other resources. Therefore, when other resources require an output value from the linked template, you can make sure the linked template is deployed before them. Or, when the linked template relies on other resources, you can make sure other resources are deployed before the linked template. + +The following example shows a template that deploys a public IP address and returns the resource ID: + +```json +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "publicIPAddresses_name": { + "type": "string" + } + }, + "variables": {}, + "resources": [ + { + "type": "Microsoft.Network/publicIPAddresses", + "name": "[parameters('publicIPAddresses_name')]", + "apiVersion": "2017-06-01", + "location": "eastus", + "properties": { + "publicIPAddressVersion": "IPv4", + "publicIPAllocationMethod": "Dynamic", + "idleTimeoutInMinutes": 4 + }, + "dependsOn": [] + } + ], + "outputs": { + "resourceID": { + "type": "string", + "value": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddresses_name'))]" + } + } +} +``` + +To use the public IP address from the preceding template when deploying a load balancer, link to the template and add a dependency on the deployment resource. 
The public IP address on the load balancer is set to the output value from the linked template. + +```json +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "loadBalancers_name": { + "defaultValue": "mylb", + "type": "string" + }, + "publicIPAddresses_name": { + "defaultValue": "myip", + "type": "string" + } + }, + "variables": {}, + "resources": [ + { + "type": "Microsoft.Network/loadBalancers", + "name": "[parameters('loadBalancers_name')]", + "apiVersion": "2017-06-01", + "location": "eastus", + "properties": { + "frontendIPConfigurations": [ + { + "name": "LoadBalancerFrontEnd", + "properties": { + "privateIPAllocationMethod": "Dynamic", + "publicIPAddress": { + "id": "[reference('linkedTemplate').outputs.resourceID.value]" + } + } + } + ], + "backendAddressPools": [], + "loadBalancingRules": [], + "probes": [], + "inboundNatRules": [], + "outboundNatRules": [], + "inboundNatPools": [] + }, + "dependsOn": [ + "linkedTemplate" + ] + }, + { + "apiVersion": "2017-05-10", + "name": "linkedTemplate", + "type": "Microsoft.Resources/deployments", + "properties": { + "mode": "Incremental", + "templateLink": { + "uri": "[uri(deployment().properties.templateLink.uri, 'publicip.json')]", + "contentVersion": "1.0.0.0" + }, + "parameters":{ + "publicIPAddresses_name":{"value": "[parameters('publicIPAddresses_name')]"} + } + } + } + ] +} +``` + +## Linked templates in deployment history + +Resource Manager processes each linked template as a separate deployment in the deployment history. Therefore, a parent template with three linked templates appears in the deployment history as: + +![Deployment history](./media/resource-group-linked-templates/deployment-history.png) + +You can use these separate entries in the history to retrieve output values after the deployment. The following template creates a public IP address and outputs the IP address: + +```json +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "publicIPAddresses_name": { + "type": "string" + } + }, + "variables": {}, + "resources": [ + { + "type": "Microsoft.Network/publicIPAddresses", + "name": "[parameters('publicIPAddresses_name')]", + "apiVersion": "2017-06-01", + "location": "southcentralus", + "properties": { + "publicIPAddressVersion": "IPv4", + "publicIPAllocationMethod": "Static", + "idleTimeoutInMinutes": 4, + "dnsSettings": { + "domainNameLabel": "[concat(parameters('publicIPAddresses_name'), uniqueString(resourceGroup().id))]" + } + }, + "dependsOn": [] + } + ], + "outputs": { + "returnedIPAddress": { + "type": "string", + "value": "[reference(parameters('publicIPAddresses_name')).ipAddress]" + } + } +} +``` + +The following template links to the preceding template. It creates three public IP addresses. 
+ +```json +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + }, + "variables": {}, + "resources": [ + { + "apiVersion": "2017-05-10", + "name": "[concat('linkedTemplate', copyIndex())]", + "type": "Microsoft.Resources/deployments", + "properties": { + "mode": "Incremental", + "templateLink": { + "uri": "[uri(deployment().properties.templateLink.uri, 'static-public-ip.json')]", + "contentVersion": "1.0.0.0" + }, + "parameters":{ + "publicIPAddresses_name":{"value": "[concat('myip-', copyIndex())]"} + } + }, + "copy": { + "count": 3, + "name": "ip-loop" + } + } + ] +} +``` + +After the deployment, you can retrieve the output values with the following PowerShell script: + +```powershell +$loopCount = 3 +for ($i = 0; $i -lt $loopCount; $i++) +{ + $name = 'linkedTemplate' + $i; + $deployment = Get-AzureRmResourceGroupDeployment -ResourceGroupName examplegroup -Name $name + Write-Output "deployment $($deployment.DeploymentName) returned $($deployment.Outputs.returnedIPAddress.value)" +} +``` + +Or, Azure CLI script: + +```azurecli +for i in 0 1 2; +do + name="linkedTemplate$i"; + deployment=$(az group deployment show -g examplegroup -n $name); + ip=$(echo $deployment | jq .properties.outputs.returnedIPAddress.value); + echo "deployment $name returned $ip"; +done +``` + +## Securing an external template + +Although the linked template must be externally available, it does not need to be generally available to the public. You can add your template to a private storage account that is accessible to only the storage account owner. Then, you create a shared access signature (SAS) token to enable access during deployment. You add that SAS token to the URI for the linked template. Even though the token is passed in as a secure string, the URI of the linked template, including the SAS token, is logged in the deployment operations. To limit exposure, set an expiration for the token. + +The parameter file can also be limited to access through a SAS token. -The **parent.json** file consists of: +The following example shows how to pass a SAS token when linking to a template: ```json { @@ -164,28 +436,6 @@ The **parent.json** file consists of: } ], "outputs": { - "result": { - "type": "string", - "value": "[reference('linkedTemplate').outputs.result.value]" - } - } -} -``` - -The **helloworld.json** file consists of: - -```json -{ - "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": {}, - "variables": {}, - "resources": [], - "outputs": { - "result": { - "value": "Hello World", - "type" : "string" - } } } ``` @@ -199,7 +449,7 @@ $url = (Get-AzureStorageBlob -Container templates -Blob parent.json).ICloudBlob. 
New-AzureRmResourceGroupDeployment -ResourceGroupName ExampleGroup -TemplateUri ($url + $token) -containerSasToken $token ``` -In Azure CLI 2.0, you get a token for the container and deploy the templates with the following code: +In Azure CLI, you get a token for the container and deploy the templates with the following code: ```azurecli expiretime=$(date -u -d '30 minutes' +%Y-%m-%dT%H:%MZ) @@ -222,7 +472,64 @@ parameter='{"containerSasToken":{"value":"?'$token'"}}' az group deployment create --resource-group ExampleGroup --template-uri $url?$token --parameters $parameter ``` +## Example templates + +### Hello World from linked template + +To deploy the [parent template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/linkedtemplates/helloworldparent.json) and [linked template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/linkedtemplates/helloworld.json), use PowerShell: + +```powershell +New-AzureRmResourceGroupDeployment ` + -ResourceGroupName examplegroup ` + -TemplateUri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/helloworldparent.json +``` + +Or, Azure CLI: + +```azurecli-interactive +az group deployment create \ + -g examplegroup \ + --template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/helloworldparent.json +``` + +### Load Balancer with public IP address in linked template + +To deploy the [parent template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/linkedtemplates/public-ip-parentloadbalancer.json) and [linked template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/linkedtemplates/public-ip.json), use PowerShell: + +```powershell +New-AzureRmResourceGroupDeployment ` + -ResourceGroupName examplegroup ` + -TemplateUri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/linkedtemplates/public-ip-parentloadbalancer.json +``` + +Or, Azure CLI: + +```azurecli-interactive +az group deployment create \ + -g examplegroup \ + --template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/linkedtemplates/public-ip-parentloadbalancer.json +``` + +### Multiple public IP addresses in linked template + +To deploy the [parent template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/linkedtemplates/static-public-ip-parent.json) and [linked template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/linkedtemplates/static-public-ip.json), use PowerShell: + +```powershell +New-AzureRmResourceGroupDeployment ` + -ResourceGroupName examplegroup ` + -TemplateUri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/linkedtemplates/static-public-ip-parent.json +``` + +Or, Azure CLI: + +```azurecli-interactive +az group deployment create \ + -g examplegroup \ + --template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/linkedtemplates/static-public-ip-parent.json +``` + ## Next steps -* To learn about the defining the deployment order for your resources, see [Defining dependencies in Azure Resource Manager templates](resource-group-define-dependencies.md) -* To learn how to define one resource but create many instances of it, see [Create multiple instances of resources in Azure Resource 
Manager](resource-group-create-multiple.md) +* To learn about the defining the deployment order for your resources, see [Defining dependencies in Azure Resource Manager templates](resource-group-define-dependencies.md). +* To learn how to define one resource but create many instances of it, see [Create multiple instances of resources in Azure Resource Manager](resource-group-create-multiple.md). +* For steps on setting up a template in a storage account and generating a SAS token, see [Deploy resources with Resource Manager templates and Azure PowerShell](resource-group-template-deploy.md) or [Deploy resources with Resource Manager templates and Azure CLI](resource-group-template-deploy-cli.md). \ No newline at end of file diff --git a/articles/azure-resource-manager/resource-manager-common-deployment-errors.md b/articles/azure-resource-manager/resource-manager-common-deployment-errors.md index 6ab405bb90f89..9758c6758247a 100644 --- a/articles/azure-resource-manager/resource-manager-common-deployment-errors.md +++ b/articles/azure-resource-manager/resource-manager-common-deployment-errors.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: support-article ms.tgt_pltfrm: na ms.workload: na -ms.date: 11/08/2017 +ms.date: 11/29/2017 ms.author: tomfitz --- @@ -28,6 +28,7 @@ This article describes some common Azure deployment errors you may encounter, an | ---------- | ---------- | ---------------- | | AccountNameInvalid | Follow naming restrictions for storage accounts. | [Resolve storage account name](resource-manager-storage-account-name-errors.md) | | AccountPropertyCannotBeSet | Check available storage account properties. | [storageAccounts](/azure/templates/microsoft.storage/storageaccounts) | +| AllocationFailed | The cluster or region does not have resources available or cannot support the requested VM size. Retry the request at a later time, or request a different VM size. | [Provisioning and allocation issues for Linux](../virtual-machines/linux/troubleshoot-deployment-new-vm.md) and [Provisioning and allocation issues for Windows](../virtual-machines/windows/troubleshoot-deployment-new-vm.md) | | AnotherOperationInProgress | Wait for concurrent operation to complete. | | | AuthorizationFailed | Your account or service principal does not have sufficient access to complete the deployment. Check the role your account belongs to, and its access for the deployment scope. | [Azure Role-Based Access Control](../active-directory/role-based-access-control-configure.md) | | BadRequest | You sent deployment values that do not match what is expected by Resource Manager. Check the inner status message for help with troubleshooting. | [Template reference](/azure/templates/) and [Supported locations](resource-manager-template-location.md) | diff --git a/articles/azure-stack/azure-stack-marketplace-azure-items.md b/articles/azure-stack/azure-stack-marketplace-azure-items.md index 5a149cb3a7101..6b91bab3ec3e8 100644 --- a/articles/azure-stack/azure-stack-marketplace-azure-items.md +++ b/articles/azure-stack/azure-stack-marketplace-azure-items.md @@ -3,8 +3,8 @@ title: Azure Marketplace items available for Azure Stack | Microsoft Docs description: These Azure Marketplace items can be used in Azure Stack. 
services: azure-stack documentationcenter: '' -author: ErikjeMS -manager: byronr +author: JeffGoldner +manager: bradleyb editor: '' ms.assetid: @@ -13,8 +13,8 @@ ms.workload: na ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 10/17/2017 -ms.author: erikje +ms.date: 11/29/2017 +ms.author: JeffGoldner --- # Azure Marketplace items available for Azure Stack diff --git a/articles/azure-stack/azure-stack-mysql-resource-provider-deploy.md b/articles/azure-stack/azure-stack-mysql-resource-provider-deploy.md index a5740642ac811..171bd85a7d733 100644 --- a/articles/azure-stack/azure-stack-mysql-resource-provider-deploy.md +++ b/articles/azure-stack/azure-stack-mysql-resource-provider-deploy.md @@ -12,7 +12,7 @@ ms.workload: na ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 10/10/2017 +ms.date: 11/29/2017 ms.author: JeffGo --- @@ -58,10 +58,16 @@ The system account must have the following privileges: b. On multi-node systems, the host must be a system that can access the Privileged Endpoint. -3. [Download the MySQL resource provider binaries file](https://aka.ms/azurestackmysqlrp) and execute the self-extractor to extract the contents to a temporary directory. +3. Download the MySQL resource provider binary and execute the self-extractor to extract the contents to a temporary directory. - > [!NOTE] - > If you running on an Azure Stack build 20170928.3 or earlier, [Download this version](https://aka.ms/azurestackmysqlrp1709). + >[!NOTE] + > The resource provider build corresponds to Azure Stack builds. You must download the correct binary for the version of Azure Stack that is running. + + | Azure Stack Build | MySQL RP installer | + | --- | --- | + | 1.0.171122.1 | [MySQL RP version 1.1.10.0](https://aka.ms/azurestackmysqlrp) | + | 1.0.171028.1 | [MySQL RP version 1.1.8.0](https://aka.ms/azurestackmysqlrp1710) | + | 1.0.170928.3 | [MySQL RP version 1.1.3.0](https://aka.ms/azurestackmysqlrp1709) | 4. The Azure Stack root certificate is retrieved from the Privileged Endpoint. For ASDK, a self-signed certificate is created as part of this process. For multi-node, you must provide an appropriate certificate. @@ -114,7 +120,7 @@ $serviceAdmin = "admin@mydomain.onmicrosoft.com" $AdminPass = ConvertTo-SecureString "P@ssw0rd1" -AsPlainText -Force $AdminCreds = New-Object System.Management.Automation.PSCredential ($serviceAdmin, $AdminPass) -# Set the credentials for the Resource Provider VM +# Set the credentials for the new Resource Provider VM $vmLocalAdminPass = ConvertTo-SecureString "P@ssw0rd1" -AsPlainText -Force $vmLocalAdminCreds = New-Object System.Management.Automation.PSCredential ("mysqlrpadmin", $vmLocalAdminPass) diff --git a/articles/azure-stack/azure-stack-sql-resource-provider-deploy.md b/articles/azure-stack/azure-stack-sql-resource-provider-deploy.md index dcb73f809e9ba..77b62eddc67c4 100644 --- a/articles/azure-stack/azure-stack-sql-resource-provider-deploy.md +++ b/articles/azure-stack/azure-stack-sql-resource-provider-deploy.md @@ -12,7 +12,7 @@ ms.workload: na ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 10/10/2017 +ms.date: 11/29/2017 ms.author: JeffGo --- @@ -48,10 +48,17 @@ You must create one (or more) SQL servers and/or provide access to external SQL b. On multi-node systems, the host must be a system that can access the Privileged Endpoint. -3. [Download the SQL resource provider binaries file](https://aka.ms/azurestacksqlrp) and execute the self-extractor to extract the contents to a temporary directory. +3. 
Download the SQL resource provider binary and execute the self-extractor to extract the contents to a temporary directory. - > [!NOTE] - > If you running on an Azure Stack build 20170928.3 or earlier, [Download this version](https://aka.ms/azurestacksqlrp1709). + >[!NOTE] + > The resource provider build corresponds to Azure Stack builds. You must download the correct binary for the version of Azure Stack that is running. + + | Azure Stack Build | SQL RP installer | + | --- | --- | + | 1.0.171122.1 | [SQL RP version 1.1.10.0](https://aka.ms/azurestacksqlrp) | + | 1.0.171028.1 | [SQL RP version 1.1.8.0](https://aka.ms/azurestacksqlrp1710) | + | 1.0.170928.3 | [SQL RP version 1.1.3.0](https://aka.ms/azurestacksqlrp1709) | + 4. The Azure Stack root certificate is retrieved from the Privileged Endpoint. For ASDK, a self-signed certificate is created as part of this process. For multi-node, you must provide an appropriate certificate. @@ -101,7 +108,7 @@ $serviceAdmin = "admin@mydomain.onmicrosoft.com" $AdminPass = ConvertTo-SecureString "P@ssw0rd1" -AsPlainText -Force $AdminCreds = New-Object System.Management.Automation.PSCredential ($serviceAdmin, $AdminPass) -# Set the credentials for the Resource Provider VM +# Set credentials for the new Resource Provider VM $vmLocalAdminPass = ConvertTo-SecureString "P@ssw0rd1" -AsPlainText -Force $vmLocalAdminCreds = New-Object System.Management.Automation.PSCredential ("sqlrpadmin", $vmLocalAdminPass) diff --git a/articles/azure-stack/azure-stack-update-1711.md b/articles/azure-stack/azure-stack-update-1711.md index 5738c45f4e940..cca371388b6f0 100644 --- a/articles/azure-stack/azure-stack-update-1711.md +++ b/articles/azure-stack/azure-stack-update-1711.md @@ -134,7 +134,7 @@ This section contains post-installation known issues with build **20171122.1**. In Azure Active Directory Federation Services (ADFS) deployed environments, the **azurestack\azurestackadmin** account is no longer the owner of the Default Provider Subscription. Instead of logging into the **Admin portal / adminmanagement endpoint** with the **azurestack\azurestackadmin**, you can use the **azurestack\cloudadmin** account, so that you can manage and use the Default Provider Subscription. > [!IMPORTANT] -> Even the **azurestack\cloudadmin** account is the owner of the Default Provider Subscription in ADFS deployed environments, it does not have permissions to RDP into the host. Continue to use the **azurestack\azurestackadmin** account or the local administrator account to login, access and manage the host as needed. +> Even though the **azurestack\cloudadmin** account is the owner of the Default Provider Subscription in ADFS deployed environments, it does not have permissions to RDP into the host. Continue to use the **azurestack\azurestackadmin** account or the local administrator account to login, access and manage the host as needed. ## Download the update @@ -149,4 +149,4 @@ Microsoft has provided a way to monitor and resume updates using the Privileged ## See also - See [Manage updates in Azure Stack overview](azure-stack-updates.md) for an overview of the update management in Azure Stack. -- See [Apply updates in Azure Stack](azure-stack-apply-updates.md) for more information about how to apply updates with Azure Stack. \ No newline at end of file +- See [Apply updates in Azure Stack](azure-stack-apply-updates.md) for more information about how to apply updates with Azure Stack. 
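For the update monitoring and resume capability mentioned above, the privileged endpoint is reached over PowerShell remoting. The sketch below is an assumption-laden outline — the ERCS VM name, the credential, and the `Get-AzureStackUpdateStatus`/`Resume-AzureStackUpdate` cmdlet names reflect the 1711-era tooling and are not taken from this article:

```powershell
# Rough sketch - assumes a privileged endpoint (ERCS) VM named AzS-ERCS01 and a CloudAdmin credential.
$cred = Get-Credential -Credential "azurestack\cloudadmin"
$session = New-PSSession -ComputerName "AzS-ERCS01" -ConfigurationName PrivilegedEndpoint -Credential $cred

# Check progress of the current update run (cmdlet name assumed from the privileged endpoint tooling).
Invoke-Command -Session $session -ScriptBlock { Get-AzureStackUpdateStatus }

# If the update run reports a failed state, attempt to resume it (cmdlet name also an assumption).
Invoke-Command -Session $session -ScriptBlock { Resume-AzureStackUpdate }

Remove-PSSession $session
```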
diff --git a/articles/azure-stack/user/azure-stack-quick-linux-portal.md b/articles/azure-stack/user/azure-stack-quick-linux-portal.md index 4c5321e463bcb..8f10bbc8b5a4e 100644 --- a/articles/azure-stack/user/azure-stack-quick-linux-portal.md +++ b/articles/azure-stack/user/azure-stack-quick-linux-portal.md @@ -1,140 +1,140 @@ ---- -title: Azure Stack Quick Start - Create VM Portal -description: Azure Stack Quick Start - Create a Linux VM using the portal -services: azure-stack -cloud: azure-stack -author: vhorne -manager: byronr - -ms.service: azure-stack -ms.topic: quickstart -ms.date: 09/25/2017 -ms.author: victorh -ms.custom: mvc ---- - -# Create a Linux virtual machine with the Azure Stack portal - -*Applies to: Azure Stack integrated systems and Azure Stack Development Kit* - -Azure Stack virtual machines can be created through the Azure Stack portal. This method provides a browser-based user interface to create and configure a virtual machine and all related resources. This Quickstart shows you how to quickly create a Linux virtual machine and install a web server on it. - -## Prerequisites - -* **A Linux image in the Azure Stack marketplace** - - The Azure Stack marketplace doesn't contain a Linux image by default. So, before you can create a Linux virtual machine, ensure that the Azure Stack operator has downloaded the **Ubuntu Server 16.04 LT** image by using the steps described in the [Download marketplace items from Azure to Azure Stack](../azure-stack-download-azure-marketplace-item.md) topic. - -* **Access to an SSH client** - - If you are using the Azure Stack Development Kit (ASDK), you may not have access to an SSH client in your environment. If this is the case, you can choose among several packages that include an SSH client. For example, you can install PuTTY that includes an SSH client and SSH key generator (puttygen.exe). For more information about possible options, see the following related Azure article: [How to Use SSH keys with Windows on Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/ssh-from-windows#windows-packages-and-ssh-clients). - - This Quickstart uses PuTTY to generate the SSH keys and to connect to the Linux virtual machine. To download and install PuTTY, go to [http://www.putty.org/](http://www.putty.org). - -## Create an SSH key pair - -You need an SSH key pair to complete this Quickstart. If you have an existing SSH key pair, this step can be skipped. - -1. Navigate to the PuTTY installation folder (the default location is ```C:\Program Files\PuTTY```) and run ```puttygen.exe```. -2. In the PuTTY Key Generator window, ensure the **Type of key to generate** is set to **RSA**, and the **Number of bits in a generated key** is set to **2048**. When ready, click **Generate**. - - ![puttygen.exe](media/azure-stack-quick-linux-portal/Putty01.PNG) - -3. To complete the key generation process, move your mouse cursor within the PuTTY Key Generator window. -4. When the key generation completes, click **Save public key** and **Save private key** to save your public and private keys to files. - - ![PuTTY keys](media/azure-stack-quick-linux-portal/Putty02.PNG) - - - -## Sign in to the Azure Stack portal - -Sign in to the Azure Stack portal. The address of the Azure Stack portal depends on which Azure Stack product you are connecting to: - -* For Azure Stack Development Kit (ASDK) go to: https://portal.local.azurestack.external. -* For an Azure Stack integrated system, go to the URL that your Azure Stack operator provided. 
- -## Create the virtual machine - -1. Click the **New** button found on the upper left-hand corner of the Azure Stack portal. - -2. Select **Compute**, and then select **Ubuntu Server 16.04 LTS**. -3. Click **Create**. - -4. Type the virtual machine information. For **Authentication type**, select **SSH public key**. When you paste in your SSH public key (which you saved to a file previously), take care to remove any leading or trailing white space. When complete, click **OK**. - - ![Virtual machine basics](media/azure-stack-quick-linux-portal/linux-01.PNG) - -5. Select **D1_V2** for the virtual machine. - - ![Machine size](media/azure-stack-quick-linux-portal/linux-02.PNG) - -6. On the **Settings** page, keep the defaults and click **OK**. - -7. On the **Summary** page, click **OK** to start the virtual machine deployment. - - -## Connect to the virtual machine - -1. Click **Connect** on the virtual machine page. This displays an SSH connection string that can be used to connect to the virtual machine. - - ![Connect virtual machine](media/azure-stack-quick-linux-portal/linux-03.PNG) - -2. Open PuTTY. -3. On the **PuTTY Configuration** screen, under **Category**, expand **SSH** and then click **Auth**. Click **Browse** and select the private key file that you saved previously. - - ![PuTTY private key](media/azure-stack-quick-linux-portal/Putty03.PNG) -4. Under **Category**, scroll up and click **Session**. -5. In the **Host Name (or IP address)** box, paste the connection string from the Azure Stack portal that you saw previously. In this example, the string is ```asadmin@192.168.102.34```. - - ![PuTTY session](media/azure-stack-quick-linux-portal/Putty04.PNG) -6. Click **Open** to open a session to the virtual machine. - - ![Linus session](media/azure-stack-quick-linux-portal/Putty05.PNG) - -## Install NGINX - -Use the following bash script to update package sources and install the latest NGINX package on the virtual machine. - -```bash -#!/bin/bash - -# update package source -sudo apt-get -y update - -# install NGINX -sudo apt-get -y install nginx -``` - -When done, exit the SSH session and return the virtual machine Overview page in the Azure Stack portal. - - -## Open port 80 for web traffic - -A Network security group (NSG) secures inbound and outbound traffic. When a virtual machine is created from the Azure Stack portal, an inbound rule is created on port 22 for SSH connections. Because this virtual machine hosts a web server, an NSG rule needs to be created for port 80. - -1. On the virtual machine **Overview** page, click the name of the **Resource group**. -2. Select the **network security group** for the virtual machine. The NSG can be identified using the **Type** column. -3. On the left-hand menu, under **Settings**, click **Inbound security rules**. -4. Click **Add**. -5. In **Name**, type **http**. Make sure **Port range** is set to 80 and **Action** is set to **Allow**. -6. Click **OK**. - - -## View the NGINX welcome page - -With NGINX installed, and port 80 open on your virtual machine, the web server can now be accessed at the virtual machine's public IP address. The public IP address can be found on the virtual machine's Overview page in the Azure Stack portal. - -Open a web browser, and browse to ```http://```. - -![NGINX default site](media/azure-stack-quick-linux-portal/linux-04.PNG) - - -## Clean up resources - -When no longer needed, delete the resource group, virtual machine, and all related resources. 
To do so, select the resource group from the virtual machine page and click **Delete**. - -## Next steps - -In this quick start, you’ve deployed a simple Linux virtual machine, a network security group rule, and installed a web server. To learn more about Azure Stack virtual machines, continue to [Considerations for Virtual Machines in Azure Stack](azure-stack-vm-considerations.md). - +--- +title: Azure Stack Quick Start - Create VM Portal +description: Azure Stack Quick Start - Create a Linux VM using the portal +services: azure-stack +cloud: azure-stack +author: vhorne +manager: byronr + +ms.service: azure-stack +ms.topic: quickstart +ms.date: 09/25/2017 +ms.author: victorh +ms.custom: mvc +--- + +# Create a Linux virtual machine with the Azure Stack portal + +*Applies to: Azure Stack integrated systems and Azure Stack Development Kit* + +Azure Stack virtual machines can be created through the Azure Stack portal. This method provides a browser-based user interface to create and configure a virtual machine and all related resources. This Quickstart shows you how to quickly create a Linux virtual machine and install a web server on it. + +## Prerequisites + +* **A Linux image in the Azure Stack marketplace** + + The Azure Stack marketplace doesn't contain a Linux image by default. So, before you can create a Linux virtual machine, ensure that the Azure Stack operator has downloaded the **Ubuntu Server 16.04 LTS** image by using the steps described in the [Download marketplace items from Azure to Azure Stack](../azure-stack-download-azure-marketplace-item.md) topic. + +* **Access to an SSH client** + + If you are using the Azure Stack Development Kit (ASDK), you may not have access to an SSH client in your environment. If this is the case, you can choose among several packages that include an SSH client. For example, you can install PuTTY that includes an SSH client and SSH key generator (puttygen.exe). For more information about possible options, see the following related Azure article: [How to Use SSH keys with Windows on Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/ssh-from-windows#windows-packages-and-ssh-clients). + + This Quickstart uses PuTTY to generate the SSH keys and to connect to the Linux virtual machine. To download and install PuTTY, go to [http://www.putty.org/](http://www.putty.org). + +## Create an SSH key pair + +You need an SSH key pair to complete this Quickstart. If you have an existing SSH key pair, this step can be skipped. + +1. Navigate to the PuTTY installation folder (the default location is ```C:\Program Files\PuTTY```) and run ```puttygen.exe```. +2. In the PuTTY Key Generator window, ensure the **Type of key to generate** is set to **RSA**, and the **Number of bits in a generated key** is set to **2048**. When ready, click **Generate**. + + ![puttygen.exe](media/azure-stack-quick-linux-portal/Putty01.PNG) + +3. To complete the key generation process, move your mouse cursor within the PuTTY Key Generator window. +4. When the key generation completes, click **Save public key** and **Save private key** to save your public and private keys to files. + + ![PuTTY keys](media/azure-stack-quick-linux-portal/Putty02.PNG) + + + +## Sign in to the Azure Stack portal + +Sign in to the Azure Stack portal. The address of the Azure Stack portal depends on which Azure Stack product you are connecting to: + +* For Azure Stack Development Kit (ASDK) go to: https://portal.local.azurestack.external. 
+* For an Azure Stack integrated system, go to the URL that your Azure Stack operator provided. + +## Create the virtual machine + +1. Click the **New** button found on the upper left-hand corner of the Azure Stack portal. + +2. Select **Compute**, and then select **Ubuntu Server 16.04 LTS**. +3. Click **Create**. + +4. Type the virtual machine information. For **Authentication type**, select **SSH public key**. When you paste in your SSH public key (which you saved to a file previously), take care to remove any leading or trailing white space. When complete, click **OK**. + + ![Virtual machine basics](media/azure-stack-quick-linux-portal/linux-01.PNG) + +5. Select **D1_V2** for the virtual machine. + + ![Machine size](media/azure-stack-quick-linux-portal/linux-02.PNG) + +6. On the **Settings** page, keep the defaults and click **OK**. + +7. On the **Summary** page, click **OK** to start the virtual machine deployment. + + +## Connect to the virtual machine + +1. Click **Connect** on the virtual machine page. This displays an SSH connection string that can be used to connect to the virtual machine. + + ![Connect virtual machine](media/azure-stack-quick-linux-portal/linux-03.PNG) + +2. Open PuTTY. +3. On the **PuTTY Configuration** screen, under **Category**, expand **SSH** and then click **Auth**. Click **Browse** and select the private key file that you saved previously. + + ![PuTTY private key](media/azure-stack-quick-linux-portal/Putty03.PNG) +4. Under **Category**, scroll up and click **Session**. +5. In the **Host Name (or IP address)** box, paste the connection string from the Azure Stack portal that you saw previously. In this example, the string is ```asadmin@192.168.102.34```. + + ![PuTTY session](media/azure-stack-quick-linux-portal/Putty04.PNG) +6. Click **Open** to open a session to the virtual machine. + + ![Linus session](media/azure-stack-quick-linux-portal/Putty05.PNG) + +## Install NGINX + +Use the following bash script to update package sources and install the latest NGINX package on the virtual machine. + +```bash +#!/bin/bash + +# update package source +sudo apt-get -y update + +# install NGINX +sudo apt-get -y install nginx +``` + +When done, exit the SSH session and return the virtual machine Overview page in the Azure Stack portal. + + +## Open port 80 for web traffic + +A Network security group (NSG) secures inbound and outbound traffic. When a virtual machine is created from the Azure Stack portal, an inbound rule is created on port 22 for SSH connections. Because this virtual machine hosts a web server, an NSG rule needs to be created for port 80. + +1. On the virtual machine **Overview** page, click the name of the **Resource group**. +2. Select the **network security group** for the virtual machine. The NSG can be identified using the **Type** column. +3. On the left-hand menu, under **Settings**, click **Inbound security rules**. +4. Click **Add**. +5. In **Name**, type **http**. Make sure **Port range** is set to 80 and **Action** is set to **Allow**. +6. Click **OK**. + + +## View the NGINX welcome page + +With NGINX installed, and port 80 open on your virtual machine, the web server can now be accessed at the virtual machine's public IP address. The public IP address can be found on the virtual machine's Overview page in the Azure Stack portal. + +Open a web browser, and browse to ```http://```. 
+ +![NGINX default site](media/azure-stack-quick-linux-portal/linux-04.PNG) + + +## Clean up resources + +When no longer needed, delete the resource group, virtual machine, and all related resources. To do so, select the resource group from the virtual machine page and click **Delete**. + +## Next steps + +In this quick start, you’ve deployed a simple Linux virtual machine, a network security group rule, and installed a web server. To learn more about Azure Stack virtual machines, continue to [Considerations for Virtual Machines in Azure Stack](azure-stack-vm-considerations.md). + diff --git a/articles/backup/backup-mabs-protection-matrix.md b/articles/backup/backup-mabs-protection-matrix.md index cc1df1b8d47f4..36a9b344a5334 100644 --- a/articles/backup/backup-mabs-protection-matrix.md +++ b/articles/backup/backup-mabs-protection-matrix.md @@ -8,7 +8,7 @@ ms.assetid: ms.service: backup ms.workload: storage-backup-recovery keywords: -ms.date: 05/15/2017 +ms.date: 11/28/2017 ms.topic: article ms.author: markgal,masaran manager: carmonm @@ -87,7 +87,7 @@ This article lists the various servers and workloads that you can protect with A |Hyper-V host - DPM protection agent on Hyper-V host server, cluster, or VM|Windows Server 2008 R2 SP1 - Enterprise and Standard|Physical server

On-premises Hyper-V virtual machine|Y|Y|Protect: Hyper-V computers, cluster shared volumes (CSVs)

Recover: Virtual machine, Item-level recovery of files and folders, volumes, virtual hard drives| |Hyper-V host - DPM protection agent on Hyper-V host server, cluster, or VM|Windows Server 2008|Physical server&#13;

On-premises Hyper-V virtual machine|N|N|Protect: Hyper-V computers, cluster shared volumes (CSVs)

Recover: Virtual machine, Item-level recovery of files and folders, volumes, virtual hard drives| |VMware VMs|VMware server 5.5 or 6.0 or 6.5 |On-premises Hyper-V virtual machine|Y|Y (with UR1)|VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage&#13;
Item-level recovery of files and folders available only for Windows
VMware vApps not supported| -|Linux|Linux running as Hyper-V or VMware guest|On-premises Hyper-V virtual machine|Y|Y|Hyper-V must be running on Windows Server 2012 R2 or Windows Server 2016. Protect: Entire virtual machine

Recover: Entire virtual machine| +|Linux|Linux running as Hyper-V or VMware guest|On-premises Hyper-V virtual machine|Y|Y|Hyper-V must be running on Windows Server 2012 R2 or Windows Server 2016. Protect: Entire virtual machine

Recover: Entire virtual machine

For a complete list of supported Linux distributions and versions, see the article, [Linux on distributions endorsed by Azure](../virtual-machines/linux/endorsed-distros.md).| ## Cluster support Azure Backup Server can protect data in the following clustered applications: diff --git a/articles/cdn/TOC.md b/articles/cdn/TOC.md index e43cdf5c3b16d..ab666b5a7e349 100644 --- a/articles/cdn/TOC.md +++ b/articles/cdn/TOC.md @@ -21,8 +21,8 @@ ## Manage ### [Manage with Azure PowerShell](cdn-manage-powershell.md) ### Configure time-to-live -#### [Web Apps/Cloud Services, ASP.NET, or IIS content](cdn-manage-expiration-of-cloud-service-content.md) -#### [Storage blob service content](cdn-manage-expiration-of-blob-content.md) +#### [Azure web content](cdn-manage-expiration-of-cloud-service-content.md) +#### [Azure Blob storage](cdn-manage-expiration-of-blob-content.md) ### [Restrict access by country](cdn-restrict-access-by-country.md) ### [Improve performance by compressing files](cdn-improve-performance.md) ### Cache content by query string diff --git a/articles/cdn/cdn-manage-expiration-of-blob-content.md b/articles/cdn/cdn-manage-expiration-of-blob-content.md index f9153775861de..70a7a554aefd8 100644 --- a/articles/cdn/cdn-manage-expiration-of-blob-content.md +++ b/articles/cdn/cdn-manage-expiration-of-blob-content.md @@ -47,7 +47,7 @@ $context = New-AzureStorageContext -StorageAccountName "" $blob = Get-AzureStorageBlob -Context $context -Container "" -Blob "" # Set the CacheControl property to expire in 1 hour (3600 seconds) -$blob.ICloudBlob.Properties.CacheControl = "public, max-age=3600" +$blob.ICloudBlob.Properties.CacheControl = "max-age=3600" # Send the update to the cloud $blob.ICloudBlob.SetProperties() @@ -82,7 +82,7 @@ class Program CloudBlob blob = container.GetBlobReference(""); // Set the CacheControl property to expire in 1 hour (3600 seconds) - blob.Properties.CacheControl = "public, max-age=3600"; + blob.Properties.CacheControl = "max-age=3600"; // Update the blob's properties in the cloud blob.SetProperties(); @@ -110,9 +110,9 @@ To update the *CacheControl* property of a blob with Azure Storage Explorer: ### Azure Command-Line Interface When you upload a blob, you can set the *cacheControl* property with the `-p` switch in the [Azure Command-Line Interface](../cli-install-nodejs.md). 
The following example shows how to set the TTL to one hour (3600 seconds): - ```text - azure storage blob upload -c -p cacheControl="public, max-age=3600" .\test.txt myContainer test.txt - ``` +```command +azure storage blob upload -c -p cacheControl="max-age=3600" .\test.txt myContainer test.txt +``` ### Azure storage services REST API You can use the [Azure storage services REST API](https://msdn.microsoft.com/library/azure/dd179355.aspx) to explicitly set the *x-ms-blob-cache-control* property by using the following operations on a request: diff --git a/articles/cdn/media/cdn-manage-expiration-of-blob-content/cdn-storage-explorer-properties.png b/articles/cdn/media/cdn-manage-expiration-of-blob-content/cdn-storage-explorer-properties.png index cf196db5e2db5..b977d69d3e70b 100644 Binary files a/articles/cdn/media/cdn-manage-expiration-of-blob-content/cdn-storage-explorer-properties.png and b/articles/cdn/media/cdn-manage-expiration-of-blob-content/cdn-storage-explorer-properties.png differ diff --git a/articles/cli-install-nodejs.md b/articles/cli-install-nodejs.md index b7c910cb1a296..16310c40a9895 100644 --- a/articles/cli-install-nodejs.md +++ b/articles/cli-install-nodejs.md @@ -25,7 +25,8 @@ ms.author: rasquill > * [Azure CLI 2.0](/cli/azure/install-azure-cli) > [!IMPORTANT] -> This topic describes how to install the Azure CLI 1.0, which is built on nodeJs and supports all classic deployment API calls as well as a large number of Resource Manager deployment activities. You should use the [Azure CLI 2.0](/cli/azure/overview) for new or forward-looking CLI deployments and management. +> This topic describes how to install the Azure CLI 1.0. This CLI is deprecated and should only be used for support with the Azure Service Management (ASM) model with "classic" resources. +> For Azure Resource Manager (ARM) deployments, use [Azure CLI 2.0](/cli/azure/overview). Quickly install the Azure Command-Line Interface (Azure CLI 1.0) to use a set of open-source shell-based commands for creating and managing resources in Microsoft Azure. You have several options to install these cross-platform tools on your computer: diff --git a/articles/cognitive-services/Bing-News-Search/endpoint-news.md b/articles/cognitive-services/Bing-News-Search/endpoint-news.md new file mode 100644 index 0000000000000..4d688545b3d66 --- /dev/null +++ b/articles/cognitive-services/Bing-News-Search/endpoint-news.md @@ -0,0 +1,42 @@ +--- +title: Web search endpoint | Microsoft Docs +description: Summary of the News search API endpoint. +services: cognitive-services +author: mikedodaro +manager: rosh +ms.service: cognitive-services +ms.technology: bing-news-search +ms.topic: article +ms.date: 11/28/2017 +ms.author: v-gedod +--- + +# News Search endpoint +The **News Search API** returns news articles, Web pages, images, videos, and [entities](https://docs.microsoft.com/en-us/azure/cognitive-services/bing-entities-search/search-the-web). Entities contain summary information about a person, place, or topic. +##Endpoints +To get News search results using the Bing API, send a `GET` request to one of the following endpoints. The headers and URL parameters define further specifications. + +Endpoint 1: + +https://api.cognitive.microsoft.com/bing/v7.0/news +Returns the top news items by category. You can specifically request the top business, sports, or entertainment articles using `category=business`, `category=sports`, or `category=entertainment`.  The `category` parameter can only be used with the `/news` URL. 
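As a rough sketch of a request against Endpoint 1 (the subscription key and the `mkt` market value below are placeholders, not values taken from this article), a category query can be issued with any HTTP client:

```bash
# Hypothetical request: replace <your-subscription-key> with a valid Bing News Search key.
# category=sports asks for the top sports articles; mkt selects the market for the results.
curl -H "Ocp-Apim-Subscription-Key: <your-subscription-key>" \
  "https://api.cognitive.microsoft.com/bing/v7.0/news?category=sports&mkt=en-US"
```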
There are some formal requirements for specifying categories; refer to `category` in the [query parameter](https://docs.microsoft.com/en-us/rest/api/cognitiveservices/bing-news-api-v7-reference#query-parameters) documentation. + +Endpoint 2: + +https://api.cognitive.microsoft.com/bing/v7.0/news/search +Returns news items based on the user's search query. If the search query is empty, the call returns the top news articles. The query `?q=""` option can also be used with the `/news` URL. + +Endpoint 3: + +https://api.cognitive.microsoft.com/bing/v7.0/news/trendingtopics +Returns news topics that are currently trending on social networks. When the `/trendingtopics` option is included, Bing search ignores several other parameters, such as `freshness` and `?q=""`. + +For details about headers, parameters, market codes, response objects, errors, etc., see the [Bing News search API v7](https://docs.microsoft.com/en-us/rest/api/cognitiveservices/bing-news-api-v7-reference) reference. +##Response JSON +The response to a News search request includes results as JSON objects. Parsing the results requires procedures that handle elements of each type. See the [tutorial](https://docs.microsoft.com/en-us/azure/cognitive-services/bing-news-search/tutorial-bing-news-search-single-page-app) and [source code](https://docs.microsoft.com/en-us/azure/cognitive-services/bing-news-search/tutorial-bing-news-search-single-page-app-source) for examples. + +##Next steps +The **Bing** APIs support search actions that return results according to their type. All search endpoints return results as JSON response objects.  All endpoints support queries that return a specific language and/or location by longitude, latitude, and search radius. + +For complete information about the parameters supported by each endpoint, see the reference pages for each type. +For examples of basic requests using the News search API, see [Bing News Search Quick-starts](https://docs.microsoft.com/azure/cognitive-services/bing-news-search). \ No newline at end of file diff --git a/articles/cognitive-services/Bing-News-Search/toc.yml b/articles/cognitive-services/Bing-News-Search/toc.yml index 94e503da5d053..936f300d5bfb5 100644 --- a/articles/cognitive-services/Bing-News-Search/toc.yml +++ b/articles/cognitive-services/Bing-News-Search/toc.yml @@ -4,6 +4,8 @@ items: - name: Search the web for news href: search-the-web.md + - name: News Search endpoint + href: endpoint-news.md - name: Upgrade from v5 to v7 href: bing-news-upgrade-guide-v5-to-v7.md - name: Use and display requirements diff --git a/articles/cognitive-services/Bing-Web-Search/toc.yml b/articles/cognitive-services/Bing-Web-Search/toc.yml index c19f96e10edc8..23198c24400b7 100644 --- a/articles/cognitive-services/Bing-Web-Search/toc.yml +++ b/articles/cognitive-services/Bing-Web-Search/toc.yml @@ -4,6 +4,8 @@ items: - name: About Bing Web Search API href: overview.md + - name: Web Search endpoint + href: web-search-endpoints.md - name: Supported countries and markets href: supported-countries-markets.md - name: Quickstarts diff --git a/articles/cognitive-services/Bing-Web-Search/web-search-endpoints.md b/articles/cognitive-services/Bing-Web-Search/web-search-endpoints.md new file mode 100644 index 0000000000000..b7a4c08666f28 --- /dev/null +++ b/articles/cognitive-services/Bing-Web-Search/web-search-endpoints.md @@ -0,0 +1,29 @@ +--- +title: Web search endpoint | Microsoft Docs +description: Summary of the Web search API endpoint. 
+services: cognitive-services +author: mikedodaro +manager: rosh +ms.service: cognitive-services +ms.technology: bing-web-search +ms.topic: article +ms.date: 11/28/2017 +ms.author: v-gedod +--- + +# Web Search endpoint +The **Web Search API** returns Web pages, news, images, videos, and [entities](https://docs.microsoft.com/en-us/azure/cognitive-services/bing-entities-search/search-the-web). Entities contain summary information about a person, place, or topic. +##Endpoint +To get Web search results using the Bing API, send a `GET` request to the following endpoint. The headers and URL parameters define further specifications. + +Endpoint: https://api.cognitive.microsoft.com/bing/v7.0/search + +For details about headers, parameters, market codes, response objects, errors, etc., see the [Bing Web API v7](https://docs.microsoft.com/en-us/rest/api/cognitiveservices/bing-web-api-v7-reference) reference. +##Response JSON +The response to a Web search request includes all results as JSON objects. Parsing the result requires procedures that handle the elements of each type. See the [tutorial](https://docs.microsoft.com/en-us/azure/cognitive-services/bing-web-search/tutorial-bing-web-search-single-page-app) and [source code](https://docs.microsoft.com/en-us/azure/cognitive-services/bing-web-search/tutorial-bing-web-search-single-page-app-source) for examples. + +##Next steps +The **Bing** APIs support search actions that return results according to their type. All search endpoints return results as JSON response objects.  All endpoints support queries that return a specific language and/or location by longitude, latitude, and search radius. + +For complete information about the parameters supported by each endpoint, see the reference pages for each type. +For examples of basic requests using the Web search API, see [Search the Web Quick-starts](https://docs.microsoft.com/azure/cognitive-services/bing-web-search/search-the-web). \ No newline at end of file diff --git a/articles/cognitive-services/Content-Moderator/api-reference.md b/articles/cognitive-services/Content-Moderator/api-reference.md index f6060d1db62a7..db401176159e8 100644 --- a/articles/cognitive-services/Content-Moderator/api-reference.md +++ b/articles/cognitive-services/Content-Moderator/api-reference.md @@ -1,6 +1,6 @@ --- -title: API reference for Content Moderator | Microsoft Docs -description: Learn about the Image and Text Moderation, and Review APIs for Content Moderator. +title: API reference for Azure Content Moderator | Microsoft Docs +description: About the Moderation and Review APIs of Content Moderator services: cognitive-services author: sanjeev3 manager: mikemcca @@ -12,24 +12,25 @@ ms.date: 06/25/2017 ms.author: sajagtap --- -# API Reference # +# API reference You get started with the Content Moderator APIs in the following ways: 1. [Subscribe to the Content Moderator API](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator) on the Microsoft Azure portal. - 1. Sign up for the [content moderator review tool](http://contentmoderator.cognitive.microsoft.com/). See the screenshot in the [Overview](overview.md) article. + 1. Sign up for the [Content Moderator review tool](http://contentmoderator.cognitive.microsoft.com/). See the screenshot in the [Overview](overview.md) article. 
-## Content Moderation APIs ## +## Content moderation | API Description | API Reference | | -------------------- |-------------| -| **Image Moderation** : Scan images and get back predicted tags, their confidence scores, and other extracted information. Use this information to implement your post-moderation workflow: publish, reject, or review the content within your systems. | [Image Moderation API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c "Image Moderation API Reference") | -| **Text Moderation API** : Scan text content and get back identified profanity terms and Personal Identifiable Information (PII). Use this information to implement your post-moderation workflow: publish, reject, or review the content within your systems. | [Text Moderation API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f "Text Moderation API Reference") | -| **List Management API** : Create and manage custom exclusion or inclusion lists of images and text. If enabled, the Image/Match and Text/Screen operations do fuzzy matching of the submitted content against your custom lists, and skip the ML-based moderation step for efficiency. | [List Management API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f675 "List Management API Reference") | +| **Image Moderation**: Scan images and detect possible adult and racy content, with tags and confidence scores, and other extracted information. Use this information to implement your post-moderation workflow: publish, reject, or review the content. | [Image Moderation API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c "Image Moderation API Reference") | +| **Text Moderation**: Scan text content and get back identified profanity terms and Personal Identifiable Information (PII). Use this information to implement your post-moderation workflow: publish, reject, or review the content. | [Text Moderation API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f "Text Moderation API Reference") | +| **Video Moderation**: Scan videos and detect potential adult and racy content. Use this information to implement your post-moderation workflow: publish, reject, or review the content. | [Video Moderation API Overview](video-moderation-api.md "Video Moderation API Overview") | +| **List Management**: Create and manage custom exclusion or inclusion lists of images and text. If enabled, the Image/Match and Text/Screen operations do fuzzy matching of the submitted content against your custom lists, and skip the ML-based moderation step for efficiency. | [List Management API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f675 "List Management API Reference") | -## Review API ## +## Review API | Description | Reference | | -------------------- |-------------| -| **Job**: Initiate scan-and-review moderation workflows with both image and text content. The moderation job scans your content by using the Image and Text Moderation APIs. It then uses the defined and default workflows to generate reviews. 
Once your human moderators have reviewed the auto-assigned tags and prediction data and submitted their final decision, the API submits all information to your API endpoint. | [Job Reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c5 "Job Reference") | -| **Review**: Directly create image or text reviews in the review tool for human moderators. Once your human moderators have reviewed the tags and meta data and submitted their final decision, the API submits all information to your API endpoint. | [Review Reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c4 "Review Reference")   | -| **Workflow**: Create, update, and get details of the custom workflows created by your team. (Workflows are defined using the Review Tool.) Workflows typically use Content Moderator, but can also use certain other APIs that are available as connectors within the Review Tool. | [Workflow Reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/5813b46b3f9b0711b43c4c59 "Workflow Reference") | +| **Jobs**: Initiate scan-and-review moderation workflows with both image and text content. The moderation job scans your content by using the Image and Text Moderation APIs. It then uses the defined and default workflows to generate reviews. Once your human moderators have reviewed the auto-assigned tags and prediction data and submitted their final decision, the API submits all information to your API endpoint. | [Job Reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c5 "Job Reference") | +| **Reviews**: Directly create image or text reviews in the review tool for human moderators. Once your human moderators have reviewed the tags and meta data and submitted their final decision, the API submits all information to your API endpoint. | [Review Reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c4 "Review Reference") | +| **Workflows**: Create, update, and get details of the custom workflows created by your team. (Workflows are defined using the Review Tool.) Workflows typically use Content Moderator, but can also use certain other APIs that are available as connectors within the Review Tool. 
| [Workflow Reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/5813b46b3f9b0711b43c4c59 "Workflow Reference") | diff --git a/articles/cognitive-services/Content-Moderator/ecommerce-retail-catalog-moderation.md b/articles/cognitive-services/Content-Moderator/ecommerce-retail-catalog-moderation.md index 2bc64d2266658..349802a34ab04 100644 --- a/articles/cognitive-services/Content-Moderator/ecommerce-retail-catalog-moderation.md +++ b/articles/cognitive-services/Content-Moderator/ecommerce-retail-catalog-moderation.md @@ -1,6 +1,6 @@ --- -title: eCommerce catalog classification with machine learning based Azure Content Moderator | Microsoft Docs -description: Automatically classify eCommerce catalogs with machine learning +title: eCommerce catalog moderation with machine learning and AI with Azure Content Moderator | Microsoft Docs +description: Automatically moderate eCommerce catalogs with machine learning and AI services: cognitive-services author: sanjeev3 manager: mikemcca @@ -12,9 +12,9 @@ ms.date: 09/25/2017 ms.author: sajagtap --- -# eCommerce catalog classification with machine learning +# eCommerce catalog moderation with machine learning -In this tutorial, we learn how to implement machine-learning-based automated ecommerce catalog classification with Content Moderator, Computer Vision and Custom Vision services. The solution combines machine-assisted catalog classification with human moderation capabilities. +In this tutorial, we learn how to implement machine-learning-based intelligent ecommerce catalog moderation by combining machine-assisted AI technologies with human moderation to provide an intelligent catalog system. ![Classified product images](images/tutorial-ecommerce-content-moderator.PNG) diff --git a/articles/cognitive-services/Content-Moderator/toc.yml b/articles/cognitive-services/Content-Moderator/toc.yml index 8abfc587b420f..c0b1e6e3340e3 100644 --- a/articles/cognitive-services/Content-Moderator/toc.yml +++ b/articles/cognitive-services/Content-Moderator/toc.yml @@ -64,7 +64,7 @@ href: Review-Tool-User-Guide/View-Dashboard.md - name: Tutorials items: - - name: Facebook post moderation + - name: Facebook content moderation href: facebook-post-moderation.md - name: eCommerce catalog moderation href: ecommerce-retail-catalog-moderation.md diff --git a/articles/cognitive-services/Content-Moderator/try-image-api.md b/articles/cognitive-services/Content-Moderator/try-image-api.md index 95b9567943fab..cc3c54a99252d 100644 --- a/articles/cognitive-services/Content-Moderator/try-image-api.md +++ b/articles/cognitive-services/Content-Moderator/try-image-api.md @@ -1,6 +1,6 @@ --- -title: Try the Image API in Azure Content Moderator | Microsoft Docs -description: Try Image API from the online console +title: Test drive the Image API in Azure Content Moderator | Microsoft Docs +description: Use image moderation from the online console services: cognitive-services author: sanjeev3 manager: mikemcca @@ -12,7 +12,7 @@ ms.date: 08/05/2017 ms.author: sajagtap --- -# Image Moderation API +# Image moderation from the online console Use the [Image Moderation API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c) to initiate scan-and-review moderation workflows with text content. The moderation job scans your content for profanity, comparing it against custom and/or shared blacklists. 
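Outside the online console, the same moderation call can be driven from the command line. The sketch below assumes the **westus** region, the `ProcessImage/Evaluate` operation, and a publicly reachable image URL; the key and URL are placeholders, and the exact route and request body should be confirmed against the Image Moderation API reference linked above.

```bash
# Hypothetical request: the region, subscription key, and image URL are all placeholders.
curl -X POST "https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate" \
  -H "Ocp-Apim-Subscription-Key: <your-subscription-key>" \
  -H "Content-Type: application/json" \
  -d '{"DataRepresentation": "URL", "Value": "https://example.com/sample-image.jpg"}'
```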
diff --git a/articles/cognitive-services/Emotion/QuickStarts/CSharp.md b/articles/cognitive-services/Emotion/QuickStarts/CSharp.md index 8d72925dcaaa3..564eaf58bcbe4 100644 --- a/articles/cognitive-services/Emotion/QuickStarts/CSharp.md +++ b/articles/cognitive-services/Emotion/QuickStarts/CSharp.md @@ -1,6 +1,6 @@ --- title: Emotion API C# quick start | Microsoft Docs -description: Get information and a code sample to help you quickly get started using the Emotion API with C# in Cognitive Services. +description: Get information and a code sample to help you quickly get started by using the Emotion API with C# in Cognitive Services. services: cognitive-services author: v-royhar manager: yutkuo @@ -12,25 +12,25 @@ ms.date: 11/02/2017 ms.author: anroth --- -# Emotion API C# Quick Start +# Emotion API C# quick start > [!IMPORTANT] -> Video API Preview will end on October 30th, 2017. Try the new [Video Indexer API Preview](https://azure.microsoft.com/services/cognitive-services/video-indexer/) to easily extract insights from -videos and to enhance content discovery experiences, such as search results, by detecting spoken words, faces, characters, and emotions. [Learn more](https://docs.microsoft.com/azure/cognitive-services/video-indexer/video-indexer-overview). +> The Video API Preview ended on October 30, 2017. To easily extract insights from +videos, try the new [Video Indexer API Preview](https://azure.microsoft.com/services/cognitive-services/video-indexer/). You also can use it to enhance content discovery experiences, such as search results, by detecting spoken words, faces, characters, and emotions. To learn more, see the [Video Indexer Preview](https://docs.microsoft.com/azure/cognitive-services/video-indexer/video-indexer-overview) overview. -This article provides information and a code sample to help you quickly get started using the [Emotion API Recognize method](https://dev.projectoxford.ai/docs/services/5639d931ca73072154c1ce89/operations/563b31ea778daf121cc3a5fa) with C# to recognize the emotions expressed by one or more people in an image. +This article provides information and a code sample to help you quickly get started by using the [Emotion API Recognize method](https://dev.projectoxford.ai/docs/services/5639d931ca73072154c1ce89/operations/563b31ea778daf121cc3a5fa) with C#. You can use it to recognize the emotions expressed by one or more people in an image. ## Prerequisites -* Get the Microsoft Cognitive Emotion API Windows SDK [here](https://www.nuget.org/packages/Microsoft.ProjectOxford.Emotion/) -* Get your free Subscription Key [here](https://azure.microsoft.com/en-us/try/cognitive-services/) +* Get the Cognitive Services [Emotion API Windows SDK](https://www.nuget.org/packages/Microsoft.ProjectOxford.Emotion/). +* Get your free [subscription key](https://azure.microsoft.com/en-us/try/cognitive-services/). -## Emotion Recognition C# Example Request +## Emotion recognition C# example request -Create a new Console solution in Visual Studio, then replace Program.cs with the following code. Change the `string uri` to use the region where you obtained your subscription keys, and replace the "Ocp-Apim-Subscription-Key" value with your valid subscription key. The subscription key can be found in the Azure portal under the "Keys" section of the left-hand navigation column when browsing your Emotion API resource. Similarly, you will get the proper connect URI in the "Overview" panel for your resource listed under "Endpoint." 
+Create a new Console solution in Visual Studio, and then replace Program.cs with the following code. Change the `string uri` to use the region where you obtained your subscription keys. Replace the **Ocp-Apim-Subscription-Key** value with your valid subscription key. To find the subscription key, go to the Azure portal. On the navigation pane on the left, under the **Keys** section, browse to your Emotion API resource. Similarly, you can get the proper connect URI in the **Overview** panel for your resource listed under **Endpoint**. -![Your API Resource Keys](../../media/emotion-api/keys.png) +![Your API resource keys](../../media/emotion-api/keys.png) -To process the response of your request, you can use a library like `Newtonsoft.Json`. This allows you to handle a JSON string as series of manageable objects called Tokens. To add this library to your package, right click on your project in Solution Explorer and select "Manage Nuget Packages". Then search for "Newtonsoft". The first result should be "Newtonsoft.Json". Select install, and you'll now be able to reference this in your application. +To process the response of your request, use a library like `Newtonsoft.Json`. This way you can handle a JSON string as a series of manageable objects called Tokens. To add this library to your package, right-click your project in Solution Explorer and select **Manage Nuget Packages**. Then search for **Newtonsoft**. The first result should be **Newtonsoft.Json**. Select **Install**. You can now reference this library in your application. ![Install Newtonsoft.Json](../../media/emotion-api/newtonsoft-nuget.png) @@ -89,7 +89,7 @@ namespace CSHttpClientSample responseContent = response.Content.ReadAsStringAsync().Result; } - // A peak at the raw JSON response. + // A peek at the raw JSON response. Console.WriteLine(responseContent); // Processing the JSON into manageable objects. @@ -117,10 +117,11 @@ namespace CSHttpClientSample } ``` -## Recognize Emotions Sample Response -A successful call returns an array of face entries and their associated emotion scores, ranked by face rectangle size in descending order. An empty response indicates that no faces were detected. An emotion entry contains the following fields: -* faceRectangle - Rectangle location of face in the image. -* scores - Emotion scores for each face in the image. +## Recognize emotions sample response +A successful call returns an array of face entries and their associated emotion scores. They are ranked by face rectangle size in descending order. An empty response indicates that no faces were detected. An emotion entry contains the following fields: + +* faceRectangle: Rectangle location of face in the image +* scores: Emotion scores for each face in the image ```json application/json diff --git a/articles/cognitive-services/LUIS/luis-reference-prebuilt-domains.md b/articles/cognitive-services/LUIS/luis-reference-prebuilt-domains.md new file mode 100644 index 0000000000000..e63955d11c4a6 --- /dev/null +++ b/articles/cognitive-services/LUIS/luis-reference-prebuilt-domains.md @@ -0,0 +1,373 @@ +--- +title: Prebuilt domain reference | Microsoft Docs +description: Reference for the prebuilt domains, which are prebuilt collections of intents and entities from Language Understanding Intelligent Services (LUIS). 
+services: cognitive-services +author: DeniseMak +manager: rstand +ms.service: cognitive-services +ms.technology: luis +ms.topic: article +ms.date: 11/02/2017 +ms.author: v-demak +--- + +# Prebuilt domain reference +This reference provides information about the prebuilt domains, which are prebuilt collections of intents and entities that LUIS offers. + +## List of prebuilt domains +LUIS offers 20 prebuilt domains. + +| Prebuilt domain | Description | +| ---------------- |-----------------------| +| Calendar | The Calendar domain provides intent and entities for adding, deleting, or editing an appointment, checking participants availability, and finding information about a calendar event.| +| Camera | The Camera domain provides intents and entities for taking pictures, recording videos, and broadcasting video to an application.| +| Communication | Sending messages and making phone calls.| +| Entertainment | Handling queries related to music, movies, and TV.| +| Events | Booking tickets for concerts, festivals, sports games and comedy shows.| +| Fitness | Handling requests related to tracking fitness activities.| +| Gaming | Handling requests related to a game party in a multiplayer game.| +| HomeAutomation | Controlling smart home devices like lights and appliances.| +| MovieTickets | Booking tickets to movies at a movie theater.| +| Music | Playing music on a music player.| +| Note | The Note domain provides intents and entities related to creating, editing, and finding notes.| +| OnDevice | The OnDevice domain provides intents and entities related to controlling the device.| +| Places | Handling queries related to places like businesses, institutions, restaurants, public spaces, and addresses.| +| Reminder | Handling requests related to creating, editing, and finding reminders.| +| RestaurantReservation | Handling requests to manage restaurant reservations.| +| Taxi | Handling bookings for a taxi.| +| Translate | Translating text to a target language.| +| TV | Controlling TVs.| +| Utilities | Handling requests that are common in many domains, like "help", "repeat", "start over."| +| Weather | Getting weather reports and forecasts.| +| Web | Navigating to a website.| + +For more detail on each domain, see the sections that follow. + +## Calendar + +The Calendar domain provides intents and entities related to calendar entries. The Calendar intents include adding, deleting or editing an appointment, checking availability, and finding information about a calendar entry or appointment. + +### Intents +| Intent name | Description | Examples | +| ---------------- |-----------------------|----| +| Add | Add a new one-time item to the calendar.| Make an appointment with Lisa at 2pm on Sunday

I want to schedule a meeting

I need to set up a meeting| +| CheckAvailability | Find availability for an appointment or meeting on the user's calendar or another person's calendar.| When is Jim available to meet?

Show when Carol is available tomorrow

Is Chris free on Saturday?| +| Delete | Request to delete a calendar entry.| Cancel my appointment with Carol.

Delete my 9 am meeting
| +| Edit | Request to change an existing meeting or calendar entry.| Move my 9 am meeting to 10 am.

I want to update my schedule.

Reschedule my meeting with Ryan.| +| Find | Display my weekly calendar.| Find the dentist review appointment.&#13;

Show my calendar
| + +### Entities +| Entity name | Description | Examples | +| ---------------- |-----------------------|----| +| Location | Location of calendar item, meeting or appointment. Addresses, cities, and regions are good examples of locations.| 209 Nashville Gym

897 Pancake house

Garage| +| Subject | The title of a meeting or appointment.| Dentist's appointment

Lunch with Julia

Doctor's appointment| + +## Camera +The Camera domain provides intents and entities related to using a camera. The intents cover capturing a photo, selfie, screenshot or video, and broadcasting video to an application. + +### Intents +| Intent name | Description | Examples | +| ---------------- |-----------------------|----| +| CapturePhoto| Capture a photo.| Take a photo

capture| +| CaptureScreenshot | Capture a screenshot.| Take screen shot.

capture the screen.| +| CaptureSelfie | Capture a selfie.| Take a selfie

take a picture of me | +| CaptureVideo | Start recording video.| Start recording

Begin recording| +| StartBroadcasting| Start broadcasting video.| Start broadcasting to Facebook| +| StopBroadcasting| Stop broadcasting video.| Stop broadcasting| +| StopVideoRecording| Stop recording a video.| That's enough

stop recording| + +### Entities +| Entity name | Description | Examples | +| ---------------- |-----------------------|----| +| AppName | The name of an application to broadcast video to.| OneNote

Facebook

Skype| + + +## Communication +The Communication domain provides intents and entities related to email, messages and phone calls. + +### Intents +| Intent name | Description | Examples | +| ---------------- |-----------------------|----| +| AddContact| Add a new contact to the user's list of contacts.|Add new contact

Save this number and put the name as Carol| +| AddMore| Add more to an email or text, as part of a step-wise email or text composition.|Add more to text

Add more to email body| +| Answer| Answer an incoming phone call.|Answer the call

Pick it up| +| AssignContactNickname| Assign a nickname to a contact.|Change Isaac to dad
Edit Jim's nickname
Add nickname to Patti Owens| +| CallVoiceMail| Connect to the user's voice mail.|Connect me to my voicemail box
Voice mail
Call voicemail| +| CheckIMStatus| Check the status of a contact in Skype.|Is Jim's online status set to away?
Is Carol available to chat with?| +| Confirm| Confirm an action.|Yes
Okay
All right
I confirm that I want to send this email.
| +| Dial| Make a phone call.|Call Jim
Please dial 311
| +| FindContact| Find contact information by name.|Find Carol's number
Show me Carol's number
| +| FindSpeedDial| Find the speed dial number a phone number is set to and vice versa.|What is my dial number 5?&#13;
Do I have speed dial set?
What is the dial number for 941-5555-333?| +| GetForwardingsStatus| Get the current status of call forwarding.|Is my call forwarding turned on?
Tell me if my call status is on or off
| +| Goback| Go back to the previous step.|Go back to twitter
Go back a step
Go back| +| Ignore| Ignore an incoming call.|Don't answer
Ignore call| +| IgnoreWithMessage| Ignore an incoming call and reply with text instead.|Don't answer that call but send a message instead.
Ignore and send a text back.| +| PressKey| Press a button or number on the keypad.|Dial star.
Press 1 2 3.| +| ReadAloud| Read a message or email to the user.|Read text.
What did she say in the message?| +| TurnForwardingOff| Make a phone call.|

| +| Redial| Redial or call a number again.|Redial.
Redial my last call.| +| Reject| Reject an incoming call.|Reject call
Can't answer now
Not available at the moment and will call back later.| +| SendEmail| Send an email. This intent applies to email but not text messages.|Email to Mike Waters: Mike, that dinner last week was splendid.
Send an email to Bob
| +| SendMessage| Send a text message or an instant message.|Send text to Chris and Carol| +| SetSpeedDial| Set a speed dial shortcut for a contact's phone number.|Set speed dial one for Carol.
Set up speed dial for mom.| +| ShowNext| See the next item, for example, in a list of text messages or emails.|Show the next one.
Go to the next page.| +| ShowPrevious| See the previous item, for example, in a list of text messages or emails.|Show the previous one.
Previous
Go to previous.| +| StartOver| Start the system over or start a new session.|Start over
New session
restart| +| TurnForwardingOff| Turn off call forwarding.|Stop forwarding my calls
Switch off call forwarding| +| TurnForwardingOn| Turn on call forwarding.|Forwarding my calls to 3333&#13;
Switch on call forwarding to 3333| +| TurnSpeakerOff| Turn off the speaker phone.|Take me off speaker.
Turn off speakerphone.
| +| TurnSpeakerOn| Turn on the speaker phone.|Speakerphone mode.
Put speakerphone on.
| + +### Entities +| Entity name | Description | Examples | +| ---------------- |-----------------------|----| +| AudioDeviceType | Type of audio device (speaker, headset, microphone, etc).| Speaker
Hands-free
Bluetooth| +| Category | The category of a message or email.| Important
High priority| +| ContactAttribute | An attribute of the contact the user inquires about.| Birthdays
Address
Phone number| +| ContactName | The name of a contact or message recipient.| Carol
Jim
Chris| +| EmailSubject | The text used as the subject line for an email.| RE: interesting story| +| Line | The line the user wants to use to make a call or send a text/email from.| Work line
British cell
Skype| +| Message | The message to send as an email or text.| It was great meeting you today. See you again soon!| +| MessageType | The type of message to send: a text message or an email.| Text&#13;
Email| +| OrderReference | The ordinal or relative position in a list, identifying an item to retrieve. For example, "last" or "recent" in "What was the last message I sent?"| Last
Recent| +| SenderName | The name of the sender.| Patti Owens| + +## Entertainment +The Entertainment domain provides intents and entities related to searching for movies, music, games and TV shows. + +### Intents +| Intent name | Description | Examples | +| ---------------- |-----------------------|----| +| Search| Search for movies, music, apps, games and TV shows.|Search the store for Halo.
Search for Avatar.| + +### Entities +| Entity name | Description | Examples | +| ---------------- |-----------------------|----| +| ContentRating | Media content rating like G, or R for movies.|Kids video.&#13;
PG rated.| +| Genre | The genre of a movie, game, app or song.|Comedies
Dramas
Funny| +| Keyword| A generic search keyword specifying an attribute that doesn't exist in the more specific media slots.|Soundtracks&#13;
Moon River
Amelia Earhart| +| Language | The language of the media content.|French&#13;
English
Korean| +| MediaFormat | The technical format in which the media is delivered, such as HD, 3D, or downloadable.|HD Movies&#13;
3D movies
Downloadable| +| MediaSource | The store or marketplace for acquiring the media.|Netflix
Prime| +| MediaSubTypes| Media types that are subsets of movies and games.|Demos&#13;
Dlc
Trailers| +| Nationality| The country where a movie, show, or song was created.|French
German
Korean| +| Person| The actor, director, producer, musician or artist associated with a movie, app, game or TV show.|Madonna
Stanley Kubrick| +| Role| Role played by a person in the creation of media.|Sings
Directed by
By| +| Title| The name of a movie, app, game, TV show, or song.|Friends
Minecraft| +| Type| The type or media format of a movie, app, game, TV show, or song.|Music
Movie&#13;
TV shows| +| UserRating| User star or thumbs rating.|5 stars&#13;
3 stars
4 stars| + +## Events +The Events domain provides intents and entities related to booking tickets for events like concerts, festivals, sports games and comedy shows. + +### Intents +| Intent name | Description | Examples | +| ---------------- |-----------------------|----| +| Book| Purchase tickets to an event.|I'd like to buy a ticket for the symphony this weekend.| + + + +## Fitness +The Fitness domain provides intents and entities related to tracking fitness activities. The intents include saving notes, remaining time or distance, or saving activity results. + +### Intents +| Intent name | Description | Examples | +| ---------------- |-----------------------|----| +| AddNote| Adds supplemental notes to a tracked activity.|The difficulty of this run was 6/10
The terrain I am running on is asphalt&#13;
I am using a 3 speed bike| +|GetRemaining| Gets the remaining time or distance for an activity.|How much time till the next lap?
How many miles are remaining in my run? How much time for the split?| +| LogActivity| Save or log completed activity results.|Save my last run
Log my Saturday morning walk
store my previous swim| +| LogWeight| Save or log the user's current weight.|Save my current weight
log my weight now
store my current body weight| + + + +## Gaming +The Gaming domain provides intents and entities related to managing a game party in a multiplayer game. + +### Intents +| Intent name | Description | Examples | +| ---------------- |-----------------------|----| +| InviteParty| Invite a contact to join a gaming party.|Invite this player to my party
Come to my party
Join my clan| +|LeaveParty| Gets the remaining time or distance for an activity.|I'm out
I'm leaving this party for another
I am quitting| +| StartParty| Start a gaming party in a multiplayer game.|Dude let's start a party
start a party
should we start a clan tonight| + +### Entities +| Entity name | Description | Examples | +| ---------------- |-----------------------|----| +| Contact| A contact name to use in a multiplayer game.|Carol
Jim| + + +## HomeAutomation +The HomeAutomation domain provides intents and entities related to controlling smart home devices like lights and appliances. + +### Intents +| Intent name | Description | Examples | +| ---------------- |-----------------------|----| +| TurnOff| Turn off, close, or unlock a device.|Turn off the lights
Stop the coffee maker
Close garage door| +|TurnOn| Turn on a device or set the device to a particular setting or mode.|turn on my coffee maker
can you turn on my coffee maker?
Set the thermostat to 72 degrees.| + + +### Entities +| Entity name | Description | Examples | +| ---------------- |-----------------------|----| +| Device | A type of device that can be turned on or off.|coffee maker
thermostat
lights| +| Operation | The state to set of the device.|lock
open
on
off| +| Room | The location or room the device is in.|living room
bedroom
kitchen| + +## MovieTickets +The Movie Tickets domain provides intents and entities related to booking tickets to movies at a movie theater. + +### Examples +``` +Book me two tickets for Captain Omar and the two Musketeers +Cancel tickets +When is Captain Omar showing? +``` + + +## Music +The Music domain provides intents and entities related to playing music on a music player. + +### Examples +``` +play Kevin Durant +Increase track volume +Skip to the next song +``` + + +## Note +The Note domain provides intents and entities related to creating, editing, and finding notes. + +### Examples +``` +Add to my groceries note lettuce tomato bread coffee +Check off bananas from my grocery list +Remove all items from my vacation list +``` + + +## OnDevice +The OnDevice domain provides intents and entities related to controlling the device. + +### Examples +``` +Close video player +Cancel playback +Can you make the screen brighter? +``` + + +## Places +The Places domain provides intents for handling queries related to places like businesses, institution, restaurants, public spaces and addresses. + +### Examples +``` +Save this location to my favorites +How far away is Holiday Inn? +At what time does Safeway close? +``` + + +## Reminder +The reminder domain provides intents and entities for creating, editing, and finding reminders. + +### Examples +``` +Change my interview to 9 am tomorrow +Remind me to buy milk on my way back home +Can you check if I have a reminder about Christine's birthday? +``` + + +## RestaurantReservation +The Reservation domain provides intents and entities related to managing restaurant reservations. + +### Examples +``` +Reserve at Zucca for two for tonight +Book a table at BJ's for tomorrow +Table for 3 in Palo Alto at 7 +``` + + +## Taxi + +The Taxi domain provides intents and entities for creating and managing taxi bookings. + +### Examples +``` +Get me a cab at 3 pm +How much longer do I have to wait for my taxi? +Cancel my Uber +``` + + +## Translate +The Translate domain provides intents and entities related to translating text to a target language. + +### Examples +``` +Translate to French +Translate hello to German +Translate this sentence to English +``` + + +## TV + +The TV domain provides intents and entities for controlling TVs. + +### Examples +``` +Switch channel to BBC +Show TV guide +Watch National Geographic +``` + + +## Utilities +The Utilities domain provides intents for tasks that are common to many tasks, such as greetings, cancellation, confirmation, help, repetition, navigation, starting and stopping. + +### Examples +``` +Go back to Twitter +Please help +Repeat last question please +``` + + +## Weather +The Weather domain provides intents and entities for getting weather reports and forecasts. + +### Examples +``` +weather in London in september +What?s the 10 day forecast? +What's the average temperature in India in september? +``` + + +## Web +The Web domain provides intents for navigating to a website. 
+ +### Examples +``` +Navigate to facebook.com +Go to www.twitter.com +Navigate to www.bing.com +``` + + diff --git a/articles/cognitive-services/LUIS/toc.yml b/articles/cognitive-services/LUIS/toc.yml index 8296bfffb9064..050105d8c6c51 100644 --- a/articles/cognitive-services/LUIS/toc.yml +++ b/articles/cognitive-services/LUIS/toc.yml @@ -101,6 +101,8 @@ items: - name: Prebuilt entity reference href: luis-reference-prebuilt-entities.md + - name: Prebuilt domain reference + href: luis-reference-prebuilt-domains.md - name: Cortana prebuilt app reference href: luis-reference-cortana-prebuilt.md - name: Authoring APIs diff --git a/articles/container-instances/container-instances-quickstart.md b/articles/container-instances/container-instances-quickstart.md index 611b0c8fa63a5..3af1384b5f558 100644 --- a/articles/container-instances/container-instances-quickstart.md +++ b/articles/container-instances/container-instances-quickstart.md @@ -2,20 +2,13 @@ title: Quickstart - Create your first Azure Container Instances container description: Deploy and get started with Azure Container Instances services: container-instances -documentationcenter: '' author: seanmck manager: timlt editor: mmacy -tags: -keywords: '' -ms.assetid: ms.service: container-instances -ms.devlang: na ms.topic: quickstart -ms.tgt_pltfrm: na -ms.workload: na -ms.date: 11/20/2017 +ms.date: 11/29/2017 ms.author: seanmck ms.custom: mvc --- @@ -90,10 +83,9 @@ az container logs --name mycontainer --resource-group myResourceGroup Output: ```bash -Server running... -10.240.255.107 - - [20/Nov/2017:19:16:28 +0000] "GET / HTTP/1.1" 200 1663 "" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36" -10.240.255.107 - - [20/Nov/2017:19:16:28 +0000] "GET / HTTP/1.1" 200 1663 -10.240.255.107 - - [20/Nov/2017:19:16:28 +0000] "GET /favicon.ico HTTP/1.1" 404 19 +listening on port 80 +::ffff:10.240.255.107 - - [29/Nov/2017:20:48:50 +0000] "GET / HTTP/1.1" 200 1663 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36" +::ffff:10.240.255.107 - - [29/Nov/2017:20:48:50 +0000] "GET /favicon.ico HTTP/1.1" 404 150 "http://52.224.178.107/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36" ``` ## Delete the container diff --git a/articles/cosmos-db/documentdb-sdk-node.md b/articles/cosmos-db/documentdb-sdk-node.md index 18ad801ccb652..3a61b5cf0665b 100644 --- a/articles/cosmos-db/documentdb-sdk-node.md +++ b/articles/cosmos-db/documentdb-sdk-node.md @@ -63,7 +63,7 @@ ms.custom: H1Hack27Feb2017 * This SDK version requires the latest version of Azure Cosmos DB Emulator available for download from https://aka.ms/cosmosdb-emulator. ### 1.13.0 -* Splitproofed cross partition queries. +* Split proofed cross partition queries. * Adds supports for resource link with leading and trailing slashes (and corresponding tests). ### 1.12.2 diff --git a/articles/cosmos-db/faq.md b/articles/cosmos-db/faq.md index 7838ff7a24187..0cf8559ef3a33 100644 --- a/articles/cosmos-db/faq.md +++ b/articles/cosmos-db/faq.md @@ -507,7 +507,7 @@ Use [Diagnostic logs](logging.md). ### Which client SDKs can work with Apache Cassandra API of Azure Cosmos DB? In private preview Apache Cassandra SDK's client drivers which use CQLv3 were used for client programs. 
If you have other drivers that you use or if you are facing issues, send mail to [askcosmosdbcassandra@microsoft.com](mailto:askcosmosdbcassandra@microsoft.com). -### Is composite primary key supported? +### Is composite partition key supported? Yes, you can use regular syntax to create composite partition key. ### Can I use sstable loader for data loading? diff --git a/articles/cosmos-db/manage-account.md b/articles/cosmos-db/manage-account.md index e3e416c4c70c6..e6ce927d9b72e 100644 --- a/articles/cosmos-db/manage-account.md +++ b/articles/cosmos-db/manage-account.md @@ -14,7 +14,7 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 11/15/2017 +ms.date: 11/28/2017 ms.author: kirillg --- @@ -30,10 +30,10 @@ Selecting the right consistency level depends on the semantics of your applicati 3. In the **Default Consistency** page, select the new consistency level and click **Save**. ![Default consistency session][5] -## View, copy, and regenerate access keys -When you create an Azure Cosmos DB account, the service generates two master access keys that can be used for authentication when the Azure Cosmos DB account is accessed. By providing two access keys, Azure Cosmos DB enables you to regenerate the keys with no interruption to your Azure Cosmos DB account. +## View, copy, and regenerate access keys and passwords +When you create an Azure Cosmos DB account, the service generates two master access keys (or two passwords for MongoDB API accounts) that can be used for authentication when the Azure Cosmos DB account is accessed. By providing two access keys, Azure Cosmos DB enables you to regenerate the keys with no interruption to your Azure Cosmos DB account. -In the [Azure portal](https://portal.azure.com/), access the **Keys** page from the resource menu on the **Azure Cosmos DB account** page to view, copy, and regenerate the access keys that are used to access your Azure Cosmos DB account. +In the [Azure portal](https://portal.azure.com/), access the **Keys** page from the resource menu on the **Azure Cosmos DB account** page to view, copy, and regenerate the access keys that are used to access your Azure Cosmos DB account. For MongoDB API accounts, access the **Connection String** page from the resource menu to view, copy, and regenerate the passwords that are used to access your account. ![Azure portal screenshot, Keys page](./media/manage-account/keys.png) @@ -44,26 +44,26 @@ In the [Azure portal](https://portal.azure.com/), access the **Keys** page from Read-only keys are also available on this page. Reads and queries are read-only operations, while creates, deletes, and replaces are not. -### Copy an access key in the Azure portal -On the **Keys** page, click the **Copy** button to the right of the key you wish to copy. +### Copy an access key or password in the Azure portal +On the **Keys** page (or **Connection string** page for MongoDB API accounts), click the **Copy** button to the right of the key or password you wish to copy. ![View and copy an access key in the Azure portal, Keys page](./media/manage-account/copykeys.png) -### Regenerate access keys -You should change the access keys to your Azure Cosmos DB account periodically to help keep your connections more secure. Two access keys are assigned to enable you to maintain connections to the Azure Cosmos DB account using one access key while you regenerate the other access key. 
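If you rotate keys from a script rather than the portal, Azure CLI 2.0 provides commands for listing and regenerating them. The account and resource group names below are placeholders, and the commands should be verified against the CLI version you have installed:

```bash
# Hypothetical example: mycosmosaccount and myResourceGroup are placeholders.
# List the current access keys for the account.
az cosmosdb list-keys --name mycosmosaccount --resource-group myResourceGroup

# After clients have been switched to the secondary key, regenerate the primary key.
az cosmosdb regenerate-key --name mycosmosaccount --resource-group myResourceGroup --key-kind primary
```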
+### Regenerate access keys and passwords +You should change the access keys (and passwords for MongoDB API accounts) to your Azure Cosmos DB account periodically to help keep your connections more secure. Two access keys/passwords are assigned to enable you to maintain connections to the Azure Cosmos DB account using one access key while you regenerate the other access key. > [!WARNING] > Regenerating your access keys affects any applications that are dependent on the current key. All clients that use the access key to access the Azure Cosmos DB account must be updated to use the new key. > > -If you have applications or cloud services using the Azure Cosmos DB account, you will lose the connections if you regenerate keys, unless you roll your keys. The following steps outline the process involved in rolling your keys. +If you have applications or cloud services using the Azure Cosmos DB account, you will lose the connections if you regenerate keys, unless you roll your keys. The following steps outline the process involved in rolling your keys/passwords. 1. Update the access key in your application code to reference the secondary access key of the Azure Cosmos DB account. 2. Regenerate the primary access key for your Azure Cosmos DB account. In the [Azure portal](https://portal.azure.com/), access your Azure Cosmos DB account. -3. In the **Azure Cosmos DB Account** page, click **Keys**. -4. On the **Keys** page, click the regenerate button, then click **Ok** to confirm that you want to generate a new key. +3. In the **Azure Cosmos DB Account** page, click **Keys** (or **Connection String** for MongoDB accounts**). +4. On the **Keys**/**Connection String** page, click the regenerate button, then click **Ok** to confirm that you want to generate a new key. ![Regenerate access keys](./media/manage-account/regenerate-keys.png) 5. Once you have verified that the new key is available for use (approximately five minutes after regeneration), update the access key in your application code to reference the new primary access key. 6. Regenerate the secondary access key. @@ -75,11 +75,11 @@ If you have applications or cloud services using the Azure Cosmos DB account, yo > > -## Get the connection string +## Get the connection string To retrieve your connection string, do the following: 1. In the [Azure portal](https://portal.azure.com), access your Azure Cosmos DB account. -2. In the resource menu, click **Keys**. +2. In the resource menu, click **Keys** (or **Connection String** for MongoDB API accounts). 3. Click the **Copy** button next to the **Primary Connection String** or **Secondary Connection String** box. If you are using the connection string in the [Azure Cosmos DB Database Migration Tool](import-data.md), append the database name to the end of the connection string. `AccountEndpoint=< >;AccountKey=< >;Database=< >`. diff --git a/articles/cosmos-db/performance-levels.md b/articles/cosmos-db/performance-levels.md index b98e40f5ba582..edb0ea8247d73 100644 --- a/articles/cosmos-db/performance-levels.md +++ b/articles/cosmos-db/performance-levels.md @@ -13,7 +13,7 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 08/28/2017 +ms.date: 11/29/2017 ms.author: mimig ms.custom: H1Hack27Feb2017 @@ -90,7 +90,7 @@ Assuming you have 10 S1 collections, 1 GB of storage for each, in the US East re ## What if I need more than 10 GB of storage? 
-Whether you have a collection with an S1, S2, or S3 performance level, or have a single partition collection, all of which have 10 GB of storage available, you can use the Cosmos DB Data Migration tool to migrate your data to a partitioned collection with virtually unlimited storage. For information about the benefits of a partitioned collection, see [Partitioning and scaling in Azure Cosmos DB](documentdb-partition-data.md). For information about how to migrate your S1, S2, S3, or single partition collection to a partitioned collection, see [Migrating from single-partition to partitioned collections](documentdb-partition-data.md#migrating-from-single-partition). +Whether you have a collection with an S1, S2, or S3 performance level, or have a single partition collection, all of which have 10 GB of storage available, you can use the Cosmos DB Data Migration tool to migrate your data to a partitioned collection with virtually unlimited storage. For information about the benefits of a partitioned collection, see [Partitioning and scaling in Azure Cosmos DB](documentdb-partition-data.md). diff --git a/articles/cosmos-db/tutorial-query-table.md b/articles/cosmos-db/tutorial-query-table.md index 9a506b6b38919..866d42d67b833 100644 --- a/articles/cosmos-db/tutorial-query-table.md +++ b/articles/cosmos-db/tutorial-query-table.md @@ -36,7 +36,7 @@ The queries in this article use the following sample `People` table: | Smith | Ben | Ben@contoso.com| 425-555-0102 | | Smith | Jeff | Jeff@contoso.com| 425-555-0104 | -See [Querying Tables and Entities] (https://docs.microsoft.com/rest/api/storageservices/fileservices/querying-tables-and-entities) for details on how to query by using the Table API. +See [Querying Tables and Entities](https://docs.microsoft.com/rest/api/storageservices/fileservices/querying-tables-and-entities) for details on how to query by using the Table API. For more information on the premium capabilities that Azure Cosmos DB offers, see [Azure Cosmos DB Table API](table-introduction.md) and [Develop with the Table API in .NET](tutorial-develop-table-dotnet.md). 
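The key-rolling procedure described earlier (view, copy, and regenerate keys) can also be scripted. The following is a minimal PowerShell sketch, assuming the AzureRM module and the `listKeys`/`regenerateKey` actions exposed by the `Microsoft.DocumentDb/databaseAccounts` resource provider; the resource group and account names are placeholders, not values from this article.

```powershell
# Placeholder names - replace with your own resource group and Azure Cosmos DB account.
$resourceGroupName = "myResourceGroup"
$accountName       = "mycosmosaccount"

# List the current read-write and read-only keys.
Invoke-AzureRmResourceAction -Action listKeys `
    -ResourceType "Microsoft.DocumentDb/databaseAccounts" -ApiVersion "2015-04-08" `
    -ResourceGroupName $resourceGroupName -ResourceName $accountName -Force

# After your applications have switched to the secondary key, regenerate the primary key.
Invoke-AzureRmResourceAction -Action regenerateKey `
    -ResourceType "Microsoft.DocumentDb/databaseAccounts" -ApiVersion "2015-04-08" `
    -ResourceGroupName $resourceGroupName -ResourceName $accountName `
    -Parameters @{ "keyKind" = "primary" } -Force
```

Repeating the second command with `keyKind` set to `secondary` completes the key roll once your applications reference the new primary key.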
diff --git a/articles/cost-management/media/use-reports/actual-amort-example.png b/articles/cost-management/media/use-reports/actual-amort-example.png new file mode 100644 index 0000000000000..8e1b3bbd2f345 Binary files /dev/null and b/articles/cost-management/media/use-reports/actual-amort-example.png differ diff --git a/articles/cost-management/media/use-reports/actual-cost-analysis.png b/articles/cost-management/media/use-reports/actual-cost-analysis.png new file mode 100644 index 0000000000000..1a22b96c46785 Binary files /dev/null and b/articles/cost-management/media/use-reports/actual-cost-analysis.png differ diff --git a/articles/cost-management/media/use-reports/actual-cost-over-time.png b/articles/cost-management/media/use-reports/actual-cost-over-time.png new file mode 100644 index 0000000000000..6eaf108437afc Binary files /dev/null and b/articles/cost-management/media/use-reports/actual-cost-over-time.png differ diff --git a/articles/cost-management/media/use-reports/amort-cost-over-time.png b/articles/cost-management/media/use-reports/amort-cost-over-time.png new file mode 100644 index 0000000000000..3c5328ca0aa56 Binary files /dev/null and b/articles/cost-management/media/use-reports/amort-cost-over-time.png differ diff --git a/articles/cost-management/media/use-reports/cost-analysis01.png b/articles/cost-management/media/use-reports/cost-analysis01.png new file mode 100644 index 0000000000000..68384bee05be5 Binary files /dev/null and b/articles/cost-management/media/use-reports/cost-analysis01.png differ diff --git a/articles/cost-management/media/use-reports/cost-analysis02.png b/articles/cost-management/media/use-reports/cost-analysis02.png new file mode 100644 index 0000000000000..c7d4965bcf0ae Binary files /dev/null and b/articles/cost-management/media/use-reports/cost-analysis02.png differ diff --git a/articles/cost-management/media/use-reports/cost-over-time.png b/articles/cost-management/media/use-reports/cost-over-time.png new file mode 100644 index 0000000000000..c203d781e0ba4 Binary files /dev/null and b/articles/cost-management/media/use-reports/cost-over-time.png differ diff --git a/articles/cost-management/toc.yml b/articles/cost-management/toc.yml index 385042598628c..3df2c8c6525ec 100644 --- a/articles/cost-management/toc.yml +++ b/articles/cost-management/toc.yml @@ -27,6 +27,10 @@ items: - name: Understanding cost reports href: understading-cost-reports.md +- name: How-to guides + items: + - name: Use Cost Management reports + href: use-reports.md - name: Resources items: - name: Cost Management videos diff --git a/articles/cost-management/use-reports.md b/articles/cost-management/use-reports.md new file mode 100644 index 0000000000000..b1a944095a3ae --- /dev/null +++ b/articles/cost-management/use-reports.md @@ -0,0 +1,161 @@ +--- +title: Use Cost Management reports in Azure Cost Management | Microsoft Docs +description: This article describes how to use various Cost Management reports in the Cloudyn portal. +services: cost-management +keywords: +author: bandersmsft +ms.author: banders +ms.date: 11/29/2017 +ms.topic: article +ms.service: cost-management +manager: carmonm +ms.custom: +--- + +# Use Cost Management reports + +This article describes how to use various Cost Management reports in the Cloudyn portal. Most Cloudyn reports are intuitive and have a uniform look and feel. For an overview about Cloudyn reports, see [Understanding cost reports](understading-cost-reports.md). The article also describes various options and fields used in most reports. 
+
+## Cost Analysis reports
+
+Cost Analysis reports display billing data from your cloud providers. Using the reports, you can group and drill into various data segments itemized in the billing file. The reports enable granular cost navigation across the cloud vendors' raw billing data.
+
+Cost Analysis reports do not group costs by tags. Tag-based reporting is only available in the Cost Allocation reports set after you create a cost model using Cost Allocation 360.
+
+### Actual Cost Analysis
+
+The Actual Cost Analysis report shows your main cost contributors, including ongoing costs and one-time fees.
+
+ Use the Actual Cost Analysis report to:
+
+- Analyze and monitor actual costs spent during a specified time frame
+- Schedule a threshold alert
+- Analyze showback and chargeback costs
+
+#### To use the Actual Cost Analysis report
+
+At a minimum, perform the following steps. You can also use other options and fields.
+
+1. Select a date range.
+2. Select a filter.
+
+You can right-click report results to drill into them and view more detailed information.
+
+![Actual Cost Analysis report example](./media/use-reports/actual-cost-analysis.png)
+
+### Actual Cost Over Time
+
+The Actual Cost Over Time report is a standard cost analysis report that distributes cost over a defined time resolution. The report displays spending over time so that you can observe trends and detect spending irregularities. This report shows your main cost contributors, including ongoing costs and one-time reserved instance fees that are spent during a selected time frame.
+
+Use the Actual Cost Over Time report to:
+
+- See cost trends over time.
+- Find irregularities in cost.
+- Answer cost-related questions related to Amazon Web Services.
+
+#### To use the Actual Cost Over Time report:
+
+At a minimum, perform the following steps. You can also use other options and fields.
+
+- Select a date range.
+
+For example, you can select groups to view their cost over time, and then add filters to narrow your results.
+
+![Actual Cost Over Time report example](./media/use-reports/actual-cost-over-time.png)
+
+
+
+### Amortized Cost reports
+
+This set of amortized cost reports takes non-usage-based service fees, or one-time payable costs, and spreads (linearizes) their cost evenly over their lifespan.
+
+For example, one-time fees might include:
+
+- Annual support fees
+- Annual security component fees
+- Reserved Instances purchase fees
+- Some Azure Marketplace items
+
+In the billing file, one-time fees are identified by service consumption start and end dates, or timestamps, that have equal values. Cloudyn then recognizes them as one-time fees that can be amortized. Other consumption-based services with on-demand usage costs cannot be amortized. A small numeric sketch of the amortization arithmetic follows the steps below.
+
+To illustrate amortized costs, review the following example image of an Actual Cost Over Time report. The example shows a cost spike on August 23, which might seem to be an anomaly compared with the usual daily cost trend. Root cause analysis and data navigation identified this cost as an annual AWS service APN reservation, which is a one-time fee purchased and billed on that day. You can see how this cost is amortized in the next section.
+
+![Actual Cost Over Time report example showing one-time cost](./media/use-reports/actual-amort-example.png)
+
+#### To use the Amortized Cost Over Time report:
+
+At a minimum, perform the following steps. You can also use other options and fields.
+
+1. Select a date range.
+2. Select a Service and a Provider.
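To make the amortization arithmetic concrete, here is a small PowerShell sketch. The figures are hypothetical, not taken from the report shown in this article; the point is only that a one-time fee is divided evenly across its lifespan.

```powershell
# Hypothetical one-time fee: an annual reservation purchased and billed up front.
$upfrontCost  = 1825.00   # one-time charge, in your billing currency
$lifespanDays = 365       # amortization window for an annual fee

# Amortized (linearized) daily cost: the fee spread evenly across its lifespan.
$dailyAmortizedCost = $upfrontCost / $lifespanDays   # 1/365th of the up-front cost
"{0:N2} per day" -f $dailyAmortizedCost              # => 5.00 per day
```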
+
+Carrying forward the previous example, you can see that the one-time cost is now amortized in the following image:
+
+![Amortized Cost Over Time report example](./media/use-reports/amort-cost-over-time.png)
+
+The preceding image shows the amortized cost of the APN reservation over time. This report shows the one-time fee amortization and the APN cost as an annual reservation purchase. The APN cost is spread evenly on a daily basis as 1/365th of the reservation up-front cost.
+
+## Cost Allocation Analysis reports
+
+Cost Allocation Analysis reports are available after you create a cost model using Cost Allocation 360. Cloudyn processes cost/billing data and matches the data to the usage and tag data of your cloud accounts. To match the data, Cloudyn requires access to your usage data. Accounts that are missing credentials are labeled as uncategorized resources.
+
+### Cost Analysis report
+
+The Cost Analysis report provides insight into your cloud consumption and spending during a selected time frame. The policies set in the Cost Allocation Manager are used in the Cost Analysis report.
+
+How does Cloudyn calculate this report?
+
+Cloudyn ensures that allocation retains the integrity of each linked account by applying Account Affinity. Affinity ensures that an account that does not use a specific service does not have any costs of this service allocated to it. The costs accrued in that account remain in that account and are not calculated by the allocation policies. For example, you might have five linked accounts. If only three of them use storage services, then the cost of storage services is only allocated across tags in the three accounts.
+
+ Use the Cost Analysis report to:
+
+- Display an aggregated view of your entire deployment for a specific time frame.
+- View costs by tag categories based on policies created in the cost model.
+
+#### To use the Cost Analysis report:
+
+1. Select a date range.
+2. Add tags, as needed.
+3. Add groups.
+4. Choose a cost model that you created previously.
+
+The following image shows an example Cost Analysis report in sunburst format. The rings show groups. The outer ring shows Service and the inner circle shows Unit.
+
+![Cost Analysis report example](./media/use-reports/cost-analysis01.png)
+
+
+
+Here's an example of the same information in a table view.
+
+![Cost Analysis report example](./media/use-reports/cost-analysis02.png)
+
+
+
+### Cost Over Time report
+
+The Cost Over Time report displays spending over time so you can observe trends and detect irregularities in your deployment. It essentially shows costs distributed over a defined period. The report includes your main cost contributors, including ongoing costs and one-time reserved instance fees that are spent during a selected time frame. Policies set in Cost Manager 360° can be used in this report.
+
+Use the Cost Over Time report to:
+
+- See changes over time and what influences change from one day (or date range) to the next.
+- Analyze costs over time for a specific instance.
+- Understand why there was a cost increase for a specific instance.
+
+#### To use the Cost Over Time report:
+
+1. Select a date range.
+2. Add tags, as needed.
+3. Add groups.
+4. Choose a cost model that you created previously.
+5. Select actual costs or amortized costs.
+6. Choose whether to apply allocation rules to view the raw billing data or the cost as recalculated by Cloudyn.
+
+Here's an example of the report.
+ +![Cost Over Time example](./media/use-reports/cost-over-time.png) + + + +## Next steps + +- If you haven't already completed the first tutorial for Cost Management, read it at [Review usage and costs](tutorial-review-usage.md). diff --git a/articles/data-factory/TOC.yml b/articles/data-factory/TOC.yml index 35412be28bf24..e903fec411dd5 100644 --- a/articles/data-factory/TOC.yml +++ b/articles/data-factory/TOC.yml @@ -18,7 +18,9 @@ - name: REST href: quickstart-create-data-factory-rest-api.md - name: Portal - href: quickstart-create-data-factory-portal.md + href: quickstart-create-data-factory-portal.md + - name: Resource Manager template + href: quickstart-create-data-factory-resource-manager-template.md - name: Tutorials items: - name: 1 - Deploy SSIS packages to Azure diff --git a/articles/data-factory/create-azure-ssis-integration-runtime.md b/articles/data-factory/create-azure-ssis-integration-runtime.md index 419daaffc4408..dc54fbfb67ee1 100644 --- a/articles/data-factory/create-azure-ssis-integration-runtime.md +++ b/articles/data-factory/create-azure-ssis-integration-runtime.md @@ -47,7 +47,8 @@ For conceptual information on joining an Azure-SSIS IR to a VNet and configuring Define variables for use in the script in this tutorial: ```powershell -# Azure Data Factory version 2 information +# Azure Data Factory version 2 information +# If your input contains a PSH special character, e.g. "$", precede it with the escape character "`" like "`$". $SubscriptionName = "[your Azure subscription name]" $ResourceGroupName = "[your Azure resource group name]" $DataFactoryName = "[your data factory name]" diff --git a/articles/data-factory/index.yml b/articles/data-factory/index.yml index 8aa837d979d59..e3421234e109d 100644 --- a/articles/data-factory/index.yml +++ b/articles/data-factory/index.yml @@ -43,6 +43,10 @@ sections: src: /azure/data-factory/media/index/portal.svg text: Azure portal href: /azure/data-factory/quickstart-create-data-factory-portal + - image: + src: /azure/data-factory/media/index/logo_azure_resource_manager.svg + text: Resource Manager template + href: /azure/data-factory/quickstart-create-data-factory-resource-manager-template - title: Step-by-Step Tutorials items: - type: paragraph diff --git a/articles/data-factory/media/index/logo_azure_resource_manager.svg b/articles/data-factory/media/index/logo_azure_resource_manager.svg new file mode 100644 index 0000000000000..c7575a9aa6e93 --- /dev/null +++ b/articles/data-factory/media/index/logo_azure_resource_manager.svg @@ -0,0 +1,31 @@ + + + + + + + + + + + + + + + + + + + + + + + diff --git a/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/activity-runs.png b/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/activity-runs.png new file mode 100644 index 0000000000000..5b373f381be4e Binary files /dev/null and b/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/activity-runs.png differ diff --git a/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/browse-data-factories-menu.png b/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/browse-data-factories-menu.png new file mode 100644 index 0000000000000..57c0667355a80 Binary files /dev/null and b/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/browse-data-factories-menu.png differ diff --git 
a/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/monitor-manage-tile.png b/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/monitor-manage-tile.png new file mode 100644 index 0000000000000..ae1a5e85857a7 Binary files /dev/null and b/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/monitor-manage-tile.png differ diff --git a/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/monitor-pipeline-run.png b/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/monitor-pipeline-run.png new file mode 100644 index 0000000000000..c1c4a99a50eda Binary files /dev/null and b/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/monitor-pipeline-run.png differ diff --git a/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/output-window.png b/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/output-window.png new file mode 100644 index 0000000000000..cffc848ecb464 Binary files /dev/null and b/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/output-window.png differ diff --git a/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/pipeline-actions-link.png b/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/pipeline-actions-link.png new file mode 100644 index 0000000000000..f3656c98c7203 Binary files /dev/null and b/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/pipeline-actions-link.png differ diff --git a/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/select-data-factory.png b/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/select-data-factory.png new file mode 100644 index 0000000000000..4ccf12e3e9455 Binary files /dev/null and b/articles/data-factory/media/quickstart-create-data-factory-resource-manager-template/select-data-factory.png differ diff --git a/articles/data-factory/quickstart-create-data-factory-powershell.md b/articles/data-factory/quickstart-create-data-factory-powershell.md index aa1fef8b576d1..db182420d4c53 100644 --- a/articles/data-factory/quickstart-create-data-factory-powershell.md +++ b/articles/data-factory/quickstart-create-data-factory-powershell.md @@ -28,99 +28,9 @@ This quickstart describes how to use PowerShell to create an Azure data factory. > > This article does not provide a detailed introduction of the Data Factory service. For an introduction to the Azure Data Factory service, see [Introduction to Azure Data Factory](introduction.md). -## Prerequisites +[!INCLUDE [data-factory-quickstart-prerequisites](../../includes/data-factory-quickstart-prerequisites.md)] -### Azure subscription -If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. - -### Azure roles -To create Data Factory instances, the user account you use to log in to Azure must be a member of **contributor** or **owner** roles, or an **administrator** of the Azure subscription. In the Azure portal, click your **user name** at the top-right corner, and select **Permissions** to view the permissions you have in the subscription. If you have access to multiple subscriptions, select the appropriate subscription. 
For sample instructions on adding a user to a role, see the [Add roles](../billing/billing-add-change-azure-subscription-administrator.md) article. - -### Azure Storage Account -You use a general-purpose Azure Storage Account (specifically Blob Storage) as both **source** and **destination** data stores in this quickstart. If you don't have a general-purpose Azure storage account, see [Create a storage account](../storage/common/storage-create-storage-account.md#create-a-storage-account) on creating one. - -#### Get storage account name and account key -You use the name and key of your Azure storage account in this quickstart. The following procedure provides steps to get the name and key of your storage account. - -1. Launch a Web browser and navigate to [Azure portal](https://portal.azure.com). Log in using your Azure user name and password. -2. Click **More services >** in the left menu, and filter with **Storage** keyword, and select **Storage accounts**. - - ![Search for storage account](media/quickstart-create-data-factory-powershell/search-storage-account.png) -3. In the list of storage accounts, filter for your storage account (if needed), and then select **your storage account**. -4. In the **Storage account** page, select **Access keys** on the menu. - - ![Get storage account name and key](media/quickstart-create-data-factory-powershell/storage-account-name-key.png) -5. Copy the values for **Storage account name** and **key1** fields to the clipboard. Paste them into a notepad or any other editor and save it. You use them later in this quickstart. - -#### Create input folder and files -In this section, you create a blob container named **adftutorial** in your Azure blob storage. Then, you create a folder named **input** in the container, and then upload a sample file to the input folder. - -1. In the **Storage account** page, switch to the **Overview**, and then click **Blobs**. - - ![Select Blobs option](media/quickstart-create-data-factory-powershell/select-blobs.png) -2. In the **Blob service** page, click **+ Container** on the toolbar. - - ![Add container button](media/quickstart-create-data-factory-powershell/add-container-button.png) -3. In the **New container** dialog, enter **adftutorial** for the name, and click **OK**. - - ![Enter container name](media/quickstart-create-data-factory-powershell/new-container-dialog.png) -4. Click **adftutorial** in the list of containers. - - ![Select the container](media/quickstart-create-data-factory-powershell/seelct-adftutorial-container.png) -1. In the **Container** page, click **Upload** on the toolbar. - - ![Upload button](media/quickstart-create-data-factory-powershell/upload-toolbar-button.png) -6. In the **Upload blob** page, click **Advanced**. - - ![Click Advanced link](media/quickstart-create-data-factory-powershell/upload-blob-advanced.png) -7. Launch **Notepad** and create a file named **emp.txt** with the following content: Save it in the **c:\ADFv2QuickStartPSH** folder: Create the folder **ADFv2QuickStartPSH** if it does not already exist. - - ``` - John, Doe - Jane, Doe - ``` -8. In the Azure portal, in the **Upload blob** page, browse, and select the **emp.txt** file for the **Files** field. -9. Enter **input** as a value **Upload to folder** filed. - - ![Upload blob settings](media/quickstart-create-data-factory-powershell/upload-blob-settings.png) -10. Confirm that the folder is **input** and file is **emp.txt**, and click **Upload**. -11. 
You should see the **emp.txt** file and the status of the upload in the list. -12. Close the **Upload blob** page by clicking **X** in the corner. - - ![Close upload blob page](media/quickstart-create-data-factory-powershell/close-upload-blob.png) -1. Keep the **container** page open. You use it to verify the output at the end of this quickstart. - -### Windows PowerShell - -#### Install PowerShell -Install the latest PowerShell if you don't have it on your machine. - -1. In your web browser, navigate to [Azure SDK Downloads and SDKS](https://azure.microsoft.com/downloads/) page. -2. Click **Windows install** in the **Command-line tools** -> **PowerShell** section. -3. To install PowerShell, run the **MSI** file. - -For detailed instructions, see [How to install and configure PowerShell](/powershell/azure/install-azurerm-ps). - -#### Log in to PowerShell - -1. Launch **PowerShell** on your machine. Keep PowerShell open until the end of this quickstart. If you close and reopen, you need to run these commands again. - - ![Launch PowerShell](media/quickstart-create-data-factory-powershell/search-powershell.png) -1. Run the following command, and enter the same Azure user name and password that you use to sign in to the Azure portal: - - ```powershell - Login-AzureRmAccount - ``` -2. If you have multiple Azure subscriptions, run the following command to view all the subscriptions for this account: - - ```powershell - Get-AzureRmSubscription - ``` -3. Run the following command to select the subscription that you want to work with. Replace **SubscriptionId** with the ID of your Azure subscription: - - ```powershell - Select-AzureRmSubscription -SubscriptionId "" - ``` +[!INCLUDE [data-factory-quickstart-prerequisites-2](../../includes/data-factory-quickstart-prerequisites-2.md)] ## Create a data factory 1. Define a variable for the resource group name that you use in PowerShell commands later. Copy the following command text to PowerShell, specify a name for the [Azure resource group](../azure-resource-manager/resource-group-overview.md) in double quotes, and then run the command. For example: `"adfrg"`. @@ -439,30 +349,7 @@ In this step, you set values for the pipeline parameters: **inputPath** and **o "billedDuration": 14 ``` -## Verify the output -The pipeline automatically creates the output folder in the adftutorial blob container. Then, it copies the emp.txt file from the input folder to the output folder. - -1. In the Azure portal, on the **adftutorial** container page, click **Refresh** to see the output folder. - - ![Refresh](media/quickstart-create-data-factory-powershell/output-refresh.png) -2. Click **output** in the folder list. -2. Confirm that the **emp.txt** is copied to the output folder. - - ![Refresh](media/quickstart-create-data-factory-powershell/output-file.png) - -## Clean up resources -You can clean up the resources that you created in the Quickstart in two ways. You can delete the [Azure resource group](../azure-resource-manager/resource-group-overview.md), which includes all the resources in the resource group. If you want to keep the other resources intact, delete only the data factory you created in this tutorial. - -Deleting a resource group deletes all resources including data factories in it. 
Run the following command to delete the entire resource group: -```powershell -Remove-AzureRmResourceGroup -ResourceGroupName $resourcegroupname -``` - -If you want to delete just the data factory, not the entire resource group, run the following command: - -```powershell -Remove-AzureRmDataFactoryV2 -Name $dataFactoryName -ResourceGroupName $resourceGroupName -``` +[!INCLUDE [data-factory-quickstart-verify-output-cleanup.md](../../includes/data-factory-quickstart-verify-output-cleanup.md)] ## Next steps The pipeline in this sample copies data from one location to another location in an Azure blob storage. Go through the [tutorials](tutorial-copy-data-dot-net.md) to learn about using Data Factory in more scenarios. diff --git a/articles/data-factory/quickstart-create-data-factory-resource-manager-template.md b/articles/data-factory/quickstart-create-data-factory-resource-manager-template.md new file mode 100644 index 0000000000000..cdcf466c17034 --- /dev/null +++ b/articles/data-factory/quickstart-create-data-factory-resource-manager-template.md @@ -0,0 +1,646 @@ +--- +title: Create an Azure data factory using Resource Manager template | Microsoft Docs +description: In this tutorial, you create a sample Azure Data Factory pipeline using an Azure Resource Manager template. +services: data-factory +documentationcenter: '' +author: spelluru +manager: jhubbard +editor: monicar + +ms.service: data-factory +ms.workload: data-services +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: hero-article +ms.date: 11/28/2017 +ms.author: spelluru + +--- +# Tutorial: Create an Azure data factory using Azure Resource Manager template +> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"] +> * [Version 1 - GA](v1/data-factory-build-your-first-pipeline-using-arm.md) +> * [Version 2 - Preview](quickstart-create-data-factory-resource-manager-template.md) + +This quickstart describes how to use an Azure Resource Manager template to create an Azure data factory. The pipeline you create in this data factory **copies** data from one folder to another folder in an Azure blob storage. For a tutorial on how to **transform** data using Azure Data Factory, see [Tutorial: Transform data using Spark](transform-data-using-spark.md). + +> [!NOTE] +> This article applies to version 2 of Data Factory, which is currently in preview. If you are using version 1 of the Data Factory service, which is generally available (GA), see [build your first data factory with Data Factory version 1](v1/data-factory-build-your-first-pipeline-using-arm.md). +> +> This article does not provide a detailed introduction of the Data Factory service. For an introduction to the Azure Data Factory service, see [Introduction to Azure Data Factory](introduction.md). + +[!INCLUDE [data-factory-quickstart-prerequisites](../../includes/data-factory-quickstart-prerequisites.md)] + +[!INCLUDE [data-factory-quickstart-prerequisites-2](../../includes/data-factory-quickstart-prerequisites-2.md)] + +## Resource Manager templates +To learn about Azure Resource Manager templates in general, see [Authoring Azure Resource Manager Templates](../azure-resource-manager/resource-group-authoring-templates.md). + +The following section provides the complete Resource Manager template for defining Data Factory entities so that you can quickly run through the tutorial and test the template. 
To understand how each Data Factory entity is defined, see [Data Factory entities in the template](#data-factory-entities-in-the-template) section. + +## Data Factory JSON +Create a JSON file named **ADFTutorialARM.json** in **C:\ADFTutorial** folder with the following content: + +```json +{ + "contentVersion": "1.0.0.0", + "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "parameters": { + "dataFactoryName": { + "type": "string", + "metadata": { + "description": "Name of the data factory. Must be globally unique." + } + }, + "dataFactoryLocation": { + "type": "string", + "allowedValues": [ + "East US", + "East US 2" + ], + "defaultValue": "East US", + "metadata": { + "description": "Location of the data factory. Currently, only East US and East US 2 are supported. " + } + }, + "storageAccountName": { + "type": "string", + "metadata": { + "description": "Name of the Azure storage account that contains the input/output data." + } + }, + "storageAccountKey": { + "type": "securestring", + "metadata": { + "description": "Key for the Azure storage account." + } + }, + "blobContainer": { + "type": "string", + "metadata": { + "description": "Name of the blob container in the Azure Storage account." + } + }, + "inputBlobFolder": { + "type": "string", + "metadata": { + "description": "The folder in the blob container that has the input file." + } + }, + "inputBlobName": { + "type": "string", + "metadata": { + "description": "Name of the input file/blob." + } + }, + "outputBlobFolder": { + "type": "string", + "metadata": { + "description": "The folder in the blob container that will hold the transformed data." + } + }, + "outputBlobName": { + "type": "string", + "metadata": { + "description": "Name of the output file/blob." + } + }, + "triggerStartTime": { + "type": "string", + "metadata": { + "description": "Start time for the trigger." + } + }, + "triggerEndTime": { + "type": "string", + "metadata": { + "description": "End time for the trigger." 
+ } + } + }, + "variables": { + "azureStorageLinkedServiceName": "ArmtemplateStorageLinkedService", + "inputDatasetName": "ArmtemplateTestDatasetIn", + "outputDatasetName": "ArmtemplateTestDatasetOut", + "pipelineName": "ArmtemplateSampleCopyPipeline", + "triggerName": "ArmTemplateTestTrigger" + }, + "resources": [{ + "name": "[parameters('dataFactoryName')]", + "apiVersion": "2017-09-01-preview", + "type": "Microsoft.DataFactory/factories", + "location": "[parameters('dataFactoryLocation')]", + "properties": { + "loggingStorageAccountName": "[parameters('storageAccountName')]", + "loggingStorageAccountKey": "[parameters('storageAccountKey')]" + }, + "resources": [{ + "type": "linkedservices", + "name": "[variables('azureStorageLinkedServiceName')]", + "dependsOn": [ + "[parameters('dataFactoryName')]" + ], + "apiVersion": "2017-09-01-preview", + "properties": { + "type": "AzureStorage", + "description": "Azure Storage linked service", + "typeProperties": { + "connectionString": { + "value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageAccountName'),';AccountKey=',parameters('storageAccountKey'))]", + "type": "SecureString" + } + } + } + }, + { + "type": "datasets", + "name": "[variables('inputDatasetName')]", + "dependsOn": [ + "[parameters('dataFactoryName')]", + "[variables('azureStorageLinkedServiceName')]" + ], + "apiVersion": "2017-09-01-preview", + "properties": { + "type": "AzureBlob", + "typeProperties": { + "folderPath": "[concat(parameters('blobContainer'), '/', parameters('inputBlobFolder'), '/')]", + "fileName": "[parameters('inputBlobName')]" + }, + "linkedServiceName": { + "referenceName": "[variables('azureStorageLinkedServiceName')]", + "type": "LinkedServiceReference" + } + } + }, + { + "type": "datasets", + "name": "[variables('outputDatasetName')]", + "dependsOn": [ + "[parameters('dataFactoryName')]", + "[variables('azureStorageLinkedServiceName')]" + ], + "apiVersion": "2017-09-01-preview", + "properties": { + "type": "AzureBlob", + "typeProperties": { + "folderPath": "[concat(parameters('blobContainer'), '/', parameters('outputBlobFolder'), '/')]", + "fileName": "[parameters('outputBlobName')]" + }, + "linkedServiceName": { + "referenceName": "[variables('azureStorageLinkedServiceName')]", + "type": "LinkedServiceReference" + } + } + }, + { + "type": "pipelines", + "name": "[variables('pipelineName')]", + "dependsOn": [ + "[parameters('dataFactoryName')]", + "[variables('azureStorageLinkedServiceName')]", + "[variables('inputDatasetName')]", + "[variables('outputDatasetName')]" + ], + "apiVersion": "2017-09-01-preview", + "properties": { + "activities": [{ + "type": "Copy", + "typeProperties": { + "source": { + "type": "BlobSource" + }, + "sink": { + "type": "BlobSink" + } + }, + "name": "MyCopyActivity", + "inputs": [{ + "referenceName": "[variables('inputDatasetName')]", + "type": "DatasetReference" + }], + "outputs": [{ + "referenceName": "[variables('outputDatasetName')]", + "type": "DatasetReference" + }] + }] + } + }, + { + "type": "triggers", + "name": "[variables('triggerName')]", + "dependsOn": [ + "[parameters('dataFactoryName')]", + "[variables('azureStorageLinkedServiceName')]", + "[variables('inputDatasetName')]", + "[variables('outputDatasetName')]", + "[variables('pipelineName')]" + ], + "apiVersion": "2017-09-01-preview", + "properties": { + "type": "ScheduleTrigger", + "typeProperties": { + "recurrence": { + "frequency": "Hour", + "interval": 1, + "startTime": "[parameters('triggerStartTime')]", + "endTime": 
"[parameters('triggerEndTime')]", + "timeZone": "UTC" + } + }, + "pipelines": [{ + "pipelineReference": { + "type": "PipelineReference", + "referenceName": "ArmtemplateSampleCopyPipeline" + }, + "parameters": {} + }] + } + } + ] + }] +} +``` + +## Parameters JSON +Create a JSON file named **ADFTutorialARM-Parameters.json** that contains parameters for the Azure Resource Manager template. + +> [!IMPORTANT] +> - Specify the name and key of your Azure Storage account for the **storageAccountName** and **storageAccountKey** parameters in this parameter file. You created the adftutorial container and uploaded the sample file (emp.txt) to the input folder in this Azure blob storage. +> - Specify a globally unique name for the data factory for the **dataFactoryName** parameter. For example: ARMTutorialFactoryJohnDoe11282017. +> - For the **triggerStartTime**, specify the current day in the format: `2017-11-28T00:00:00`. +> - For the **triggerEndTime**, specify the next day in the format: `2017-11-29T00:00:00`. You can also check the current UTC time and specify the next hour or two as the end time. For example, if the UTC time now is 1:32 AM, specify `2017-11-29:03:00:00` as the end time. In this case, the trigger runs the pipeline twice (at 2 AM and 3 AM). + +```json +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "dataFactoryName": { + "value": "" + }, + "dataFactoryLocation": { + "value": "East US" + }, + "storageAccountName": { + "value": "" + }, + "storageAccountKey": { + "value": "" + }, + "blobContainer": { + "value": "adftutorial" + }, + "inputBlobFolder": { + "value": "input" + }, + "inputBlobName": { + "value": "emp.txt" + }, + "outputBlobFolder": { + "value": "output" + }, + "outputBlobName": { + "value": "emp.txt" + }, + "triggerStartTime": { + "value": "2017-11-28T00:00:00. Set to today" + }, + "triggerEndTime": { + "value": "2017-11-29T00:00:00. Set to tomorrow" + } + } +} +``` + +> [!IMPORTANT] +> You may have separate parameter JSON files for development, testing, and production environments that you can use with the same Data Factory JSON template. By using a Power Shell script, you can automate deploying Data Factory entities in these environments. + +## Deploy Data Factory entities +In PowerShell, run the following command to deploy Data Factory entities using the Resource Manager template you created earlier in this quickstart. 
+ +```PowerShell +New-AzureRmResourceGroupDeployment -Name MyARMDeployment -ResourceGroupName ADFTutorialResourceGroup -TemplateFile C:\ADFTutorial\ADFTutorialARM.json -TemplateParameterFile C:\ADFTutorial\ADFTutorialARM-Parameters.json +``` + +You see output similar to the following sample: + +``` +DeploymentName : MyARMDeployment +ResourceGroupName : ADFTutorialResourceGroup +ProvisioningState : Succeeded +Timestamp : 11/29/2017 3:11:13 AM +Mode : Incremental +TemplateLink : +Parameters : + Name Type Value + =============== ============ ========== + dataFactoryName String + dataFactoryLocation String East US + storageAccountName String + storageAccountKey SecureString + blobContainer String adftutorial + inputBlobFolder String input + inputBlobName String emp.txt + outputBlobFolder String output + outputBlobName String emp.txt + triggerStartTime String 11/29/2017 12:00:00 AM + triggerEndTime String 11/29/2017 4:00:00 AM + +Outputs : +DeploymentDebugLogLevel : +``` + +## Start the trigger + +The template deploys the following Data Factory entities: + +- Azure Storage linked service +- Azure Blob datasets (input and output) +- Pipeline with a copy activity +- Trigger to trigger the pipeline + +The deployed trigger is in stopped state. One of the ways to start the trigger is to use the **Start-AzureRmDataFactoryV2Trigger** PowerShell cmdlet. The following procedure provides detailed steps: + +1. In the PowerShell window, create a variable to hold the name of the resource group. Copy the following command into the PowerShell window, and press ENTER. If you have specified a different resource group name for the New-AzureRmResourceGroupDeployment command, update the value here. + + ```powershell + $resourceGroupName = "ADFTutorialResourceGroup" + ``` +1. Create a variable to hold the name of the data factory. Specify the same name that you specified in the ADFTutorialARM-Parameters.json file. + + ```powershell + $dataFactoryName = "" + ``` +3. Set a variable for the name of the trigger. The name of the trigger is hardcoded in the Resource Manager template file (ADFTutorialARM.json). + + ```powershell + $triggerName = "ArmTemplateTestTrigger" + ``` +4. Get the **status of the trigger** by running the following PowerShell command after specifying the name of your data factory and trigger: + + ```powershell + Get-AzureRmDataFactoryV2Trigger -ResourceGroupName $resourceGroupName -DataFactoryName $dataFactoryName -Name $triggerName + ``` + + Here is the sample output: + + ```json + TriggerName : ArmTemplateTestTrigger + ResourceGroupName : ADFTutorialResourceGroup + DataFactoryName : ARMFactory1128 + Properties : Microsoft.Azure.Management.DataFactory.Models.ScheduleTrigger + RuntimeState : Stopped + ``` + + Notice that the runtime state of the trigger is **Stopped**. +5. **Start the trigger**. The trigger runs the pipeline defined in the template at the hour. That's, if you executed this command at 2:25 PM, the trigger runs the pipeline at 3 PM for the first time. Then, it runs the pipeline hourly until the end time you specified for the trigger. + + ```powershell + Start-AzureRmDataFactoryV2Trigger -ResourceGroupName $resourceGroupName -DataFactoryName $dataFactoryName -TriggerName $triggerName + ``` + + Here is the sample output: + + ``` + Confirm + Are you sure you want to start trigger 'ArmTemplateTestTrigger' in data factory 'ARMFactory1128'? + [Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y + True + ``` +6. 
Confirm that the trigger has been started by running the Get-AzureRmDataFactoryV2Trigger command again.
+
+    ```powershell
+    Get-AzureRmDataFactoryV2Trigger -ResourceGroupName $resourceGroupName -DataFactoryName $dataFactoryName -TriggerName $triggerName
+    ```
+
+    Here is the sample output:
+
+    ```
+    TriggerName       : ArmTemplateTestTrigger
+    ResourceGroupName : ADFTutorialResourceGroup
+    DataFactoryName   : ARMFactory1128
+    Properties        : Microsoft.Azure.Management.DataFactory.Models.ScheduleTrigger
+    RuntimeState      : Started
+    ```
+
+## Monitor the pipeline
+1. After logging in to the [Azure portal](https://portal.azure.com/), click **More services**, search with a keyword such as `data fa`, and select **Data factories**.
+
+    ![Browse data factories menu](media/quickstart-create-data-factory-resource-manager-template/browse-data-factories-menu.png)
+2. In the **Data Factories** page, click the data factory you created. If needed, filter the list with the name of your data factory.
+
+    ![Select data factory](media/quickstart-create-data-factory-resource-manager-template/select-data-factory.png)
+3. In the Data factory page, click the **Monitor & Manage** tile.
+
+    ![Monitor and manage tile](media/quickstart-create-data-factory-resource-manager-template/monitor-manage-tile.png)
+4. The **Data Integration Application** should open in a separate tab in the web browser. If the monitor tab is not active, switch to the **monitor tab**. Notice that the pipeline run was triggered by a **scheduler trigger**.
+
+    ![Monitor pipeline run](media/quickstart-create-data-factory-resource-manager-template/monitor-pipeline-run.png)
+
+    > [!IMPORTANT]
+    > You see pipeline runs only on the hour (for example: 4 AM, 5 AM, 6 AM, and so on). Click **Refresh** on the toolbar to refresh the list when the time reaches the next hour.
+5. Click the link in the **Actions** column.
+
+    ![Pipeline actions link](media/quickstart-create-data-factory-resource-manager-template/pipeline-actions-link.png)
+6. You see the activity runs associated with the pipeline run. In this quickstart, the pipeline has only one activity of type: Copy. Therefore, you see a run for that activity.
+
+    ![Activity runs](media/quickstart-create-data-factory-resource-manager-template/activity-runs.png)
+7. Click the link under the **Output** column. You see the output from the copy operation in an **Output** window. Click the maximize button to see the full output. You can restore the maximized output window or close it.
+
+    ![Output window](media/quickstart-create-data-factory-resource-manager-template/output-window.png)
+8. Stop the trigger once you see a successful or failed run. The trigger runs the pipeline once an hour. The pipeline copies the same file from the input folder to the output folder for each run. To stop the trigger, run the following command in the PowerShell window.
+ + ```powershell + Stop-AzureRmDataFactoryV2Trigger -ResourceGroupName $resourceGroupName -DataFactoryName $dataFactoryName -Name $triggerName + ``` + +[!INCLUDE [data-factory-quickstart-verify-output-cleanup.md](../../includes/data-factory-quickstart-verify-output-cleanup.md)] + +## JSON definitions for entities +The following Data Factory entities are defined in the JSON template: + +- [Azure Storage linked service](#azure-storage-linked-service) +- [Azure blob input dataset](#azure-blob-input-dataset) +- [Azure Blob output dataset](#azure-blob-output-dataset) +- [Data pipeline with a copy activity](#data-pipeline) +- [Trigger](#trigger) + +#### Azure Storage linked service +The AzureStorageLinkedService links your Azure storage account to the data factory. You created a container and uploaded data to this storage account as part of prerequisites. You specify the name and key of your Azure storage account in this section. See [Azure Storage linked service](connector-azure-blob-storage.md#linked-service-properties) for details about JSON properties used to define an Azure Storage linked service. + +```json +{ + "type": "linkedservices", + "name": "[variables('azureStorageLinkedServiceName')]", + "dependsOn": [ + "[parameters('dataFactoryName')]" + ], + "apiVersion": "2017-09-01-preview", + "properties": { + "type": "AzureStorage", + "description": "Azure Storage linked service", + "typeProperties": { + "connectionString": { + "value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageAccountName'),';AccountKey=',parameters('storageAccountKey'))]", + "type": "SecureString" + } + } + } +} +``` + +The connectionString uses the storageAccountName and storageAccountKey parameters. The values for these parameters passed by using a configuration file. The definition also uses variables: azureStroageLinkedService and dataFactoryName defined in the template. + +#### Azure blob input dataset +The Azure storage linked service specifies the connection string that Data Factory service uses at run time to connect to your Azure storage account. In Azure blob dataset definition, you specify names of blob container, folder, and file that contains the input data. See [Azure Blob dataset properties](connector-azure-blob-storage.md#dataset-properties) for details about JSON properties used to define an Azure Blob dataset. + +```json +{ + "type": "datasets", + "name": "[variables('inputDatasetName')]", + "dependsOn": [ + "[parameters('dataFactoryName')]", + "[variables('azureStorageLinkedServiceName')]" + ], + "apiVersion": "2017-09-01-preview", + "properties": { + "type": "AzureBlob", + "typeProperties": { + "folderPath": "[concat(parameters('blobContainer'), '/', parameters('inputBlobFolder'), '/')]", + "fileName": "[parameters('inputBlobName')]" + }, + "linkedServiceName": { + "referenceName": "[variables('azureStorageLinkedServiceName')]", + "type": "LinkedServiceReference" + } + } +}, + +``` + +#### Azure blob output dataset +You specify the name of the folder in the Azure Blob Storage that holds the copied data from the input folder. See [Azure Blob dataset properties](connector-azure-blob-storage.md#dataset-properties) for details about JSON properties used to define an Azure Blob dataset. 
+ +```json +{ + "type": "datasets", + "name": "[variables('outputDatasetName')]", + "dependsOn": [ + "[parameters('dataFactoryName')]", + "[variables('azureStorageLinkedServiceName')]" + ], + "apiVersion": "2017-09-01-preview", + "properties": { + "type": "AzureBlob", + "typeProperties": { + "folderPath": "[concat(parameters('blobContainer'), '/', parameters('outputBlobFolder'), '/')]", + "fileName": "[parameters('outputBlobName')]" + }, + "linkedServiceName": { + "referenceName": "[variables('azureStorageLinkedServiceName')]", + "type": "LinkedServiceReference" + } + } +} +``` + +#### Data pipeline +You define a pipeline that copies data from one Azure blob dataset to another Azure blob dataset. See [Pipeline JSON](concepts-pipelines-activities.md#pipeline-json) for descriptions of JSON elements used to define a pipeline in this example. + +```json +{ + "type": "pipelines", + "name": "[variables('pipelineName')]", + "dependsOn": [ + "[parameters('dataFactoryName')]", + "[variables('azureStorageLinkedServiceName')]", + "[variables('inputDatasetName')]", + "[variables('outputDatasetName')]" + ], + "apiVersion": "2017-09-01-preview", + "properties": { + "activities": [ + { + "type": "Copy", + "typeProperties": { + "source": { + "type": "BlobSource" + }, + "sink": { + "type": "BlobSink" + } + }, + "name": "MyCopyActivity", + "inputs": [ + { + "referenceName": "[variables('inputDatasetName')]", + "type": "DatasetReference" + } + ], + "outputs": [ + { + "referenceName": "[variables('outputDatasetName')]", + "type": "DatasetReference" + } + ] + } + ] + } +} +``` + +#### Trigger +You define a trigger that runs the pipeline once an hour. The deployed trigger is in stopped state. Start the trigger by using the **Start-AzureRmDataFactoryV2Trigger** cmdlet. For more information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#triggers) article. + +```json +{ + "type": "triggers", + "name": "[variables('triggerName')]", + "dependsOn": [ + "[parameters('dataFactoryName')]", + "[variables('azureStorageLinkedServiceName')]", + "[variables('inputDatasetName')]", + "[variables('outputDatasetName')]", + "[variables('pipelineName')]" + ], + "apiVersion": "2017-09-01-preview", + "properties": { + "type": "ScheduleTrigger", + "typeProperties": { + "recurrence": { + "frequency": "Hour", + "interval": 1, + "startTime": "2017-11-28T00:00:00", + "endTime": "2017-11-29T00:00:00", + "timeZone": "UTC" + } + }, + "pipelines": [{ + "pipelineReference": { + "type": "PipelineReference", + "referenceName": "ArmtemplateSampleCopyPipeline" + }, + "parameters": {} + }] + } +} +``` + +## Reuse the template +In the tutorial, you created a template for defining Data Factory entities and a template for passing values for parameters. To use the same template to deploy Data Factory entities to different environments, you create a parameter file for each environment and use it when deploying to that environment. 
+ +Example: + +```PowerShell +New-AzureRmResourceGroupDeployment -Name MyARMDeployment -ResourceGroupName ADFTutorialResourceGroup -TemplateFile ADFTutorialARM.json -TemplateParameterFile ADFTutorialARM-Parameters-Dev.json + +New-AzureRmResourceGroupDeployment -Name MyARMDeployment -ResourceGroupName ADFTutorialResourceGroup -TemplateFile ADFTutorialARM.json -TemplateParameterFile ADFTutorialARM-Parameters-Test.json + +New-AzureRmResourceGroupDeployment -Name MyARMDeployment -ResourceGroupName ADFTutorialResourceGroup -TemplateFile ADFTutorialARM.json -TemplateParameterFile ADFTutorialARM-Parameters-Production.json +``` +Notice that the first command uses parameter file for the development environment, second one for the test environment, and the third one for the production environment. + +You can also reuse the template to perform repeated tasks. For example, create many data factories with one or more pipelines that implement the same logic but each data factory uses different Azure storage accounts. In this scenario, you use the same template in the same environment (dev, test, or production) with different parameter files to create data factories. + + +## Next steps +The pipeline in this sample copies data from one location to another location in an Azure blob storage. Go through the [tutorials](tutorial-copy-data-dot-net.md) to learn about using Data Factory in more scenarios. \ No newline at end of file diff --git a/articles/data-factory/tutorial-deploy-ssis-packages-azure.md b/articles/data-factory/tutorial-deploy-ssis-packages-azure.md index 44572bfee0310..5e02ee04d333b 100644 --- a/articles/data-factory/tutorial-deploy-ssis-packages-azure.md +++ b/articles/data-factory/tutorial-deploy-ssis-packages-azure.md @@ -51,6 +51,8 @@ Start **Windows PowerShell ISE** with administrative privileges. Copy and pate the following script: Specify values for the variables. For a list of supported **pricing tiers** for Azure SQL Database, see [SQL Database resource limits](../sql-database/sql-database-resource-limits.md). ```powershell +# Azure Data Factory version 2 information +# If your input contains a PSH special character, e.g. "$", precede it with the escape character "`" like "`$". $SubscriptionName = "" $ResourceGroupName = "" # Data factory name. Must be globally unique @@ -210,6 +212,8 @@ For a list of supported **pricing tiers** for Azure SQL Database, see [SQL Datab For a list of regions supported by Azure Data Factory V2 and Azure-SSIS Integration Runtime, see [Products available by region](https://azure.microsoft.com/regions/services/). Expand **Data + Analytics** to see **Data Factory V2** and **SSIS Integration Runtime**. ```powershell +# Azure Data Factory version 2 information +# If your input contains a PSH special character, e.g. "$", precede it with the escape character "`" like "`$". $SubscriptionName = "" $ResourceGroupName = "" # Data factory name. Must be globally unique diff --git a/articles/data-factory/tutorial-incremental-copy-powershell.md b/articles/data-factory/tutorial-incremental-copy-powershell.md index ad4cf667aadbe..e87e4a93c3e3a 100644 --- a/articles/data-factory/tutorial-incremental-copy-powershell.md +++ b/articles/data-factory/tutorial-incremental-copy-powershell.md @@ -147,40 +147,47 @@ END ``` ## Create a data factory +1. Define a variable for the resource group name that you use in PowerShell commands later. 
Copy the following command text to PowerShell, specify a name for the [Azure resource group](../azure-resource-manager/resource-group-overview.md) in double quotes, and then run the command. For example: `"adfrg"`. + + ```powershell + $resourceGroupName = "ADFTutorialResourceGroup"; + ``` -1. Launch **PowerShell**. Keep Azure PowerShell open until the end of this tutorial. If you close and reopen, you need to run the commands again. + If the resource group already exists, you may not want to overwrite it. Assign a different value to the `$resourceGroupName` variable and run the command again +2. To create the Azure resource group, run the following command: - Run the following command, and enter the user name and password that you use to sign in to the Azure portal: - ```powershell - Login-AzureRmAccount - ``` - Run the following command to view all the subscriptions for this account: + New-AzureRmResourceGroup $resourceGroupName $location + ``` + If the resource group already exists, you may not want to overwrite it. Assign a different value to the `$resourceGroupName` variable and run the command again. +3. Define a variable for the data factory name. - ```powershell - Get-AzureRmSubscription - ``` - Run the following command to select the subscription that you want to work with. Replace **SubscriptionId** with the ID of your Azure subscription: + > [!IMPORTANT] + > Update the data factory name to be globally unique. For example, ADFTutorialFactorySP1127. ```powershell - Select-AzureRmSubscription -SubscriptionId "" + $dataFactoryName = "ADFIncCopyTutorialFactory"; ``` -2. Run the **Set-AzureRmDataFactoryV2** cmdlet to create a data factory. Replace place-holders with your own values before executing the command. +1. Define a variable for the location of the data factory: ```powershell - Set-AzureRmDataFactoryV2 -ResourceGroupName "" -Location "East US" -Name "" + $location = "East US" + ``` +5. To create the data factory, run the following **Set-AzureRmDataFactoryV2** cmdlet: + + ```powershell + Set-AzureRmDataFactoryV2 -ResourceGroupName $resourceGroupName -Location "East US" -Name $dataFactoryName ``` - Note the following points: - - * The name of the Azure data factory must be globally unique. If you receive the following error, change the name and try again. +Note the following points: - ``` - The specified Data Factory name '' is already in use. Data Factory names must be globally unique. - ``` +* The name of the Azure data factory must be globally unique. If you receive the following error, change the name and try again. - * To create Data Factory instances, you must be a contributor or administrator of the Azure subscription. - * Currently, Data Factory V2 allows you to create data factories only in the East US, East US2, and West Europe regions. The data stores (Azure Storage, Azure SQL Database, etc.) and computes (HDInsight, etc.) used by data factory can be in other regions. + ``` + The specified Data Factory name 'ADFv2QuickStartDataFactory' is already in use. Data Factory names must be globally unique. + ``` +* To create Data Factory instances, the user account you use to log in to Azure must be a member of **contributor** or **owner** roles, or an **administrator** of the Azure subscription. +* Currently, Data Factory version 2 allows you to create data factories only in the East US, East US2, and West Europe regions. The data stores (Azure Storage, Azure SQL Database, etc.) and computes (HDInsight, etc.) used by data factory can be in other regions. 
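Put together, the data-factory creation steps in this section amount to a short script. The following is a minimal consolidated sketch using the example names shown above; it uses only cmdlets that appear in this article and defines `$location` before it is used.

```powershell
# Sign in and, if you have several subscriptions, pick the one to work with.
Login-AzureRmAccount
# Select-AzureRmSubscription -SubscriptionId "<your subscription ID>"

# Example names from this section - adjust them for your environment.
$resourceGroupName = "ADFTutorialResourceGroup"
$location          = "East US"                     # define the location before using it
$dataFactoryName   = "ADFIncCopyTutorialFactory"   # must be globally unique

# Create the resource group, then the version 2 data factory inside it.
New-AzureRmResourceGroup $resourceGroupName $location
Set-AzureRmDataFactoryV2 -ResourceGroupName $resourceGroupName -Location $location -Name $dataFactoryName
```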
## Create linked services @@ -508,7 +515,7 @@ In this tutorial, you create a pipeline with two lookup activities, one copy act 1. Run the pipeline: **IncrementalCopyPipeline** by using **Invoke-AzureRmDataFactoryV2Pipeline** cmdlet. Replace place-holders with your own resource group and data factory name. ```powershell - $RunId = Invoke-AzureRmDataFactoryV2Pipeline -PipelineName "IncrementalCopyPipeline" -ResourceGroup "" -dataFactoryName "" + $RunId = Invoke-AzureRmDataFactoryV2Pipeline -PipelineName "IncrementalCopyPipeline" -ResourceGroupName $resourceGroupName -dataFactoryName $dataFactoryName ``` 2. Check the status of pipeline by running the Get-AzureRmDataFactoryV2ActivityRun cmdlet until you see all the activities running successfully. Replace place-holders with your own appropriate time for parameter RunStartedAfter and RunStartedBefore. In this tutorial, we use -RunStartedAfter "2017/09/14" -RunStartedBefore "2017/09/15" @@ -628,7 +635,7 @@ In this tutorial, you create a pipeline with two lookup activities, one copy act 2. Run the pipeline: **IncrementalCopyPipeline** again using the **Invoke-AzureRmDataFactoryV2Pipeline** cmdlet. Replace place-holders with your own resource group and data factory name. ```powershell - $RunId = Invoke-AzureRmDataFactoryV2Pipeline -PipelineName "IncrementalCopyPipeline" -ResourceGroup "" -dataFactoryName "" + $RunId = Invoke-AzureRmDataFactoryV2Pipeline -PipelineName "IncrementalCopyPipeline" -ResourceGroupName $resourceGroupName -dataFactoryName $dataFactoryName ``` 3. Check the status of pipeline by running **Get-AzureRmDataFactoryV2ActivityRun** cmdlet until you see all the activities running successfully. Replace place-holders with your own appropriate time for parameter RunStartedAfter and RunStartedBefore. In this tutorial, we use -RunStartedAfter "2017/09/14" -RunStartedBefore "2017/09/15" diff --git a/articles/data-lake-analytics/TOC.md b/articles/data-lake-analytics/TOC.md index 82c42c9d4c3e9..157a768929655 100644 --- a/articles/data-lake-analytics/TOC.md +++ b/articles/data-lake-analytics/TOC.md @@ -31,6 +31,8 @@ ### [U-SQL Cognitive extensions](data-lake-analytics-u-sql-cognitive.md) ### [Analyze website logs](data-lake-analytics-analyze-weblogs.md) ### [U-SQL custom code for Visual Studio Code](data-lake-analytics-u-sql-develop-with-python-r-csharp-in-vscode.md) +### [U-SQL for Visual Studio Code](data-lake-analytics-data-lake-tools-for-vscode.md) +### [Export U-SQL database](data-lake-analytics-data-lake-tools-export-database.md) ## Debug U-SQL programs ### [Monitor and troubleshoot jobs](data-lake-analytics-monitor-and-troubleshoot-jobs-tutorial.md) diff --git a/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-export-database.md b/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-export-database.md new file mode 100644 index 0000000000000..f059321bbc2bc --- /dev/null +++ b/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-export-database.md @@ -0,0 +1,83 @@ +--- +title: How to export U-SQL databases using Azure Data Lake Tools for Visual Studio | Microsoft Docs +description: 'Learn how to use Azure Data Lake Tools for Visual Studio to export U-SQL database and import it to local account at the same time.' 
+services: data-lake-analytics
+documentationcenter: ''
+author: yanancai
+manager: 
+editor: 
+
+ms.assetid: dc9b21d8-c5f4-4f77-bcbc-eff458f48de2
+ms.service: data-lake-analytics
+ms.devlang: na
+ms.topic: article
+ms.tgt_pltfrm: na
+ms.workload: big-data
+ms.date: 11/27/2017
+ms.author: yanacai
+
+---
+
+# How to export U-SQL database
+
+In this document, you learn how to use [Azure Data Lake Tools for Visual Studio](http://aka.ms/adltoolsvs) to export a U-SQL database as a single U-SQL script and downloaded resources. Importing the exported database to a local account is also supported in the same process.
+
+Customers usually maintain multiple environments for development, test, and production. These environments are hosted both in local accounts on developers' machines and in Azure Data Lake Analytics accounts in Azure. When developing and tuning U-SQL queries in development and test environments, developers often need to re-create everything that exists in the production database. The **Database Export Wizard** helps accelerate this process. By using the wizard, developers can clone the existing database environment and sample data to other Azure Data Lake Analytics accounts.
+
+## Export steps
+
+### Step 1: Right-click the database in Server Explorer and click "Export..."
+
+All the Azure Data Lake Analytics accounts that you have permission to are listed in Server Explorer. Expand the account that contains the database you want to export, and right-click the database to choose **Export...**. If you don't see the context menu, [update the tool to the latest release](http://aka.ms/adltoolsvs).
+
+![Data Lake Analytics Tools Export Database](./media/data-lake-analytics-data-lake-tools-export-database/export-database.png)
+
+### Step 2: Configure the objects you want to export
+
+If a database is large but you need only a small part of it, you can configure the subset of objects that you want to export in the export wizard. Note that the export action is completed by running a U-SQL job, so exporting from an Azure account incurs some cost.
+
+![Data Lake Analytics Tools Export Database Wizard](./media/data-lake-analytics-data-lake-tools-export-database/export-database-wizard.png)
+
+### Step 3: Check the objects list and other configurations
+
+In this step, you can double-check the selected objects at the top of the dialog. If there are any errors, click **Previous** to go back and configure the objects that you want to export again.
+
+You can also configure the export target. The descriptions of these settings are listed in the following table:
+
+|Configuration|Description|
+|-------------|-----------|
+|Destination Name|This name indicates where you want to save the exported database resources, such as assemblies, additional files, and sample data. A folder with this name is created under your local data root folder.|
+|Project Directory|This path defines where you want to save the exported U-SQL script, which includes all database object definitions.|
+|Schema Only|Selecting this option results in only database definitions and resources (like assemblies and additional files) being exported.|
+|Schema and Data|Selecting this option results in database definitions, resources, and data being exported.
The top N rows of tables are exported.|
+|Import to Local Database Automatically|If you select this option, the exported database is imported to your local database automatically after the export is completed.|
+
+![Data Lake Analytics Tools Export Database Wizard Configuration](./media/data-lake-analytics-data-lake-tools-export-database/export-database-wizard-configuration.png)
+
+### Step 4: Check the export results
+
+After you configure these settings and the export job finishes, you can find the export results in the log window of the wizard. Through the log entry marked by the red rectangle in the screenshot below, you can find the location of the exported U-SQL script and database resources, including assemblies, additional files, and sample data.
+
+![Data Lake Analytics Tools Export Database Wizard Completed](./media/data-lake-analytics-data-lake-tools-export-database/export-database-wizard-completed.png)
+
+## How to import the exported database to a local account
+
+The most convenient way to import is to select **Import to Local Database Automatically** during the export process in Step 3. If you did not select that option, you can find the exported U-SQL script through the export log and run the script locally to import the database to your local account.
+
+## How to import the exported database to an Azure Data Lake Analytics account
+
+To import the database to another Azure Data Lake Analytics account, follow these two steps:
+
+1. Upload the exported resources, including assemblies, additional files, and sample data, to the default Azure Data Lake Store account of the Azure Data Lake Analytics account that you want to import to. You can find the exported resource folder under the local data root folder. Upload the entire folder to the root of the default store account.
+2. After the upload completes, submit the exported U-SQL script to the Azure Data Lake Analytics account that you want to import the database to.
+
+## Known limitation
+
+Currently, if you select **Schema and Data** in the wizard, the tool runs a U-SQL job to export the data stored in tables. This is why the data export process can be slow and can incur cost.
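
If you prefer to script the two import steps described earlier in this article instead of clicking through them, a rough sketch with the AzureRM Data Lake cmdlets might look like the following. This is only an illustration under assumed names: the store account, analytics account, folder, and script path are hypothetical placeholders, and the wizard's **Import to Local Database Automatically** option remains the simplest route for local accounts.

```powershell
# Hedged sketch only: the account names, paths, and job name below are hypothetical
# placeholders, not values from this article.
$adlsAccount  = "targetadlsstore"                 # default Data Lake Store account of the target ADLA account
$adlaAccount  = "targetadlaaccount"               # target Data Lake Analytics account
$exportFolder = "C:\LocalRunDataRoot\ExportedDb"  # exported resource folder under your local data root folder
$exportScript = "C:\Projects\ExportedDb\ExportedDb.usql"   # exported U-SQL script

# Step 1: upload the exported resource folder to the root of the default store account,
# preserving its relative folder structure.
Get-ChildItem $exportFolder -Recurse -File | ForEach-Object {
    $relativePath = $_.FullName.Substring($exportFolder.Length) -replace '\\', '/'
    $destination  = "/" + (Split-Path $exportFolder -Leaf) + $relativePath
    Import-AzureRmDataLakeStoreItem -AccountName $adlsAccount -Path $_.FullName -Destination $destination
}

# Step 2: submit the exported U-SQL script to re-create the database objects in the target account.
Submit-AzureRmDataLakeAnalyticsJob -Account $adlaAccount -Name "Import exported U-SQL database" -ScriptPath $exportScript
```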
+
+## Next steps
+
+* [Understand U-SQL database](https://msdn.microsoft.com/library/azure/mt621299.aspx)
+* [How to test and debug U-SQL jobs by using local run and the Azure Data Lake U-SQL SDK](data-lake-analytics-data-lake-tools-local-run.md)
+
+
diff --git a/articles/data-lake-analytics/media/data-lake-analytics-data-lake-tools-export-database/export-database-wizard-completed.png b/articles/data-lake-analytics/media/data-lake-analytics-data-lake-tools-export-database/export-database-wizard-completed.png
new file mode 100644
index 0000000000000..edd8e962d555f
Binary files /dev/null and b/articles/data-lake-analytics/media/data-lake-analytics-data-lake-tools-export-database/export-database-wizard-completed.png differ
diff --git a/articles/data-lake-analytics/media/data-lake-analytics-data-lake-tools-export-database/export-database-wizard-configuration.png b/articles/data-lake-analytics/media/data-lake-analytics-data-lake-tools-export-database/export-database-wizard-configuration.png
new file mode 100644
index 0000000000000..422bb4694ab9e
Binary files /dev/null and b/articles/data-lake-analytics/media/data-lake-analytics-data-lake-tools-export-database/export-database-wizard-configuration.png differ
diff --git a/articles/data-lake-analytics/media/data-lake-analytics-data-lake-tools-export-database/export-database-wizard.png b/articles/data-lake-analytics/media/data-lake-analytics-data-lake-tools-export-database/export-database-wizard.png
new file mode 100644
index 0000000000000..df1bdde33e198
Binary files /dev/null and b/articles/data-lake-analytics/media/data-lake-analytics-data-lake-tools-export-database/export-database-wizard.png differ
diff --git a/articles/data-lake-analytics/media/data-lake-analytics-data-lake-tools-export-database/export-database.png b/articles/data-lake-analytics/media/data-lake-analytics-data-lake-tools-export-database/export-database.png
new file mode 100644
index 0000000000000..dc3591b093c37
Binary files /dev/null and b/articles/data-lake-analytics/media/data-lake-analytics-data-lake-tools-export-database/export-database.png differ
diff --git a/articles/devtest-lab/TOC.md b/articles/devtest-lab/TOC.md
index b58298b01224a..127e9229d0800 100644
--- a/articles/devtest-lab/TOC.md
+++ b/articles/devtest-lab/TOC.md
@@ -31,6 +31,7 @@
 ### [Configure marketplace images](devtest-lab-configure-marketplace-images.md)
 ### [Enable a licensed image](devtest-lab-enable-licensed-images.md)
 ### [Add tags to a lab](devtest-lab-add-tag.md)
+### [Post an announcement in a lab](devtest-lab-announcements.md)
 
 ## [Select custom image or formula](devtest-lab-comparing-vm-base-image-types.md)
 
diff --git a/articles/devtest-lab/devtest-lab-announcements.md b/articles/devtest-lab/devtest-lab-announcements.md
new file mode 100644
index 0000000000000..9af3c469e3139
--- /dev/null
+++ b/articles/devtest-lab/devtest-lab-announcements.md
@@ -0,0 +1,76 @@
+---
+title: Post an announcement to a lab in Azure DevTest Labs | Microsoft Docs
+description: Learn how to add an announcement to a lab in Azure DevTest Labs
+services: devtest-lab,virtual-machines
+documentationcenter: na
+author: tomarcher
+manager: douge
+editor: ''
+
+ms.assetid: 67a09946-4584-425e-a94c-abe57c9cbb82
+ms.service: devtest-lab
+ms.workload: na
+ms.tgt_pltfrm: na
+ms.devlang: na
+ms.topic: article
+ms.date: 11/30/2017
+ms.author: tarcher
+
+---
+# Post an announcement to a lab in Azure DevTest Labs
+
+As a lab administrator, you can post a custom announcement in an existing lab to notify users about recent changes or additions to
the lab. For example, you might want to inform users about: + +- New VM sizes that are available +- Images that are currently unusable +- Updates to lab policies + +Once posted, the announcement is displayed on the lab's Overview page and the user can select it for more details. + +The announcement feature is meant to be used for temporary notifications. You can easily disable an announcement after it is no longer needed. + +## Steps to post an announcement in an existing lab + +1. Sign in to the [Azure portal](http://go.microsoft.com/fwlink/p/?LinkID=525040). +1. If necessary, select **All Services**, and then select **DevTest Labs** from the list. (Your lab might already be shown on the Dashboard under **All Resources**). +1. From the list of labs, select the lab in which you want to post an announcement. +1. On the lab's **Overview** area, select **Configuration and policies**. + + ![Configuration and policies button](./media/devtest-lab-announcements/devtestlab-config-and-policies.png) + +1. On the left under **SETTINGS**, select **Lab announcement**. + + ![Lab announcement button](./media/devtest-lab-announcements/devtestlab-announcements.png) + +1. To create a message for the users in this lab, set **Enabled** to **Yes**. + +1. Enter an **Announcement title** and the **Announcement text**. + + The title can be up to 100 characters and is shown to the user on the lab's Overview page. If the user selects the title, the announcement text is displayed. + + The announcement text accepts markdown. As you enter the announcement text, you can view the message in the Preview area at the bottom of the screen. + + ![Lab announcement screen to create the message.](./media/devtest-lab-announcements/devtestlab-post-announcement.png) + + +1. Select **Save** once your announcement is ready to post. + +When you no longer want to show this announcement to lab users, return to the **Lab announcement** page and set **Enabled** to **No**. + +## Steps for users to view an announcement + +1. From the [Azure portal](http://go.microsoft.com/fwlink/p/?LinkID=525040), select a lab. + +1. If the lab has an announcement posted for it, an information notice is shown at the top of the lab's Overview page. This information notice is the announcement title that was specified when the announcement was created. + + ![Lab announcement on Overview page](./media/devtest-lab-announcements/devtestlab-user-announcement.png) + +1. The user can select the message to view the entire announcement. + + ![More information for the lab announcement](./media/devtest-lab-announcements/devtestlab-user-announcement-text.png) + +[!INCLUDE [devtest-lab-try-it-out](../../includes/devtest-lab-try-it-out.md)] + +## Next steps +* If you change or set a lab policy, you might want to post an announcement to inform users. [Set policies and schedules](devtest-lab-set-lab-policy.md) provides information about applying restrictions and conventions across your subscription by using customized policies. +* Explore the [DevTest Labs Azure Resource Manager QuickStart template gallery](https://github.com/Azure/azure-devtestlab/tree/master/Samples). 
diff --git a/articles/devtest-lab/devtest-lab-create-template.md b/articles/devtest-lab/devtest-lab-create-template.md index 9e74ce750dc28..63c835fb32968 100644 --- a/articles/devtest-lab/devtest-lab-create-template.md +++ b/articles/devtest-lab/devtest-lab-create-template.md @@ -73,4 +73,4 @@ The following steps walk you through creating a custom image from a VHD file usi ##Next steps -- [Add a VM to your lab](./devtest-lab-add-vm-with-artifacts.md) +- [Add a VM to your lab](./devtest-lab-add-vm.md) diff --git a/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-announcements.png b/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-announcements.png new file mode 100644 index 0000000000000..18361cfa021cc Binary files /dev/null and b/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-announcements.png differ diff --git a/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-config-and-policies.png b/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-config-and-policies.png new file mode 100644 index 0000000000000..e70a8be863344 Binary files /dev/null and b/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-config-and-policies.png differ diff --git a/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-post-announcement.png b/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-post-announcement.png new file mode 100644 index 0000000000000..ff5ee0eda0a7f Binary files /dev/null and b/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-post-announcement.png differ diff --git a/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-user-announcement-text.png b/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-user-announcement-text.png new file mode 100644 index 0000000000000..f5fcbf7686799 Binary files /dev/null and b/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-user-announcement-text.png differ diff --git a/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-user-announcement.png b/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-user-announcement.png new file mode 100644 index 0000000000000..3223f4d08fbfd Binary files /dev/null and b/articles/devtest-lab/media/devtest-lab-announcements/devtestlab-user-announcement.png differ diff --git a/articles/dms/quickstart-create-data-migration-service-portal.md b/articles/dms/quickstart-create-data-migration-service-portal.md index 10e82dea16493..f315b4282667f 100644 --- a/articles/dms/quickstart-create-data-migration-service-portal.md +++ b/articles/dms/quickstart-create-data-migration-service-portal.md @@ -10,11 +10,11 @@ ms.service: database-migration ms.workload: data-services ms.custom: mvc ms.topic: quickstart -ms.date: 11/17/2017 +ms.date: 11/28/2017 --- # Create an instance of the Azure Database Migration Service by using the Azure portal -In this quick start, you use the Azure portal to create an instance of the Azure Database Migration Service. After you create the service, you will be able to use it to migrate data from SQL Server on-premises to an Azure SQL database. +In this Quickstart, you use the Azure portal to create an instance of the Azure Database Migration Service. After you create the service, you can use it to migrate data from SQL Server on-premises to an Azure SQL database. If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. 
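
The next section walks through registering the Microsoft.DataMigration resource provider in the portal. If you prefer to script that one-time step, a hedged PowerShell equivalent (assuming the AzureRM module is installed and you are already signed in) looks roughly like this:

```powershell
# One-time registration of the resource provider used by the Azure Database Migration Service.
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.DataMigration

# Confirm the provider reports Registered before you create the service instance.
Get-AzureRmResourceProvider -ProviderNamespace Microsoft.DataMigration |
    Select-Object ProviderNamespace, RegistrationState
```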
@@ -22,38 +22,39 @@ If you don't have an Azure subscription, create a [free](https://azure.microsoft Open your web browser, and navigate to the [Microsoft Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard. ## Register the resource provider -You need to register the Microsoft.DataMigration resource provider before you create your first Database Migration Service. +Register the Microsoft.DataMigration resource provider before you create your first instance of the Database Migration Service. 1. In the Azure portal, select **All services**, and then select **Subscriptions**. -1. Select the subscription in which you want to create the instance of the Azure Database Migration Service, and then select **Resource providers**. +2. Select the subscription in which you want to create the instance of the Azure Database Migration Service, and then select **Resource providers**. -1. Search for migration, and then to the right of Microsoft.DataMigration, select **Register**. +3. Search for migration, and then to the right of Microsoft.DataMigration, select **Register**. ![Register resource provider](media/quickstart-create-data-migration-service-portal/dms-register-provider.png) -## Create Azure Database Migration Service -1. Click **+** to create a new service. Database Migration Service is still in preview. +## Create an instance of the service +1. Click **+ Create a resource** to create an instance of the Azure Database Migration Service, which is currently in preview. -1. Search the marketplace for "migration", select "Database Migration Service (preview)," then click **create**. +2. Search the marketplace for "migration", select **Azure Database Migration Service**, and then on the **Azure Database Migration Service (preview)** screen, click **Create**. - ![Create migration service](media/quickstart-create-data-migration-service-portal/dms-create-service.png) +3. On the **Database Migration Service** screen: - - Choose a **Service name** that is memorable and unique to identify your Azure Database Migration Service Instance. - - Select your Azure **Subscription** in which you want to create the Database Migration Service. + - Choose a **Service name** that is memorable and unique to identify your instance of the Azure Database Migration Service. + - Select the Azure **Subscription** in which you want to create the instance. - Create a new **Network** with a unique name. - Choose the **Location** that is closest to your source or target server. - Select Basic: 1 vCore for the **Pricing tier**. -1. Click **Create**. + ![Create migration service](media/quickstart-create-data-migration-service-portal/dms-create-service.png) +4. Select **Create**. -After a few moments, your Azure Database Migration service will be created and ready to use. You'll see the Database Migration Service as shown in the image. +After a few moments, your instance of the Azure Database Migration service is created and ready to use. The Database Migration Service displays as shown in the following image: ![Migration service created](media/quickstart-create-data-migration-service-portal/dms-service-created.png) ## Clean up resources -You can clean up the resources that you created in the quickstart by deleting the [Azure resource group](../azure-resource-manager/resource-group-overview.md). 
To delete the resource group, navigate to the Database Migration Service you created, click on the **Resource group** name and then select **Delete resource group**. This action deletes all of the assets in the resource group as well as the group itself. +You can clean up the resources created in this Quickstart by deleting the [Azure resource group](../azure-resource-manager/resource-group-overview.md). To delete the resource group, navigate to the instance of the Azure Database Migration Service that you created. Select the **Resource group** name, and then select **Delete resource group**. This action deletes all assets in the resource group as well as the group itself. ## Next steps > [!div class="nextstepaction"] -> [Migrate SQL Server on-premises to Azure SQL DB](tutorial-sql-server-to-azure-sql.md) \ No newline at end of file +> [Migrate SQL Server on-premises to Azure SQL Database](tutorial-sql-server-to-azure-sql.md) \ No newline at end of file diff --git a/articles/dns/dns-for-azure-services.md b/articles/dns/dns-for-azure-services.md index 96455b235789f..dc82fb24b4ef4 100644 --- a/articles/dns/dns-for-azure-services.md +++ b/articles/dns/dns-for-azure-services.md @@ -3,8 +3,8 @@ title: Using Azure DNS with other Azure services | Microsoft Docs description: Understanding how to use Azure DNS to resolve name for other Azure services services: dns documentationcenter: na -author: georgewallace -manager: timlt +author: KumudD +manager: jeconnoc editor: '' tags: azure dns @@ -16,7 +16,7 @@ ms.tgt_pltfrm: na ms.custom: H1Hack27Feb2017 ms.workload: infrastructure-services ms.date: 09/21/2016 -ms.author: gwallace +ms.author: kumud --- # How Azure DNS works with other Azure services @@ -32,7 +32,7 @@ The following table outlines the supported record types that can be used for var | Application Gateway |[Front-end Public IP](dns-custom-domain.md#public-ip-address) |You can create a DNS A or CNAME record. | | Load Balancer |[Front-end Public IP](dns-custom-domain.md#public-ip-address) |You can create a DNS A or CNAME record. Load Balancer can have an IPv6 Public IP address that is dynamically assigned. Therefore, you must create a CNAME record for an IPv6 address. | | Traffic Manager |Public name |You can only create a CNAME that maps to the trafficmanager.net name assigned to your Traffic Manager profile. For more information, see [How Traffic Manager works](../traffic-manager/traffic-manager-overview.md#traffic-manager-example). | -| Cloud Service |[Public IP](dns-custom-domain.md#public-ip-address) |For statically allocated IP addresses, you can create a DNS A record. For dynamically allocated IP addresses, you must create a CNAME record that maps to the *cloudapp.net* name. This rule applies to VMs created in the classic portal because they are deployed as a cloud service. For more information, see [Configure a custom domain name in Cloud Services](../cloud-services/cloud-services-custom-domain-name-portal.md). | +| Cloud Service |[Public IP](dns-custom-domain.md#public-ip-address) |For statically allocated IP addresses, you can create a DNS A record. For dynamically allocated IP addresses, you must create a CNAME record that maps to the *cloudapp.net* name.| | App Service | [External IP](dns-custom-domain.md#app-service-web-apps) |For external IP addresses, you can create a DNS A record. Otherwise, you must create a CNAME record that maps to the azurewebsites.net name. 
For more information, see [Map a custom domain name to an Azure app](../app-service/app-service-web-tutorial-custom-domain.md) | | Resource Manager VMs |[Public IP](dns-custom-domain.md#public-ip-address) |Resource Manager VMs can have Public IP addresses. A VM with a Public IP address may also be behind a load balancer. You can create a DNS A or CNAME record for the Public address. This custom name can be used to bypass the VIP on the load balancer. | | Classic VMs |[Public IP](dns-custom-domain.md#public-ip-address) |Classic VMs created using PowerShell or CLI can be configured with a dynamic or static (reserved) virtual address. You can create a DNS CNAME or A record, respectively. | diff --git a/articles/dns/dns-reverse-dns-for-azure-services.md b/articles/dns/dns-reverse-dns-for-azure-services.md index a1244b1e4364a..48a59735cc092 100644 --- a/articles/dns/dns-reverse-dns-for-azure-services.md +++ b/articles/dns/dns-reverse-dns-for-azure-services.md @@ -3,7 +3,7 @@ title: Reverse DNS for Azure services | Microsoft Docs description: Learn how to configure reverse DNS lookups for services hosted in Azure services: dns documentationcenter: na -author: jtuliani +author: KumudD manager: timlt ms.service: dns @@ -12,7 +12,7 @@ ms.topic: article ms.tgt_pltfrm: na ms.workload: infrastructure-services ms.date: 05/29/2017 -ms.author: jonatul +ms.author: kumud --- # Configure reverse DNS for services hosted in Azure @@ -25,9 +25,8 @@ This scenario should not be confused with the ability to [host the reverse DNS l Before reading this article, you should be familiar with this [Overview of reverse DNS and support in Azure](dns-reverse-dns-overview.md). -Azure has two different deployment models for creating and working with resources: [Resource Manager and Classic](../azure-resource-manager/resource-manager-deployment-model.md). -* In the Resource Manager deployment model, compute resources (such as virtual machines, virtual machine scale sets, or Service Fabric clusters) are exposed via a PublicIpAddress resource. Reverse DNS lookups are configured using the 'ReverseFqdn' property of the PublicIpAddress. -* In the Classic deployment model, compute resources are exposed using Cloud Services. Reverse DNS lookups are configured using the 'ReverseDnsFqdn' property of the Cloud Service. +In Azure DNS, compute resources (such as virtual machines, virtual machine scale sets, or Service Fabric clusters) are exposed via a PublicIpAddress resource. Reverse DNS lookups are configured using the 'ReverseFqdn' property of the PublicIpAddress. + Reverse DNS is not currently supported for the Azure App Service. diff --git a/articles/event-hubs/event-hubs-programming-guide.md b/articles/event-hubs/event-hubs-programming-guide.md index 4a4e102276e91..7307514fbe457 100644 --- a/articles/event-hubs/event-hubs-programming-guide.md +++ b/articles/event-hubs/event-hubs-programming-guide.md @@ -114,7 +114,7 @@ Sending events in batches can help increase throughput. The [SendBatch](/dotnet/ public void SendBatch(IEnumerable eventDataList); ``` -Note that a single batch must not exceed the 256 KB limit of an event. Additionally, each message in the batch uses the same publisher identity. It is the responsibility of the sender to ensure that the batch does not exceed the maximum event size. If it does, a client **Send** error is generated. You can use the helper class [EventHubClient.CreateBatch](/dotnet/api/microsoft.servicebus.messaging.eventhubclient.createbatch) to ensure that the batch does not exceed 256 KB. 
You get an empty [EventDataBatch](/dotnet/api/microsoft.servicebus.messaging.eventdatabatch) from the [CreateBatch](/dotnet/api/microsoft.servicebus.messaging.eventhubclient.createbatch) API and then use [TryAdd](/dotnet/api/microsoft.servicebus.messaging.eventdatabatch.tryadd#Microsoft_ServiceBus_Messaging_EventDataBatch_TryAdd_Microsoft_ServiceBus_Messaging_EventData_) to add events to construct the batch. Finally, use [EventDataBatch.ToEnumerable](/dotnet/api/microsoft.servicebus.messaging.eventdatabatch.toenumerable) to get the underlying events to pass to the [EventHubClient.Send](/dotnet/api/microsoft.servicebus.messaging.eventhubclient.send) API. +Note that a single batch must not exceed the 256 KB limit of an event. Additionally, each message in the batch uses the same publisher identity. It is the responsibility of the sender to ensure that the batch does not exceed the maximum event size. If it does, a client **Send** error is generated. You can use the helper method [EventHubClient.CreateBatch](/dotnet/api/microsoft.servicebus.messaging.eventhubclient.createbatch) to ensure that the batch does not exceed 256 KB. You get an empty [EventDataBatch](/dotnet/api/microsoft.servicebus.messaging.eventdatabatch) from the [CreateBatch](/dotnet/api/microsoft.servicebus.messaging.eventhubclient.createbatch) API and then use [TryAdd](/dotnet/api/microsoft.servicebus.messaging.eventdatabatch.tryadd#Microsoft_ServiceBus_Messaging_EventDataBatch_TryAdd_Microsoft_ServiceBus_Messaging_EventData_) to add events to construct the batch. Finally, use [EventDataBatch.ToEnumerable](/dotnet/api/microsoft.servicebus.messaging.eventdatabatch.toenumerable) to get the underlying events to pass to the [EventHubClient.Send](/dotnet/api/microsoft.servicebus.messaging.eventhubclient.send) API. ## Send asynchronously and send at scale You can also send events to an event hub asynchronously. Sending asynchronously can increase the rate at which a client is able to send events. Both the [Send](/dotnet/api/microsoft.servicebus.messaging.eventhubclient.send) and [SendBatch](/dotnet/api/microsoft.servicebus.messaging.eventhubclient.sendbatch) methods are available in asynchronous versions that return a [Task](https://msdn.microsoft.com/library/system.threading.tasks.task.aspx) object. While this technique can increase throughput, it can also cause the client to continue to send events even while it is being throttled by the Event Hubs service and can result in the client experiencing failures or lost messages if not properly implemented. In addition, you can use the [RetryPolicy](/dotnet/api/microsoft.servicebus.messaging.cliententity.retrypolicy) property on the client to control client retry options. 
diff --git a/articles/expressroute/TOC.md b/articles/expressroute/TOC.md index 64d4d516bf9fb..79f4574399e95 100644 --- a/articles/expressroute/TOC.md +++ b/articles/expressroute/TOC.md @@ -58,6 +58,7 @@ ## Troubleshoot ### [Verifying ExpressRoute connectivity](expressroute-troubleshooting-expressroute-overview.md) +### [Reset a failed circuit](reset-circuit.md) ### [Getting ARP tables](expressroute-troubleshooting-arp-resource-manager.md) ### [Getting ARP tables (Classic)](expressroute-troubleshooting-arp-classic.md) diff --git a/articles/expressroute/how-to-routefilter-powershell.md b/articles/expressroute/how-to-routefilter-powershell.md index d6f22532fb9ad..b209939d3f7ba 100644 --- a/articles/expressroute/how-to-routefilter-powershell.md +++ b/articles/expressroute/how-to-routefilter-powershell.md @@ -67,7 +67,7 @@ To be able to successfully connect to services through Microsoft peering, you mu Before you begin configuration, make sure you meet the following criteria: - - Install the latest version of the Azure Resource Manager PowerShell cmdlets. For more information, see [Install and configure Azure PowerShelll](/powershell/azure/install-azurerm-ps). + - Install the latest version of the Azure Resource Manager PowerShell cmdlets. For more information, see [Install and configure Azure PowerShell](/powershell/azure/install-azurerm-ps). > [!NOTE] > Download the latest version from the PowerShell Gallery, rather than using the Installer. The Installer currently does not support the required cmdlets. @@ -200,4 +200,4 @@ Remove-AzureRmRouteFilter -Name "MyRouteFilter" -ResourceGroupName "MyResourceGr ## Next Steps -For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md). \ No newline at end of file +For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md). diff --git a/articles/expressroute/reset-circuit.md b/articles/expressroute/reset-circuit.md new file mode 100644 index 0000000000000..fbf44a5f774dd --- /dev/null +++ b/articles/expressroute/reset-circuit.md @@ -0,0 +1,56 @@ +--- +title: 'Reset a failed Azure ExpressRoute circuit: PowerShell | Microsoft Docs' +description: This article helps you reset an ExpressRoute circuit that is in a failed state. +documentationcenter: na +services: expressroute +author: anzaman +manager: +editor: '' +tags: azure-resource-manager + +ms.assetid: +ms.service: expressroute +ms.devlang: na +ms.topic: article +ms.tgt_pltfrm: na +ms.workload: infrastructure-services +ms.date: 11/28/2017 +ms.author: anzaman;cherylmc + +--- +# Reset a failed ExpressRoute circuit + +When an operation on an ExpressRoute circuit does not complete successfully, the circuit may go into a 'failed' state. This article helps you reset a failed Azure ExpressRoute circuit. + +## Reset a circuit + +1. Install the latest version of the Azure Resource Manager PowerShell cmdlets. For more information, see [Install and configure Azure PowerShell](/powershell/azure/install-azurerm-ps). + +2. Open your PowerShell console with elevated privileges, and connect to your account. Use the following example to help you connect: + + ```powershell + Login-AzureRmAccount + ``` +3. If you have multiple Azure subscriptions, check the subscriptions for the account. + + ```powershell + Get-AzureRmSubscription + ``` +4. Specify the subscription that you want to use. + + ```powershell + Select-AzureRmSubscription -SubscriptionName "Replace_with_your_subscription_name" + ``` +5. 
Run the following commands to reset a circuit that is in a failed state: + + ```powershell + $ckt = Get-AzureRmExpressRouteCircuit -Name "ExpressRouteARMCircuit" -ResourceGroupName "ExpressRouteResourceGroup" + + Set-AzureRmExpressRouteCircuit -ExpressRouteCircuit $ckt + ``` + +The circuit should now be healthy. Open a support ticket with [Microsoft support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) if the circuit is still in a failed state. + +## Next steps + +Open a support ticket with [Microsoft support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) if you are still experiencing issues. diff --git a/articles/guides/developer/azure-developer-guide.md b/articles/guides/developer/azure-developer-guide.md index eeafaab4d348d..8dd664f4d88fd 100644 --- a/articles/guides/developer/azure-developer-guide.md +++ b/articles/guides/developer/azure-developer-guide.md @@ -303,7 +303,7 @@ given task. - **Service principal objects**: In addition to providing access to user principals and groups, you can grant the same access to a service principal. - > **When to use**: When you’re programmatically managing Azure resources or granting access for applications. For more information, see [Create Active Directory supplication and service principal](../../resource-group-create-service-principal-portal.md). + > **When to use**: When you’re programmatically managing Azure resources or granting access for applications. For more information, see [Create Active Directory application and service principal](../../resource-group-create-service-principal-portal.md). #### Tags diff --git a/articles/hdinsight/hdinsight-use-oozie-coordinator-time.md b/articles/hdinsight/hdinsight-use-oozie-coordinator-time.md index 2891d8f1bcb6e..9b1a0d68e51d0 100644 --- a/articles/hdinsight/hdinsight-use-oozie-coordinator-time.md +++ b/articles/hdinsight/hdinsight-use-oozie-coordinator-time.md @@ -78,6 +78,7 @@ Before you begin this tutorial, you must have the following: Azure storage account name$storageAccountNameAn Azure Storage account available to the HDInsight cluster. For this tutorial, use the default storage account that you specified during the cluster provision process. Azure Blob container name$containerNameFor this example, use the Azure Blob storage container that is used for the default HDInsight cluster file system. By default, it has the same name as the HDInsight cluster. + * **An Azure SQL database**. You must configure a firewall rule for the SQL Database server to allow access from your workstation. For instructions about creating an Azure SQL database and configuring the firewall, see [Get started using Azure SQL database][sqldatabase-get-started]. This article provides a Windows PowerShell script for creating the Azure SQL database table that you need for this tutorial. diff --git a/articles/hdinsight/spark/apache-spark-eclipse-tool-plugin.md b/articles/hdinsight/spark/apache-spark-eclipse-tool-plugin.md index 5a421bdc135dc..83520d00333a1 100644 --- a/articles/hdinsight/spark/apache-spark-eclipse-tool-plugin.md +++ b/articles/hdinsight/spark/apache-spark-eclipse-tool-plugin.md @@ -15,7 +15,7 @@ ms.workload: big-data ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 10/20/2017 +ms.date: 11/30/2017 ms.author: nitinme --- @@ -83,10 +83,7 @@ When you open Eclipse, HDInsight Tool automatically detects whether you installe * In the **Spark Library** area, you can choose **Use Maven to configure Spark SDK** option. 
Our tool integrates the proper version for Spark SDK and Scala SDK. You can also choose **Add Spark SDK manually** option, download and add Spark SDK by manually. ![New HDInsight Scala Project dialog box](./media/apache-spark-eclipse-tool-plugin/create-hdi-scala-app-3.png) -5. Due to known issue, you need confirm the scala version again after clicking **Next**. Make sure the scala version is close to the selection for the step 4. - - ![comfirm-scala-library](./media/apache-spark-eclipse-tool-plugin/comfirm-scala-library-container.png) -6. In the next dialog box, select **Finish**. +5. In the next dialog box, select **Finish**. ## Create a Scala application for an HDInsight Spark cluster diff --git a/articles/hdinsight/spark/apache-spark-intellij-tool-debug-remotely-through-ssh.md b/articles/hdinsight/spark/apache-spark-intellij-tool-debug-remotely-through-ssh.md index 16a0dd857707a..ce59b20745822 100644 --- a/articles/hdinsight/spark/apache-spark-intellij-tool-debug-remotely-through-ssh.md +++ b/articles/hdinsight/spark/apache-spark-intellij-tool-debug-remotely-through-ssh.md @@ -118,7 +118,7 @@ To resolve this error, [download the executable](http://public-repo-1.hortonwork ![Remote run button](./media/apache-spark-intellij-tool-debug-remotely-through-ssh/perform-remote-run.png) -7. If don't want to see the running log in the right panel, you can click the **Disconnect** button. However, it is still running on the backend, and the result will display in the left panel. +7. Click the **Disconnect** button that the submission logs not appear in the left panel. However, it is still running on the backend. ![Remote run button](./media/apache-spark-intellij-tool-debug-remotely-through-ssh/remote-run-result.png) diff --git a/articles/hdinsight/spark/apache-spark-jupyter-notebook-install-locally.md b/articles/hdinsight/spark/apache-spark-jupyter-notebook-install-locally.md index b15d1ee22462d..5ec0c9bb9764f 100644 --- a/articles/hdinsight/spark/apache-spark-jupyter-notebook-install-locally.md +++ b/articles/hdinsight/spark/apache-spark-jupyter-notebook-install-locally.md @@ -15,7 +15,7 @@ ms.workload: big-data ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 08/28/2017 +ms.date: 11/28/2017 ms.author: nitinme --- diff --git a/articles/hdinsight/spark/apache-spark-jupyter-spark-sql.md b/articles/hdinsight/spark/apache-spark-jupyter-spark-sql.md index 4506e21fa31fa..da810035c3c0f 100644 --- a/articles/hdinsight/spark/apache-spark-jupyter-spark-sql.md +++ b/articles/hdinsight/spark/apache-spark-jupyter-spark-sql.md @@ -4,8 +4,8 @@ description: HDInsight Spark quickstart on how to create an Apache Spark cluster keywords: spark quickstart,interactive spark,interactive query,hdinsight spark,azure spark services: hdinsight documentationcenter: '' -author: nitinme -manager: jhubbard +author: mumian +manager: cgronlun editor: cgronlun tags: azure-portal @@ -17,7 +17,7 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: get-started-article ms.date: 09/07/2017 -ms.author: nitinme +ms.author: jgao --- # Create an Apache Spark cluster in Azure HDInsight diff --git a/articles/hdinsight/spark/apache-spark-known-issues.md b/articles/hdinsight/spark/apache-spark-known-issues.md index c47f432167a01..ee54c949c4f30 100644 --- a/articles/hdinsight/spark/apache-spark-known-issues.md +++ b/articles/hdinsight/spark/apache-spark-known-issues.md @@ -3,7 +3,7 @@ title: Troubleshoot issues with Apache Spark cluster in Azure HDInsight | Micros description: Learn about issues related to Apache Spark clusters in Azure 
HDInsight and how to work around those. services: hdinsight documentationcenter: '' -author: mumian +author: nitinme manager: jhubbard editor: cgronlun tags: azure-portal @@ -15,7 +15,7 @@ ms.workload: big-data ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 08/28/2017 +ms.date: 11/28/2017 ms.author: nitinme --- diff --git a/articles/hdinsight/spark/apache-spark-load-data-run-query.md b/articles/hdinsight/spark/apache-spark-load-data-run-query.md index 695a280eb9e51..72e5f43694d28 100644 --- a/articles/hdinsight/spark/apache-spark-load-data-run-query.md +++ b/articles/hdinsight/spark/apache-spark-load-data-run-query.md @@ -4,8 +4,8 @@ description: HDInsight Spark quickstart on how to create an Apache Spark cluster keywords: spark quickstart,interactive spark,interactive query,hdinsight spark,azure spark services: hdinsight documentationcenter: '' -author: nitinme -manager: jhubbard +author: mumian +manager: cgronlun editor: cgronlun tags: azure-portal @@ -16,7 +16,7 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: article ms.date: 09/22/2017 -ms.author: nitinme +ms.author: jgao --- # Run interactive queries on an HDInsight Spark cluster diff --git a/articles/hdinsight/spark/apache-spark-microsoft-cognitive-toolkit.md b/articles/hdinsight/spark/apache-spark-microsoft-cognitive-toolkit.md index b5e4c682e17ff..16f6ea018e3fe 100644 --- a/articles/hdinsight/spark/apache-spark-microsoft-cognitive-toolkit.md +++ b/articles/hdinsight/spark/apache-spark-microsoft-cognitive-toolkit.md @@ -14,7 +14,7 @@ ms.workload: big-data ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 08/28/2017 +ms.date: 11/28/2017 ms.author: nitinme --- diff --git a/articles/hdinsight/spark/apache-spark-resource-manager.md b/articles/hdinsight/spark/apache-spark-resource-manager.md index 7865b221361fa..0532a3e6e8a8c 100644 --- a/articles/hdinsight/spark/apache-spark-resource-manager.md +++ b/articles/hdinsight/spark/apache-spark-resource-manager.md @@ -3,8 +3,8 @@ title: Manage resources for Apache Spark cluster on Azure HDInsight | Microsoft description: Learn how to use manage resources for Spark clusters on Azure HDInsight for better performance. 
services: hdinsight documentationcenter: '' -author: nitinme -manager: jhubbard +author: mumian +manager: cgronlun editor: cgronlun tags: azure-portal @@ -16,7 +16,7 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: article ms.date: 08/28/2017 -ms.author: nitinme +ms.author: jgao --- # Manage resources for Apache Spark cluster on Azure HDInsight diff --git a/articles/hdinsight/spark/apache-spark-use-bi-tools.md b/articles/hdinsight/spark/apache-spark-use-bi-tools.md index a9aedb53c6a36..c3f790f4ffa8a 100644 --- a/articles/hdinsight/spark/apache-spark-use-bi-tools.md +++ b/articles/hdinsight/spark/apache-spark-use-bi-tools.md @@ -4,8 +4,8 @@ description: Use data visualization tools for analytics using Apache Spark BI on keywords: apache spark bi,spark bi, spark data visualization, spark business intelligence services: hdinsight documentationcenter: '' -author: nitinme -manager: jhubbard +author: mumian +manager: cgronlun editor: cgronlun tags: azure-portal @@ -17,7 +17,7 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: article ms.date: 10/24/2017 -ms.author: nitinme +ms.author: jgao --- # Apache Spark BI using data visualization tools with Azure HDInsight diff --git a/articles/hdinsight/spark/apache-spark-use-with-data-lake-store.md b/articles/hdinsight/spark/apache-spark-use-with-data-lake-store.md index 37875529cc728..6c891412fc3e5 100644 --- a/articles/hdinsight/spark/apache-spark-use-with-data-lake-store.md +++ b/articles/hdinsight/spark/apache-spark-use-with-data-lake-store.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: na ms.workload: big-data -ms.date: 08/28/2017 +ms.date: 11/28/2017 ms.author: nitinme --- diff --git a/articles/hdinsight/spark/apache-spark-zeppelin-notebook.md b/articles/hdinsight/spark/apache-spark-zeppelin-notebook.md index a3b725903387f..9d49779d097d8 100644 --- a/articles/hdinsight/spark/apache-spark-zeppelin-notebook.md +++ b/articles/hdinsight/spark/apache-spark-zeppelin-notebook.md @@ -14,7 +14,7 @@ ms.workload: big-data ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 08/28/2017 +ms.date: 11/28/2017 ms.author: nitinme --- diff --git a/articles/iot-edge/how-to-install-iot-core.md b/articles/iot-edge/how-to-install-iot-core.md index f56ca84596562..aa340d65f3794 100644 --- a/articles/iot-edge/how-to-install-iot-core.md +++ b/articles/iot-edge/how-to-install-iot-core.md @@ -48,7 +48,7 @@ The Azure IoT Edge Runtime can run even on tiny Single Board Computer (SBC) devi * Python 3.6 * The IoT Edge control script (iotedgectl.exe) -You may see informational output from the iotedgectl.exe tool in red in the remote PowerShell window. This doesn't necessarily indicate errors. +You may see informational output from the iotedgectl.exe tool in green in the remote PowerShell window. This doesn't necessarily indicate errors. 
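
The section above lists the pieces the installer puts on the device, including Python 3.6 and the iotedgectl.exe control script. As a rough, hedged sketch of what configuring and starting the runtime from that remote PowerShell session can look like (the connection string is a placeholder, and exact options can differ between preview releases):

```powershell
# Run inside the remote PowerShell session on the IoT Core device.
# The device connection string below is a placeholder; use the one from your IoT hub.
iotedgectl setup --connection-string "HostName=<your-hub>.azure-devices.net;DeviceId=<your-device>;SharedAccessKey=<key>"

# Start the Edge runtime; informational output may appear in green, which is not an error.
iotedgectl start
```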
## Next steps diff --git a/articles/iot-edge/tutorial-csharp-module.md b/articles/iot-edge/tutorial-csharp-module.md index 1000d56d2e1ba..7e501a03490d4 100644 --- a/articles/iot-edge/tutorial-csharp-module.md +++ b/articles/iot-edge/tutorial-csharp-module.md @@ -4,7 +4,7 @@ title: Azure IoT Edge C# module | Microsoft Docs description: Create an IoT Edge module with C# code and deploy it to an edge device services: iot-edge keywords: -author: JimacoMS2 +author: kgremban manager: timlt ms.author: v-jamebr @@ -273,8 +273,8 @@ Add the credentials for your registry to the Edge runtime on the computer where ```json { "routes":{ - "sensorToFilter":"FROM /messages/modules/tempSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/filtermodule/inputs/input1\")", - "filterToIoTHub":"FROM /messages/modules/filtermodule/outputs/output1 INTO $upstream" + "sensorToFilter":"FROM /messages/modules/tempSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/filterModule/inputs/input1\")", + "filterToIoTHub":"FROM /messages/modules/filterModule/outputs/output1 INTO $upstream" } } ``` @@ -311,4 +311,4 @@ In this tutorial, you created an IoT Edge module that contains code to filter ra [1]: ./media/tutorial-csharp-module/programcs.png [2]: ./media/tutorial-csharp-module/build-module.png -[3]: ./media/tutorial-csharp-module/docker-os.png \ No newline at end of file +[3]: ./media/tutorial-csharp-module/docker-os.png diff --git a/articles/iot-edge/tutorial-deploy-function.md b/articles/iot-edge/tutorial-deploy-function.md index 418519271d7a5..0fba69349a8b9 100644 --- a/articles/iot-edge/tutorial-deploy-function.md +++ b/articles/iot-edge/tutorial-deploy-function.md @@ -4,7 +4,7 @@ title: Deploy Azure Function with Azure IoT Edge | Microsoft Docs description: Deploy Azure Function as a module to an edge device services: iot-edge keywords: -author: JimacoMS2 +author: kgremban manager: timlt ms.author: v-jamebr diff --git a/articles/iot-edge/tutorial-deploy-machine-learning.md b/articles/iot-edge/tutorial-deploy-machine-learning.md index 47c5e32cdb408..3f16176414b1a 100644 --- a/articles/iot-edge/tutorial-deploy-machine-learning.md +++ b/articles/iot-edge/tutorial-deploy-machine-learning.md @@ -48,7 +48,7 @@ To create your Azure ML container, follow the instructions in the [AI toolkit fo 1. Click **Save**. 1. Back in the **Add Modules** step, click **Next**. 1. Update routes for your module: -1. In the **Specify Routes** step, copy the JSON below into the text box. Modules publish all messages to the Edge runtime. Declarative rules in the runtime define where those messages flow. In this tutorial you need two routes. The first route transports messages from the temperature sensor to the machine learning module via the "mlInput" endpoint, which is the endpoint that all Azure Machine Learning modules use. The second route transports messages from the machine learning module to IoT Hub. In this route, ''mlOutput'' is the endput that all Azure Machine Learning modules use to output data, and ''upstream'' is a special destination that tells Edge Hub to send messages to IoT Hub. +1. In the **Specify Routes** step, copy the JSON below into the text box. Modules publish all messages to the Edge runtime. Declarative rules in the runtime define where those messages flow. In this tutorial you need two routes. The first route transports messages from the temperature sensor to the machine learning module via the "amlInput" endpoint, which is the endpoint that all Azure Machine Learning modules use. 
The second route transports messages from the machine learning module to IoT Hub. In this route, ''amlOutput'' is the endpoint that all Azure Machine Learning modules use to output data, and ''$upstream'' is a special destination that tells Edge Hub to send messages to IoT Hub. ```json { @@ -76,4 +76,4 @@ In this tutorial, you deployed an IoT Edge module powered by Azure Machine Learn [lnk-tutorial1-win]: tutorial-simulate-device-windows.md -[lnk-tutorial1-lin]: tutorial-simulate-device-linux.md \ No newline at end of file +[lnk-tutorial1-lin]: tutorial-simulate-device-linux.md diff --git a/articles/iot-hub/iot-hub-devguide-messages-read-custom.md b/articles/iot-hub/iot-hub-devguide-messages-read-custom.md index 93a7860ef712c..5f8b4a7acf5ba 100644 --- a/articles/iot-hub/iot-hub-devguide-messages-read-custom.md +++ b/articles/iot-hub/iot-hub-devguide-messages-read-custom.md @@ -12,7 +12,7 @@ ms.devlang: multiple ms.topic: article ms.tgt_pltfrm: na ms.workload: na -ms.date: 09/19/2017 +ms.date: 11/29/2017 ms.author: dobett --- @@ -31,6 +31,8 @@ A single message may match the condition on multiple routing rules, in which cas An IoT hub has a default [built-in endpoint][lnk-built-in]. You can create custom endpoints to route messages to by linking other services in your subscription to the hub. IoT Hub currently supports Azure Storage containers, Event Hubs, Service Bus queues, and Service Bus topics as custom endpoints. +When you use routing and custom endpoints, messages are only delivered to the built-in endpoint if they don't match any rules. To deliver messages to the built-in endpoint as well as to a custom endpoint, add a route that sends messages to the **events** endpoint. + > [!NOTE] > IoT Hub only supports writing data to Azure Storage containers as blobs. diff --git a/articles/iot-suite/iot-suite-options.md b/articles/iot-suite/iot-suite-options.md index 0cfb362ce9c2b..71ae9fcdcfa1d 100644 --- a/articles/iot-suite/iot-suite-options.md +++ b/articles/iot-suite/iot-suite-options.md @@ -48,7 +48,7 @@ Choosing your Azure IoT product is a critical part of planning your IoT solution | Access to underlying PaaS services | You have access to the underlying Azure services to manage them, or replace them as needed. | SaaS. Fully managed solution, the underlying services aren't exposed. | | Flexibility | High. The code for the microservices is open source and you can modify it in any way you see fit. Additionally, you can customize the deployment infrastructure.| Medium. You can use the built-in browser-based user experience to customize the solution model and aspects of the UI. The infrastructure is not customizable because the different components are not exposed.| | Skill level | Medium-High. You need Java or .NET skills to customize the solution back end. You need JavaScript skills to customize the visualization. | Low. You need modeling skills to customize the solution. No coding skills are required. | -| Get started experience | Preconfigured solutions implement common IoT scenarios. Can be deployed in minutes. | Templates provide pre-built models. Can be deployed in minutes. | +| Get started experience | Preconfigured solutions implement common IoT scenarios. Can be deployed in minutes. | Application templates and device templates provide pre-built models. Can be deployed in minutes. | | Pricing | You can fine-tune the services to control the cost. | Simple, predictable pricing structure. 
| The decision of which product to use to build your IoT solution is ultimately determined by: diff --git a/articles/load-balancer/load-balancer-configure-ha-ports.md b/articles/load-balancer/load-balancer-configure-ha-ports.md index ab79ebb78d56f..6dc1f62a0cc12 100644 --- a/articles/load-balancer/load-balancer-configure-ha-ports.md +++ b/articles/load-balancer/load-balancer-configure-ha-ports.md @@ -38,13 +38,10 @@ Figure 1 - Network Virtual Appliances deployed behind an internal Load Balancer ## Preview sign-up -To participate in the Preview of the HA ports feature in Load Balancer Standard, register your subscription to gain access using either Azure CLI 2.0 or PowerShell. Please register your subscription for - -1. [Load Balancer Standard preview](https://aka.ms/lbpreview#preview-sign-up) and -2. [HA Ports preview](https://aka.ms/haports#preview-sign-up). +To participate in the Preview of the HA ports feature in Load Balancer Standard, register your subscription to gain access using either Azure CLI 2.0 or PowerShell. Register your subscription for [Load Balancer Standard preview](https://aka.ms/lbpreview#preview-sign-up). >[!NOTE] ->To use this feature, you must also sign-up for Load Balancer [Standard Preview](https://aka.ms/lbpreview#preview-sign-up) in addition to HA Ports. Registration of the HA Ports or Load Balancer Standard previews may take up to an hour. +>Registration of the Load Balancer Standard previews can take up to an hour. ## Configuring HA Ports diff --git a/articles/load-balancer/load-balancer-ha-ports-overview.md b/articles/load-balancer/load-balancer-ha-ports-overview.md index 4cc7c61e7944b..4470ae588234a 100644 --- a/articles/load-balancer/load-balancer-ha-ports-overview.md +++ b/articles/load-balancer/load-balancer-ha-ports-overview.md @@ -62,69 +62,10 @@ The HA ports feature is available in the [same regions as Load Balancer Standard ## Preview sign-up -To participate in the preview of the HA ports feature in Load Balancer Standard, register your subscription to gain access. You can use either Azure CLI 2.0 or PowerShell. +To participate in the preview of the HA ports feature in Load Balancer Standard, register your subscription for Load Balancer [Standard preview](https://aka.ms/lbpreview#preview-sign-up). You can register using either Azure CLI 2.0 or PowerShell. >[!NOTE] ->To use this feature, you must also sign up for Load Balancer [Standard preview](https://aka.ms/lbpreview#preview-sign-up), in addition to the HA ports feature. Registration can take up to an hour. - -### Sign up by using Azure CLI 2.0 - -1. Register the feature with the provider: - ```cli - az feature register --name AllowILBAllPortsRule --namespace Microsoft.Network - ``` - -2. The preceding operation can take up to 10 minutes to complete. You can check the status of the operation with the following command: - - ```cli - az feature show --name AllowILBAllPortsRule --namespace Microsoft.Network - ``` - - The operation is successful when the feature registration state returns **Registered**, as shown here: - - ```json - { - "id": "/subscriptions/foo/providers/Microsoft.Features/providers/Microsoft.Network/features/AllowLBPreview", - "name": "Microsoft.Network/AllowILBAllPortsRule", - "properties": { - "state": "Registered" - }, - "type": "Microsoft.Features/providers/features" - } - ``` - -3. 
Complete the preview sign-up by re-registering your subscription with the resource provider: - - ```cli - az provider register --namespace Microsoft.Network - ``` - -### Sign up by using PowerShell - -1. Register the feature with the provider: - ```powershell - Register-AzureRmProviderFeature -FeatureName AllowILBAllPortsRule -ProviderNamespace Microsoft.Network - ``` - -2. The preceding operation can take up to 10 minutes to complete. You can check the status of the operation with the following command: - - ```powershell - Get-AzureRmProviderFeature -FeatureName AllowILBAllPortsRule -ProviderNamespace Microsoft.Network - ``` - The operation is successful when the feature registration state returns **Registered**, as shown here: - - ``` - FeatureName ProviderName RegistrationState - ----------- ------------ ----------------- - AllowILBAllPortsRule Microsoft.Network Registered - ``` - -3. Complete the preview sign-up by re-registering your subscription with the resource provider: - - ```powershell - Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Network - ``` - +>Registration can take up to an hour. ## Limitations diff --git a/articles/location-based-services/how-to-search-for-address.md b/articles/location-based-services/how-to-search-for-address.md new file mode 100644 index 0000000000000..c36c2c3d4e3a9 --- /dev/null +++ b/articles/location-based-services/how-to-search-for-address.md @@ -0,0 +1,208 @@ +--- +# Mandatory fields. See more on aka.ms/skyeye/meta. +title: How to search for an address using the Azure Location Based Services (preview) Search service | Microsoft Docs +description: Learn how to search for an address using the Azure Location Based Services (preview) Search service +services: location-based-services +keywords: Don’t add or edit keywords without consulting your SEO champ. +author: philmea +ms.author: philmea +ms.date: 11/29/2017 +ms.topic: how-to +ms.service: location-based-services +--- +# How to find an address using the Azure Location Based Services (preview) Search service +The Search service is a RESTful set of APIs designed for developers to search for addresses, places, points of interest, business listings, and other geographic information. The Search Service assigns a latitude/longitude to a specific address, cross street, geographic feature, or point of interest (POI). Latitude and longitude values returned by the Search service APIs can be used as parameters in other Azure Location Based Services such as the Route and Traffic Flow APIs. + +## Prerequisites +Install the [Postman app](https://www.getpostman.com/apps). + +An Azure Location Based Services account and subscription key. For information on creating an account and retrieving a subscription key, see [How to manage your Azure Location Based Services account and keys](how-to-manage-account-keys.md). + +## Using Fuzzy Search + +The default API for the Search service is Fuzzy Search, which handles inputs of any combination of address or POI tokens. This search API is the canonical 'single-line search' and is useful when you do not know what your user inputs as a search query. The Fuzzy Search API is a combination of POI search and geocoding. The API can also be weighted with a contextual position (lat./lon. pair), fully constrained by a coordinate and radius, or it can be executed more generally without any geo biasing anchor point. + +Most Search queries default to 'maxFuzzyLevel=1' to gain performance and reduce unusual results. 
This default can be overridden as needed per request by passing in the query parameter 'maxFuzzyLevel=2' or '3'. + +### Search for an address using Fuzzy Search + +1. Open the Postman app, click New | Create New, and select **GET request**. Enter a Request name of **Fuzzy search**, select a collection or folder to save it to, and click **Save**. + + For more information, see the Postman's Requests documentation. + +2. On the Builder tab, select the **GET** HTTP method and enter the request URL for your API endpoint. + + ![Fuzzy Search ](./media/how-to-search-for-address/fuzzy_search_url.png) + + | Parameter | Suggested value | + |---------------|------------------------------------------------| + | HTTP method | GET | + | Request URL | https://atlas.microsoft.com/search/fuzzy/json? | + | Authorization | No Auth | + + The **json** attribute in the URL path determines the response format. You are using json throughout this article for ease of use and readability. You can find the available response formats in the **Get Search Fuzzy** definition of the [Location Based Services Functional API reference] (https://docs.microsoft.com/en-us/rest/api/location-based-services/search/getsearchfuzzy). + +3. Click **Params**, and enter the following Key / Value pairs to use as query or path parameters in the request URL: + + ![Fuzzy Search ](./media/how-to-search-for-address/fuzzy_search_params.png) + + | Key | Value | + |------------------|-------------------------| + | api-version | 1.0 | + | subscription-key | *subscription key* | + | query | pizza | + +4. Click **Send** and review the response body. + + The ambiguous query string of "pizza" returned 10 point of interest (POI) results with categories falling in "pizza" and "restaurant". Each result returns a street address, latitude / longitude values, view port, and entry points for the location. + + The results are varied for this query, not tied to any particular reference location. You can use the **countrySet** parameter to specify only the countries for which your application needs coverage, as the default behavior is to search the entire world, potentially returning unnecessary results. + +5. Add the following value to the query string and click **Send**: + ``` + ,countrySet=US + ``` + >[!NOTE] + >Ensure that you comma-separate the additional URI parameters in the query string. + + The results are now bounded by the country code and the query returns pizza restaurants in the United States. + + To provide results oriented on a particular location, you can query a point of interest and use the returned latitude and longitude values in your call to the Fuzzy Search service. In this case, you used the Search service to return the location of the Seattle Space Needle and used the lat. / lon. values to orient the search. + +4. In Params, enter the following Key / Value pairs and click **Send**: + + ![Fuzzy Search ](./media/how-to-search-for-address/fuzzy_search_latlon.png) + + | Key | Value | + |-----|------------| + | lat | 47.62039 | + | lon | -122.34928 | + +## Search for address properties and coordinates + +You can pass a complete or partial street address to the Search Address API and receive a response that includes detailed address properties such as municipality or subdivision, as well as positional values in latitude and longitude. + +1. In Postman, click **New Request** | **GET request** and name it **Address Search**. +2. 
On the Builder tab, select the **GET** HTTP method, enter the request URL for your API endpoint, and select an authorization protocol, if any. + + ![Address Search ](./media/how-to-search-for-address/address_search_url.png) + + | Parameter | Suggested value | + |---------------|------------------------------------------------| + | HTTP method | GET | + | Request URL | https://atlas.microsoft.com/search/address/json? | + | Authorization | No Auth | + +2. Click **Params**, and enter the following Key / Value pairs to use as query or path parameters in the request URL: + + ![Address Search ](./media/how-to-search-for-address/address_search_params.png) + + | Key | Value | + |------------------|-------------------------| + | api-version | 1.0 | + | subscription-key | *subscription key* | + | query | 400 Broad St, Seattle, WA 98109 | + +3. Click **Send** and review the response body. + + In this case, you specified a complete address query and receive a single result in the response body. + +4. In Params, edit the query string to the following value: + ``` + 400 Broad, Seattle + ``` + +5. Add the following value to the query string and click **Send**: + ``` + ,typeahead + ``` + + The **typeahead** flag tells the Address Search API to treat the query as a partial input and return an array of predictive values. + +## Search for a street address using Reverse Address Search +1. In Postman, click **New Request** | **GET request** and name it **Reverse Address Search**. + +2. On the Builder tab, select the **GET** HTTP method and enter the request URL for your API endpoint. + + ![Reverse Address Search URL ](./media/how-to-search-for-address/reverse_address_search_url.png) + + | Parameter | Suggested value | + |---------------|------------------------------------------------| + | HTTP method | GET | + | Request URL | https://atlas.microsoft.com/search/address/reverse/json? | + | Authorization | No Auth | + +2. Click **Params**, and enter the following Key / Value pairs to use as query or path parameters in the request URL: + + ![Reverse Address Search Parameters ](./media/how-to-search-for-address/reverse_address_search_params.png) + + | Key | Value | + |------------------|-------------------------| + | api-version | 1.0 | + | subscription-key | *subscription key* | + | query | 47.59093,-122.33263 | + +3. Click **Send** and review the response body. + + The response includes the POI entry for Safeco Field with a poi category of "stadium". + +4. Add the following value to the query string and click **Send**: + ``` + ,number + ``` + If the [number](https://docs.microsoft.com/en-us/rest/api/location-based-services/search/getsearchaddressreverse#search_getsearchaddressreverse_uri_parameters) query parameter is sent with the request, the response may include the side of the street (Left/Right) and also an offset position for that number. + +5. Add the following value to the query string and click **Send**: + ``` + ,spatialKeys + ``` + + When the [spatialKeys](https://docs.microsoft.com/en-us/rest/api/location-based-services/search/getsearchaddressreverse#search_getsearchaddressreverse_uri_parameters) query parameter is set, the response contains proprietary geo-spatial key information for a specified location. + +6. 
Add the following value to the query string and click **Send**: + ``` + ,returnSpeedLimit + ``` + + When the [returnSpeedLimit](https://docs.microsoft.com/en-us/rest/api/location-based-services/search/getsearchaddressreverse#search_getsearchaddressreverse_uri_parameters) query parameter is set, the response returns the posted speed limit. + +7. Add the following value to the query string and click **Send**: + ``` + ,returnRoadUse + ``` + + When the [returnRoadUse](https://docs.microsoft.com/en-us/rest/api/location-based-services/search/getsearchaddressreverse#search_getsearchaddressreverse_uri_parameters) query parameter is set, the response returns the road use array for reverse geocodes at street level. + +8. Add the following value to the query string and click **Send**: + ``` + ,roadUse + ``` + + You can restrict the reverse geocode query to a specific type of road use using the [roadUse](https://docs.microsoft.com/en-us/rest/api/location-based-services/search/getsearchaddressreverse#search_getsearchaddressreverse_uri_parameters) query parameter. + +## Search for the cross street using Reverse Address Cross Street Search + +1. In Postman, click **New Request** | **GET request** and name it **Reverse Address Cross Street Search**. + +2. On the Builder tab, select the **GET** HTTP method and enter the request URL for your API endpoint. + + ![Reverse Address Cross Street Search ](./media/how-to-search-for-address/reverse_address_search_url.png) + + | Parameter | Suggested value | + |---------------|------------------------------------------------| + | HTTP method | GET | + | Request URL | https://atlas.microsoft.com/search/address/reverse/crossstreet/json? | + | Authorization | No Auth | + +3. Click **Params**, and enter the following Key / Value pairs to use as query or path parameters in the request URL: + + | Key | Value | + |------------------|-------------------------| + | api-version | 1.0 | + | subscription-key | *subscription key* | + | query | 47.59093,-122.33263 | + +4. Click **Send** and review the response body.
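+The Postman steps above are plain HTTP GET requests, so you can issue the same queries from code. The following is a minimal sketch that assumes the Python `requests` package is available; the endpoint paths and the `api-version`, `subscription-key`, `query`, and `countrySet` parameters are the same ones used in the preceding steps, and `<subscription-key>` is a placeholder for your own key.
+
+```python
+import requests
+
+SUBSCRIPTION_KEY = "<subscription-key>"  # placeholder - use the key from your LBS account
+BASE_URL = "https://atlas.microsoft.com/search"
+
+def search(path, query, **extra):
+    """Call a Search service endpoint and return the parsed JSON response."""
+    params = {"api-version": "1.0", "subscription-key": SUBSCRIPTION_KEY, "query": query}
+    params.update(extra)
+    response = requests.get("{0}/{1}/json".format(BASE_URL, path), params=params)
+    response.raise_for_status()
+    return response.json()
+
+# Fuzzy search for points of interest, bounded to the United States
+pois = search("fuzzy", "pizza", countrySet="US")
+
+# Forward geocode a complete street address
+address = search("address", "400 Broad St, Seattle, WA 98109")
+
+# Reverse geocode a latitude,longitude pair back to an address
+place = search("address/reverse", "47.59093,-122.33263")
+
+# Print the position of the first few fuzzy-search results
+for result in pois.get("results", [])[:3]:
+    print(result.get("position"))
+```
+
+Each call returns the same JSON documents you reviewed in Postman, so the `results` array can be inspected or passed on to other Azure Location Based Services APIs.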
+ +## Next steps +- Explore the [Azure Location Based Serices Search service](https://docs.microsoft.com/en-us/rest/api/location-based-services/search) API documentation \ No newline at end of file diff --git a/articles/location-based-services/media/how-to-search-for-address/address_search_params.png b/articles/location-based-services/media/how-to-search-for-address/address_search_params.png new file mode 100644 index 0000000000000..0b9007a27f5b6 Binary files /dev/null and b/articles/location-based-services/media/how-to-search-for-address/address_search_params.png differ diff --git a/articles/location-based-services/media/how-to-search-for-address/address_search_url.png b/articles/location-based-services/media/how-to-search-for-address/address_search_url.png new file mode 100644 index 0000000000000..0737a73a00956 Binary files /dev/null and b/articles/location-based-services/media/how-to-search-for-address/address_search_url.png differ diff --git a/articles/location-based-services/media/how-to-search-for-address/fuzzy_search_latlon.png b/articles/location-based-services/media/how-to-search-for-address/fuzzy_search_latlon.png new file mode 100644 index 0000000000000..0223c92921aba Binary files /dev/null and b/articles/location-based-services/media/how-to-search-for-address/fuzzy_search_latlon.png differ diff --git a/articles/location-based-services/media/how-to-search-for-address/fuzzy_search_params.png b/articles/location-based-services/media/how-to-search-for-address/fuzzy_search_params.png new file mode 100644 index 0000000000000..8342904390caf Binary files /dev/null and b/articles/location-based-services/media/how-to-search-for-address/fuzzy_search_params.png differ diff --git a/articles/location-based-services/media/how-to-search-for-address/fuzzy_search_url.png b/articles/location-based-services/media/how-to-search-for-address/fuzzy_search_url.png new file mode 100644 index 0000000000000..91641bd49569f Binary files /dev/null and b/articles/location-based-services/media/how-to-search-for-address/fuzzy_search_url.png differ diff --git a/articles/location-based-services/media/how-to-search-for-address/reverse_address_search_params.png b/articles/location-based-services/media/how-to-search-for-address/reverse_address_search_params.png new file mode 100644 index 0000000000000..051d22de80505 Binary files /dev/null and b/articles/location-based-services/media/how-to-search-for-address/reverse_address_search_params.png differ diff --git a/articles/location-based-services/media/how-to-search-for-address/reverse_address_search_url.png b/articles/location-based-services/media/how-to-search-for-address/reverse_address_search_url.png new file mode 100644 index 0000000000000..9144b3eb2a8b0 Binary files /dev/null and b/articles/location-based-services/media/how-to-search-for-address/reverse_address_search_url.png differ diff --git a/articles/location-based-services/quick-demo-map-app.md b/articles/location-based-services/quick-demo-map-app.md index dc4c08a94299e..d31afdb47fc68 100644 --- a/articles/location-based-services/quick-demo-map-app.md +++ b/articles/location-based-services/quick-demo-map-app.md @@ -56,15 +56,14 @@ Log in to the [Azure portal](https://portal.azure.com/). ## Clean up resources -The tutorials go in details about how to use and configure the Azure Location Based Services for your account. If you plan to continue on to work with the tutorials, do not clean up the resources created in this Quickstart. 
If you do not plan to continue, use the following steps to delete all resources created by this Quickstart in the Azure portal. +The tutorials go into detail about how to use and configure the Azure Location Based Services for your account. If you plan to continue on to work with the tutorials, do not clean up the resources created in this Quickstart. If you do not plan to continue, use the following steps to delete all resources created by this Quickstart. 1. Close the browser running the **AzureMapDemo.html** web application. -2. From the left-hand menu in the Azure portal, click **All resources** and then select your Device Provisioning service. At the top of the **All resources** blade, click **Delete**. -2. From the left-hand menu in the Azure portal, click **All resources** and then select your IoT hub. At the top of the **All resources** blade, click **Delete**. +2. From the left-hand menu in the Azure portal, click **All resources** and then select your LBS account. At the top of the **All resources** blade, click **Delete**. ## Next steps -In this Quickstart, you’ve deployed an IoT hub and a Device Provisioning Service instance, and linked the two resources. To learn how to use this set up to provision a simulated device, continue to the Quickstart for creating simulated device. +In this Quickstart, you’ve created your Azure LBS account and launched a demo app using your account. To learn how to create your own application using the Azure Location Based Services APIs, continue to the following tutorial. > [!div class="nextstepaction"] > [Tutorial to use Azure Map and Search](./tutorial-search-location.md) diff --git a/articles/location-based-services/toc.yml b/articles/location-based-services/toc.yml index a15d63da6de29..6f6a0c3716293 100644 --- a/articles/location-based-services/toc.yml +++ b/articles/location-based-services/toc.yml @@ -29,6 +29,8 @@ href: how-to-manage-account-keys.md - name: How to use the Map Control href: how-to-use-map-control.md + - name: How to search for an address + href: how-to-search-for-address.md - name: Reference items: - name: Resources diff --git a/articles/log-analytics/log-analytics-quick-collect-azurevm.md b/articles/log-analytics/log-analytics-quick-collect-azurevm.md index 108362501a465..2117a256fb87b 100644 --- a/articles/log-analytics/log-analytics-quick-collect-azurevm.md +++ b/articles/log-analytics/log-analytics-quick-collect-azurevm.md @@ -12,7 +12,7 @@ ms.workload: na ms.tgt_pltfrm: na ms.devlang: na ms.topic: quickstart -ms.date: 09/20/2017 +ms.date: 11/28/2017 ms.author: magoedte ms.custom: mvc --- @@ -43,6 +43,9 @@ While the information is verified and the workspace is created, you can track it ## Enable the Log Analytics VM Extension For Windows and Linux virtual machines already deployed in Azure, you install the Log Analytics agent with the Log Analytics VM Extension. Using the extension simplifies the installation process and automatically configures the agent to send data to the Log Analytics workspace that you specify. The agent is also upgraded automatically, ensuring that you have the latest features and fixes. +>[!NOTE] +>The OMS agent for Linux cannot be configured to report to more than one Log Analytics workspace. + You may notice the banner across the top of your Log Analytics resource page in the portal inviting you to upgrade. The upgrade is not needed for the purposes of this quickstart.
![Log Analytics upgrade notice in the Azure portal](media/log-analytics-quick-collect-azurevm/log-analytics-portal-upgradebanner.png). diff --git a/articles/log-analytics/log-analytics-quick-collect-linux-computer.md b/articles/log-analytics/log-analytics-quick-collect-linux-computer.md index dd243cc188c7c..4abc401690448 100644 --- a/articles/log-analytics/log-analytics-quick-collect-linux-computer.md +++ b/articles/log-analytics/log-analytics-quick-collect-linux-computer.md @@ -12,14 +12,14 @@ ms.workload: na ms.tgt_pltfrm: na ms.devlang: na ms.topic: quickstart -ms.date: 10/13/2017 +ms.date: 11/28/2017 ms.author: magoedte ms.custom: mvc --- # Collect data from Linux computers hosted in your environment [Azure Log Analytics](log-analytics-overview.md) can collect data directly from your physical or virtual Linux computers and other resources in your environment into a single repository for detailed analysis and correlation. This quickstart shows you how to configure and collect data from your Linux computer with a few easy steps. For Azure Linux VMs, see the following topic [Collect data about Azure Virtual Machines](log-analytics-quick-collect-azurevm.md). - + If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Log in to Azure portal @@ -52,6 +52,9 @@ Before installing the OMS agent for Linux, you need the workspace ID and key for ## Install the agent for Linux The following steps configure setup of the agent for Log Analytics in Azure and Azure Government cloud. +>[!NOTE] +>The OMS agent for Linux cannot be configured to report to more than one Log Analytics workspace. + 1. To configure the Linux computer to connect to Log Analytics, run the following command providing the workspace ID and primary key copied earlier. This command downloads the agent, validates its checksum, and installs it. ``` diff --git a/articles/logic-apps/logic-apps-create-a-logic-app.md b/articles/logic-apps/logic-apps-create-a-logic-app.md index ced4277869c68..f4f10f48eec57 100644 --- a/articles/logic-apps/logic-apps-create-a-logic-app.md +++ b/articles/logic-apps/logic-apps-create-a-logic-app.md @@ -77,7 +77,7 @@ for example, running your own code from a logic app with [Azure Functions](../az 1. Sign in to the [Azure portal](https://portal.azure.com "Azure portal"). 2. From the main Azure menu, choose -**New** > **Enterprise Integration** > **Logic App**. +**New** > **Web + Mobile** > **Logic App**. 
![Azure portal, New, Enterprise Integration, Logic App](./media/logic-apps-create-a-logic-app/azure-portal-create-logic-app.png) diff --git a/articles/logic-apps/media/logic-apps-create-a-logic-app/azure-portal-create-logic-app.png b/articles/logic-apps/media/logic-apps-create-a-logic-app/azure-portal-create-logic-app.png index 07230b27a894a..283747be1bd9c 100644 Binary files a/articles/logic-apps/media/logic-apps-create-a-logic-app/azure-portal-create-logic-app.png and b/articles/logic-apps/media/logic-apps-create-a-logic-app/azure-portal-create-logic-app.png differ diff --git a/articles/machine-learning/preview/known-issues-and-troubleshooting-guide.md b/articles/machine-learning/preview/known-issues-and-troubleshooting-guide.md index 8968c05fb52d2..46f11305c28ed 100644 --- a/articles/machine-learning/preview/known-issues-and-troubleshooting-guide.md +++ b/articles/machine-learning/preview/known-issues-and-troubleshooting-guide.md @@ -37,6 +37,17 @@ If you run into issue during installation, the installer log files are here: ``` You can zip up the contents of these directories and send it to us for diagnostics. +### App Update +#### No update notification on Windows desktop +This issue will be addressed in an upcoming update. In the meantime, the workaround is to avoid launching the app from the shortcut pinned to the Taskbar. Instead to launch the app by using the Start menu or Start search-bar, or the shortcut on your desktop (if you have one). + +#### No update notification on an Ubuntu Data Sciece Virtual Machine (DSVM) +Perform the following steps to download the latest application : + - remove the folder \Users\AppData\Local\amlworkbench + - remove script `c:\dsvm\tools\setup\InstallAMLFromLocal.ps1` + - remove desktop shortcut that launches the above script + - install cleanly using [https://aka.ms/azureml-wb-msi](https://aka.ms/azureml-wb-msi) + ### Workbench desktop app If you have trouble logging in, or if the Workbench desktop crashes, you can find log files here: ``` diff --git a/articles/machine-learning/preview/toc.yml b/articles/machine-learning/preview/toc.yml index 30681cdac68d0..0185527d7897d 100644 --- a/articles/machine-learning/preview/toc.yml +++ b/articles/machine-learning/preview/toc.yml @@ -39,12 +39,14 @@ href: how-to-use-tdsp-in-azure-ml.md - name: Roaming and collaboration href: roaming-and-collaboration.md - - name: IDE extensions + - name: Use IDE extensions items: - name: Use Visual Studio Tools for AI href: quickstart-visual-studio-tools.md - name: Use Visual Studio Code Tools for AI href: quickstart-visual-studio-code-tools.md + - name: Use Azure IoT Edge AI Toolkit + href: use-azure-iot-edge-ai-toolkit.md - name: Configure compute environment items: - name: Configure experimentation @@ -92,6 +94,8 @@ href: how-to-scale-clusters.md - name: Deploy a web service href: model-management-service-deploy.md + - name: Deploy to an IoT Edge device + href: deploy-to-iot-edge-device.md - name: Consume a web service href: model-management-consumption.md - name: Collect model data diff --git a/articles/machine-learning/preview/tutorial-classifying-iris-part-3.md b/articles/machine-learning/preview/tutorial-classifying-iris-part-3.md index 2998da2d87fbf..8cbd388835fc9 100644 --- a/articles/machine-learning/preview/tutorial-classifying-iris-part-3.md +++ b/articles/machine-learning/preview/tutorial-classifying-iris-part-3.md @@ -10,7 +10,7 @@ ms.service: machine-learning ms.workload: data-services ms.custom: mvc, tutorial ms.topic: hero-article -ms.date: 11/14/2017 
+ms.date: 11/29/2017 --- # Classify Iris part 3: Deploy a model @@ -159,7 +159,7 @@ You can use _local mode_ for development and testing. The Docker engine must be 3. Create the environment. You must run this step once per environment. For example, run it once for development environment, and once for production. Use _local mode_ for this first environment. You can try the `-c` or `--cluster` switch in the following command to set up an environment in _cluster mode_ later. -Note that the following setup command requires you to have Contributor access to the subscription. If you don't have that, you at least need Contributor access to the resource group that you are deploying into. To do the latter, you need to specify the resource group name as part of the setup command using `-g` the flag. + Note that the following setup command requires you to have Contributor access to the subscription. If you don't have that, you at least need Contributor access to the resource group that you are deploying into. To do the latter, you need to specify the resource group name as part of the setup command using `-g` the flag. ```azurecli az ml env setup -n --location @@ -272,10 +272,10 @@ You are now ready to run the web service. To test the **irisapp** web service that's running, use a JSON-encoded record containing an array of four random numbers: -1. The web service includes sample data. When running in local mode, you can call the **az ml service show realtime** command. That call retrieves a sample run command that's useful for you to use to test the service. The call also retrieves the scoring URL that you can use to incorporate the service into your own custom app: +1. The web service includes sample data. When running in local mode, you can call the **az ml service usage realtime** command. That call retrieves a sample run command that's useful for you to use to test the service. The call also retrieves the scoring URL that you can use to incorporate the service into your own custom app: ```azurecli - az ml service show realtime -i + az ml service usage realtime -i ``` 2. To test the service, execute the returned service run command: diff --git a/articles/machine-learning/studio/algorithm-parameters-optimize.md b/articles/machine-learning/studio/algorithm-parameters-optimize.md index 36519a103275b..624bd06c2e0ad 100644 --- a/articles/machine-learning/studio/algorithm-parameters-optimize.md +++ b/articles/machine-learning/studio/algorithm-parameters-optimize.md @@ -4,7 +4,7 @@ description: Explains how to choose the optimal parameter set for an algorithm i services: machine-learning documentationcenter: '' author: bradsev -manager: jhubbard +manager: cgronlun editor: cgronlun ms.assetid: 6717e30e-b8d8-4cc1-ad0b-1d4727928d32 @@ -13,8 +13,8 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 01/29/2017 -ms.author: bradsev +ms.date: 11/29/2017 +ms.author: bradsev;garye --- # Choose parameters to optimize your algorithms in Azure Machine Learning diff --git a/articles/machine-learning/studio/consume-web-services.md b/articles/machine-learning/studio/consume-web-services.md index 2eba3f607e451..9bc5d065f0a89 100644 --- a/articles/machine-learning/studio/consume-web-services.md +++ b/articles/machine-learning/studio/consume-web-services.md @@ -61,19 +61,12 @@ To retrieve the API key for a Classic Machine Learning Web service: 5. Copy and save the **Primary Key**. 
### Classic Web service - You can also retrieve a key for a Classic Web service from Machine Learning Studio or the Azure classic portal. + You can also retrieve a key for a Classic Web service from Machine Learning Studio. #### Machine Learning Studio 1. In Machine Learning Studio, click **WEB SERVICES** on the left. 2. Click a Web service. The **API key** is on the **DASHBOARD** tab. -#### Azure classic portal -1. Click **MACHINE LEARNING** on the left. -2. Click the workspace in which your Web service is located. -3. Click **WEB SERVICES**. -4. Click a Web service. -5. Click an endpoint. The “API KEY” is down at the lower-right. - ## Connect to a Machine Learning Web service You can connect to a Machine Learning Web service using any programming language that supports HTTP request and response. You can view examples in C#, Python, and R from a Machine Learning Web service help page. diff --git a/articles/machine-learning/studio/create-endpoint.md b/articles/machine-learning/studio/create-endpoint.md index 1747fef169f57..ae190e962e25a 100644 --- a/articles/machine-learning/studio/create-endpoint.md +++ b/articles/machine-learning/studio/create-endpoint.md @@ -17,7 +17,7 @@ ms.date: 10/04/2016 ms.author: himad --- -# Creating Endpoints +# Creating endpoints > [!NOTE] > This topic describes techniques applicable to a **Classic** Machine Learning Web service. > @@ -30,11 +30,10 @@ To accomplish this, Azure Machine Learning allows you to create multiple endpoin [!INCLUDE [machine-learning-free-trial](../../../includes/machine-learning-free-trial.md)] ## Adding endpoints to a Web service -There are three ways to add an endpoint to a Web service. +There are two ways to add an endpoint to a Web service. * Programmatically * Through the Azure Machine Learning Web Services portal -* Though the Azure classic portal Once the endpoint is created, you can consume it through synchronous APIs, batch APIs, and excel worksheets. In addition to adding endpoints through this UI, you can also use the Endpoint Management APIs to programmatically add endpoints. @@ -52,20 +51,6 @@ You can add an endpoint to your Web service programmatically using the [AddEndpo 3. Click **New**. 4. Type a name and description for the new endpoint. Endpoint names must be 24 character or less in length, and must be made up of lower-case alphabets or numbers. Select the logging level and whether sample data is enabled. For more information on logging, see [Enable logging for Machine Learning Web services](web-services-logging.md). -## Adding an endpoint using the Azure classic portal -1. Sign in to the [Azure classic portal](http://manage.windowsazure.com), click **Machine Learning** in the left column. Click the workspace which contains the Web service in which you are interested. - - ![Navigate to workspace](./media/create-endpoint/figure-1.png) -2. Click **Web Services**. - - ![Navigate to Web services](./media/create-endpoint/figure-2.png) -3. Click the Web service you're interested in to see the list of available endpoints. - - ![Navigate to endpoint](./media/create-endpoint/figure-3.png) -4. At the bottom of the page, click **Add Endpoint**. Type a name and description, ensure there are no other endpoints with the same name in this Web service. Leave the throttle level with its default value unless you have special requirements. To learn more about throttling, see [Scaling API Endpoints](scaling-webservice.md). 
- - ![Create endpoint](./media/create-endpoint/figure-4.png) - -## Next Steps +## Next steps [How to consume an Azure Machine Learning Web service](consume-web-services.md). diff --git a/articles/machine-learning/studio/custom-r-modules.md b/articles/machine-learning/studio/custom-r-modules.md index 4dcd2c694a894..2465a6819278c 100644 --- a/articles/machine-learning/studio/custom-r-modules.md +++ b/articles/machine-learning/studio/custom-r-modules.md @@ -4,7 +4,7 @@ description: Quick start for authoring custom R modules in Azure Machine Learnin services: machine-learning documentationcenter: '' author: bradsev -manager: jhubbard +manager: cgronlun editor: cgronlun ms.assetid: 6cbc628a-7e60-42ce-9f90-20aaea7ba630 @@ -13,8 +13,8 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: na ms.workload: tbd -ms.date: 03/24/2017 -ms.author: bradsev;ankarlof +ms.date: 11/29/2017 +ms.author: bradsev;ankarlof;garye --- # Author custom R modules in Azure Machine Learning diff --git a/articles/machine-learning/studio/deploy-consume-web-service-guide.md b/articles/machine-learning/studio/deploy-consume-web-service-guide.md index ed988213fd0d9..a8504b04abc41 100644 --- a/articles/machine-learning/studio/deploy-consume-web-service-guide.md +++ b/articles/machine-learning/studio/deploy-consume-web-service-guide.md @@ -23,6 +23,7 @@ You can use Azure Machine Learning to deploy machine-learning workflows and mode The next sections provide links to walkthroughs, code, and documentation to help get you started. ## Deploy a web service + ### With Azure Machine Learning Studio Machine Learning Studio and the Microsoft Azure Machine Learning Web Services portal help you deploy and manage a web service without writing code. @@ -56,7 +57,7 @@ Running the application creates a web service JSON template. To use the template * Storage account name and key - You can get the storage account name and key from either the [Azure portal](https://portal.azure.com/) or the [Azure classic portal](http://manage.windowsazure.com/). + You can get the storage account name and key from the [Azure portal](https://portal.azure.com/). * Commitment plan ID You can get the plan ID from the [Azure Machine Learning Web Services](https://services.azureml.net) portal by signing in and clicking a plan name. 
diff --git a/articles/machine-learning/studio/execute-python-scripts.md b/articles/machine-learning/studio/execute-python-scripts.md index 9d37e96f23cc1..7b928181095b8 100644 --- a/articles/machine-learning/studio/execute-python-scripts.md +++ b/articles/machine-learning/studio/execute-python-scripts.md @@ -5,7 +5,7 @@ keywords: python machine learning,pandas,python pandas,python scripts, execute p services: machine-learning documentationcenter: '' author: bradsev -manager: jhubbard +manager: cgronlun editor: cgronlun ms.assetid: ee9eb764-0d3e-4104-a797-19fc29345d39 @@ -14,8 +14,8 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 07/26/2017 -ms.author: bradsev +ms.date: 11/29/2017 +ms.author: bradsev;garye --- # Execute Python machine learning scripts in Azure Machine Learning Studio diff --git a/articles/machine-learning/studio/faq.md b/articles/machine-learning/studio/faq.md index 3e8ef271a8379..883c9d386f9b4 100644 --- a/articles/machine-learning/studio/faq.md +++ b/articles/machine-learning/studio/faq.md @@ -227,7 +227,7 @@ For more information, see [Retrain Machine Learning models programmatically](ret **How do I monitor my web service deployed in production?** -After you deploy a predictive model, you can monitor it from the Azure classic portal (Classic web services only) or the Azure Machine Learning Web Services portal. Each deployed service has its own dashboard where you can see monitoring information for that service. For more information about how to manage your deployed web services, see [Manage a Web service using the Azure Machine Learning Web Services portal](manage-new-webservice.md) and [Manage an Azure Machine Learning workspace](manage-workspace.md). +After you deploy a predictive model, you can monitor it from the Azure Machine Learning Web Services portal. Each deployed service has its own dashboard where you can see monitoring information for that service. For more information about how to manage your deployed web services, see [Manage a Web service using the Azure Machine Learning Web Services portal](manage-new-webservice.md) and [Manage an Azure Machine Learning workspace](manage-workspace.md). **Is there a place where I can see the output of my RRS/BES?** @@ -282,7 +282,7 @@ No. ## Security and availability **Who can access the http endpoint for the web service by default? How do I restrict access to the endpoint?** -After a web service is deployed, a default endpoint is created for that service. The default endpoint can be called by using its API key. You can add more endpoints with their own keys from the Azure classic portal or programmatically by using the Web Service Management APIs. Access keys are needed to make calls to the web service. For more information, see [How to consume an Azure Machine Learning Web service](consume-web-services.md). +After a web service is deployed, a default endpoint is created for that service. The default endpoint can be called by using its API key. You can add more endpoints with their own keys from the Web Services portal or programmatically by using the Web Service Management APIs. Access keys are needed to make calls to the web service. For more information, see [How to consume an Azure Machine Learning Web service](consume-web-services.md). 
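+For example, a minimal sketch of calling a Request-Response endpoint from Python with the `requests` package looks like the following; the URL, API key, and input column names are placeholders that you replace with the values shown on your own web service dashboard.
+
+```python
+import requests
+
+url = "<your Request-Response endpoint URL>"   # from the web service dashboard
+api_key = "<your API key>"                     # access key for the endpoint
+
+payload = {
+    "Inputs": {
+        "input1": {
+            "ColumnNames": ["Col2"],
+            "Values": [["This is a good day"]],
+        }
+    },
+    "GlobalParameters": {},
+}
+
+headers = {
+    "Content-Type": "application/json",
+    "Authorization": "Bearer " + api_key,
+}
+
+response = requests.post(url, json=payload, headers=headers)
+response.raise_for_status()
+print(response.json())
+```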
**What happens if my Azure storage account can't be found?** @@ -294,7 +294,7 @@ If you accidentally deleted the storage account, recreate the storage account wi Machine Learning Studio relies on a user-supplied Azure storage account to store intermediary data when it executes the workflow. This storage account is provided to Machine Learning Studio when a workspace is created, and the access keys are associated with that workspace. If the access keys are changed after the workspace is created, the workspace can no longer access the storage account. It will stop functioning and all experiments in that workspace will fail. -If you changed storage account access keys, resync the access keys in the workspace by using the Azure classic portal. +If you changed storage account access keys, resync the access keys in the workspace by using the Azure portal. ## Support and training **Where can I get training for Azure Machine Learning?** @@ -506,7 +506,7 @@ All you need is a Microsoft account. Go to [Azure Machine Learning home](https:/ **How do I sign up for Azure Machine Learning Standard tier?** -You must first have access to an Azure subscription to create a Standard Machine Learning workspace. You can sign up for a 30-day free trial Azure subscription and later upgrade to a paid Azure subscription, or you can purchase a paid Azure subscription outright. You can then create a Machine Learning workspace from the Microsoft Azure classic portal after you gain access to the subscription. View the [step-by-step instructions](https://azure.microsoft.com/trial/get-started-machine-learning-b/). +You must first have access to an Azure subscription to create a Standard Machine Learning workspace. You can sign up for a 30-day free trial Azure subscription and later upgrade to a paid Azure subscription, or you can purchase a paid Azure subscription outright. You can then create a Machine Learning workspace from the Microsoft Azure portal after you gain access to the subscription. View the [step-by-step instructions](https://azure.microsoft.com/trial/get-started-machine-learning-b/). Alternatively, you can be invited by a Standard Machine Learning workspace owner to access the owner's workspace. 
diff --git a/articles/machine-learning/studio/import-data-from-online-sources.md b/articles/machine-learning/studio/import-data-from-online-sources.md index 126d20f18af9c..a4f573863c28d 100644 --- a/articles/machine-learning/studio/import-data-from-online-sources.md +++ b/articles/machine-learning/studio/import-data-from-online-sources.md @@ -14,7 +14,7 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 03/24/2017 +ms.date: 11/29/2017 ms.author: bradsev;garye --- diff --git a/articles/machine-learning/studio/import-data.md b/articles/machine-learning/studio/import-data.md index bfa58da24c29f..630b5480fe96c 100644 --- a/articles/machine-learning/studio/import-data.md +++ b/articles/machine-learning/studio/import-data.md @@ -5,7 +5,7 @@ keywords: import data,data format,data types,data sources,training data services: machine-learning documentationcenter: '' author: bradsev -manager: jhubbard +manager: cgronlun editor: cgronlun ms.assetid: c194ee3b-838c-4efe-bb2a-c1d052326216 @@ -14,7 +14,7 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 03/24/2017 +ms.date: 11/29/2017 ms.author: garye;bradsev --- diff --git a/articles/machine-learning/studio/interpret-model-results.md b/articles/machine-learning/studio/interpret-model-results.md index 8643b35c27a84..f30b9cc7ffb4c 100644 --- a/articles/machine-learning/studio/interpret-model-results.md +++ b/articles/machine-learning/studio/interpret-model-results.md @@ -4,7 +4,7 @@ description: How to choose the optimal parameter set for an algorithm using and services: machine-learning documentationcenter: '' author: bradsev -manager: jhubbard +manager: cgronlun editor: cgronlun ms.assetid: 6230e5ab-a5c0-4c21-a061-47675ba3342c @@ -13,8 +13,8 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 03/24/2017 -ms.author: bradsev +ms.date: 11/29/2017 +ms.author: bradsev;garye --- # Interpret model results in Azure Machine Learning diff --git a/articles/machine-learning/studio/manage-new-webservice.md b/articles/machine-learning/studio/manage-new-webservice.md index 7a60a2551e199..d27635dfa3c45 100644 --- a/articles/machine-learning/studio/manage-new-webservice.md +++ b/articles/machine-learning/studio/manage-new-webservice.md @@ -152,36 +152,4 @@ You can update the following properties: * **Logging** allows you to enable or disable error logging on the endpoint. For more information on Logging, see Enable [logging for Machine Learning web services](web-services-logging.md). * **Enable Sample data** allows you to provide sample data that you can use to test the Request-Response service. If you created the web service in Machine Learning Studio, the sample data is taken from the data your used to train your model. If you created the service programmatically, the data is taken from the example data you provided as part of the JSON package. -## Grant or suspend access to Web services for users in the portal -Using the Azure classic portal, you can allow or deny access to specific users. - -### Access for users of New Web services -To enable other users to work with your Web services in the Azure Machine Learning Web Services portal, you must add them as co-adminstrators on your Azure subscription. - -Sign in to the [Azure classic portal](https://manage.windowsazure.com/) using your Microsoft Azure account - use the account that's associated with the Azure subscription. - -1. In the navigation pane, click **Settings**, then click **Administrators**. 
-2. At the bottom of the window, click **Add**. -3. In the ADD A CO-ADMINISTRATOR dialog, type the email address of the person you want to add as Co-administrator and then select the subscription that you want the Co-administrator to access. -4. Click **Save**. - -### Access for users of Classic Web services -To manage a workspace: - -Sign in to the [Azure classic portal](https://manage.windowsazure.com/) using your Microsoft Azure account - use the account that's associated with the Azure subscription. - -1. In the Microsoft Azure services panel, click **MACHINE LEARNING**. -2. Click the workspace you want to manage. -3. Click the **CONFIGURE** tab. - -From the configuration tab, you can suspend access to the Machine Learning workspace by clicking **DENY**. Users will no longer be able to open the workspace in Machine Learning Studio. To restore access, click **ALLOW**. - -To specific users: - -To manage additional accounts who have access to the workspace in Machine Learning Studio, click **Sign-in to ML Studio** in the **DASHBOARD** tab. This opens the workspace in Machine Learning Studio. From here, click the **SETTINGS** tab and then **USERS**. You can click **INVITE MORE USERS** to give users access to the workspace, or select a user and click **REMOVE**. - -> [!NOTE] -> The **Sign-in to ML Studio** link opens Machine Learning Studio using the Microsoft Account you are currently signed into. The Microsoft Account you used to sign in to the Azure classic portal to create a workspace does not automatically have permission to open that workspace. To open a workspace, you must be signed in to the Microsoft Account that was defined as the owner of the workspace, or you need to receive an invitation from the owner to join the workspace. -> -> diff --git a/articles/machine-learning/studio/manage-web-service-endpoints-using-api-management.md b/articles/machine-learning/studio/manage-web-service-endpoints-using-api-management.md index 76d89838ca7aa..bfc0c97276840 100644 --- a/articles/machine-learning/studio/manage-web-service-endpoints-using-api-management.md +++ b/articles/machine-learning/studio/manage-web-service-endpoints-using-api-management.md @@ -14,7 +14,7 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 01/19/2017 +ms.date: 11/03/2017 ms.author: roalexan --- @@ -36,94 +36,133 @@ To complete this guide, you need: * The workspace, service, and api_key for an AzureML experiment deployed as a web service. Click [here](create-experiment.md) for details on how to create an AzureML experiment. Click [here](publish-a-machine-learning-web-service.md) for details on how to deploy an AzureML experiment as a web service. Alternately, Appendix A has instructions for how to create and test a simple AzureML experiment and deploy it as a web service. ## Create an API Management instance -Below are the steps for using API Management to manage your AzureML web service. First create a service instance. Log in to the [Classic Portal](https://manage.windowsazure.com/) and click **New** > **App Services** > **API Management** > **Create**. -![create-instance](./media/manage-web-service-endpoints-using-api-management/create-instance.png) +You can manage your Azure Machine Learning web service with an API Management instance. -Specify a unique **URL**. This guide uses **demoazureml** – you will need to choose something different. Choose the desired **Subscription** and **Region** for your service instance. After making your selections, click the next button. +1. 
Sign in to the [Azure portal](https://portal.azure.com). +2. Select **+ Create a resource**. +3. In the search box, type "API management", then select the "API management" resource. +4. Click **Create**. +5. The **Name** value will be used to create a unique URL (this example uses "demoazureml"). +6. Select a **Subscription**, **Resource group**, and **Location** for your service instance. +7. Specify a value for **Organization name** (this example uses "demoazureml"). +8. Enter your **Administrator email** - this email will be used for notifications from the API Management system. +9. Click **Create**. -![create-service-1](./media/manage-web-service-endpoints-using-api-management/create-service-1.png) +It may take up to 30 minutes for a new service to be created. -Specify a value for the **Organization Name**. This guide uses **demoazureml** – you will need to choose something different. Enter your email address in the **administrator e-mail** field. This email address is used for notifications from the API Management system. +![create-service](./media/manage-web-service-endpoints-using-api-management/create-service.png) -![create-service-2](./media/manage-web-service-endpoints-using-api-management/create-service-2.png) - -Click the check box to create your service instance. *It takes up to thirty minutes for a new service to be created*. ## Create the API Once the service instance is created, the next step is to create the API. An API consists of a set of operations that can be invoked from a client application. API operations are proxied to existing web services. This guide creates APIs that proxy to the existing AzureML RRS and BES web services. -APIs are created and configured from the API publisher portal, which is accessed through the Azure Classic Portal. To reach the publisher portal, select your service instance. - -![select-service-instance](./media/manage-web-service-endpoints-using-api-management/select-service-instance.png) - -Click **Manage** in the Azure Classic Portal for your API Management service. - -![manage-service](./media/manage-web-service-endpoints-using-api-management/manage-service.png) +To create the API: -Click **APIs** from the **API Management** menu on the left, and then click **Add API**. +1. In the Azure portal, open the service instance you just created. +2. In the left navigation pane, select **APIs**. -![api-management-menu](./media/manage-web-service-endpoints-using-api-management/api-management-menu.png) + ![api-management-menu](./media/manage-web-service-endpoints-using-api-management/api-management.png) -Type **AzureML Demo API** as the **Web API name**. Type **https://ussouthcentral.services.azureml.net** as the **Web service URL**. Type **azureml-demo** as the **Web API URL suffix**. Check **HTTPS** as the **Web API URL** scheme. Select **Starter** as **Products**. When finished, click **Save** to create the API. +1. Click **Add API**. +2. Enter a **Web API name** (this example uses "AzureML Demo API"). +3. For **Web service URL**, enter "`https://ussouthcentral.services.azureml.net`". +4. Enter a **Web API URL suffix". This will become the last part of the URL that customers will use for sending requests to the service instance (this example uses "azureml-demo"). +5. For **Web API URL scheme**, select **HTTPS**. +6. For **Products**, select **Starter**. +7. Click **Save**. -![add-new-api](./media/manage-web-service-endpoints-using-api-management/add-new-api.png) ## Add the operations -Click **Add operation** to add operations to this API. 
-![add-operation](./media/manage-web-service-endpoints-using-api-management/add-operation.png) +Operations are added and configured to an API in the publisher portal. To access the publisher portal, click **Publisher portal** in the Azure portal for your API Management service, select **APIs**, **Operations**, then click **Add operation**. + +![add-operation](./media/manage-web-service-endpoints-using-api-management/add-an-operation.png) The **New operation** window will be displayed and the **Signature** tab will be selected by default. ## Add RRS Operation -First create an operation for the AzureML RRS service. Select **POST** as the **HTTP verb**. Type **/workspaces/{workspace}/services/{service}/execute?api-version={apiversion}&details={details}** as the **URL template**. Type **RRS Execute** as the **Display name**. +First create an operation for the AzureML RRS service: + +1. For the **HTTP verb**, select **POST**. +2. For the **URL template**, type "`/workspaces/{workspace}/services/{service}/execute?api-version={apiversion}&details={details}`". +3. Enter a **Display name** (this example uses "RRS Execute"). -![add-rrs-operation-signature](./media/manage-web-service-endpoints-using-api-management/add-rrs-operation-signature.png) + ![add-rrs-operation-signature](./media/manage-web-service-endpoints-using-api-management/add-rrs-operation-signature.png) -Click **Responses** > **ADD** on the left and select **200 OK**. Click **Save** to save this operation. +4. Click **Responses** > **ADD** on the left and select **200 OK**. +5. Click **Save** to save this operation. -![add-rrs-operation-response](./media/manage-web-service-endpoints-using-api-management/add-rrs-operation-response.png) + ![add-rrs-operation-response](./media/manage-web-service-endpoints-using-api-management/add-rrs-operation-response.png) ## Add BES Operations -Screenshots are not included for the BES operations as they are very similar to those for adding the RRS operation. + +> [!NOTE] +> Screenshots are not included here for the BES operations as they are very similar to those for adding the RRS operation. ### Submit (but not start) a Batch Execution job -Click **add operation** to add the AzureML BES operation to the API. Select **POST** for the **HTTP verb**. Type **/workspaces/{workspace}/services/{service}/jobs?api-version={apiversion}** for the **URL template**. Type **BES Submit** for the **Display name**. Click **Responses** > **ADD** on the left and select **200 OK**. Click **Save** to save this operation. + +1. Click **add operation** to add a BES operation to the API. +2. For the **HTTP verb**, select **POST**. +3. For the **URL template**, type "`/workspaces/{workspace}/services/{service}/jobs?api-version={apiversion}`". +4. Enter a **Display name** (this example uses "BES Submit"). +5. Click **Responses** > **ADD** on the left and select **200 OK**. +6. Click **Save**. ### Start a Batch Execution job -Click **add operation** to add the AzureML BES operation to the API. Select **POST** for the **HTTP verb**. Type **/workspaces/{workspace}/services/{service}/jobs/{jobid}/start?api-version={apiversion}** for the **URL template**. Type **BES Start** for the **Display name**. Click **Responses** > **ADD** on the left and select **200 OK**. Click **Save** to save this operation. + +1. Click **add operation** to add a BES operation to the API. +2. For the **HTTP verb**, select **POST**. +3. For the **HTTP verb**, type "`/workspaces/{workspace}/services/{service}/jobs/{jobid}/start?api-version={apiversion}`". +4. 
Enter a **Display name** (this example uses "BES Start"). +6. Click **Responses** > **ADD** on the left and select **200 OK**. +7. Click **Save**. ### Get the status or result of a Batch Execution job -Click **add operation** to add the AzureML BES operation to the API. Select **GET** for the **HTTP verb**. Type **/workspaces/{workspace}/services/{service}/jobs/{jobid}?api-version={apiversion}** for the **URL template**. Type **BES Status** for the **Display name**. Click **Responses** > **ADD** on the left and select **200 OK**. Click **Save** to save this operation. + +1. Click **add operation** to add a BES operation to the API. +2. For the **HTTP verb**, select **GET**. +3. For the **URL template**, type "`/workspaces/{workspace}/services/{service}/jobs/{jobid}?api-version={apiversion}`". +4. Enter a **Display name** (this example uses "BES Status"). +6. Click **Responses** > **ADD** on the left and select **200 OK**. +7. Click **Save**. ### Delete a Batch Execution job -Click **add operation** to add the AzureML BES operation to the API. Select **DELETE** for the **HTTP verb**. Type **/workspaces/{workspace}/services/{service}/jobs/{jobid}?api-version={apiversion}** for the **URL template**. Type **BES Delete** for the **Display name**. Click **Responses** > **ADD** on the left and select **200 OK**. Click **Save** to save this operation. -## Call an operation from the Developer Portal -Operations can be called directly from the Developer portal, which provides a convenient way to view and test the operations of an API. In this guide step you will call the **RRS Execute** method that was added to the **AzureML Demo API**. Click **Developer portal** from the menu at the top right of the Classic Portal. +1. Click **add operation** to add a BES operation to the API. +2. For the **HTTP verb**, select **DELETE**. +3. For the **URL template**, type "`/workspaces/{workspace}/services/{service}/jobs/{jobid}?api-version={apiversion}`". +4. Enter a **Display name** (this example uses "BES Delete"). +5. Click **Responses** > **ADD** on the left and select **200 OK**. +6. Click **Save**. + +## Call an operation from the Developer portal + +Operations can be called directly from the Developer portal, which provides a convenient way to view and test the operations of an API. In this step you will call the **RRS Execute** method that was added to the **AzureML Demo API**. + +1. Click **Developer portal**. -![developer-portal](./media/manage-web-service-endpoints-using-api-management/developer-portal.png) + ![developer-portal](./media/manage-web-service-endpoints-using-api-management/developer-portal.png) -Click **APIs** from the top menu, and then click **AzureML Demo API** to see the operations available. +2. Click **APIs** from the top menu, and then click **AzureML Demo API** to see the operations available. -![demoazureml-api](./media/manage-web-service-endpoints-using-api-management/demoazureml-api.png) + ![demoazureml-api](./media/manage-web-service-endpoints-using-api-management/demoazureml-api.png) -Select **RRS Execute** for the operation. Click **Try It**. +3. Select **RRS Execute** for the operation. Click **Try It**. -![try-it](./media/manage-web-service-endpoints-using-api-management/try-it.png) + ![try-it](./media/manage-web-service-endpoints-using-api-management/try-it.png) -For Request parameters, type your **workspace**, **service**, **2.0** for the **apiversion**, and **true** for the **details**. 
You can find your **workspace** and **service** in the AzureML web service dashboard (see **Test the web service** in Appendix A). +4. For **Request parameters**, type your **workspace** and **service**, "2.0" for the **apiversion**, and "true" for the **details**. You can find your **workspace** and **service** in the AzureML web service dashboard (see **Test the web service** in Appendix A). -For Request headers, click **Add header** and type **Content-Type** and **application/json**, then click **Add header** and type **Authorization** and **Bearer **. You can find your **api key** in the AzureML web service dashboard (see **Test the web service** in Appendix A). + For **Request headers**, click **Add header** and type "Content-Type" and "application/json". Click **Add header** again and type "Authorization" and "Bearer <your API key>". You can find your API key in the AzureML web service dashboard (see **Test the web service** in Appendix A). -Type **{"Inputs": {"input1": {"ColumnNames": ["Col2"], "Values": [["This is a good day"]]}}, "GlobalParameters": {}}** for the request body. + For **Request body**, type `{"Inputs": {"input1": {"ColumnNames": ["Col2"], "Values": [["This is a good day"]]}}, "GlobalParameters": {}}`. -![azureml-demo-api](./media/manage-web-service-endpoints-using-api-management/azureml-demo-api.png) + ![azureml-demo-api](./media/manage-web-service-endpoints-using-api-management/azureml-demo-api.png) -Click **Send**. +5. Click **Send**. -![send](./media/manage-web-service-endpoints-using-api-management/send.png) + ![send](./media/manage-web-service-endpoints-using-api-management/send.png) After an operation is invoked, the developer portal displays the **Requested URL** from the back-end service, the **Response status**, the **Response headers**, and any **Response content**. 
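+The same **RRS Execute** request can also be sent outside the Developer portal. The following Python sketch (using the `requests` package) is a minimal, hypothetical example of the call described above: the URL and key values are placeholders that you must replace with your own, and the URL should match the **Requested URL** that the Developer portal displays for the operation (your API Management gateway address plus the operation path). If your API Management product requires a subscription key, you may also need to send it (typically in an `Ocp-Apim-Subscription-Key` header).
+
+```python
+import requests
+
+# Placeholder values -- replace with your own (see "Test the web service" in Appendix A).
+url = "<Requested URL shown by the Developer portal for the RRS Execute operation>"
+api_key = "<your AzureML API key>"
+
+headers = {
+    "Content-Type": "application/json",
+    "Authorization": "Bearer " + api_key,
+    # "Ocp-Apim-Subscription-Key": "<your subscription key>",  # add if your gateway requires it
+}
+
+# The same request body used in the Developer portal test above.
+body = {
+    "Inputs": {
+        "input1": {
+            "ColumnNames": ["Col2"],
+            "Values": [["This is a good day"]],
+        }
+    },
+    "GlobalParameters": {},
+}
+
+response = requests.post(url, headers=headers, json=body)
+print(response.status_code)
+print(response.text)
+```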
diff --git a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/add-an-operation.png b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/add-an-operation.png new file mode 100644 index 0000000000000..fdd4cfea60cdf Binary files /dev/null and b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/add-an-operation.png differ diff --git a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/add-rrs-operation-response.png b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/add-rrs-operation-response.png index 3b7b8d05acf03..bb67b4c0daf0f 100644 Binary files a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/add-rrs-operation-response.png and b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/add-rrs-operation-response.png differ diff --git a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/add-rrs-operation-signature.png b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/add-rrs-operation-signature.png index e8c050cbd2fee..d7f5c777027f5 100644 Binary files a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/add-rrs-operation-signature.png and b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/add-rrs-operation-signature.png differ diff --git a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/api-management.png b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/api-management.png new file mode 100644 index 0000000000000..5d751c9dd8cf2 Binary files /dev/null and b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/api-management.png differ diff --git a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/azureml-demo-api.png b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/azureml-demo-api.png index 352706f73ab99..901a0b8ed20aa 100644 Binary files a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/azureml-demo-api.png and b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/azureml-demo-api.png differ diff --git a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/create-service.png b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/create-service.png new file mode 100644 index 0000000000000..ed2f4cb5be065 Binary files /dev/null and b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/create-service.png differ diff --git a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/demoazureml-api.png b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/demoazureml-api.png index 3f209ec901d91..3c62eea561717 100644 Binary files a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/demoazureml-api.png and b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/demoazureml-api.png differ diff --git 
a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/find-api-key.png b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/find-api-key.png index 5c799d5dc1fa9..3af13056ed936 100644 Binary files a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/find-api-key.png and b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/find-api-key.png differ diff --git a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/find-workspace-and-service.png b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/find-workspace-and-service.png index 5fb2d021b8a54..a191882b6863f 100644 Binary files a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/find-workspace-and-service.png and b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/find-workspace-and-service.png differ diff --git a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/response-status.png b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/response-status.png index ccd3dc67c68a6..420306fd94f5a 100644 Binary files a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/response-status.png and b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/response-status.png differ diff --git a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/test.png b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/test.png index 71d5d282fc10a..f54a577ecbf26 100644 Binary files a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/test.png and b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/test.png differ diff --git a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/try-it.png b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/try-it.png index 42a39bb4f77ce..2722c09bdd7ae 100644 Binary files a/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/try-it.png and b/articles/machine-learning/studio/media/manage-web-service-endpoints-using-api-management/try-it.png differ diff --git a/articles/machine-learning/studio/media/troubleshooting-retraining-a-model/azure-portal-machine-learning-tab.png b/articles/machine-learning/studio/media/troubleshooting-retraining-a-model/azure-portal-machine-learning-tab.png deleted file mode 100644 index 3514bb5aac21f..0000000000000 Binary files a/articles/machine-learning/studio/media/troubleshooting-retraining-a-model/azure-portal-machine-learning-tab.png and /dev/null differ diff --git a/articles/machine-learning/studio/media/troubleshooting-retraining-a-model/azure-portal-update-resource.png b/articles/machine-learning/studio/media/troubleshooting-retraining-a-model/azure-portal-update-resource.png deleted file mode 100644 index 889a902db5534..0000000000000 Binary files a/articles/machine-learning/studio/media/troubleshooting-retraining-a-model/azure-portal-update-resource.png and /dev/null differ diff --git a/articles/machine-learning/studio/media/troubleshooting-retraining-a-model/check-workspace-region.png 
b/articles/machine-learning/studio/media/troubleshooting-retraining-a-model/check-workspace-region.png new file mode 100644 index 0000000000000..9c84b5cee2d16 Binary files /dev/null and b/articles/machine-learning/studio/media/troubleshooting-retraining-a-model/check-workspace-region.png differ diff --git a/articles/machine-learning/studio/publish-a-machine-learning-web-service.md b/articles/machine-learning/studio/publish-a-machine-learning-web-service.md index c4b1b01ce6c0c..d26d1e27a5ebe 100644 --- a/articles/machine-learning/studio/publish-a-machine-learning-web-service.md +++ b/articles/machine-learning/studio/publish-a-machine-learning-web-service.md @@ -105,7 +105,7 @@ On the **CONFIGURATION** page, you can change the description, title, update the Once you've deployed the web service, you can: * **Access** it through the web service API. -* **Manage** it through Azure Machine Learning web services portal or the Azure classic portal. +* **Manage** it through Azure Machine Learning web services portal. * **Update** it if your model changes. #### Access your New web service @@ -138,7 +138,7 @@ To test the Batch Execution Service, click **Test** preview link . On the Batch ![Test the web service](./media/publish-a-machine-learning-web-service/figure-3.png) -On the **CONFIGURATION** page, you can change the display name of the service and give it a description. The name and description is displayed in the [Azure classic portal](http://manage.windowsazure.com/) where you manage your web services. +On the **CONFIGURATION** page, you can change the display name of the service and give it a description. The name and description is displayed in the [Azure portal](https://portal.azure.com/) where you manage your web services. You can provide a description for your input data, output data, and web service parameters by entering a string for each column under **INPUT SCHEMA**, **OUTPUT SCHEMA**, and **Web SERVICE PARAMETER**. These descriptions are used in the sample code documentation provided for the web service. diff --git a/articles/machine-learning/studio/retrain-a-classic-web-service.md b/articles/machine-learning/studio/retrain-a-classic-web-service.md index a99a904614c64..30e21d540c204 100644 --- a/articles/machine-learning/studio/retrain-a-classic-web-service.md +++ b/articles/machine-learning/studio/retrain-a-classic-web-service.md @@ -30,7 +30,7 @@ You must have set up a training experiment and a predictive experiment as shown For additional information on Deploying web services, see [Deploy an Azure Machine Learning web service](publish-a-machine-learning-web-service.md). -## Add a new Endpoint +## Add a new endpoint The Predictive Web Service that you deployed contains a default scoring endpoint that is kept in sync with the original training and scoring experiments trained model. To update your web service to with a new trained model, you must create a new scoring endpoint. To create a new scoring endpoint, on the Predictive Web Service that can be updated with the trained model: @@ -40,11 +40,10 @@ To create a new scoring endpoint, on the Predictive Web Service that can be upda > > -There are three ways in which you can add a new end point to a web service: +There are two ways in which you can add a new end point to a web service: 1. Programmatically 2. Use the Microsoft Azure Web Services portal -3. 
Use the Azure classic portal ### Programmatically add an endpoint You can add scoring endpoints using the sample code provided in this [github repository](https://github.com/raymondlaghaeian/AML_EndpointMgmt/blob/master/Program.cs). @@ -55,18 +54,10 @@ You can add scoring endpoints using the sample code provided in this [github rep 3. Click **Add**. 4. Type a name and description for the new endpoint. Select the logging level and whether sample data is enabled. For more information on logging, see [Enable logging for Machine Learning web services](web-services-logging.md). -### Use the Azure classic portal to add an endpoint -1. Sign in to the [classic Azure portal](https://manage.windowsazure.com). -2. In the left menu, click **Machine Learning**. -3. Under Name, click your workspace and then click **Web Services**. -4. Under Name, click **Census Model [predictive exp.]**. -5. At the bottom of the page, click **Add Endpoint**. For more information on adding endpoints, see [Creating Endpoints](create-endpoint.md). - -## Update the added endpoint’s Trained Model +## Update the added endpoint’s trained model To complete the retraining process, you must update the trained model of the new endpoint that you added. -* If you added the new endpoint using the classic Azure portal, you can click the new endpoint's name in the portal, then the **UpdateResource** link to get the URL you would need to update the endpoint's model. -* If you added the endpoint using the sample code, this includes location of the help URL identified by the *HelpLocationURL* value in the output. +If you added the endpoint using the sample code, the output includes the location of the help URL, identified by the *HelpLocationURL* value. To retrieve the path URL: diff --git a/articles/machine-learning/studio/retrain-existing-resource-manager-based-web-service.md b/articles/machine-learning/studio/retrain-existing-resource-manager-based-web-service.md index 737ee69482246..bfd88618fe6fa 100644 --- a/articles/machine-learning/studio/retrain-existing-resource-manager-based-web-service.md +++ b/articles/machine-learning/studio/retrain-existing-resource-manager-based-web-service.md @@ -83,20 +83,19 @@ In the **Basic consumption info** section of the **Consume** page, locate the pr ### Update the Azure Storage information The BES sample code uploads a file from a local drive (for example, "C:\temp\CensusIpnput.csv") to Azure Storage, processes it, and writes the results back to Azure Storage. -To update the Azure Storage information, you must retrieve the storage account name, key, and container information for your storage account from the Azure classic portal, and then update the corresponding values in the code. After running your experiment, the resulting workflow should be similar to the following: ![Resulting workflow after run][4] -1. Sign in to the Azure classic portal. -2. In the left navigation column, click **Storage**. +1. Sign in to the Azure portal. +2. In the left navigation column, click **More services**, search for **Storage accounts**, and select it. 3. From the list of storage accounts, select one to store the retrained model. -4. At the bottom of the page, click **Manage Access Keys**. -5. Copy and save the **Primary Access Key** and close the dialog. -6. At the top of the page, click **Containers**. +4. In the left navigation column, click **Access keys**. +5. Copy and save the **Primary Access Key**. +6. In the left navigation column, click **Containers**. +7. 
Select an existing container, or create a new one and save the name. -Locate the *StorageAccountName*, *StorageAccountKey*, and *StorageContainerName* declarations, and update the values that you saved from the classic portal. +Locate the *StorageAccountName*, *StorageAccountKey*, and *StorageContainerName* declarations, and update the values that you saved from the portal. const string StorageAccountName = "mystorageacct"; // Replace this with your Azure storage account name const string StorageAccountKey = "a_storage_account_key"; // Replace this with your Azure Storage key diff --git a/articles/machine-learning/studio/scaling-webservice.md b/articles/machine-learning/studio/scaling-webservice.md index 4a73baf5ed1aa..c030a8ec7a575 100644 --- a/articles/machine-learning/studio/scaling-webservice.md +++ b/articles/machine-learning/studio/scaling-webservice.md @@ -24,11 +24,11 @@ ms.author: neerajkh > > -By default, each published Web service is configured to support 20 concurrent requests and can be as high as 200 concurrent requests. While the Azure classic portal provides a way to set this value, Azure Machine Learning automatically optimizes the setting to provide the best performance for your web service and the portal value is ignored. +By default, each published Web service is configured to support 20 concurrent requests and can be as high as 200 concurrent requests. Azure Machine Learning automatically optimizes the setting to provide the best performance for your web service, and any value you set manually is ignored. If you plan to call the API with a higher load than a Max Concurrent Calls value of 200 will support, you should create multiple endpoints on the same Web service. You can then randomly distribute your load across all of them. -The scaling of a Web service is a common task. Some reasons to scale are to support more than 200 concurrent requests, increase availability through multiple endpoints, or provide separate endpoints for the web service. You can increase the scale by adding additional endpoints for the same Web service through [Azure classic portal](https://manage.windowsazure.com/) or the [Azure Machine Learning Web Service](https://services.azureml.net/) portal. +The scaling of a Web service is a common task. Some reasons to scale are to support more than 200 concurrent requests, increase availability through multiple endpoints, or provide separate endpoints for the web service. You can increase the scale by adding additional endpoints for the same Web service through the [Azure Machine Learning Web Service](https://services.azureml.net/) portal. For more information on adding new endpoints, see [Creating Endpoints](create-endpoint.md). diff --git a/articles/machine-learning/studio/troubleshooting-retraining-models.md b/articles/machine-learning/studio/troubleshooting-retraining-models.md index f10e04494e8b7..c01105b8ba82e 100644 --- a/articles/machine-learning/studio/troubleshooting-retraining-models.md +++ b/articles/machine-learning/studio/troubleshooting-retraining-models.md @@ -3,7 +3,7 @@ title: Troubleshoot retraining an Azure Machine Learning Classic web service | M description: Identify and correct common issues encounted when you are retraining the model for an Azure Machine Learning Web Service. 
services: machine-learning documentationcenter: '' -author: VDonGlover +author: garyericson manager: raymondl editor: '' @@ -13,23 +13,23 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 04/19/2017 -ms.author: v-donglo +ms.date: 11/01/2017 +ms.author: garye --- -# Troubleshooting the retraining of an Azure Machine Learning Classic Web service +# Troubleshooting the retraining of an Azure Machine Learning Classic web service ## Retraining overview When you deploy a predictive experiment as a scoring web service it is a static model. As new data becomes available or when the consumer of the API has their own data, the model needs to be retrained. -For a complete walkthrough of the retraining process of a Classic Web service, see [Retrain Machine Learning Models Programmatically](retrain-models-programmatically.md). +For a complete walkthrough of the retraining process of a Classic web service, see [Retrain Machine Learning Models Programmatically](retrain-models-programmatically.md). ## Retraining process -When you need to retrain the Web service, you must add some additional pieces: +When you need to retrain the web service, you must add some additional pieces: -* A Web service deployed from the Training Experiment. The experiment must have a **Web Service Output** module attached to the output of the **Train Model** module. +* A web service deployed from the Training Experiment. The experiment must have a **Web Service Output** module attached to the output of the **Train Model** module. ![Attach the web service output to the train model.][image1] -* A new endpoint added to your scoring Web service. You can add the endpoint programmatically using the sample code referenced in the Retrain Machine Learning models programmatically topic or through the Azure classic portal. +* A new endpoint added to your scoring web service. You can add the endpoint programmatically using the sample code referenced in the Retrain Machine Learning models programmatically topic or through the Azure Machine Learning Web Services portal. You can then use the sample C# code from the Training Web Service's API help page to retrain model. Once you have evaluated the results and are satisfied with them, you update the trained model scoring web service using the new endpoint that you added. @@ -42,7 +42,7 @@ With all the pieces in place, the major steps you must take to retrain the model ## Common obstacles ### Check to see if you have the correct PATCH URL -The PATCH URL you are using must be the one associated with the new scoring endpoint you added to the scoring Web service. There are a number of ways to obtain the PATCH URL: +The PATCH URL you are using must be the one associated with the new scoring endpoint you added to the scoring web service. There are a number of ways to obtain the PATCH URL: **Option 1: Programatically** @@ -52,61 +52,57 @@ To get the correct PATCH URL: 2. From the output of AddEndpoint, find the *HelpLocation* value and copy the URL. ![HelpLocation in the output of the addEndpoint sample.][image2] -3. Paste the URL into a browser to navigate to a page that provides help links for the Web service. +3. Paste the URL into a browser to navigate to a page that provides help links for the web service. 4. Click the **Update Resource** link to open the patch help page. -**Option 2: Use the Azure classic portal** +**Option 2: Use the Azure Machine Learning Web Services portal** -1. Sign in to the [Azure classic portal](https://manage.windowsazure.com). -2. 
Open the Machine Learning tab. - ![Machine leaning tab.][image4] -3. Click your workspace name, then **Web Services**. -4. Click the scoring Web service you are working with. (If you did not modify the default name of the web service, it will end in [Scoring Exp.].) -5. Click **Add Endpoint**. -6. After the endpoint is added, click the endpoint name. Then click **Update Resource** to open the patching help page. +1. Sign in to the [Azure Machine Learning Web Services](https://services.azureml.net/) portal. +2. Click **Web Services** or **Classic Web Services** at the top. +3. Click the scoring web service you are working with (if you didn't modify the default name of the web service, it will end in "[Scoring Exp.]"). +4. Click **+NEW**. +5. After the endpoint is added, click the endpoint name. +6. Under the **Patch** URL, click **API Help** to open the patching help page. > [!NOTE] -> If you have added the endpoint to the Training Web Service instead of the Predictive Web Service, you will receive the following error when you click the **Update Resource** link: Sorry, but this feature is not supported or available in this context. This Web Service has no updatable resources. We apologize for the inconvenience and are working on improving this workflow. +> If you have added the endpoint to the Training Web Service instead of the Predictive Web Service, you will receive the following error when you click the **Update Resource** link: "Sorry, but this feature is not supported or available in this context. This Web Service has no updatable resources. We apologize for the inconvenience and are working on improving this workflow." > > -![New endpoint dashboard.][image3] - The PATCH help page contains the PATCH URL you must use and provides sample code you can use to call it. ![Patch URL.][image5] ### Check to see that you are updating the correct scoring endpoint -* Do not patch the Training Web Service: The patch operation must be performed on the scoring Web service. -* Do not patch the default endpoint on Web service: The patch operation must be performed on the new scoring Web service endpoint that you added. +* Do not patch the training web service: The patch operation must be performed on the scoring web service. +* Do not patch the default endpoint on the web service: The patch operation must be performed on the new scoring web service endpoint that you added. -You can verify which Web service the endpoint is on by visiting the Azure classic portal. +You can verify which web service the endpoint is on by visiting the Web Services portal. > [!NOTE] -> Be sure you are adding the endpoint to the Predictive Web Service, not the Training Web Service. If you have correctly deployed both a Training and a Predictive Web Service, you should see two separate Web services listed. The Predictive Web Service should end with "[predictive exp.]". +> Be sure you are adding the endpoint to the Predictive Web Service, not the Training Web Service. If you have correctly deployed both a Training and a Predictive Web Service, you should see two separate web services listed. The Predictive Web Service should end with "[predictive exp.]". > > -1. Sign in to the [Azure classic portal](https://manage.windowsazure.com). -2. 
- -### Check the workspace that your web service is in to ensure it is in the correct region -1. Sign in to the [Azure classic portal](https://manage.windowsazure.com). -2. Select Machine Learning from the menu. +1. Sign in to the [Azure Machine Learning Web Services](https://services.azureml.net/) portal. +2. Click **Web Services** or **Classic Web Services**. +3. Select your Predictive Web Service. +4. Verify that your new endpoint was added to the web service. + +### Check that your workspace is in the same region as the web service +1. Sign in to [Machine Learning Studio](https://studio.azureml.net/). +2. At the top, click the drop-down list of your workspaces. + ![Machine learning region UI.][image4] -3. Verify the location of your workspace. + +3. Verify the region that your workspace is in. [image1]: ./media/troubleshooting-retraining-a-model/ml-studio-tm-connnected-to-web-service-out.png [image2]: ./media/troubleshooting-retraining-a-model/addEndpoint-output.png [image3]: ./media/troubleshooting-retraining-a-model/azure-portal-update-resource.png -[image4]: ./media/troubleshooting-retraining-a-model/azure-portal-machine-learning-tab.png +[image4]: ./media/troubleshooting-retraining-a-model/check-workspace-region.png [image5]: ./media/troubleshooting-retraining-a-model/ml-help-page-patch-url.png [image6]: ./media/troubleshooting-retraining-a-model/retraining-output.png [image7]: ./media/troubleshooting-retraining-a-model/web-services-tab.png diff --git a/articles/machine-learning/studio/walkthrough-1-create-ml-workspace.md b/articles/machine-learning/studio/walkthrough-1-create-ml-workspace.md index 304c9c2eb77da..820dd06214611 100644 --- a/articles/machine-learning/studio/walkthrough-1-create-ml-workspace.md +++ b/articles/machine-learning/studio/walkthrough-1-create-ml-workspace.md @@ -32,15 +32,6 @@ This is the first step of the walkthrough, [Develop a predictive analytics solut To use Machine Learning Studio, you need to have a Microsoft Azure Machine Learning workspace. This workspace contains the tools you need to create, manage, and publish experiments. - - The administrator for your Azure subscription needs to create the workspace and then add you as an owner or contributor. For details, see [Create and share an Azure Machine Learning workspace](create-workspace.md). After your workspace is created, open Machine Learning Studio ([https://studio.azureml.net/Home](https://studio.azureml.net/Home)). If you have more than one workspace, you can select the workspace in the toolbar in the upper-right corner of the window. diff --git a/articles/machine-learning/studio/walkthrough-5-publish-web-service.md b/articles/machine-learning/studio/walkthrough-5-publish-web-service.md index 5931d657be3a3..18dd7de12dccb 100644 --- a/articles/machine-learning/studio/walkthrough-5-publish-web-service.md +++ b/articles/machine-learning/studio/walkthrough-5-publish-web-service.md @@ -188,26 +188,6 @@ The results of the test are displayed on the right-hand side of the page in the ## Manage the web service -### Manage a Classic web service in the Azure classic portal - -Once you've deployed your Classic web service, you can manage it from the [Azure classic portal](https://manage.windowsazure.com). - -1. Sign in to the [Azure classic portal](https://manage.windowsazure.com) -2. In the Microsoft Azure services panel, click **MACHINE LEARNING** -3. Click your workspace -4. Click the **Web services** tab -5. Click the web service we created -6. 
Click the "default" endpoint - -From here, you can do things like monitor how the web service is doing and make performance tweaks by changing how many concurrent calls the service can handle. - -For more details, see: - -* [Creating Endpoints](create-endpoint.md) -* [Scaling web service](scaling-webservice.md) - -### Manage a Classic or New web service in the Azure Machine Learning Web Services portal - Once you've deployed your web service, whether Classic or New, you can manage it from the [Microsoft Azure Machine Learning Web Services](https://services.azureml.net/quickstart) portal. To monitor the performance of your web service: diff --git a/articles/machine-learning/team-data-science-process/agile-development.md b/articles/machine-learning/team-data-science-process/agile-development.md new file mode 100644 index 0000000000000..15a7d5b74dcc8 --- /dev/null +++ b/articles/machine-learning/team-data-science-process/agile-development.md @@ -0,0 +1,192 @@ +--- +title: Agile development of data science projects - Azure Machine Learning | Microsoft Docs +description: How developers can execute a data science project in a systematic, version controlled, and collaborative way within a project team by using the Team Data Science Process. +documentationcenter: '' +author: bradsev +manager: cgronlun +editor: cgronlun + +ms.assetid: +ms.service: machine-learning +ms.workload: data-services +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 11/28/2017 +ms.author: bradsev; + +--- + + +# Agile development of data science projects + +This document describes how developers can execute a data science project in a systematic, version controlled, and collaborative way within a project team by using the [Team Data Science Process](overview.md) (TDSP). The TDSP is a framework developed by Microsoft that provides a structured sequence of activities to execute cloud-based, predictive analytics solutions efficiently. For an outline of the personnel roles, and their associated tasks that are handled by a data science team standardizing on this process, see [Team Data Science Process roles and tasks](roles-tasks.md). + +This article includes instructions on how to: + +1. do **sprint planning** for work items involved in a project.
If you are unfamiliar with sprint planning, you can find details and general information [here](https://en.wikipedia.org/wiki/Sprint_(software_development) "here"). +2. **add work items** to sprints. + +> [!NOTE] +> The steps needed to set up a TDSP team environment using Visual Studio Team Services (VSTS) are outlined in the following set of instructions. They specify how to accomplish these tasks with VSTS because that is how to implement TDSP at Microsoft. If you choose to use VSTS, items (3) and (4) in the previous list are benefits that you get naturally. If another code hosting platform is used for your group, the tasks that need to be completed by the team lead generally do not change. But the way to complete these tasks is going to be different. For example, the item in section six, **Link a work item with a Git branch**, might not be as easy as it is on VSTS. +> +> + +The following figure illustrates a typical sprint planning, coding, and source-control workflow involved in implementing a data science project: + +![1](./media/agile-development/1-project-execute.png) + + +## 1. Terminology + +In the TDSP sprint planning framework, there are four frequently used types of **work items**: **Feature**, **User Story**, **Task**, and **Bug**. Each team project maintains a single backlog for all work items. There is no backlog at the Git repository level under a team project. Here are their definitions: + +- **Feature**: A feature corresponds to a project engagement. Different engagements with a client are considered different features. Similarly, it is best to consider different phases of a project with a client as different features. If you choose a schema such as ***ClientName-EngagementName*** to name your features, then you can easily recognize the context of the project/engagement from the names themselves. +- **Story**: Stories are different work items that are needed to complete a feature (project) end-to-end. Examples of stories include: + - Getting Data + - Exploring Data + - Generating Features + - Building Models + - Operationalizing Models + - Retraining Models +- **Task**: Tasks are assignable code or document work items or other activities that need to be done to complete a specific story. For example, tasks in the story *Getting Data* could be: + - Getting Credentials of SQL Server + - Uploading Data to SQL Data Warehouse. +- **Bug**: Bugs usually refer to fixes for existing code or documents that are made while completing a task. If the bug is caused by a missing stage or task, it can escalate to being a story or a task. + +> [!NOTE] +> The concepts of features, stories, tasks, and bugs are borrowed from software code management (SCM) for use in data science. They might differ slightly from their conventional SCM definitions. +> +> + +> [!NOTE] +> Data scientists may feel more comfortable using an agile template that specifically aligns with the TDSP lifecycle stages. With that in mind, an Agile-derived sprint planning template has been created, where Epics, Stories, etc. are replaced by TDSP lifecycle stages or substages. For instructions on how to create an agile template, see [Set up agile data science process in Visual Studio Online](agile-development.md#set-up-agile-dsp-6). +> +> + +## 2. Sprint planning + +Sprint planning is useful for project prioritization, and resource planning and allocation. Many data scientists are engaged with multiple projects, each of which can take months to complete. Projects often proceed at different paces. 
On the VSTS server, you can easily create, manage, and track work items in your team project and conduct sprint planning to ensure that your projects are moving forward as expected. + +Follow [this link](https://www.visualstudio.com/en-us/docs/work/scrum/sprint-planning) for the step-by-step instructions on sprint planning in VSTS. + + +## 3. Add a feature + +After your project repository is created under a team project, go to the team **Overview** page and click **Manage work**. + +![2](./media/agile-development/2-sprint-team-overview.png) + +To include a feature in the backlog, click **Backlogs** --> **Features** --> **New**, type in the feature **Title** (usually your project name), and then click **Add**. + +![3](./media/agile-development/3-sprint-team-add-work.png) + +Double-click the feature you created. Fill in the descriptions, assign team members for this feature, and set planning parameters for this feature. + +You can also link this feature to the project repository. Click **Add link** under the **Development** section. After you have finished editing the feature, click **Save & Close** to exit. + + +## 4. Add a story under a feature + +Under the feature, stories can be added to describe major steps needed to finish the (feature) project. To add a new story, click the **+** sign to the left of the feature in backlog view. + +![4](./media/agile-development/4-sprint-add-story.png) + +You can edit the details of the story, such as the status, description, comments, planning, and priority, in the pop-up window. + +![5](./media/agile-development/5-sprint-edit-story.png) + +You can link this story to an existing repository by clicking **+ Add link** under **Development**. + +![6](./media/agile-development/6-sprint-link-existing-branch.png) + + +## 5. Add a task to a story + +Tasks are specific detailed steps that are needed to complete each story. After all tasks of a story are completed, the story should be completed too. + +To add a task to a story, click the **+** sign next to the story item, select **Task**, and then fill in the detailed information of this task in the pop-up window. + +![7](./media/agile-development/7-sprint-add-task.png) + +After the features, stories, and tasks are created, you can view them in the **Backlog** or **Board** views to track their status. + +![8](./media/agile-development/8-sprint-backlog-view.png) + +![9](./media/agile-development/9-link-to-a-new-branch.png) + + +## 6. Set up an Agile TDSP work template in Visual Studio Online + +This section explains how to set up an agile data science process template that uses the TDSP data science lifecycle stages and tracks work items with Visual Studio Online (VSO). The steps below walk through an example of setting up the data science-specific agile process template *AgileDataScienceProcess* and show how to create data science work items based on the template. + +### Agile Data Science Process Template Setup + +1. Navigate to the server homepage, **Configure** -> **Process**. + + ![10](./media/agile-development/10-settings.png) + +2. Navigate to **All processes** -> **Processes**. Under **Agile**, click **Create inherited process**. Then enter the process name "AgileDataScienceProcess" and click **Create process**. + + ![11](./media/agile-development/11-agileds.png) + +3. Under the **AgileDataScienceProcess** -> **Work item types** tab, disable **Epic**, **Feature**, **User Story**, and **Task** work item types by **Configure -> Disable**. + + ![12](./media/agile-development/12-disable.png) + +4. 
Navigate to **AgileDataScienceProcess** -> **Backlog levels** tab. Rename "Epics" to "TDSP Projects" by clicking **Configure** -> **Edit/Rename**. In the same dialog box, click **+New work item type** in "Data Science Project" and set the value of **Default work item type** to "TDSP Project". + + ![13](./media/agile-development/13-rename.png) + +5. Similarly, change the backlog name "Features" to "TDSP Stages" and add the following to the **New work item type**: + + - Business Understanding + - Data Acquisition + - Modeling + - Deployment + +6. Rename "User Story" to "TDSP Substages" with the default work item type set to the newly created "TDSP Substage" type. + +7. Set "Tasks" to the newly created work item type "TDSP Task". + +8. After these steps, the Backlog levels should look like this: + + ![14](./media/agile-development/14-template.png) + + +### Create Data Science Work Items + +After the data science process template is created, you can create and track your data science work items that correspond to the TDSP lifecycle. + +1. When you create a new team project, select "Agile\AgileDataScienceProcess" as the **Work item process**: + + ![15](./media/agile-development/15-newproject.png) + +2. Navigate to the newly created team project, and click on **Work** -> **Backlogs**. + +3. Make "TDSP Projects" visible by clicking on **Configure team settings** and checking "TDSP Projects"; then save. + + ![16](./media/agile-development/16-enabledsprojects.png) + +4. Now you can start creating the data science-specific work items. + + ![17](./media/agile-development/17-dsworkitems.png) + +5. Here is an example of how the data science project work items should appear: + + ![18](./media/agile-development/18-workitems.png) + + +## Next steps + +[Collaborative coding with Git](collaborative-coding-with-git.md) describes how to do collaborative code development for data science projects using Git as the shared code development framework and how to link these coding activities to the work planned with the agile process. + +Here are additional links to resources on agile processes. + +- Agile process + [https://www.visualstudio.com/en-us/docs/work/guidance/agile-process](https://www.visualstudio.com/en-us/docs/work/guidance/agile-process) +- Agile process work item types and workflow + [https://www.visualstudio.com/en-us/docs/work/guidance/agile-process-workflow](https://www.visualstudio.com/en-us/docs/work/guidance/agile-process-workflow) + + +Walkthroughs that demonstrate all the steps in the process for **specific scenarios** are also provided. They are listed and linked with thumbnail descriptions in the [Example walkthroughs](walkthroughs.md) article. They illustrate how to combine cloud, on-premises tools, and services into a workflow or pipeline to create an intelligent application. diff --git a/articles/machine-learning/team-data-science-process/collaborative-coding-with-git.md b/articles/machine-learning/team-data-science-process/collaborative-coding-with-git.md new file mode 100644 index 0000000000000..1c572f6d668c2 --- /dev/null +++ b/articles/machine-learning/team-data-science-process/collaborative-coding-with-git.md @@ -0,0 +1,117 @@ +--- +title: Collaborative coding with Git - Azure Machine Learning | Microsoft Docs +description: How to do collaborative code development for data science projects using Git with agile planning. 
+documentationcenter: '' +author: bradsev +manager: cgronlun +editor: cgronlun + +ms.assetid: +ms.service: machine-learning +ms.workload: data-services +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 11/28/2017 +ms.author: bradsev; + +--- + + +# Collaborative coding with Git + +In this article we describe how to do collaborative code development for data science projects using Git as the shared code development framework. It covers how to link these coding activities to the work planned in [Agile development](agile-development.md) and how to do code reviews. + + +## 1. Link a work item with a Git branch + +VSTS provides a convenient way to connect a work item (a story or task) with a Git branch. This enables you to link your story or task directly to the code associated with it. + +To connect a work item to a new branch, double-click a work item, and in the pop-up window, click **Create a new branch** under **+ Add link**. + +![1](./media/collaborative-coding-with-git/1-sprint-board-view.png) + +Provide the information for this new branch, such as the branch name, the base Git repository, and the base branch. The Git repository chosen must be the repository under the same team project that the work item belongs to. The base branch can be the master branch or some other existing branch. + +![2](./media/collaborative-coding-with-git/2-create-a-branch.png) + +A good practice is to create a Git branch for each story work item. Then, for each task work item, you create a branch based on the story branch. Organizing the branches in this hierarchical way that corresponds to the story-task relationships is helpful when you have multiple people working on different stories of the same project, or you have multiple people working on different tasks of the same story. Conflicts can be minimized when each team member works on a different branch and when each member works on different code or other artifacts when sharing a branch. + +The following picture depicts the recommended branching strategy for TDSP. You might not need as many branches as are shown here, especially when you only have one or two people working on the same project, or only one person works on all tasks of a story. But separating the development branch from the master branch is always a good practice. This can help prevent the release branch from being interrupted by the development activities. A more complete description of the Git branch model can be found in [A Successful Git Branching Model](http://nvie.com/posts/a-successful-git-branching-model/). + +![3](./media/collaborative-coding-with-git/3-git-branches.png) + +To switch to the branch that you want to work on, run the following command in a shell (Windows or Linux). + + git checkout <branch name> + +Changing the `<branch name>` to **master** switches you back to the **master** branch. After you switch to the working branch, you can start working on that work item, developing the code or documentation artifacts needed to complete the item. + +You can also link a work item to an existing branch. In the **Detail** page of a work item, instead of clicking **Create a new branch**, you click **+ Add link**. Then, select the branch you want to link the work item to. + +![4](./media/collaborative-coding-with-git/4-link-to-an-existing-branch.png) + +You can also create a new branch with Git Bash commands. If `<base branch name>` is missing, the `<new branch name>` is based on the _master_ branch. + + git checkout -b <new branch name> <base branch name> + + +## 2. 
Work on a branch and commit the changes + +Now suppose you make some change to the *data\_ingestion* branch for the work item, such as adding an R file on the branch in your local machine. You can commit the R file added to the branch for this work item, provided you are in that branch in your Git shell, using the following Git commands: + + git status + git add . + git commit -m "Added an R script" + git push origin data_ingestion + +![5](./media/collaborative-coding-with-git/5-sprint-push-to-branch.png) + +## 3. Create a pull request on VSTS + +When you are ready, after a few commits and pushes, to merge the current branch into its base branch, you can submit a **pull request** on the VSTS server. + +Go to the main page of your team project and click **CODE**. Select the branch to be merged and the Git repository name that you want to merge the branch into. Then click **Pull Requests**, and click **New pull request** to create a pull request review before the work on the branch is merged to its base branch. + +![6](./media/collaborative-coding-with-git/6-spring-create-pull-request.png) + +Fill in a description of this pull request, add reviewers, and send it out. + +![7](./media/collaborative-coding-with-git/7-spring-send-pull-request.png) + +## 4. Review and merge + +When the pull request is created, your reviewers get an email notification to review the pull request. The reviewers need to check whether the changes are working and, if possible, test the changes with the requester. Based on their assessment, the reviewers can approve or reject the pull request. + +![8](./media/collaborative-coding-with-git/8-add_comments.png) + +![9](./media/collaborative-coding-with-git/9-spring-approve-pullrequest.png) + +After the review is done, the working branch is merged to its base branch by clicking the **Complete** button. You may choose to delete the working branch after it has been merged. + +![10](./media/collaborative-coding-with-git/10-spring-complete-pullrequest.png) + +Confirm in the top left corner that the request is marked as **COMPLETED**. + +![11](./media/collaborative-coding-with-git/11-spring-merge-pullrequest.png) + +When you go back to the repository under **CODE**, you are told that you have been switched to the master branch. + +![12](./media/collaborative-coding-with-git/12-spring-branch-deleted.png) + +You can also use the following Git commands to merge your working branch to its base branch and delete the working branch after merging: + + git checkout master + git merge data_ingestion + git branch -d data_ingestion + +![13](./media/collaborative-coding-with-git/13-spring-branch-deleted-commandline.png) + + + +## Next steps + +[Execute data science tasks](execute-data-science-tasks.md) shows how to use utilities to complete several common data science tasks such as interactive data exploration, data analysis, reporting, and model creation. + +Walkthroughs that demonstrate all the steps in the process for **specific scenarios** are also provided. They are listed and linked with thumbnail descriptions in the [Example walkthroughs](walkthroughs.md) article. They illustrate how to combine cloud, on-premises tools, and services into a workflow or pipeline to create an intelligent application. 
+ diff --git a/articles/machine-learning/team-data-science-process/cortana-analytics-playbook-vehicle-telemetry-deep-dive.md b/articles/machine-learning/team-data-science-process/cortana-analytics-playbook-vehicle-telemetry-deep-dive.md index 25c70df3f718a..7d4adbf3d6b05 100644 --- a/articles/machine-learning/team-data-science-process/cortana-analytics-playbook-vehicle-telemetry-deep-dive.md +++ b/articles/machine-learning/team-data-science-process/cortana-analytics-playbook-vehicle-telemetry-deep-dive.md @@ -4,7 +4,7 @@ description: Use the capabilities of Cortana Intelligence to gain real-time and services: machine-learning documentationcenter: '' author: bradsev -manager: jhubbard +manager: cgronlun editor: cgronlun ms.assetid: d8866fa6-aba6-40e5-b3b3-33057393c1a8 @@ -13,7 +13,7 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 03/24/2017 +ms.date: 11/24/2017 ms.author: bradsev --- diff --git a/articles/machine-learning/team-data-science-process/cortana-analytics-playbook-vehicle-telemetry-powerbi.md b/articles/machine-learning/team-data-science-process/cortana-analytics-playbook-vehicle-telemetry-powerbi.md index e39e708573659..a5f3c4441b463 100644 --- a/articles/machine-learning/team-data-science-process/cortana-analytics-playbook-vehicle-telemetry-powerbi.md +++ b/articles/machine-learning/team-data-science-process/cortana-analytics-playbook-vehicle-telemetry-powerbi.md @@ -4,7 +4,7 @@ description: Use the capabilities of Cortana Intelligence to gain real-time and services: machine-learning documentationcenter: '' author: bradsev -manager: jhubbard +manager: cgronlun editor: cgronlun ms.assetid: aaeb29a5-4a13-4eab-bbf1-885690d86c56 @@ -13,7 +13,7 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 12/16/2016 +ms.date: 11/15/2017 ms.author: bradsev --- diff --git a/articles/machine-learning/team-data-science-process/cortana-analytics-playbook-vehicle-telemetry.md b/articles/machine-learning/team-data-science-process/cortana-analytics-playbook-vehicle-telemetry.md index b96d3f6f32369..e1b617c1ec943 100644 --- a/articles/machine-learning/team-data-science-process/cortana-analytics-playbook-vehicle-telemetry.md +++ b/articles/machine-learning/team-data-science-process/cortana-analytics-playbook-vehicle-telemetry.md @@ -4,7 +4,7 @@ description: Use the capabilities of Cortana Intelligence to gain real-time and services: machine-learning documentationcenter: '' author: bradsev -manager: jhubbard +manager: cgronlun editor: cgronlun ms.assetid: 09fad60b-2f48-488b-8a7e-47d1f969ec6f @@ -13,7 +13,7 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 03/24/2017 +ms.date: 11/15/2017 ms.author: bradsev --- diff --git a/articles/machine-learning/team-data-science-process/environment-setup.md b/articles/machine-learning/team-data-science-process/environment-setup.md index 48b84e766c468..99b8318983b0b 100644 --- a/articles/machine-learning/team-data-science-process/environment-setup.md +++ b/articles/machine-learning/team-data-science-process/environment-setup.md @@ -4,7 +4,7 @@ description: Set up data science environments on Azure for use in the Team Data services: machine-learning documentationcenter: '' author: bradsev -manager: jhubbard +manager: cgronlun editor: cgronlun ms.assetid: 481cfa6a-7ea3-46ac-b0f9-2e3982c37153 @@ -13,7 +13,7 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 02/08/2017 +ms.date: 11/29/2017 ms.author: bradsev --- 
diff --git a/articles/machine-learning/team-data-science-process/execute-data-science-tasks.md b/articles/machine-learning/team-data-science-process/execute-data-science-tasks.md new file mode 100644 index 0000000000000..a492eaeca2812 --- /dev/null +++ b/articles/machine-learning/team-data-science-process/execute-data-science-tasks.md @@ -0,0 +1,112 @@ +--- +title: Execute data science tasks - Azure Machine Learning | Microsoft Docs +description: How a data scientist can execute a data science project in a trackable, version controlled, and collaborative way. +documentationcenter: '' +author: bradsev +manager: cgronlun +editor: cgronlun + +ms.assetid: +ms.service: machine-learning +ms.workload: data-services +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 11/28/2017 +ms.author: bradsev; + +--- + + +# Execute data science tasks: exploration, modeling, and deployment + +Typical data science tasks include data exploration, modeling, and deployment. This article shows how to use the **Interactive Data Exploration, Analysis, and Reporting (IDEAR)** and **Automated Modeling and Reporting (AMAR)** utilities to complete several common data science tasks such as interactive data exploration, data analysis, reporting, and model creation. It also outlines options for deploying a model into a production environment using a variety of toolkits and data platforms, such as the following: + +- [Azure Machine Learning](../preview/index.yml) +- [SQL Server with ML services](https://docs.microsoft.com/sql/advanced-analytics/r/r-services#in-database-analytics-with-sql-server) +- [Microsoft Machine Learning Server](https://docs.microsoft.com/machine-learning-server/what-is-machine-learning-server) + + +## 1. Exploration + +A data scientist can perform exploration and reporting in a variety of ways: by using libraries and packages available for Python (matplotlib for example) or with R (ggplot or lattice for example). Data scientists can customize such code to fit the needs of data exploration for specific scenarios. The needs for dealing with structured data are different from those for unstructured data such as text or images. + +Products such as Azure Machine Learning Workbench also provide [advanced data preparation](../preview/tutorial-bikeshare-dataprep.md) for data wrangling and exploration, including feature creation. The user should decide on the tools, libraries, and packages that best suit their needs. + +The deliverable at the end of this phase is a data exploration report. The report should provide a fairly comprehensive view of the data to be used for modeling and an assessment of whether the data is suitable to proceed to the modeling step. The Team Data Science Process (TDSP) utilities discussed in the following sections for semi-automated exploration, modeling, and reporting also provide standardized data exploration and modeling reports. + +### Interactive data exploration, analysis, and reporting using the IDEAR utility + +This R markdown-based or Python notebook-based utility provides a flexible and interactive tool to evaluate and explore data sets. Users can quickly generate reports from the data set with minimal coding. Users can click buttons to export the exploration results in the interactive tool to a final report, which can be delivered to clients or used to make decisions on which variables to include in the subsequent modeling step. + +At this time, the tool only works on data-frames in memory. 
A YAML file is needed to specify the parameters of the data-set to be explored. For more information, see [IDEAR in TDSP Data Science Utilities](https://github.com/Azure/Azure-TDSP-Utilities/tree/master/DataScienceUtilities/DataReport-Utils). + + +## 2. Modeling + +There are numerous toolkits and packages for training models in a variety of languages. Data scientists should feel free to use whichever ones they are comfortable with, as long as performance considerations regarding accuracy and latency are satisfied for the relevant business use cases and production scenarios. + +The next section shows how to use an R-based TDSP utility for semi-automated modeling. This AMAR utility can be used to generate baseline models quickly, as well as the parameters that need to be tuned to provide a better performing model. +The following model management section shows how to have a system for registering and managing multiple models. + + +### Model training: modeling and reporting using the AMAR utility + +The [Automated Modeling and Reporting (AMAR) Utility](https://github.com/Azure/Azure-TDSP-Utilities/tree/master/DataScienceUtilities/Modeling) provides a customizable, semi-automated tool to perform model creation with hyper-parameter sweeping and to compare the accuracy of those models. + +The model creation utility is an R Markdown file that can be run to produce self-contained HTML output with a table of contents for easy navigation through its different sections. Three algorithms are executed when the Markdown file is run (knit): regularized regression using the glmnet package, random forest using the randomForest package, and boosted trees using the xgboost package. Each of these algorithms produces a trained model. The accuracy of these models is then compared and the relative feature importance plots are reported. Currently, there are two utilities: one is for a binary classification task and one is for a regression task. The primary difference between them is the way control parameters and accuracy metrics are specified for these learning tasks. + +A YAML file is used to specify: + +- the data input (a SQL source or an R-Data file) +- what portion of the data is used for training and what portion for testing +- which algorithms to run +- the choice of control parameters for model optimization: + - cross-validation + - bootstrapping + - folds of cross-validation +- the hyper-parameter sets for each algorithm. + +The number of algorithms, the number of folds for optimization, the hyper-parameters, and the number of hyper-parameter sets to sweep over can also be modified in the YAML file to run the models quickly. For example, they can be run with a lower number of CV folds and a lower number of parameter sets. If it is warranted, they can also be run more comprehensively with a higher number of CV folds or a larger number of parameter sets. + +For more information, see [Automated Modeling and Reporting Utility in TDSP Data Science Utilities](https://github.com/Azure/Azure-TDSP-Utilities/tree/master/DataScienceUtilities/Modeling). + +### Model management +After multiple models have been built, you usually need to have a system for registering and managing the models. Typically you need a combination of scripts or APIs and a backend database or versioning system. A few options that you can consider for these management tasks are: + +1. [Azure Machine Learning - model management service](../preview/index.yml) +2. [ModelDB from MIT](https://mitdbg.github.io/modeldb/) +3. 
+
+### Model management
+After multiple models have been built, you usually need to have a system for registering and managing the models. Typically you need a combination of scripts or APIs and a backend database or versioning system. A few options that you can consider for these management tasks are:
+
+1. [Azure Machine Learning - model management service](../preview/index.yml)
+2. [ModelDB from MIT](https://mitdbg.github.io/modeldb/)
+3. [SQL Server as a model management system](https://blogs.technet.microsoft.com/dataplatforminsider/2016/10/17/sql-server-as-a-machine-learning-model-management-system/)
+4. [Microsoft Machine Learning Server](https://docs.microsoft.com/sql/advanced-analytics/r/r-server-standalone)
+
+## 3. Deployment
+
+Production deployment enables a model to play an active role in a business. Predictions from a deployed model can be used for business decisions.
+
+### Production platforms
+There are various approaches and platforms to put models into production. Here are a few options:
+
+
+- [Model deployment in Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/preview/model-management-overview)
+- [Deployment of a model in SQL Server](https://docs.microsoft.com/sql/advanced-analytics/tutorials/sqldev-py6-operationalize-the-model)
+- [Microsoft Machine Learning Server](https://docs.microsoft.com/sql/advanced-analytics/r/r-server-standalone)
+
+> [!NOTE]
+> Prior to deployment, ensure that the latency of model scoring is low enough to use in production.
+>
+
+Further examples are available in walkthroughs that demonstrate all the steps in the process for **specific scenarios**. They are listed and linked with thumbnail descriptions in the [Example walkthroughs](walkthroughs.md) article. They illustrate how to combine cloud, on-premises tools, and services into a workflow or pipeline to create an intelligent application.
+
+> [!NOTE]
+> For deployment using Azure Machine Learning Studio, see [Deploy an Azure Machine Learning web service](../studio/publish-a-machine-learning-web-service.md).
+
+### A/B testing
+When multiple models are in production, it can be useful to perform [A/B testing](https://en.wikipedia.org/wiki/A/B_testing) to compare the performance of the models.
+
+
+## Next steps
+
+[Track progress of data science projects](track-progress.md) shows how a data scientist can track the progress of a data science project.
+
+
+
diff --git a/articles/machine-learning/team-data-science-process/hive-criteo-walkthrough.md b/articles/machine-learning/team-data-science-process/hive-criteo-walkthrough.md
index 587196d430f4f..d26880dd1bca4 100644
--- a/articles/machine-learning/team-data-science-process/hive-criteo-walkthrough.md
+++ b/articles/machine-learning/team-data-science-process/hive-criteo-walkthrough.md
@@ -4,7 +4,7 @@ description: Using the Team Data Science Process for an end-to-end scenario empl
 services: machine-learning,hdinsight
 documentationcenter: ''
 author: bradsev
-manager: jhubbard
+manager: cgronlun
 editor: cgronlun
 
 ms.assetid: 72d958c4-3205-49b9-ad82-47998d400d2b
@@ -13,18 +13,18 @@ ms.workload: data-services
 ms.tgt_pltfrm: na
 ms.devlang: na
 ms.topic: article
-ms.date: 01/29/2017
+ms.date: 11/29/2017
 ms.author: bradsev
 
 ---
 # The Team Data Science Process in action - Using an Azure HDInsight Hadoop Cluster on a 1 TB dataset
 
-In this walkthrough, we demonstrate using the Team Data Science Process in an end-to-end scenario with an [Azure HDInsight Hadoop cluster](https://azure.microsoft.com/services/hdinsight/) to store, explore, feature engineer, and down sample data from one of the publicly available [Criteo](http://labs.criteo.com/downloads/download-terabyte-click-logs/) datasets. We use Azure Machine Learning to build a binary classification model on this data. We also show how to publish one of these models as a Web service.
+This walkthrough demonstrates how to use the Team Data Science Process in an end-to-end scenario with an [Azure HDInsight Hadoop cluster](https://azure.microsoft.com/services/hdinsight/) to store, explore, feature engineer, and down sample data from one of the publicly available [Criteo](http://labs.criteo.com/downloads/download-terabyte-click-logs/) datasets. It uses Azure Machine Learning to build a binary classification model on this data. It also shows how to publish one of these models as a Web service. It is also possible to use an IPython notebook to accomplish the tasks presented in this walkthrough. Users who would like to try this approach should consult the [Criteo walkthrough using a Hive ODBC connection](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/iPythonNotebooks/machine-Learning-data-science-process-hive-walkthrough-criteo.ipynb) topic. ## Criteo Dataset Description -The Criteo data is a click prediction dataset that is approximately 370GB of gzip compressed TSV files (~1.3TB uncompressed), comprising more than 4.3 billion records. It is taken from 24 days of click data made available by [Criteo](http://labs.criteo.com/downloads/download-terabyte-click-logs/). For the convenience of data scientists, we have unzipped data available to us to experiment with. +The Criteo data is a click prediction dataset that is approximately 370GB of gzip compressed TSV files (~1.3TB uncompressed), comprising more than 4.3 billion records. It is taken from 24 days of click data made available by [Criteo](http://labs.criteo.com/downloads/download-terabyte-click-logs/). For the convenience of data scientists, the data available to us to experiment with has been unzipped. Each record in this dataset contains 40 columns: @@ -41,7 +41,7 @@ Here is an excerpt of the first 20 columns of two observations (rows) from this 0 40 42 2 54 3 0 0 2 16 0 1 4448 4 1acfe1ee 1b2ff61f 2e8b2631 6faef306 c6fc10d3 6fcd6dcb 0 24 27 5 0 2 1 3 10064 9a8cb066 7a06385f 417e6103 2170fc56 acf676aa 6fcd6dcb -There are missing values in both the numeric and categorical columns in this dataset. We describe a simple method for handling the missing values. Additional details of the data are explored when we store them into Hive tables. +There are missing values in both the numeric and categorical columns in this dataset. A simple method for handling the missing values is described. Additional details of the data are explored when storing them into Hive tables. **Definition:** *Clickthrough rate (CTR):* This is the percentage of clicks in the data. In this Criteo dataset, the CTR is about 3.3% or 0.033. @@ -91,9 +91,9 @@ Here is what a typical first log in to the cluster headnode looks like: ![Log in to cluster](./media/hive-criteo-walkthrough/Yys9Vvm.png) -On the left, we see the "Hadoop Command Line", which is our workhorse for the data exploration. We also see two useful URLs - "Hadoop Yarn Status" and "Hadoop Name Node". The yarn status URL shows job progress and the name node URL gives details on the cluster configuration. +On the left is the "Hadoop Command Line", which is our workhorse for the data exploration. Notice two useful URLs - "Hadoop Yarn Status" and "Hadoop Name Node". The yarn status URL shows job progress and the name node URL gives details on the cluster configuration. -Now we are set up and ready to begin first part of the walkthrough: data exploration using Hive and getting data ready for Azure Machine Learning. 
+Now you are set up and ready to begin first part of the walkthrough: data exploration using Hive and getting data ready for Azure Machine Learning. ## Create Hive database and tables To create Hive tables for our Criteo dataset, open the ***Hadoop Command Line*** on the desktop of the head node, and enter the Hive directory by entering the command @@ -101,7 +101,7 @@ To create Hive tables for our Criteo dataset, open the ***Hadoop Command Line*** cd %hive_home%\bin > [!NOTE] -> Run all Hive commands in this walkthrough from the Hive bin/ directory prompt. This takes care of any path issues automatically. We use the terms "Hive directory prompt", "Hive bin/ directory prompt", and "Hadoop Command Line" interchangeably. +> Run all Hive commands in this walkthrough from the Hive bin/ directory prompt. This takes care of any path issues automatically. You can use the terms "Hive directory prompt", "Hive bin/ directory prompt", and "Hadoop Command Line" interchangeably. > > [!NOTE] > To execute any Hive query, one can always use the following commands: @@ -119,7 +119,7 @@ The following code creates a database "criteo" and then generates 4 tables: * a *table for use as the train dataset* built on day\_21, and * two *tables for use as the test datasets* built on day\_22 and day\_23 respectively. -We split our test dataset into two different tables because one of the days is a holiday, and we want to determine if the model can detect differences between a holiday and non-holiday from the clickthrough rate. +Split the test dataset into two different tables because one of the days is a holiday. The objective is to determine if the model can detect differences between a holiday and non-holiday from the click-through rate. The script [sample_hive_create_criteo_database_and_tables.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_create_criteo_database_and_tables.hql) is displayed here for convenience: @@ -152,9 +152,9 @@ The script [sample_hive_create_criteo_database_and_table LINES TERMINATED BY '\n' STORED AS TEXTFILE LOCATION 'wasb://criteo@azuremlsampleexperiments.blob.core.windows.net/raw/test/day_23'; -We note that all these tables are external as we simply point to Azure Blob Storage (wasb) locations. +All these tables are external so you can simply point to their Azure Blob Storage (wasb) locations. -**There are two ways to execute ANY Hive query that we now mention.** +**There are two ways to execute ANY Hive query:** 1. **Using the Hive REPL command-line**: The first is to issue a "hive" command and copy and paste a query at the Hive REPL command-line. To do this, do: @@ -167,7 +167,7 @@ We note that all these tables are external as we simply point to Azure Blob Stor hive -f C:\temp\sample_hive_create_criteo_database_and_tables.hql ### Confirm database and table creation -Next, we confirm the creation of the database with the following command from the Hive bin/ directory prompt: +Next, confirm the creation of the database with the following command from the Hive bin/ directory prompt: hive -e "show databases;" @@ -179,11 +179,11 @@ This gives: This confirms the creation of the new database, "criteo". 
-To see what tables we created, we simply issue the command here from the Hive bin/ directory prompt: +To see what tables were created, simply issue the command here from the Hive bin/ directory prompt: hive -e "show tables in criteo;" -We then see the following output: +You should then see the following output: criteo_count criteo_test_day_22 @@ -192,7 +192,7 @@ We then see the following output: Time taken: 1.437 seconds, Fetched: 4 row(s) ## Data exploration in Hive -Now we are ready to do some basic data exploration in Hive. We begin by counting the number of examples in the train and test data tables. +Now you are ready to do some basic data exploration in Hive. You begin by counting the number of examples in the train and test data tables. ### Number of train examples The contents of [sample_hive_count_train_table_examples.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_count_train_table_examples.hql) are shown here: @@ -209,7 +209,7 @@ Alternatively, one may also issue the following command from the Hive bin/ direc hive -f C:\temp\sample_hive_count_criteo_train_table_examples.hql ### Number of test examples in the two test datasets -We now count the number of examples in the two test datasets. The contents of [sample_hive_count_criteo_test_day_22_table_examples.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_count_criteo_test_day_22_table_examples.hql) are here: +Now count the number of examples in the two test datasets. The contents of [sample_hive_count_criteo_test_day_22_table_examples.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_count_criteo_test_day_22_table_examples.hql) are here: SELECT COUNT(*) FROM criteo.criteo_test_day_22; @@ -218,11 +218,11 @@ This yields: 189747893 Time taken: 267.968 seconds, Fetched: 1 row(s) -As usual, we may also call the script from the Hive bin/ directory prompt by issuing the command: +As usual, you may also call the script from the Hive bin/ directory prompt by issuing the command: hive -f C:\temp\sample_hive_count_criteo_test_day_22_table_examples.hql -Finally, we examine the number of test examples in the test dataset based on day\_23. +Finally, you examine the number of test examples in the test dataset based on day\_23. The command to do this is similar to the one just shown (refer to [sample_hive_count_criteo_test_day_23_examples.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_count_criteo_test_day_23_examples.hql)): @@ -234,7 +234,7 @@ This gives: Time taken: 253.089 seconds, Fetched: 1 row(s) ### Label distribution in the train dataset -The label distribution in the train dataset is of interest. To see this, we show contents of [sample_hive_criteo_label_distribution_train_table.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_criteo_label_distribution_train_table.hql): +The label distribution in the train dataset is of interest. 
To see this, show contents of [sample_hive_criteo_label_distribution_train_table.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_criteo_label_distribution_train_table.hql): SELECT Col1, COUNT(*) AS CT FROM criteo.criteo_train GROUP BY Col1; @@ -247,7 +247,7 @@ This yields the label distribution: Note that the percentage of positive labels is about 3.3% (consistent with the original dataset). ### Histogram distributions of some numeric variables in the train dataset -We can use Hive's native "histogram\_numeric" function to find out what the distribution of the numeric variables looks like. Here are the contents of [sample_hive_criteo_histogram_numeric.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_criteo_histogram_numeric.hql): +You can use Hive's native "histogram\_numeric" function to find out what the distribution of the numeric variables looks like. Here are the contents of [sample_hive_criteo_histogram_numeric.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_criteo_histogram_numeric.hql): SELECT CAST(hist.x as int) as bin_center, CAST(hist.y as bigint) as bin_height FROM (SELECT @@ -293,10 +293,10 @@ This yields: 1.0 2.1418600917169246 2.1418600917169246 6.21887086390288 27.53454893115633 65535.0 Time taken: 564.953 seconds, Fetched: 1 row(s) -We remark that the distribution of percentiles is closely related to the histogram distribution of any numeric variable usually. +The distribution of percentiles is closely related to the histogram distribution of any numeric variable usually. ### Find number of unique values for some categorical columns in the train dataset -Continuing the data exploration, we now find, for some categorical columns, the number of unique values they take. To do this, we show contents of [sample_hive_criteo_unique_values_categoricals.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_criteo_unique_values_categoricals.hql): +Continuing the data exploration, find, for some categorical columns, the number of unique values they take. To do this, show contents of [sample_hive_criteo_unique_values_categoricals.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_criteo_unique_values_categoricals.hql): SELECT COUNT(DISTINCT(Col15)) AS num_uniques FROM criteo.criteo_train; @@ -305,9 +305,9 @@ This yields: 19011825 Time taken: 448.116 seconds, Fetched: 1 row(s) -We note that Col15 has 19M unique values! Using naive techniques like "one-hot encoding" to encode such high-dimensional categorical variables is infeasible. In particular, we explain and demonstrate a powerful, robust technique called [Learning With Counts](http://blogs.technet.com/b/machinelearning/archive/2015/02/17/big-learning-made-easy-with-counts.aspx) for tackling this problem efficiently. +Note that Col15 has 19M unique values! Using naive techniques like "one-hot encoding" to encode such high-dimensional categorical variables is not feasible. In particular, a powerful, robust technique called [Learning With Counts](http://blogs.technet.com/b/machinelearning/archive/2015/02/17/big-learning-made-easy-with-counts.aspx) for tackling this problem efficiently is explained and demonstrated. 
-We end this sub-section by looking at the number of unique values for some other categorical columns as well. The contents of [sample_hive_criteo_unique_values_multiple_categoricals.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_criteo_unique_values_multiple_categoricals.hql) are: +Finally look at the number of unique values for some other categorical columns as well. The contents of [sample_hive_criteo_unique_values_multiple_categoricals.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_criteo_unique_values_multiple_categoricals.hql) are: SELECT COUNT(DISTINCT(Col16)), COUNT(DISTINCT(Col17)), COUNT(DISTINCT(Col18), COUNT(DISTINCT(Col19), COUNT(DISTINCT(Col20)) @@ -318,7 +318,7 @@ This yields: 30935 15200 7349 20067 3 Time taken: 1933.883 seconds, Fetched: 1 row(s) -Again we see that except for Col20, all the other columns have many unique values. +Again, note that except for Col20, all the other columns have many unique values. ### Co-occurrence counts of pairs of categorical variables in the train dataset @@ -326,7 +326,7 @@ The co-occurrence counts of pairs of categorical variables is also of interest. SELECT Col15, Col16, COUNT(*) AS paired_count FROM criteo.criteo_train GROUP BY Col15, Col16 ORDER BY paired_count DESC LIMIT 15; -We reverse order the counts by their occurrence and look at the top 15 in this case. This gives us: +Reverse order the counts by their occurrence and look at the top 15 in this case. This gives us: ad98e872 cea68cd3 8964458 ad98e872 3dbb483e 8444762 @@ -346,9 +346,9 @@ We reverse order the counts by their occurrence and look at the top 15 in this c Time taken: 560.22 seconds, Fetched: 15 row(s) ## Down sample the datasets for Azure Machine Learning -Having explored the datasets and demonstrated how we may do this type of exploration for any variables (including combinations), we now down sample the data sets so that we can build models in Azure Machine Learning. Recall that the problem we focus on is: given a set of example attributes (feature values from Col2 - Col40), we predict if Col1 is a 0 (no click) or a 1 (click). +Having explored the datasets and demonstrated how to do this type of exploration for any variables (including combinations), down sample the data sets so that models in Azure Machine Learning can be built. Recall that the focus of the problem is: given a set of example attributes (feature values from Col2 - Col40), predict if Col1 is a 0 (no click) or a 1 (click). -To down sample our train and test datasets to 1% of the original size, we use Hive's native RAND() function. The next script, [sample_hive_criteo_downsample_train_dataset.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_criteo_downsample_train_dataset.hql) does this for the train dataset: +To down sample the train and test datasets to 1% of the original size, use Hive's native RAND() function. 
The next script, [sample_hive_criteo_downsample_train_dataset.hql](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Misc/DataScienceProcess/DataScienceScripts/sample_hive_criteo_downsample_train_dataset.hql) does this for the train dataset: CREATE TABLE criteo.criteo_train_downsample_1perc ( col1 string,col2 double,col3 double,col4 double,col5 double,col6 double,col7 double,col8 double,col9 double,col10 double,col11 double,col12 double,col13 double,col14 double,col15 string,col16 string,col17 string,col18 string,col19 string,col20 string,col21 string,col22 string,col23 string,col24 string,col25 string,col26 string,col27 string,col28 string,col29 string,col30 string,col31 string,col32 string,col33 string,col34 string,col35 string,col36 string,col37 string,col38 string,col39 string,col40 string) @@ -399,18 +399,18 @@ This yields: Time taken: 1.86 seconds Time taken: 300.02 seconds -With this, we are ready to use our down sampled train and test datasets for building models in Azure Machine Learning. +With this, you are ready to use our down sampled train and test datasets for building models in Azure Machine Learning. -There is a final important component before we move on to Azure Machine Learning, which is concerns the count table. In the next sub-section, we discuss this in some detail. +There is a final important component before moving on to Azure Machine Learning, which concerns the count table. In the next sub-section, the count table is discussed in some detail. ## A brief discussion on the count table -As we saw, several categorical variables have a very high dimensionality. In our walkthrough, we present a powerful technique called [Learning With Counts](http://blogs.technet.com/b/machinelearning/archive/2015/02/17/big-learning-made-easy-with-counts.aspx) to encode these variables in an efficient, robust manner. More information on this technique is in the link provided. +As you saw, several categorical variables have a very high dimensionality. In the walkthrough, a powerful technique called [Learning With Counts](http://blogs.technet.com/b/machinelearning/archive/2015/02/17/big-learning-made-easy-with-counts.aspx) to encode these variables in an efficient, robust manner is presented. More information on this technique is in the link provided. [!NOTE] ->In this walkthrough, we focus on using count tables to produce compact representations of high-dimensional categorical features. This is not the only way to encode categorical features; for more information on other techniques, interested users can check out [one-hot-encoding](http://en.wikipedia.org/wiki/One-hot) and [feature hashing](http://en.wikipedia.org/wiki/Feature_hashing). +>In this walkthrough, the focus is on using count tables to produce compact representations of high-dimensional categorical features. This is not the only way to encode categorical features; for more information on other techniques, interested users can check out [one-hot-encoding](http://en.wikipedia.org/wiki/One-hot) and [feature hashing](http://en.wikipedia.org/wiki/Feature_hashing). > -To build count tables on the count data, we use the data in the folder raw/count. In the modeling section, we show users how to build these count tables for categorical features from scratch, or alternatively to use a pre-built count table for their explorations. In what follows, when we refer to "pre-built count tables", we mean using the count tables that we provide. 
Detailed instructions on how to access these tables are provided in the next section. +To build count tables on the count data, use the data in the folder raw/count. In the modeling section, users are shown how to build these count tables for categorical features from scratch, or alternatively to use a pre-built count table for their explorations. In what follows, when "pre-built count tables" are referred to, we mean using the count tables that have been provided. Detailed instructions on how to access these tables are provided in the next section. ## Build a model with Azure Machine Learning Our model building process in Azure Machine Learning follows these steps: @@ -421,7 +421,7 @@ Our model building process in Azure Machine Learning follows these steps: 4. [Evaluate the model](#step4) 5. [Publish the model as a web-service](#step5) -Now we are ready to build models in Azure Machine Learning studio. Our down sampled data is saved as Hive tables in the cluster. We use the Azure Machine Learning **Import Data** module to read this data. The credentials to access the storage account of this cluster are provided in what follows. +Now you are ready to build models in Azure Machine Learning studio. Our down sampled data is saved as Hive tables in the cluster. Use the Azure Machine Learning **Import Data** module to read this data. The credentials to access the storage account of this cluster are provided in what follows. ### Step 1: Get data from Hive tables into Azure Machine Learning using the Import Data module and select it for a machine learning experiment Start by selecting a **+NEW** -> **EXPERIMENT** -> **Blank Experiment**. Then, from the **Search** box on the top left, search for "Import Data". Drag and drop the **Import Data** module on to the experiment canvas (the middle portion of the screen) to use the module for data access. @@ -462,49 +462,49 @@ Our Azure ML experiment looks like this: ![Machine Learning experiment](./media/hive-criteo-walkthrough/xRpVfrY.png) -We now examine the key components of this experiment. As a reminder, we need to drag our saved train and test datasets on to our experiment canvas first. +Now examine the key components of this experiment. Drag our saved train and test datasets on to our experiment canvas first. #### Clean Missing Data -The **Clean Missing Data** module does what its name suggests: it cleans missing data in ways that can be user-specified. Looking into this module, we see this: +The **Clean Missing Data** module does what its name suggests: it cleans missing data in ways that can be user-specified. Look into this module to see this: ![Clean missing data](./media/hive-criteo-walkthrough/0ycXod6.png) -Here, we chose to replace all missing values with a 0. There are other options as well, which can be seen by looking at the dropdowns in the module. +Here, chose to replace all missing values with a 0. There are other options as well, which can be seen by looking at the dropdowns in the module. #### Feature engineering on the data -There can be millions of unique values for some categorical features of large datasets. Using naive methods such as one-hot encoding for representing such high-dimensional categorical features is entirely unfeasible. In this walkthrough, we demonstrate how to use count features using built-in Azure Machine Learning modules to generate compact representations of these high-dimensional categorical variables. 
The end-result is a smaller model size, faster training times, and performance metrics that are quite comparable to using other techniques. +There can be millions of unique values for some categorical features of large datasets. Using naive methods such as one-hot encoding for representing such high-dimensional categorical features is entirely unfeasible. This walkthrough demonstrates how to use count features using built-in Azure Machine Learning modules to generate compact representations of these high-dimensional categorical variables. The end-result is a smaller model size, faster training times, and performance metrics that are quite comparable to using other techniques. ##### Building counting transforms -To build count features, we use the **Build Counting Transform** module that is available in Azure Machine Learning. The module looks like this: +To build count features, use the **Build Counting Transform** module that is available in Azure Machine Learning. The module looks like this: ![Build Counting Transform module](./media/hive-criteo-walkthrough/e0eqKtZ.png) ![Build Counting Transform module](./media/hive-criteo-walkthrough/OdDN0vw.png) > [!IMPORTANT] -> In the **Count columns** box, we enter those columns that we wish to perform counts on. Typically, these are (as mentioned) high-dimensional categorical columns. At the start, we mentioned that the Criteo dataset has 26 categorical columns: from Col15 to Col40. Here, we count on all of them and give their indices (from 15 to 40 separated by commas as shown). +> In the **Count columns** box, enter those columns that you wish to perform counts on. Typically, these are (as mentioned) high-dimensional categorical columns. Remember that the Criteo dataset has 26 categorical columns: from Col15 to Col40. Here, count on all of them and give their indices (from 15 to 40 separated by commas as shown). > -To use the module in the MapReduce mode (appropriate for large datasets), we need access to an HDInsight Hadoop cluster (the one used for feature exploration can be reused for this purpose as well) and its credentials. The previous figures illustrate what the filled-in values look like (replace the values provided for illustration with those relevant for your own use-case). +To use the module in the MapReduce mode (appropriate for large datasets), you need access to an HDInsight Hadoop cluster (the one used for feature exploration can be reused for this purpose as well) and its credentials. The previous figures illustrate what the filled-in values look like (replace the values provided for illustration with those relevant for your own use-case). ![Module parameters](./media/hive-criteo-walkthrough/05IqySf.png) -In the figure above, we show how to enter the input blob location. This location has the data reserved for building count tables on. +The preceding figure shows how to enter the input blob location. This location has the data reserved for building count tables on. -After this module finishes running, we can save the transform for later by right-clicking the module and selecting the **Save as Transform** option: +When this module finishes running, save the transform for later by right-clicking the module and selecting the **Save as Transform** option: !["Save as Transform" option](./media/hive-criteo-walkthrough/IcVgvHR.png) -In our experiment architecture shown above, the dataset "ytransform2" corresponds precisely to a saved count transform. 
For the remainder of this experiment, we assume that the reader used a **Build Counting Transform** module on some data to generate counts, and can then use those counts to generate count features on the train and test datasets. +In our experiment architecture shown above, the dataset "ytransform2" corresponds precisely to a saved count transform. For the remainder of this experiment, it is assumed that the reader used a **Build Counting Transform** module on some data to generate counts, and can then use those counts to generate count features on the train and test datasets. ##### Choosing what count features to include as part of the train and test datasets -Once we have a count transform ready, the user can choose what features to include in their train and test datasets using the **Modify Count Table Parameters** module. We just show this module here for completeness, but in interests of simplicity do not actually use it in our experiment. +Once a count transform ready, the user can choose what features to include in their train and test datasets using the **Modify Count Table Parameters** module. For completeness, this module is shown here. But in interests of simplicity do not actually use it in our experiment. ![Modify Count Table parameters](./media/hive-criteo-walkthrough/PfCHkVg.png) -In this case, as can be seen, we have chosen to use just the log-odds and to ignore the back off column. We can also set parameters such as the garbage bin threshold, how many pseudo-prior examples to add for smoothing, and whether to use any Laplacian noise or not. All these are advanced features and it is to be noted that the default values are a good starting point for users who are new to this type of feature generation. +In this case, as can be seen, the log-odds are to be used and the back off column is ignored. You can also set parameters such as the garbage bin threshold, how many pseudo-prior examples to add for smoothing, and whether to use any Laplacian noise or not. All these are advanced features and it is to be noted that the default values are a good starting point for users who are new to this type of feature generation. ##### Data transformation before generating the count features -Now we focus on an important point about transforming our train and test data prior to actually generating count features. Note that there are two **Execute R Script** modules used before we apply the count transform to our data. +Now the focus is on an important point about transforming our train and test data prior to actually generating count features. Note that there are two **Execute R Script** modules used before the count transform is applied to our data. ![Execute R Script modules](./media/hive-criteo-walkthrough/aF59wbc.png) @@ -512,64 +512,64 @@ Here is the first R script: ![First R script](./media/hive-criteo-walkthrough/3hkIoMx.png) -In this R script, we rename our columns to names "Col1" to "Col40". This is because the count transform expects names of this format. +This R script renames our columns to names "Col1" to "Col40". This is because the count transform expects names of this format. -In the second R script, we balance the distribution between positive and negative classes (classes 1 and 0 respectively) by downsampling the negative class. The R script here shows how to do this: +The second R script balances the distribution between positive and negative classes (classes 1 and 0 respectively) by down-sampling the negative class. 
The R script here shows how to do this: ![Second R script](./media/hive-criteo-walkthrough/91wvcwN.png) -In this simple R script, we use "pos\_neg\_ratio" to set the amount of balance between the positive and the negative classes. This is important to do since improving class imbalance usually has performance benefits for classification problems where the class distribution is skewed (recall that in our case, we have 3.3% positive class and 96.7% negative class). +In this simple R script, the "pos\_neg\_ratio" is used to set the amount of balance between the positive and the negative classes. This is important to do since improving class imbalance usually has performance benefits for classification problems where the class distribution is skewed (recall that in this case, you have 3.3% positive class and 96.7% negative class). ##### Applying the count transformation on our data -Finally, we can use the **Apply Transformation** module to apply the count transforms on our train and test datasets. This module takes the saved count transform as one input and the train or test datasets as the other input, and returns data with count features. It is shown here: +Finally, you can use the **Apply Transformation** module to apply the count transforms on our train and test datasets. This module takes the saved count transform as one input and the train or test datasets as the other input, and returns data with count features. It is shown here: ![Apply Transformation module](./media/hive-criteo-walkthrough/xnQvsYf.png) ##### An excerpt of what the count features look like -It is instructive to see what the count features look like in our case. Here we show an excerpt of this: +It is instructive to see what the count features look like in our case. Here is an excerpt of this: ![Count features](./media/hive-criteo-walkthrough/FO1nNfw.png) -In this excerpt, we show that for the columns that we counted on, we get the counts and log odds in addition to any relevant backoffs. +This excerpt shows that for the columns counted on, you get the counts and log odds in addition to any relevant backoffs. -We are now ready to build an Azure Machine Learning model using these transformed datasets. In the next section, we show how this can be done. +You are now ready to build an Azure Machine Learning model using these transformed datasets. In the next section shows how this can be done. ### Step 3: Build, train, and score the model #### Choice of learner -First, we need to choose a learner. We are going to use a two class boosted decision tree as our learner. Here are the default options for this learner: +First, you need to choose a learner. Use a two-class boosted decision tree as our learner. Here are the default options for this learner: ![Two-Class Boosted Decision Tree parameters](./media/hive-criteo-walkthrough/bH3ST2z.png) -For our experiment, we are going to choose the default values. We note that the defaults are usually meaningful and a good way to get quick baselines on performance. You can improve on performance by sweeping parameters if you choose to once you have a baseline. +For the experiment, choose the default values. Note that the defaults are usually meaningful and a good way to get quick baselines on performance. You can improve on performance by sweeping parameters if you choose to once you have a baseline. #### Train the model -For training, we simply invoke a **Train Model** module. The two inputs to it are the Two-Class Boosted Decision Tree learner and our train dataset. 
This is shown here: +For training, simply invoke a **Train Model** module. The two inputs to it are the Two-Class Boosted Decision Tree learner and our train dataset. This is shown here: ![Train Model module](./media/hive-criteo-walkthrough/2bZDZTy.png) #### Score the model -Once we have a trained model, we are ready to score on the test dataset and to evaluate its performance. We do this by using the **Score Model** module shown in the following figure, along with an **Evaluate Model** module: +Once you have a trained model, you are ready to score on the test dataset and to evaluate its performance. Do this by using the **Score Model** module shown in the following figure, along with an **Evaluate Model** module: ![Score Model module](./media/hive-criteo-walkthrough/fydcv6u.png) ### Step 4: Evaluate the model -Finally, we would like to analyze model performance. Usually, for two class (binary) classification problems, a good measure is the AUC. To visualize this, we hook up the **Score Model** module to an **Evaluate Model** module for this. Clicking **Visualize** on the **Evaluate Model** module yields a graphic like the following one: +Finally, you should analyze model performance. Usually, for two class (binary) classification problems, a good measure is the AUC. To visualize this, hook up the **Score Model** module to an **Evaluate Model** module for this. Clicking **Visualize** on the **Evaluate Model** module yields a graphic like the following one: ![Evaluate module BDT model](./media/hive-criteo-walkthrough/0Tl0cdg.png) -In binary (or two class) classification problems, a good measure of prediction accuracy is the Area Under Curve (AUC). In what follows, we show our results using this model on our test dataset. To get this, right-click the output port of the **Evaluate Model** module and then **Visualize**. +In binary (or two class) classification problems, a good measure of prediction accuracy is the Area Under Curve (AUC). The following section shows our results using this model on our test dataset. To get this, right-click the output port of the **Evaluate Model** module and then **Visualize**. ![Visualize Evaluate Model module](./media/hive-criteo-walkthrough/IRfc7fH.png) ### Step 5: Publish the model as a Web service The ability to publish an Azure Machine Learning model as web services with a minimum of fuss is a valuable feature for making it widely available. Once that is done, anyone can make calls to the web service with input data that they need predictions for, and the web service uses the model to return those predictions. -To do this, we first save our trained model as a Trained Model object. This is done by right-clicking the **Train Model** module and using the **Save as Trained Model** option. +To do this, first save our trained model as a Trained Model object. This is done by right-clicking the **Train Model** module and using the **Save as Trained Model** option. -Next, we need to create input and output ports for our web service: +Next, create input and output ports for our web service: -* an input port takes data in the same form as the data that we need predictions for +* an input port takes data in the same form as the data that you need predictions for * an output port returns the Scored Labels and the associated probabilities. 
#### Select a few rows of data for the input port @@ -578,24 +578,24 @@ It is convenient to use an **Apply SQL Transformation** module to select just 10 ![Input port data](./media/hive-criteo-walkthrough/XqVtSxu.png) #### Web service -Now we are ready to run a small experiment that can be used to publish our web service. +Now you are ready to run a small experiment that can be used to publish our web service. #### Generate input data for webservice -As a zeroth step, since the count table is large, we take a few lines of test data and generate output data from it with count features. This can serve as the input data format for our webservice. This is shown here: +As a zeroth step, since the count table is large, take a few lines of test data and generate output data from it with count features. This can serve as the input data format for our webservice. This is shown here: ![Create BDT input data](./media/hive-criteo-walkthrough/OEJMmst.png) > [!NOTE] -> For the input data format, we now use the OUTPUT of the **Count Featurizer** module. Once this experiment finishes running, save the output from the **Count Featurizer** module as a Dataset. This Dataset is used for the input data in the webservice. +> For the input data format, use the OUTPUT of the **Count Featurizer** module. Once this experiment finishes running, save the output from the **Count Featurizer** module as a Dataset. This Dataset is used for the input data in the webservice. > > #### Scoring experiment for publishing webservice -First, we show what this looks like. The essential structure is a **Score Model** module that accepts our trained model object and a few lines of input data that we generated in the previous steps using the **Count Featurizer** module. We use "Select Columns in Dataset" to project out the Scored labels and the Score probabilities. +First, it is shown what this looks like. The essential structure is a **Score Model** module that accepts our trained model object and a few lines of input data that were generated in the previous steps using the **Count Featurizer** module. Use "Select Columns in Dataset" to project out the Scored labels and the Score probabilities. ![Select Columns in Dataset](./media/hive-criteo-walkthrough/kRHrIbe.png) -Notice how the **Select Columns in Dataset** module can be used for 'filtering' data out from a dataset. We show the contents here: +Notice how the **Select Columns in Dataset** module can be used for 'filtering' data out from a dataset. The contents are shown here: ![Filtering with the Select Columns in Dataset module](./media/hive-criteo-walkthrough/oVUJC9K.png) @@ -603,28 +603,28 @@ To get the blue input and output ports, you simply click **prepare webservice** ![Publish Web service](./media/hive-criteo-walkthrough/WO0nens.png) -Once the webservice is published, we get redirected to a page that looks thus: +Once the webservice is published, get redirected to a page that looks thus: ![Web service dashboard](./media/hive-criteo-walkthrough/YKzxAA5.png) -We see two links for webservices on the left side: +Notice the two links for webservices on the left side: -* The **REQUEST/RESPONSE** Service (or RRS) is meant for single predictions and is what we utilize in this workshop. +* The **REQUEST/RESPONSE** Service (or RRS) is meant for single predictions and is what has been utilized in this workshop. * The **BATCH EXECUTION** Service (BES) is used for batch predictions and requires that the input data used to make predictions reside in Azure Blob Storage. 
Clicking on the link **REQUEST/RESPONSE** takes us to a page that gives us pre-canned code in C#, python, and R. This code can be conveniently used for making calls to the webservice. Note that the API key on this page needs to be used for authentication. It is convenient to copy this python code over to a new cell in the IPython notebook. -Here we show a segment of python code with the correct API key. +Here is a segment of python code with the correct API key. ![Python code](./media/hive-criteo-walkthrough/f8N4L4g.png) -Note that we replaced the default API key with our webservices's API key. Clicking **Run** on this cell in an IPython notebook yields the following response: +Note that the default API key has been replaced with our webservices's API key. Clicking **Run** on this cell in an IPython notebook yields the following response: ![IPython response](./media/hive-criteo-walkthrough/KSxmia2.png) -We see that for the two test examples we asked about (in the JSON framework of the python script), we get back answers in the form "Scored Labels, Scored Probabilities". Note that in this case, we chose the default values that the pre-canned code provides (0's for all numeric columns and the string "value" for all categorical columns). +For the two test examples asked about (in the JSON framework of the python script), you get back answers in the form "Scored Labels, Scored Probabilities". In this case, the default values have been chosen that the pre-canned code provides (0's for all numeric columns and the string "value" for all categorical columns). -This concludes our end-to-end walkthrough showing how to handle large-scale dataset using Azure Machine Learning. We started with a terabyte of data, constructed a prediction model and deployed it as a web service in the cloud. +This concludes our walkthrough showing how to handle large-scale dataset using Azure Machine Learning. You started with a terabyte of data, constructed a prediction model, and deployed it as a web service in the cloud. 
diff --git a/articles/machine-learning/team-data-science-process/hive-walkthrough.md b/articles/machine-learning/team-data-science-process/hive-walkthrough.md index 8db52ee98f921..1e2209df6b2bc 100644 --- a/articles/machine-learning/team-data-science-process/hive-walkthrough.md +++ b/articles/machine-learning/team-data-science-process/hive-walkthrough.md @@ -7,7 +7,7 @@ description: Using the Team Data Science Process for an end-to-end scenario empl services: machine-learning,hdinsight documentationcenter: '' author: bradsev -manager: jhubbard +manager: cgronlun editor: cgronlun ms.assetid: e9e76c91-d0f6-483d-bae7-2d3157b86aa0 @@ -16,10 +16,8 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article - -ms.date: 01/29/2017 - -ms.author: hangzh;bradsev +ms.date: 11/29/2017 +ms.author: bradsev --- # The Team Data Science Process in action: Use Azure HDInsight Hadoop clusters diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-1-project-execute.png b/articles/machine-learning/team-data-science-process/media/agile-development/1-project-execute.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-1-project-execute.png rename to articles/machine-learning/team-data-science-process/media/agile-development/1-project-execute.png diff --git a/articles/machine-learning/team-data-science-process/media/agile-development/10-settings.png b/articles/machine-learning/team-data-science-process/media/agile-development/10-settings.png new file mode 100644 index 0000000000000..5fccf5ef7c79b Binary files /dev/null and b/articles/machine-learning/team-data-science-process/media/agile-development/10-settings.png differ diff --git a/articles/machine-learning/team-data-science-process/media/agile-development/11-agileds.png b/articles/machine-learning/team-data-science-process/media/agile-development/11-agileds.png new file mode 100644 index 0000000000000..969880a432cb4 Binary files /dev/null and b/articles/machine-learning/team-data-science-process/media/agile-development/11-agileds.png differ diff --git a/articles/machine-learning/team-data-science-process/media/agile-development/12-disable.png b/articles/machine-learning/team-data-science-process/media/agile-development/12-disable.png new file mode 100644 index 0000000000000..d0e09ea9ad183 Binary files /dev/null and b/articles/machine-learning/team-data-science-process/media/agile-development/12-disable.png differ diff --git a/articles/machine-learning/team-data-science-process/media/agile-development/13-rename.png b/articles/machine-learning/team-data-science-process/media/agile-development/13-rename.png new file mode 100644 index 0000000000000..32fce4541c867 Binary files /dev/null and b/articles/machine-learning/team-data-science-process/media/agile-development/13-rename.png differ diff --git a/articles/machine-learning/team-data-science-process/media/agile-development/14-template.png b/articles/machine-learning/team-data-science-process/media/agile-development/14-template.png new file mode 100644 index 0000000000000..7af05f7c6503b Binary files /dev/null and b/articles/machine-learning/team-data-science-process/media/agile-development/14-template.png differ diff --git a/articles/machine-learning/team-data-science-process/media/agile-development/15-newproject.png b/articles/machine-learning/team-data-science-process/media/agile-development/15-newproject.png new file mode 100644 index 
0000000000000..4ab0089ec0311 Binary files /dev/null and b/articles/machine-learning/team-data-science-process/media/agile-development/15-newproject.png differ diff --git a/articles/machine-learning/team-data-science-process/media/agile-development/16-enabledsprojects.PNG b/articles/machine-learning/team-data-science-process/media/agile-development/16-enabledsprojects.PNG new file mode 100644 index 0000000000000..2281a4e46660b Binary files /dev/null and b/articles/machine-learning/team-data-science-process/media/agile-development/16-enabledsprojects.PNG differ diff --git a/articles/machine-learning/team-data-science-process/media/agile-development/17-dsworkitems.PNG b/articles/machine-learning/team-data-science-process/media/agile-development/17-dsworkitems.PNG new file mode 100644 index 0000000000000..a661f021ce987 Binary files /dev/null and b/articles/machine-learning/team-data-science-process/media/agile-development/17-dsworkitems.PNG differ diff --git a/articles/machine-learning/team-data-science-process/media/agile-development/18-workitems.PNG b/articles/machine-learning/team-data-science-process/media/agile-development/18-workitems.PNG new file mode 100644 index 0000000000000..abb0bce975130 Binary files /dev/null and b/articles/machine-learning/team-data-science-process/media/agile-development/18-workitems.PNG differ diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-2-sprint-team-overview.png b/articles/machine-learning/team-data-science-process/media/agile-development/2-sprint-team-overview.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-2-sprint-team-overview.png rename to articles/machine-learning/team-data-science-process/media/agile-development/2-sprint-team-overview.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-3-sprint-team-add-work.png b/articles/machine-learning/team-data-science-process/media/agile-development/3-sprint-team-add-work.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-3-sprint-team-add-work.png rename to articles/machine-learning/team-data-science-process/media/agile-development/3-sprint-team-add-work.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-4-sprint-add-story.png b/articles/machine-learning/team-data-science-process/media/agile-development/4-sprint-add-story.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-4-sprint-add-story.png rename to articles/machine-learning/team-data-science-process/media/agile-development/4-sprint-add-story.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-5-sprint-edit-story.png b/articles/machine-learning/team-data-science-process/media/agile-development/5-sprint-edit-story.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-5-sprint-edit-story.png rename to articles/machine-learning/team-data-science-process/media/agile-development/5-sprint-edit-story.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-6-sprint-link-existing-branch.png 
b/articles/machine-learning/team-data-science-process/media/agile-development/6-sprint-link-existing-branch.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-6-sprint-link-existing-branch.png rename to articles/machine-learning/team-data-science-process/media/agile-development/6-sprint-link-existing-branch.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-7-sprint-add-task.png b/articles/machine-learning/team-data-science-process/media/agile-development/7-sprint-add-task.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-7-sprint-add-task.png rename to articles/machine-learning/team-data-science-process/media/agile-development/7-sprint-add-task.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-8-sprint-backlog-view.png b/articles/machine-learning/team-data-science-process/media/agile-development/8-sprint-backlog-view.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-8-sprint-backlog-view.png rename to articles/machine-learning/team-data-science-process/media/agile-development/8-sprint-backlog-view.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-9-link-to-a-new-branch.png b/articles/machine-learning/team-data-science-process/media/agile-development/9-link-to-a-new-branch.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-9-link-to-a-new-branch.png rename to articles/machine-learning/team-data-science-process/media/agile-development/9-link-to-a-new-branch.png diff --git a/articles/machine-learning/team-data-science-process/media/agile-development/agile.png b/articles/machine-learning/team-data-science-process/media/agile-development/agile.png new file mode 100644 index 0000000000000..969880a432cb4 Binary files /dev/null and b/articles/machine-learning/team-data-science-process/media/agile-development/agile.png differ diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-10-sprint-board-view.png b/articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/1-sprint-board-view.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-10-sprint-board-view.png rename to articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/1-sprint-board-view.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-19-spring-complete-pullrequest.png b/articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/10-spring-complete-pullrequest.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-19-spring-complete-pullrequest.png rename to articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/10-spring-complete-pullrequest.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-20-spring-merge-pullrequest.png 
b/articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/11-spring-merge-pullrequest.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-20-spring-merge-pullrequest.png rename to articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/11-spring-merge-pullrequest.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-21-spring-branch-deleted.png b/articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/12-spring-branch-deleted.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-21-spring-branch-deleted.png rename to articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/12-spring-branch-deleted.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-22-spring-branch-deleted-commandline.png b/articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/13-spring-branch-deleted-commandline.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-22-spring-branch-deleted-commandline.png rename to articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/13-spring-branch-deleted-commandline.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-11-create-a-branch.png b/articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/2-create-a-branch.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-11-create-a-branch.png rename to articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/2-create-a-branch.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-12-git-branches.png b/articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/3-git-branches.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-12-git-branches.png rename to articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/3-git-branches.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-13-link-to-an-existing-branch.png b/articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/4-link-to-an-existing-branch.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-13-link-to-an-existing-branch.png rename to articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/4-link-to-an-existing-branch.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-14-sprint-push-to-branch.png b/articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/5-sprint-push-to-branch.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-14-sprint-push-to-branch.png rename to 
articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/5-sprint-push-to-branch.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-15-spring-create-pull-request.png b/articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/6-spring-create-pull-request.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-15-spring-create-pull-request.png rename to articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/6-spring-create-pull-request.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-16-spring-send-pull-request.png b/articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/7-spring-send-pull-request.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-16-spring-send-pull-request.png rename to articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/7-spring-send-pull-request.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-17-add_comments.png b/articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/8-add_comments.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-17-add_comments.png rename to articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/8-add_comments.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-18-spring-approve-pullrequest.png b/articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/9-spring-approve-pullrequest.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-18-spring-approve-pullrequest.png rename to articles/machine-learning/team-data-science-process/media/collaborative-coding-with-git/9-spring-approve-pullrequest.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-23-powerbi-git.png b/articles/machine-learning/team-data-science-process/media/track-progress/1-powerbi-git.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-23-powerbi-git.png rename to articles/machine-learning/team-data-science-process/media/track-progress/1-powerbi-git.png diff --git a/articles/machine-learning/team-data-science-process/media/project-execution/project-execution-24-powerbi-workitem.png b/articles/machine-learning/team-data-science-process/media/track-progress/2-powerbi-workitem.png similarity index 100% rename from articles/machine-learning/team-data-science-process/media/project-execution/project-execution-24-powerbi-workitem.png rename to articles/machine-learning/team-data-science-process/media/track-progress/2-powerbi-workitem.png diff --git a/articles/machine-learning/team-data-science-process/media/track-progress/dashboard.png b/articles/machine-learning/team-data-science-process/media/track-progress/dashboard.png new file mode 100644 index 0000000000000..ebaaf47d1b2b6 Binary files /dev/null and 
b/articles/machine-learning/team-data-science-process/media/track-progress/dashboard.png differ diff --git a/articles/machine-learning/team-data-science-process/move-data-to-azure-blob-using-python.md b/articles/machine-learning/team-data-science-process/move-data-to-azure-blob-using-python.md deleted file mode 100644 index 3302a1faf69d7..0000000000000 --- a/articles/machine-learning/team-data-science-process/move-data-to-azure-blob-using-python.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -title: Move Data to and from Azure Blob Storage using Python | Microsoft Docs -description: Move Data to and from Azure Blob Storage using Python -services: machine-learning,storage -documentationcenter: '' -author: bradsev -manager: jhubbard -editor: cgronlun - -ms.assetid: 24276252-b3dd-4edf-9e5d-f6803f8ccccc -ms.service: machine-learning -ms.workload: data-services -ms.tgt_pltfrm: na -ms.devlang: na -ms.topic: article -ms.date: 11/04/2017 -ms.author: bradsev - ---- -# Move data to and from Azure Blob Storage using Python -This topic describes how to list, upload, and download blobs using the Python API. With the Python API provided in Azure SDK, you can: - -* Create a container -* Upload a blob into a container -* Download blobs -* List the blobs in a container -* Delete a blob - -For more information about using the Python API, see [How to Use the Blob Storage Service from Python](../../storage/blobs/storage-python-how-to-use-blob-storage.md). - -[!INCLUDE [blob-storage-tool-selector](../../../includes/machine-learning-blob-storage-tool-selector.md)] - -> [!NOTE] -> If you are using VM that was set up with the scripts provided by [Data Science Virtual machines in Azure](virtual-machines.md), then AzCopy is already installed on the VM. -> -> [!NOTE] -> For a complete introduction to Azure blob storage, refer to [Azure Blob Basics](../../storage/blobs/storage-dotnet-how-to-use-blobs.md) and to [Azure Blob Service](https://msdn.microsoft.com/library/azure/dd179376.aspx). -> -> - -## Prerequisites -This document assumes that you have an Azure subscription, a storage account, and the corresponding storage key for that account. Before uploading/downloading data, you must know your Azure storage account name and account key. - -* To set up an Azure subscription, see [Free one-month trial](https://azure.microsoft.com/pricing/free-trial/). -* For instructions on creating a storage account and for getting account and key information, see [About Azure storage accounts](../../storage/common/storage-create-storage-account.md). - -## Upload Data to Blob -Add the following snippet near the top of any Python code in which you wish to programmatically access Azure Storage: - - from azure.storage.blob import BlobService - -The **BlobService** object lets you work with containers and blobs. The following code creates a BlobService object using the storage account name and account key. Replace account name and account key with your real account and key. - - blob_service = BlobService(account_name="", account_key="") - -Use the following methods to upload data to a blob: - -1. put\_block\_blob\_from\_path (uploads the contents of a file from the specified path) -2. put\_block_blob\_from\_file (uploads the contents from an already opened file/stream) -3. put\_block\_blob\_from\_bytes (uploads an array of bytes) -4. 
put\_block\_blob\_from\_text (uploads the specified text value using the specified encoding) - -The following sample code uploads a local file to a container: - - blob_service.put_block_blob_from_path("", "", "") - -The following sample code uploads all the files (excluding directories) in a local directory to blob storage: - - from azure.storage.blob import BlobService - from os import listdir - from os.path import isfile, join - - # Set parameters here - ACCOUNT_NAME = "" - ACCOUNT_KEY = "" - CONTAINER_NAME = "" - LOCAL_DIRECT = "" - - blob_service = BlobService(account_name=ACCOUNT_NAME, account_key=ACCOUNT_KEY) - # find all files in the LOCAL_DIRECT (excluding directory) - local_file_list = [f for f in listdir(LOCAL_DIRECT) if isfile(join(LOCAL_DIRECT, f))] - - file_num = len(local_file_list) - for i in range(file_num): - local_file = join(LOCAL_DIRECT, local_file_list[i]) - blob_name = local_file_list[i] - try: - blob_service.put_block_blob_from_path(CONTAINER_NAME, blob_name, local_file) - except: - print "something wrong happened when uploading the data %s"%blob_name - - -## Download Data from Blob -Use the following methods to download data from a blob: - -1. get\_blob\_to\_path -2. get\_blob\_to\_file -3. get\_blob\_to\_bytes -4. get\_blob\_to\_text - -These methods that perform the necessary chunking when the size of the data exceeds 64 MB. - -The following sample code downloads the contents of a blob in a container to a local file: - - blob_service.get_blob_to_path("", "", "") - -The following sample code downloads all blobs from a container. It uses list\_blobs to get the list of available blobs in the container and downloads them to a local directory. - - from azure.storage.blob import BlobService - from os.path import join - - # Set parameters here - ACCOUNT_NAME = "" - ACCOUNT_KEY = "" - CONTAINER_NAME = "" - LOCAL_DIRECT = "" - - blob_service = BlobService(account_name=ACCOUNT_NAME, account_key=ACCOUNT_KEY) - - # List all blobs and download them one by one - blobs = blob_service.list_blobs(CONTAINER_NAME) - for blob in blobs: - local_file = join(LOCAL_DIRECT, blob.name) - try: - blob_service.get_blob_to_path(CONTAINER_NAME, blob.name, local_file) - except: - print "something wrong happened when downloading the data %s"%blob.name diff --git a/articles/machine-learning/team-data-science-process/project-execution.md b/articles/machine-learning/team-data-science-process/project-execution.md deleted file mode 100644 index d69173a948522..0000000000000 --- a/articles/machine-learning/team-data-science-process/project-execution.md +++ /dev/null @@ -1,252 +0,0 @@ ---- -title: Execution of data science projects - Azure | Microsoft Docs -description: How a data scientist can execute a data science project in a trackable, version controlled, and collaborative way. -documentationcenter: '' -author: bradsev -manager: cgronlun -editor: cgronlun - -ms.assetid: -ms.service: machine-learning -ms.workload: data-services -ms.tgt_pltfrm: na -ms.devlang: na -ms.topic: article -ms.date: 11/16/2017 -ms.author: bradsev; - ---- - - -# Execution of data science projects - -This document describes how developers can execute a data science project in a systematic, version controlled, and collaborative way within a project team by using the [Team Data Science Process](overview.md) (TDSP). The TDSP is a framework developed by Microsoft that provides a structured sequence of activities to execute cloud-based, predictive analytics solutions efficiently. 
For an outline of the personnel roles, and their associated tasks that are handled by a data science team standardizing on this process, see [Team Data Science Process roles and tasks](roles-tasks.md). - -This article includes instructions on how to: - -1. do **sprint planning** for work items involved in a project.
If you are unfamiliar with sprint planning, you can find details and general information [here](https://en.wikipedia.org/wiki/Sprint_(software_development) "here"). -2. **add work items** to sprints. -3. **link the work items with coding activities** tracked by git. -4. do **code review**. - -> [!NOTE] -> The steps needed to set up a TDSP team environment using Visual Studio Team Services (VSTS) are outlined in the following set of instructions. They specify how to accomplish these tasks with VSTS because that is how to implement TDSP at Microsoft. If you choose to use VSTS, items (3) and (4) in the previous list are benefits that you get naturally. If another code hosting platform is used for your group, the tasks that need to be completed by the team lead generally do not change. But the way to complete these tasks is going to be different. For example, the item in section six, **Link a work item with a git branch**, might not be as easy as it is on VSTS. -> -> - -The following figure illustrates a typical sprint planning, coding, and source-control workflow involved in implementing a data science project: - -![1](./media/project-execution/project-execution-1-project-execute.png) - - -## 1. Terminology - -In the TDSP sprint planning framework, there are four frequently used types of **work items**: **Feature**, **User Story**, **Task**, and **Bug**. Each team project maintains a single backlog for all work items. There is no backlog at the git repository level under a team project. Here are their definitions: - -- **Feature**: A feature corresponds to a project engagement. Different engagements with a client are considered different features. Similarly, it is best to consider different phases of a project with a client as different features. If you choose a schema such as ***ClientName-EngagementName*** to name your features, then you can easily recognize the context of the project/engagement from the names themselves. -- **Story**: Stories are different work items that are needed to complete a feature (project) end-to-end. Examples of stories include: - - Getting Data - - Exploring Data - - Generating Features - - Building Models - - Operationalizing Models - - Retraining Models -- **Task**: Tasks are assignable code or document work items or other activities that need to be done to complete a specific story. For example, tasks in the story *Getting Data* could be: - - Getting Credentials of SQL Server - - Uploading Data to SQL Data Warehouse. -- **Bug**: Bugs usually refer to fixes that are needed for an existing code or document that are done when completing a task. If the bug is caused by missing stages or tasks respectively, it can escalate to being a story or a task. - -> [!NOTE] -> Concepts are borrowed of features, stories, tasks, and bugs from software code management (SCM) to be used in data science. They might differ slightly from their conventional SCM definitions. -> -> - -Data scientists may feel more comfortable using an Agile template that specifically aligns with the TDSP lifecycle stages. With that in mind, an Agile-derived sprint planning template has been created, where Epics, Stories etc. are replaced by TDSP lifecycle stages or substages. Documentation on how to create such an Agile template can be found [here](https://msdata.visualstudio.com/AlgorithmsAndDataScience/TDSP/_git/TDSP?path=%2FDocs%2Fteam-data-science-process-agile-template.md&version=GBxibingao&_a=preview). - - -## 2. 
Sprint planning - -Sprint planning is useful for project prioritization, and resource planning and allocation. Many data scientists are engaged with multiple projects, each of which can take months to complete. Projects often proceed at different paces. On the VSTS server, you can easily create, manage, and track work items in your team project and conduct sprint planning to ensure that your projects are moving forward as expected. - -Follow [this link](https://www.visualstudio.com/en-us/docs/work/scrum/sprint-planning) for the step-by-step instructions on sprint planning in VSTS. - - -## 3. Add a Feature - -After your project repository is created under a team project, go to the team **Overview** page and click **Manage work**. - -![2](./media/project-execution/project-execution-2-sprint-team-overview.png) - -To include a feature in the backlog, click **Backlogs** --> **Features** --> **New**, type in the feature **Title** (usually your project name), and then click **Add** . - -![3](./media/project-execution/project-execution-3-sprint-team-add-work.png) - -Double-click the feature you created. Fill in the descriptions, assign team members for this feature, and set planning parameters for this feature. - -You can also link this feature to the project repository. Click **Add link** under the **Development** section. After you have finished editing the feature, click **Save & Close** to exit. - - -## 4. Add Story under Feature - -Under the feature, stories can be added to describe major steps needed to finish the (feature) project. To add a new story, click the **+** sign to the left of the feature in backlog view. - -![4](./media/project-execution/project-execution-4-sprint-add-story.png) - -You can edit the details of the story, such as the status, description, comments, planning, and priority In the pop-up window. - -![5](./media/project-execution/project-execution-5-sprint-edit-story.png) - -You can link this story to an existing repository by clicking **+ Add link** under **Development**. - -![6](./media/project-execution/project-execution-6-sprint-link-existing-branch.png) - - -## 5. Add a task to a story - -Tasks are specific detailed steps that are needed to complete each story. After all tasks of a story are completed, the story should be completed too. - -To add a task to a story, click the **+** sign next to the story item, select **Task**, and then fill in the detailed information of this task in the pop-up window. - -![7](./media/project-execution/project-execution-7-sprint-add-task.png) - -After the features, stories, and tasks are created, you can view them in the **Backlog** or **Board** views to track their status. - -![8](./media/project-execution/project-execution-8-sprint-backlog-view.png) - -![9](./media/project-execution/project-execution-9-link-to-a-new-branch.png) - - -## 6. Link a work item with a git branch - -VSTS provides a convenient way to connect a work item (a story or task) with a git branch. This enables you to link your story or task directly to the code associated with it. - -To connect a work item to a new branch, double-click a work item, and in the pop-up window, click **Create a new branch** under **+ Add link**. - -![10](./media/project-execution/project-execution-10-sprint-board-view.png) - -Provide the information for this new branch, such as the branch name, base git repository and the branch. The git repository chosen must be the repository under the same team project that the work item belongs to. 
The base branch can be the master branch or some other existing branch. - -![11](./media/project-execution/project-execution-11-create-a-branch.png) - -A good practice is to create a git branch for each story work item. Then, for each task work item, you create a branch based on the story branch. Organizing the branches in this hierarchical way that corresponds to the story-task relationships is helpful when you have multiple people working on different stories of the same project, or you have multiple people working on different tasks of the same story. Conflicts can be minimized when each team member works on a different branch and when each member works on different codes or other artifacts when sharing a branch. - -The following picture depicts the recommended branching strategy for TDSP. You might not need as many branches as are shown here, especially when you only have one or two people working on the same project, or only one person works on all tasks of a story. But separating the development branch from the master branch is always a good practice. This can help prevent the release branch from being interrupted by the development activities. More complete description of git branch model can be found in [A Successful Git Branching Model](http://nvie.com/posts/a-successful-git-branching-model/). - -![12](./media/project-execution/project-execution-12-git-branches.png) - -To switch to the branch that you want to work on, run the following command in a shell command (Windows or Linux). - - git checkout - -Changing the ** to **master** switches you back to the **master** branch. After you switch to the working branch, you can start working on that work item, developing the code or documentation artifacts needed to complete the item. - -You can also link a work item to an existing branch. In the **Detail** page of a work item, instead of clicking **Create a new branch**, you click **+ Add link**. Then, select the branch you want to link the work item to. - -![13](./media/project-execution/project-execution-13-link-to-an-existing-branch.png) - -You can also create a new branch in git bash commands. If is missing, the is based on _master_ branch. - - git checkout -b - - -## 7. Work on a branch and commit the changes - -Now suppose you make some change to the *data\_ingestion* branch for the work item, such as adding an R file on the branch in your local machine. You can commit the R file added to the branch for this work item, provided you are in that branch in your Git shell, using the following Git commands: - - git status - git add . - git commit -m"added a R scripts" - git push origin data_ingestion - -![14](./media/project-execution/project-execution-14-sprint-push-to-branch.png) - -## 8. Create a pull request on VSTS - -When you are ready after a few commits and pushes, to merge the current branch into its base branch, you can submit a **pull request** on VSTS server. - -Go to the main page of your team project and click **CODE**. Select the branch to be merged and the git repository name that you want to merge the branch into. Then click **Pull Requests**, click **New pull request** to create a pull request review before the work on the branch is merged to its base branch. - -![15](./media/project-execution/project-execution-15-spring-create-pull-request.png) - -Fill in some description about this pull request, add reviewers, and send it out. - -![16](./media/project-execution/project-execution-16-spring-send-pull-request.png) - -## 9. 
Review and merge - -When the pull request is created, your reviewers get an email notification to review the pull requests. The reviewers need to check whether the changes are working or not and test the changes with the requester if possible. Based on their assessment, the reviewers can approve or reject the pull request. - -![17](./media/project-execution/project-execution-17-add_comments.png) - -![18](./media/project-execution/project-execution-18-spring-approve-pullrequest.png) - -After the review is done, the working branch is merged to its base branch by clicking the **Complete** button. You may choose to delete the working branch after it has merged. - -![19](./media/project-execution/project-execution-19-spring-complete-pullrequest.png) - -Confirm on the top left corner that the request is marked as **COMPLETED**. - -![20](./media/project-execution/project-execution-20-spring-merge-pullrequest.png) - -When you go back to the repository under **CODE**, you are told that you have been switched to the master branch. - -![21](./media/project-execution/project-execution-21-spring-branch-deleted.png) - -You can also use the following Git commands to merge your working branch to its base branch and delete the working branch after merging: - - git checkout master - git merge data_ingestion - git branch -d data_ingestion - -![22](./media/project-execution/project-execution-22-spring-branch-deleted-commandline.png) - - -## 10. Interactive Data Exploration, Analysis, and Reporting (IDEAR) Utility - -This R markdown-based utility provides a flexible and interactive tool to evaluate and explore data sets. Users can quickly generate reports from the data set with minimal coding. Users can click buttons to export the exploration results in the interactive tool to a final report, which can be delivered to clients or used to make decisions on which variables to include in the subsequent modeling step. - -At this time, the tool only works on data-frames in memory. A .yaml file is needed to specify the parameters of the data-set to be explored. For more information, see [IDEAR in TDSP Data Science Utilities](https://github.com/Azure/Azure-TDSP-Utilities/tree/master/DataScienceUtilities/DataReport-Utils). - - -## 11. Baseline Modeling and Reporting Utility - -This utility provides a customizable, semi-automated tool to perform model creation with hyper-parameter sweeping, to and compare the accuracy of those models. - -The model creation utility is an R markdown file that can be run to produce self-contained html output with a table of contents for easy navigation through its different sections. Three algorithms are executed when the markdown file is run (knit): regularized regression using the glmnet package, random forest using the randomForest package, and boosting trees using the xgboost package). Each of these algorithms produces a trained model. The accuracy of these models is then compared and the relative feature importance plots are reported. Currently, there are two utilities: one is for a binary classification task and one is for a regression task. The primary differences between them is the way control parameters and accuracy metrics are specified for these learning tasks. 
- -A Yaml file is used to specify: - -- the data input (a SQL source or an R-Data file) -- what portion of the data is used for training and what portion for testing -- which algorithms to run -- the choice of control parameters for model optimization: - - cross-validation - - bootstrapping - - folds of cross-validation -- the hyper-parameter sets for each algorithm. - -The number of algorithms, the number of folds for optimization, the hyper-parameters, and the number of hyper-parameter sets to sweep over can also be modified in the Yaml file to run the models quickly. For example, they can be run with a lower number of CV folds, a lower number of parameter sets. They can also be run more comprehensively with a higher number of CV folds or a larger number of parameter sets, if that is warranted. - -For more information, see [Automated Modeling and Reporting Utility in TDSP Data Science Utilities](https://github.com/Azure/Azure-TDSP-Utilities/tree/master/DataScienceUtilities/Modeling). - - -## 12. Tracking progress of projects with Power BI dashboards - -Data science group managers, team leads, and project leads need to track the progress of their projects, what work has been done on them and by whom, and remains on the to-do lists. If you are using VSTS, you are able to build Power BI dashboards to track the activities and the work items associated with a Git repository. For more information on how to connect Power BI to Visual Studio Team Services, see [Connect Power BI to Team Services](https://www.visualstudio.com/en-us/docs/report/powerbi/connect-vso-pbi-vs). - -To learn how to create Power BI dashboards and reports to track your Git repository activities and your work items after the data of VSTS is connected to Power BI, see [Create Power BI dashboards and reports](https://www.visualstudio.com/en-us/docs/report/powerbi/report-on-vso-with-power-bi-vs). - -Here are two simple example dashboards that are built to track Git activities and work items. In the first example dashboard, the git commitment activities are listed by different users, on different dates, and on different repositories. You can easily slice and dice to filter the ones that you are interested in. - -![23](./media/project-execution/project-execution-23-powerbi-git.png) - -In the second example dashboard, the work items (stories and tasks) in different iterations are presented. They are grouped by assignees and priority levels, and colored by state. - -![24](./media/project-execution/project-execution-24-powerbi-workitem.png) - - -## Next steps - -Full end-to-end walkthroughs that demonstrate all the steps in the process for **specific scenarios** are also provided. They are listed and linked with thumbnail descriptions in the [Example walkthroughs](walkthroughs.md) article. They illustrate how to combine cloud, on-premises tools, and services into a workflow or pipeline to create an intelligent application. - -For examples executing steps in the Team Data Science Process that use Azure Machine Learning Studio, see the [With Azure ML](http://aka.ms/datascienceprocess) learning path. 
\ No newline at end of file diff --git a/articles/machine-learning/team-data-science-process/sqldw-walkthrough.md b/articles/machine-learning/team-data-science-process/sqldw-walkthrough.md index d74526e5aa423..4844c9f52385f 100644 --- a/articles/machine-learning/team-data-science-process/sqldw-walkthrough.md +++ b/articles/machine-learning/team-data-science-process/sqldw-walkthrough.md @@ -4,7 +4,7 @@ description: Advanced Analytics Process and Technology in Action services: machine-learning documentationcenter: '' author: bradsev -manager: jhubbard +manager: cgronlun editor: cgronlun ms.assetid: 88ba8e28-0bd7-49fe-8320-5dfa83b65724 @@ -13,8 +13,8 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 03/24/2017 -ms.author: bradsev;hangzh;weig +ms.date: 11/24/2017 +ms.author: bradsev;weig --- # The Team Data Science Process in action: using SQL Data Warehouse diff --git a/articles/machine-learning/team-data-science-process/team-data-science-process-for-data-scientists.md b/articles/machine-learning/team-data-science-process/team-data-science-process-for-data-scientists.md index 3fb523ee8f405..afc881e8e574b 100644 --- a/articles/machine-learning/team-data-science-process/team-data-science-process-for-data-scientists.md +++ b/articles/machine-learning/team-data-science-process/team-data-science-process-for-data-scientists.md @@ -13,7 +13,7 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 10/30/2017 +ms.date: 11/21/2017 ms.author: bradsev;BuckWoody --- @@ -29,6 +29,8 @@ This article provides guidance to a set of objectives that are typically used to - providing data source documentation - using tools for analytics processing +These training materials are related to the Team Data Science Process (TDSP) and Microsoft and open-source software and toolkits, which are helpful for envisioning, executing and delivering data science solutions. + ## Lesson Path You can use the items in the following table to guide your own self-study. Read the *Description* column to follow the path, click on the *Topic* links for study references, and check your skills using the *Knowledge Check* column. diff --git a/articles/machine-learning/team-data-science-process/team-data-science-process-for-devops.md b/articles/machine-learning/team-data-science-process/team-data-science-process-for-devops.md index fbdfce1664147..6a29e4cf24304 100644 --- a/articles/machine-learning/team-data-science-process/team-data-science-process-for-devops.md +++ b/articles/machine-learning/team-data-science-process/team-data-science-process-for-devops.md @@ -13,14 +13,14 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 10/30/2017 +ms.date: 11/21/2017 ms.author: bradsev;BuckWoody --- # Team Data Science Process for Developer Operations -This article explores the Developer Operations (DevOps) functions that are specific to an Advanced Analytics and Cognitive Services solution implementation. It references topics that cover understanding the Data Science Process and Platform, the DevOps processes, and the DevOps Toolchain that is specific to Data Science and AI projects and solutions. +This article explores the Developer Operations (DevOps) functions that are specific to an Advanced Analytics and Cognitive Services solution implementation. 
These training materials are related to the Team Data Science Process (TDSP) and Microsoft and open-source software and toolkits, which are helpful for envisioning, executing and delivering data science solutions. It references topics that cover the DevOps Toolchain that is specific to Data Science and AI projects and solutions. ## Lesson Path The following table provides guidance at specified levels to help complete the DevOps objectives that are needed to implement data science solutions with Azure technologies. diff --git a/articles/machine-learning/team-data-science-process/team-data-science-process-project-templates.md b/articles/machine-learning/team-data-science-process/team-data-science-process-project-templates.md index cc599a43ca994..4529357f2a5ec 100644 --- a/articles/machine-learning/team-data-science-process/team-data-science-process-project-templates.md +++ b/articles/machine-learning/team-data-science-process/team-data-science-process-project-templates.md @@ -1,6 +1,6 @@ --- title: Team Data Science Process project planning - Azure | Microsoft Docs -description: TBD +description: Microsoft Project and Excel templates that help you plan and manage data science projects. documentationcenter: '' author: bradsev manager: cgronlun @@ -12,14 +12,16 @@ ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 11/16/2017 +ms.date: 11/27/2017 ms.author: bradsev; --- # Team Data Science Process project planning -The Team Data Science Process (TDSP) provides a lifecycle to structure the development of your data science projects. The lifecycle outlines the major stages that projects typically execute, often iteratively: +The Team Data Science Process (TDSP) provides a lifecycle to structure the development of your data science projects. This article provides links to Microsoft Project and Excel templates that help you plan and manage these project stages. + +The lifecycle outlines the major stages that projects typically execute, often iteratively: - Business Understanding - Data Acquisition and Understanding @@ -29,8 +31,7 @@ The Team Data Science Process (TDSP) provides a lifecycle to structure the devel For descriptions of each of these stages, see [The Team Data Science Process lifecycle](https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/lifecycle). -This article provides links to Microsoft Project and Excel templates that help you plan and manage these project stages. - + ## Microsoft Project template The Microsoft Project template for the Team Data Science Process is available from here: [Microsoft Project template](https://github.com/Azure/Azure-MachineLearning-DataScience/blob/master/Team-Data-Science-Process/Project-Planning-and-Governance/Advanced%20Analytics%20Microsoft%20Project%20Plan.mpp) @@ -53,7 +54,7 @@ Use these templates at your own risk. The [usual disclaimers](https://www.gnu.or ## Next steps -[Execution of data science projects](project-execution.md) This document describes to execute a data science project in a systematic, version controlled, and collaborative way within a project team by using the Team Data Science Process. +[Agile development of data science projects](agile-development.md) This document describes how to execute a data science project in a systematic, version controlled, and collaborative way within a project team by using the Team Data Science Process. Walkthroughs that demonstrate all the steps in the process for **specific scenarios** are also provided.
They are listed and linked with thumbnail descriptions in the [Example walkthroughs](walkthroughs.md) topic. They illustrate how to combine cloud, on-premises tools, and services into a workflow or pipeline to create an intelligent application. diff --git a/articles/machine-learning/team-data-science-process/toc.yml b/articles/machine-learning/team-data-science-process/toc.yml index bacbd89f82351..a463334e506cb 100644 --- a/articles/machine-learning/team-data-science-process/toc.yml +++ b/articles/machine-learning/team-data-science-process/toc.yml @@ -29,14 +29,18 @@ href: project-ic-tasks.md - name: Project structure href: https://github.com/Azure/Azure-TDSP-ProjectTemplate -- name: Project planning and execution +- name: Project planning + href: team-data-science-process-project-templates.md +- name: Project execution items: - - name: Project planning - href: team-data-science-process-project-templates.md - - name: Project execution - items: - - name: Data science projects - href: project-execution.md + - name: Agile development + href: agile-development.md + - name: Collaborative coding with Git + href: collaborative-coding-with-git.md + - name: Execute data science tasks + href: execute-data-science-tasks.md + - name: Track progress + href: track-progress.md - name: Examples href: walkthroughs.md items: @@ -52,12 +56,12 @@ - name: Spark with PySpark and Scala href: walkthroughs-spark.md items: - - name: Explore data + - name: Explore and model data href: spark-data-exploration-modeling.md - - name: Score models - href: spark-model-consumption.md - name: Advanced data exploration href: spark-advanced-data-exploration-modeling.md + - name: Score models + href: spark-model-consumption.md - name: Hive with HDInsight Hadoop href: walkthroughs-hdinsight-hadoop.md - name: U-SQL with Azure Data Lake @@ -136,7 +140,8 @@ - name: Use AzCopy href: move-data-to-azure-blob-using-azcopy.md - name: Use Python - href: move-data-to-azure-blob-using-python.md + href: ../../storage/blobs/storage-python-how-to-use-blob-storage.md + maintainContext: true - name: Use SSIS href: move-data-to-azure-blob-using-ssis.md - name: Move to a VM diff --git a/articles/machine-learning/team-data-science-process/track-progress.md b/articles/machine-learning/team-data-science-process/track-progress.md new file mode 100644 index 0000000000000..fd1451cc3d9e6 --- /dev/null +++ b/articles/machine-learning/team-data-science-process/track-progress.md @@ -0,0 +1,53 @@ +--- +title: Track progress of data science projects - Azure Machine Learning | Microsoft Docs +description: How a data scientist can track the progress of a data science project. +documentationcenter: '' +author: bradsev +manager: cgronlun +editor: cgronlun + +ms.assetid: +ms.service: machine-learning +ms.workload: data-services +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 11/28/2017 +ms.author: bradsev; + +--- + + +# Track progress of data science projects + +Data science group managers, team leads, and project leads need to track the progress of their team projects, what work has been done on them and by whom, and what remains on the to-do lists. + +## VSTS dashboards +If you are using Visual Studio Team Services (VSTS), you are able to build dashboards to track the activities and the work items associated with a given Agile project.
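If you prefer to pull the same work-item data into your own reports instead of the built-in widgets, the Work Item Tracking REST API can be queried with WIQL. The following is a minimal sketch only: the account, project, and token values are placeholders, and you should verify the endpoint and `api-version` against the current VSTS REST API reference before relying on it.

```python
# Minimal sketch: query open work items from a VSTS team project with WIQL.
# account, project, and pat are placeholders; confirm the endpoint and
# api-version against the VSTS REST API reference.
import requests

account = "your-account"                # placeholder VSTS account name
project = "YourTeamProject"             # placeholder team project
pat = "your-personal-access-token"      # placeholder PAT with work-item read scope

url = f"https://{account}.visualstudio.com/{project}/_apis/wit/wiql?api-version=1.0"
wiql = {
    "query": "SELECT [System.Id] FROM WorkItems "
             "WHERE [System.TeamProject] = @project AND [System.State] <> 'Done'"
}

response = requests.post(url, json=wiql, auth=("", pat))
response.raise_for_status()
work_items = response.json().get("workItems", [])
print(f"{len(work_items)} open work items")
```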
+ +For more information on how to create and customize dashboards and widgets on Visual Studio Team Services, see the following sets of instructions: + +- [Add and manage dashboards](https://docs.microsoft.com/vsts/report/dashboards/dashboards) +- [Add widgets to a dashboard](https://docs.microsoft.com/vsts/report/dashboards/add-widget-to-dashboard). + +## Example dashboard + +Here is a simple example dashboard that is built to track the sprint activities of an Agile data science project, as well as the number of commits to associated repositories. The **top left** panel shows: + +- the countdown of the current sprint, +- the number of commits for each repository in the last 7 days, +- the work items for specific users. + +The remaining panels show the cumulative flow diagram (CFD), burndown, and burnup for a project: + +- **Bottom left**: The CFD shows the quantity of work in a given state, with approved in gray, committed in blue, and done in green. +- **Top right**: The burndown chart shows the work left to complete versus the time remaining. +- **Bottom right**: The burnup chart shows the work that has been completed versus the total amount of work. + +![dashboard](./media/track-progress/dashboard.png) + +For a description of how to build these charts, see the quickstarts and tutorials at [Dashboards](https://docs.microsoft.com/vsts/report/dashboards/). + +## Next steps + +Walkthroughs that demonstrate all the steps in the process for **specific scenarios** are also provided. They are listed and linked with thumbnail descriptions in the [Example walkthroughs](walkthroughs.md) article. They illustrate how to combine cloud, on-premises tools, and services into a workflow or pipeline to create an intelligent application. diff --git a/articles/media-services/media-services-release-notes.md b/articles/media-services/media-services-release-notes.md index 0f77ea2faa493..a783d58d57133 100644 --- a/articles/media-services/media-services-release-notes.md +++ b/articles/media-services/media-services-release-notes.md @@ -408,7 +408,7 @@ The following changes were made in 3.0.0.3: The latest version of the Media Services SDK is now 3.0.0.0. You can download the latest package from Nuget or get the bits from [GitHub]. -Starting with the Media Services SDK version 3.0.0.0, you can reuse the [Azure Active Directory Access Control Service (ACS)] tokens. For more information, see the “Reusing Access Control Service Tokens” section in the [Connecting to Media Services with the Media Services SDK for .NET] article. +Starting with the Media Services SDK version 3.0.0.0, you can reuse the Azure Active Directory Access Control Service (ACS) tokens. ### Azure Media Services .NET SDK Extensions 2.0.0.0 The Azure Media Services .NET SDK Extensions is a set of extension methods and helper functions that will simplify your code and make it easier to develop with Azure Media Services. You can get the latest bits from [Azure Media Services .NET SDK Extensions]. diff --git a/articles/migrate/how-to-get-migration-tool.md b/articles/migrate/how-to-get-migration-tool.md index 64e677eaa4491..3960b33fe615c 100644 --- a/articles/migrate/how-to-get-migration-tool.md +++ b/articles/migrate/how-to-get-migration-tool.md @@ -26,13 +26,13 @@ This article describes how to get suggestions for a migration tool after you've ## Migration tool suggestion -To get suggestions regarding migration tools, you need to install agents on the on-premises machines.
+To get suggestions regarding migration tools, you need to do a deep discovery of the on-premises environment. The deep discovery is done by installing agents on the on-premises machines. 1. Create an Azure Migrate project, discover on-premises machines, and create a migration assessment. [Learn more](tutorial-assessment-vmware.md). 2. Download and install the Azure Migrate agents on each on-premises machine for which you want to see a recommended migration method. [Follow this procedure](how-to-create-group-machine-dependencies.md#prepare-machines-for-dependency-mapping) to install the agents. 2. Identify your on-premises machines that are suitable for lift-and-shift migration. These are the VMs that don't require any changes to apps running on them, and can be migrated as is. -3. For lift-and-shift migration, we suggest using Azure Site Recovery. [Learn more](../site-recovery/tutorial-migrate-on-premises-to-azure.md). Alternately, you can use 3rd party tools that support migration to Azure. -4. If you have on-premises machines that aren't suitable for a lift-and-shift migration, for example if you want to migrate specific app rather than an entire VM, you can use other migration tools. For example, we suggest the [Azure Database Migration service](https://azure.microsoft.com/campaigns/database-migration/) if you want to migrate on-premises databases such a SQL Server, MySQL, or Oracle to Azure. +3. For lift-and-shift migration, we suggest using Azure Site Recovery. [Learn more](../site-recovery/tutorial-migrate-on-premises-to-azure.md). Alternatively, you can use third-party tools that support migration to Azure. +4. If you have on-premises machines that aren't suitable for a lift-and-shift migration, that is, if you want to migrate a specific app rather than an entire VM, you can use other migration tools. For example, we suggest the [Azure Database Migration service](https://azure.microsoft.com/campaigns/database-migration/) if you want to migrate on-premises databases such as SQL Server, MySQL, or Oracle to Azure. ## Review suggested migration methods diff --git a/articles/migrate/how-to-modify-assessment.md b/articles/migrate/how-to-modify-assessment.md index 5ca83e3ba421f..4b488240e2160 100644 --- a/articles/migrate/how-to-modify-assessment.md +++ b/articles/migrate/how-to-modify-assessment.md @@ -29,7 +29,7 @@ ms.author: raynew **Setting** | **Details** | **Default** --- | --- | --- - **Target location** | The Azure location to which you want to migrate. | Only East US is currently supported. + **Target location** | The Azure location to which you want to migrate. | West US 2 is the default location. **Storage redundancy** | The type of storage that the Azure VMs will use after migration. | Only [Locally redundant storage (LRS)](../storage/common/storage-redundancy.md#locally-redundant-storage) replication is currently supported. **Comfort factor** | Comfort factor is a buffer that is used during assessment. Use it to account for things such as seasonal usage, short performance history, likely increase in future usage. | Default setting is 1.3x. **Perfomance history** | Time used in evaluating performance history. | Default is one month.
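To make the comfort factor setting above concrete, the following sketch shows the core idea of applying a sizing buffer. It is illustrative only and is not the service's actual sizing algorithm, which also weighs percentile utilization, the performance history window, and the available VM series; the utilization numbers and size table are made up.

```python
# Illustrative only: how a 1.3x comfort factor pads observed utilization before
# a target size is chosen. All values and the size table below are invented.
observed_peak_cores = 2.8        # e.g. from one month of performance history
observed_peak_memory_gb = 10.5
comfort_factor = 1.3             # default buffer in the assessment properties

required_cores = observed_peak_cores * comfort_factor          # 3.64
required_memory_gb = observed_peak_memory_gb * comfort_factor  # 13.65

# Pick the smallest size that satisfies both padded requirements.
sizes = [("Small", 2, 8), ("Medium", 4, 16), ("Large", 8, 32)]  # (name, vCPUs, GB)
fit = next(s for s in sizes if s[1] >= required_cores and s[2] >= required_memory_gb)
print(fit[0])  # Medium
```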
diff --git a/articles/migrate/migrate-overview.md b/articles/migrate/migrate-overview.md index 0c46978a8277f..fe04851da5947 100644 --- a/articles/migrate/migrate-overview.md +++ b/articles/migrate/migrate-overview.md @@ -36,16 +36,18 @@ Azure Migrate helps you to: ## Current limitations -- Currently, you can assess on-premises VMware virtual machines (VMs) for migration to Azure VMs. Support for Hyper-V is in the roadmap and will be enabled in few months. In the interim, use Azure Site Recovery Deployment Planner to plan migration for Hyper-V workloads. +- Currently, you can assess on-premises VMware virtual machines (VMs) for migration to Azure VMs. +> [!NOTE] +> Support for Hyper-V is in the roadmap and will be enabled in a few months. In the interim, we recommend that you use Azure Site Recovery Deployment Planner to plan migration of Hyper-V workloads. - You can assess up to 1000 VMs in a single assessment, and up to 1500 machines in a single Azure Migrate project. If you need to assess more, you can increase the number of projects or assessments. [Learn more](how-to-scale-assessment.md). -- VM you want to assess must be managed by a vCenter server, version 5.5, 6.0, or 6.5. -- The Azure Migrate portal is currently available in English only. -- You can only create an Azure Migrate project in the West Central US region. However, this does not impact your ability to plan your migration for a different target Azure location. The location of the migration project is used only to store the metadata discovered from the on-premises environment. +- VMs you want to assess must be managed by a vCenter Server, version 5.5, 6.0, or 6.5. +- You can only create an Azure Migrate project in the West Central US region. However, this does not impact your ability to plan your migration for a different target Azure location. The location of the migration project is used only to store the metadata discovered from the on-premises environment. +- The Azure Migrate portal is currently available in English only. - Azure Migrate currently supports only [Locally Redundant Storage (LRS)](../storage/common/storage-introduction.md#replication) replication. ## What do I need to pay for? -You don't need to pay for assessment. To support [dependency visualization](concepts-dependency-visualization.md), Azure Migrate creates a Log Analytics workspace by default. If you use dependency visualization, or use the workspace outside Azure Migrate), you're charged for workspace usage. [Learn more](https://www.microsoft.com/cloud-platform/operations-management-suite) about Service Map solution pricing. +Azure Migrate is available at no additional charge. However, during public preview, additional charges will apply for use of dependency visualization capabilities. To support [dependency visualization](concepts-dependency-visualization.md), Azure Migrate creates a Log Analytics workspace by default. If you use dependency visualization, or use the workspace outside Azure Migrate, you're charged for the workspace usage. [Learn more](https://azure.microsoft.com/en-us/pricing/details/insight-analytics/) about the charges. When the service becomes generally available, there will be no charge for use of dependency visualization capabilities. ## What's in an assessment?
diff --git a/articles/monitoring-and-diagnostics/media/monitoring-service-notifications/service-health-summary.PNG b/articles/monitoring-and-diagnostics/media/monitoring-service-notifications/service-health-summary.PNG index 2aa7feba76158..aed94803e238e 100644 Binary files a/articles/monitoring-and-diagnostics/media/monitoring-service-notifications/service-health-summary.PNG and b/articles/monitoring-and-diagnostics/media/monitoring-service-notifications/service-health-summary.PNG differ diff --git a/articles/monitoring-and-diagnostics/monitoring-service-notifications.md b/articles/monitoring-and-diagnostics/monitoring-service-notifications.md index cb9d9c1fd050d..ec47c7e5bac41 100644 --- a/articles/monitoring-and-diagnostics/monitoring-service-notifications.md +++ b/articles/monitoring-and-diagnostics/monitoring-service-notifications.md @@ -22,18 +22,18 @@ ms.author: ancav This article shows you how to view service health notifications using the Azure portal. -Service health notifications allow you to view service health messages published by the Azure team that may be affecting the resources under your subscription. These notifications are a sub-class of activity log events and can also be found on the activity log blade. Service health notifications can be informational or actionable depending on the class. +Service health notifications allow you to view service health messages published by the Azure team that may be affecting the resources under your subscription. These notifications are a sub-class of activity log events and can also be found in the activity log. Service health notifications can be informational or actionable depending on the class. There are five classes of service health notifications: -- **Action Required:** From time to time we may notice something unusual happen on your account. We may need to work with you to remedy this. We will send you a notification either detailing the actions you will need to take or with details on how to contact Azure engineering or support. -- **Assisted Recovery:** An event has occurred and engineers have confirmed that you are still experiencing impact. Engineering will need to work with you directly to bring your services to restoration. +- **Action Required:** From time to time Azure may notice something unusual happen on your account. Azure may need to work with you to remedy this. Azure will send you a notification either detailing the actions you need to take or with details on how to contact Azure engineering or support. +- **Assisted Recovery:** An event has occurred and engineers have confirmed that you are still experiencing impact. Azure engineering needs to work with you directly to restore your services to full health. - **Incident:** A service impacting event is currently affecting one or more of the resources in your subscription. - **Maintenance:** This is a notification informing you of a planned maintenance activity that may impact one or more of the resources under your subscription. -- **Information:** From time to time we may send you notifications that a communicate to you about potential optimizations that may help improve your resource utilization. +- **Information:** From time to time, Azure may send you notifications that inform you about potential optimizations that may help improve your resource utilization. - **Security:** Urgent security related information regarding your solution(s) running on Azure. -Each service health notification will carry details on the scope and impact to your resources. 
Details will include: +Each service health notification includes details on the scope and impact to your resources. Details include: Property Name | Description -------- | ----------- @@ -51,7 +51,7 @@ subscriptionId | The Azure subscription in which this event was logged status | String describing the status of the operation. Some common values are: Started, In Progress, Succeeded, Failed, Active, Resolved. operationName | Name of the operation. category | "ServiceHealth" -resourceId | Resource id of the impacted resource. +resourceId | Resource ID of the impacted resource. Properties.title | The localized title for this communication. English is the default language. Properties.communication | The localized details of the communication with HTML markup. English is the default. Properties.incidentType | Possible values: AssistedRecovery, ActionRequired, Information, Incident, Maintenance, Security @@ -67,14 +67,12 @@ Properties.communicationId | The communication this event is associated. 1. In the [portal](https://portal.azure.com), navigate to the **Monitor** service ![Monitor](./media/monitoring-service-notifications/home-monitor.png) -2. Click the **Monitor** option to open up the Monitor blade. This blade brings together all your monitoring settings and data into one consolidated view. It first opens to the **Activity log** section. +2. Click the **Monitor** option to open up the Monitor experience. Azure Monitor brings together all your monitoring settings and data into one consolidated view. It first opens to the **Activity log** section. -3. Now click on **Service Notifications** section +3. Now click on **Alerts** section ![Monitor](./media/monitoring-service-notifications/service-health-summary.png) -4. Click on any of the line items to view more details - -5. Click on the **+Add Activity Log Alert** operation to receive notifications to ensure you are notified for future service notifications of this type. To learn more on configuring alerts on service notifications [click here](monitoring-activity-log-alerts-on-service-notifications.md) +4. Click on the **+Add Activity Log Alert** and configure an alert to ensure you are notified for future service notifications. To learn more about configuring alerts on service notifications [visit the Activity Log Alerts and Service Notifications page](monitoring-activity-log-alerts-on-service-notifications.md). 
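As a rough illustration of how the properties listed in the table earlier in this article appear when you read the event programmatically, the sketch below builds a placeholder ServiceHealth event and filters on its incident type. Every field value is invented; treat the actual activity log payload as the authoritative schema, and note that only the documented properties are shown.

```python
# Illustrative only: an approximate ServiceHealth activity log event, limited to the
# properties documented above. All values are made-up placeholders.
sample_event = {
    "subscriptionId": "00000000-0000-0000-0000-000000000000",
    "status": "Active",
    "operationName": "Microsoft.ServiceHealth/incident/action",  # placeholder value
    "category": "ServiceHealth",
    "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000",
    "properties": {
        "title": "Example: increased latency in West US",
        "communication": "<p>HTML details of the communication</p>",
        "incidentType": "Incident",
        "communicationId": "12345678901234",
    },
}

# Keep only the classes of notifications that typically require a response.
actionable = {"ActionRequired", "AssistedRecovery", "Incident", "Security"}
if sample_event["properties"]["incidentType"] in actionable:
    print(sample_event["properties"]["title"])
```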
## Next Steps: Receive [alert notifications whenever a service health notification](monitoring-activity-log-alerts-on-service-notifications.md) is posted diff --git a/articles/multi-factor-authentication/multi-factor-authentication-sdk.md b/articles/multi-factor-authentication/multi-factor-authentication-sdk.md index f6e26c6edea0e..d0934a49d997d 100644 --- a/articles/multi-factor-authentication/multi-factor-authentication-sdk.md +++ b/articles/multi-factor-authentication/multi-factor-authentication-sdk.md @@ -13,7 +13,7 @@ ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 05/03/2017 +ms.date: 11/29/2017 ms.author: joflore --- diff --git a/articles/multi-factor-authentication/multi-factor-authentication-whats-next.md b/articles/multi-factor-authentication/multi-factor-authentication-whats-next.md index 72ed4c5cecfe3..1695e961a76c8 100644 --- a/articles/multi-factor-authentication/multi-factor-authentication-whats-next.md +++ b/articles/multi-factor-authentication/multi-factor-authentication-whats-next.md @@ -12,7 +12,7 @@ ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 10/02/2017 +ms.date: 11/29/2017 ms.author: joflore ms.reviewer: richagi @@ -167,22 +167,40 @@ When Trusted IPs is enabled, two-step verification is *not* required for browser Whether Trusted IPs is enabled or not, two-step verification is required for browser flows, and app passwords are required for older rich client apps. -### To enable Trusted IPs -1. Sign in to the [Azure classic portal](https://manage.windowsazure.com). -2. On the left, select **Active Directory**. -3. Select the directory you want to manage. -4. Select **Configure** -5. Under Multi-Factor Authentication, select **Manage service settings**. -6. On the Service Settings page, under Trusted IPs, you have two options: +### Enable named locations using conditional access + +1. Sign in to the [Azure portal](https://portal.azure.com). +2. On the left, select **Azure Active Directory** > **Conditional access** > **Named locations** +3. Select **New location** +4. Provide a name for the location +5. Select **Mark as trusted location** +6. Specify the IP Range in CIDR notation (Example 192.168.1.1/24) +7. Select **Create** + +### Enable Trusted IPs using conditional access + +1. Sign in to the [Azure portal](https://portal.azure.com). +2. On the left, select **Azure Active Directory** > **Conditional access** > **Named locations** +3. Select **Configure MFA trusted IPs** +4. On the Service Settings page, under Trusted IPs, you have two options: -   * **For requests from federated users originating from my intranet** – Check the box. All federated users who are signing in from the corporate network will bypass two-step verification using a claim issued by AD FS. Ensure that AD FS has a rule to add the intranet claim to the appropriate traffic. If the rule does not exist, create the following rule in AD FS: "c:[Type -== "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork"] => issue(claim = c);" + * **For requests from federated users originating from my intranet** – Check the box. All federated users who are signing in from the corporate network will bypass two-step verification using a claim issued by AD FS. Ensure that AD FS has a rule to add the intranet claim to the appropriate traffic. 
If the rule does not exist, create the following rule in AD FS: "c:[Type== "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork"] => issue(claim = c);" + * **For requests from a specific range of public IPs** – Enter the IP addresses in the text box provided using CIDR notation. For example: xxx.xxx.xxx.0/24 for IP addresses in the range xxx.xxx.xxx.1 – xxx.xxx.xxx.254, or xxx.xxx.xxx.xxx/32 for a single IP address. You can enter up to 50 IP address ranges. Users who sign in from these IP addresses bypass two-step verification. +5. Select **Save**. + +### Enable Trusted IPs using service settings +1. Sign in to the [Azure portal](https://portal.azure.com). +2. On the left, select **Azure Active Directory** > **Users and groups** > **All users** +3. Select **Multi-Factor Authentication** +4. Under Multi-Factor Authentication, select **service settings**. +5. On the Service Settings page, under Trusted IPs, you have two options: + + * **For requests from federated users originating from my intranet** – Check the box. All federated users who are signing in from the corporate network will bypass two-step verification using a claim issued by AD FS. Ensure that AD FS has a rule to add the intranet claim to the appropriate traffic. If the rule does not exist, create the following rule in AD FS: "c:[Type== "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork"] => issue(claim = c);" * **For requests from a specific range of public IPs** – Enter the IP addresses in the text box provided using CIDR notation. For example: xxx.xxx.xxx.0/24 for IP addresses in the range xxx.xxx.xxx.1 – xxx.xxx.xxx.254, or xxx.xxx.xxx.xxx/32 for a single IP address. You can enter up to 50 IP address ranges. Users who sign in from these IP addresses bypass two-step verification. -7. Click **Save**. -8. Once the updates have been applied, click **Close**. +6. Select **Save**. ![Trusted IPs](./media/multi-factor-authentication-whats-next/trustedips3.png) @@ -237,11 +255,10 @@ Azure AD supports federation (single sign-on) with on-premises Windows Server Ac ### Allow app password creation By default, users cannot create app passwords. This feature must be enabled. To allow users the ability to create app passwords, use the following procedure: -1. Sign in to the [Azure classic portal](https://manage.windowsazure.com). -2. On the left, select **Active Directory**. -3. Select the directory you want to manage. -4. Select **Configure** -5. Under Multi-Factor Authentication, select **Manage service settings**. +1. Sign in to the [Azure portal](https://portal.azure.com). +2. On the left, select **Azure Active Directory** > **Users and groups** > **All users** +3. Select **Multi-Factor Authentication** +4. Under Multi-Factor Authentication, select **service settings**. 6. Select the radio button next to **Allow users to create app passwords to sign into non-browser apps**. ![Create App Passwords](./media/multi-factor-authentication-whats-next/trustedips3.png) @@ -268,16 +285,16 @@ Therefore, remembering MFA on trusted devices reduces the number of authenticati >This feature is not compatible with the "Keep me signed in" feature of AD FS when users perform two-step verification for AD FS through the Azure MFA Server or a third-party MFA solution. If your users select "Keep me signed in" on AD FS and also mark their device as trusted for MFA, they won't be able to verify after the "Remember MFA" number of days expires. 
Azure AD requests a fresh two-step verification, but AD FS returns a token with the original MFA claim and date instead of performing two-step verification again. This sets off a verification loop between Azure AD and AD FS. ### Enable Remember multi-factor authentication -1. Sign in to the [Azure classic portal](https://manage.windowsazure.com). -2. On the left, select **Active Directory**. -3. Select the directory you want to manage. -4. Select **Configure** -5. Under Multi-Factor Authentication, select **Manage service settings**. -6. On the Service Settings page, under manage user device settings, check the **Allow users to remember multi-factor authentication on devices they trust** box. +1. Sign in to the [Azure portal](https://portal.azure.com). +2. On the left, select **Azure Active Directory** > **Users and groups** > **All users** +3. Select **Multi-Factor Authentication** +4. Under Multi-Factor Authentication, select **service settings**. +5. On the Service Settings page, under **manage remember multi-factor authentication**, check the **Allow users to remember multi-factor authentication on devices they trust** box. + ![Remember devices](./media/multi-factor-authentication-whats-next/remember.png) -7. Set the number of days that you want to allow the trusted devices to bypass two-step verification. The default is 14 days. -8. Click **Save**. -9. Click **Close**. + +6. Set the number of days that you want to allow the trusted devices to bypass two-step verification. The default is 14 days. +7. Select **Save**. ### Mark a device as trusted @@ -298,13 +315,12 @@ When your users enroll their accounts for MFA, they choose their preferred verif | Verification code from mobile app |The Microsoft Authenticator app generates a new OATH verification code every 30 seconds. The user enters this verification code into the sign-in interface.
The Microsoft Authenticator app is available for [Windows Phone](http://go.microsoft.com/fwlink/?Linkid=825071), [Android](http://go.microsoft.com/fwlink/?Linkid=825072), and [IOS](http://go.microsoft.com/fwlink/?Linkid=825073). | ### How to enable/disable authentication methods -1. Sign in to the [Azure classic portal](https://manage.windowsazure.com). -2. On the left, select **Active Directory**. -3. Select the directory you want to manage. -4. Select **Configure** -5. Under Multi-Factor Authentication, select **Manage service settings**. -6. On the Service Settings page, under verification options, select/unselect the options you wish to use. +1. Sign in to the [Azure portal](https://portal.azure.com). +2. On the left, select **Azure Active Directory** > **Users and groups** > **All users** +3. Select **Multi-Factor Authentication** +4. Under Multi-Factor Authentication, select **service settings**. +5. On the Service Settings page, under **verification options**, select/unselect the options you wish to use. + ![Verification options](./media/multi-factor-authentication-whats-next/authmethods.png) -7. Click **Save**. -8. Click **Close**. +6. Click **Save**. diff --git a/articles/mysql/howto-configure-server-logs-in-cli.md b/articles/mysql/howto-configure-server-logs-in-cli.md index 2c64d39ce5d43..b3d969cccfa5a 100644 --- a/articles/mysql/howto-configure-server-logs-in-cli.md +++ b/articles/mysql/howto-configure-server-logs-in-cli.md @@ -9,7 +9,7 @@ editor: jasonwhowell ms.service: mysql-database ms.devlang: azure-cli ms.topic: article -ms.date: 10/18/2017 +ms.date: 11/28/2017 --- # Configure and access server logs using Azure CLI You can download the Azure Database for MySQL server logs using the Azure CLI, Azure's command-line utility. @@ -35,14 +35,14 @@ az mysql server configuration list --resource-group myresourcegroup --server mys ``` ## List logs for Azure Database for MySQL server -To list the available log files for your server, run the [az mysql server-logs list](/cli/azure/mysql/server-logs#list) command. +To list the available log files for your server, run the [az mysql server-logs list](/cli/azure/mysql/server-logs#az_mysql_server_logs_list) command. You can list the log files for server **myserver4demo.mysql.database.azure.com** under Resource Group **myresourcegroup**, and direct it to a text file called **log\_files\_list.txt.** ```azurecli-interactive az mysql server-logs list --resource-group myresourcegroup --server myserver4demo > log_files_list.txt ``` ## Download logs from the server -The [az mysql server-logs download](/cli/azure/mysql/server-logs#download) command allows you to download individual log files for your server. +The [az mysql server-logs download](/cli/azure/mysql/server-logs#az_mysql_server_logs_download) command allows you to download individual log files for your server. This example downloads the specific log file for the server **myserver4demo.mysql.database.azure.com** under Resource Group **myresourcegroup** to your local environment. 
```azurecli-interactive diff --git a/articles/mysql/howto-configure-server-parameters-using-cli.md b/articles/mysql/howto-configure-server-parameters-using-cli.md index 0bbb0cc515834..6e436f2d834ec 100644 --- a/articles/mysql/howto-configure-server-parameters-using-cli.md +++ b/articles/mysql/howto-configure-server-parameters-using-cli.md @@ -9,7 +9,7 @@ editor: jasonwhowell ms.service: mysql-database ms.devlang: azure-cli ms.topic: article -ms.date: 10/12/2017 +ms.date: 11/29/2017 --- # Customize server configuration parameters by using Azure CLI You can list, show, and update configuration parameters for an Azure Database for MySQL server by using Azure CLI, the Azure command-line utility. A subset of engine configurations is exposed at the server-level and can be modified. @@ -20,7 +20,7 @@ To step through this how-to guide, you need: - [Azure CLI 2.0](/cli/azure/install-azure-cli) command-line utility or use the Azure Cloud Shell in the browser. ## List server configuration parameters for Azure Database for MySQL server -To list all modifiable parameters in a server and their values, run the [az mysql server configuration list](/cli/azure/mysql/server/configuration#list) command. +To list all modifiable parameters in a server and their values, run the [az mysql server configuration list](/cli/azure/mysql/server/configuration#az_mysql_server_configuration_list) command. You can list the server configuration parameters for the server **myserver4demo.mysql.database.azure.com** under resource group **myresourcegroup**. ```azurecli-interactive @@ -29,14 +29,14 @@ az mysql server configuration list --resource-group myresourcegroup --server mys For the definition of each of the listed parameters, see the MySQL reference section on [Server System Variables](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html). ## Show server configuration parameter details -To show details about a particular configuration parameter for a server, run the [az mysql server configuration show](/cli/azure/mysql/server/configuration#show) command. +To show details about a particular configuration parameter for a server, run the [az mysql server configuration show](/cli/azure/mysql/server/configuration#az_mysql_server_configuration_show) command. This example shows details of the **slow\_query\_log** server configuration parameter for server **myserver4demo.mysql.database.azure.com** under resource group **myresourcegroup.** ```azurecli-interactive az mysql server configuration show --name slow_query_log --resource-group myresourcegroup --server myserver4demo ``` ## Modify a server configuration parameter value -You can also modify the value of a certain server configuration parameter, which updates the underlying configuration value for the MySQL server engine. To update the configuration, use the [az mysql server configuration set](/cli/azure/mysql/server/configuration#set) command. +You can also modify the value of a certain server configuration parameter, which updates the underlying configuration value for the MySQL server engine. To update the configuration, use the [az mysql server configuration set](/cli/azure/mysql/server/configuration#az_mysql_server_configuration_set) command. 
To update the **slow\_query\_log** server configuration parameter of server **myserver4demo.mysql.database.azure.com** under resource group **myresourcegroup.** ```azurecli-interactive diff --git a/articles/mysql/howto-manage-firewall-using-cli.md b/articles/mysql/howto-manage-firewall-using-cli.md index c95a7c42f4695..fe2f266c4ac6a 100644 --- a/articles/mysql/howto-manage-firewall-using-cli.md +++ b/articles/mysql/howto-manage-firewall-using-cli.md @@ -9,7 +9,7 @@ editor: jasonwhowell ms.service: mysql-database ms.devlang: azure-cli ms.topic: article -ms.date: 09/15/2017 +ms.date: 11/28/2017 --- # Create and manage Azure Database for MySQL firewall rules by using the Azure CLI @@ -44,25 +44,25 @@ This command outputs a code to use in the next step. 3. At the prompt, log in using your Azure credentials. -4. After your login is authorized, a list of subscriptions is printed in the console. Copy the ID of the desired subscription to set the current subscription to use. +4. After your login is authorized, a list of subscriptions is printed in the console. Copy the ID of the desired subscription to set the current subscription to use. Use the [az account set](/cli/azure/account#az_account_set) command. ```azurecli-interactive az account set --subscription {your subscription id} ``` -5. List the Azure Databases for MySQL servers for your subscription and resource group if you are unsure of the names. +5. List the Azure Databases for MySQL servers for your subscription and resource group if you are unsure of the names. Use the [az mysql server list](/cli/azure/mysql/server#az_mysql_server_list) command. ```azurecli-interactive az mysql server list --resource-group myResourceGroup ``` - Note the name attribute in the listing, which you need to specify the MySQL server to work on. If needed, confirm the details for that server and using the name attribute to ensure it is correct: + Note the name attribute in the listing, which you need to specify the MySQL server to work on. If needed, confirm the details for that server and using the name attribute to ensure it is correct. Use the [az mysql server show](/cli/azure/mysql/server#az_mysql_server_show) command. ```azurecli-interactive az mysql server show --resource-group myResourceGroup --name mysqlserver4demo ``` ## List firewall rules on Azure Database for MySQL Server -Using the server name and the resource group name, list the existing server firewall rules on the server. Notice that the server name attribute is specified in the **--server** switch and not in the **--name** switch. +Using the server name and the resource group name, list the existing server firewall rules on the server. Use the [az mysql server firewall list](/cli/azure/mysql/server/firewall-rule#az_mysql_server_firewall_rule_list) command. Notice that the server name attribute is specified in the **--server** switch and not in the **--name** switch. ```azurecli-interactive az mysql server firewall-rule list --resource-group myResourceGroup --server mysqlserver4demo ``` @@ -71,7 +71,7 @@ The output lists the rules, if any, in JSON format (by default). You can use the az mysql server firewall-rule list --resource-group myResourceGroup --server mysqlserver4demo --output table ``` ## Create a firewall rule on Azure Database for MySQL Server -Using the Azure MySQL server name and the resource group name, create a new firewall rule on the server. Provide a name for the rule, as well as the start IP and end IP (to provide access to a range of IP addresses) for the rule. 
+Using the Azure MySQL server name and the resource group name, create a new firewall rule on the server. Use the [az mysql server firewall create](/cli/azure/mysql/server/firewall-rule#az_mysql_server_firewall_rule_create) command. Provide a name for the rule, as well as the start IP and end IP (to provide access to a range of IP addresses) for the rule. ```azurecli-interactive az mysql server firewall-rule create --resource-group myResourceGroup --server mysqlserver4demo --name "Firewall Rule 1" --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.15 ``` @@ -83,7 +83,7 @@ az mysql server firewall-rule create --resource-group myResourceGroup Upon success, the command output lists the details of the firewall rule you have created, in JSON format (by default). If there is a failure, the output shows error message text instead. ## Update a firewall rule on Azure Database for MySQL server -Using the Azure MySQL server name and the resource group name, update an existing firewall rule on the server. Provide the name of the existing firewall rule as input, as well as the start IP and end IP attributes to update. +Using the Azure MySQL server name and the resource group name, update an existing firewall rule on the server. Use the [az mysql server firewall update](/cli/azure/mysql/server/firewall-rule#az_mysql_server_firewall_rule_update) command. Provide the name of the existing firewall rule as input, as well as the start IP and end IP attributes to update. ```azurecli-interactive az mysql server firewall-rule update --resource-group myResourceGroup --server mysqlserver4demo --name "Firewall Rule 1" --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.1 ``` @@ -93,14 +93,14 @@ Upon success, the command output lists the details of the firewall rule you have > If the firewall rule does not exist, the rule is created by the update command. ## Show firewall rule details on Azure Database for MySQL Server -Using the Azure MySQL server name and the resource group name, show the existing firewall rule details from the server. Provide the name of the existing firewall rule as input. +Using the Azure MySQL server name and the resource group name, show the existing firewall rule details from the server. Use the [az mysql server firewall show](/cli/azure/mysql/server/firewall-rule#az_mysql_server_firewall_rule_show) command. Provide the name of the existing firewall rule as input. ```azurecli-interactive az mysql server firewall-rule show --resource-group myResourceGroup --server mysqlserver4demo --name "Firewall Rule 1" ``` Upon success, the command output lists the details of the firewall rule you have specified, in JSON format (by default). If there is a failure, the output shows error message text instead. ## Delete a firewall rule on Azure Database for MySQL Server -Using the Azure MySQL server name and the resource group name, remove an existing firewall rule from the server. Provide the name of the existing firewall rule. +Using the Azure MySQL server name and the resource group name, remove an existing firewall rule from the server. Use the [az mysql server firewall delete](/cli/azure/mysql/server/firewall-rule#az_mysql_server_firewall_rule_delete) command. Provide the name of the existing firewall rule. 
```azurecli-interactive az mysql server firewall-rule delete --resource-group myResourceGroup --server mysqlserver4demo --name "Firewall Rule 1" ``` diff --git a/articles/mysql/howto-restore-server-cli.md b/articles/mysql/howto-restore-server-cli.md index bcfda078215e7..d121fcacaec65 100644 --- a/articles/mysql/howto-restore-server-cli.md +++ b/articles/mysql/howto-restore-server-cli.md @@ -9,7 +9,7 @@ editor: jasonwhowell ms.service: mysql-database ms.devlang: azure-cli ms.topic: article -ms.date: 10/30/2017 +ms.date: 11/28/2017 --- # How to backup and restore a server in Azure Database for MySQL by using the Azure CLI @@ -35,7 +35,7 @@ With this automatic backup feature, you can restore the server and its databases ## Restore a database to a previous point in time by using the Azure CLI Use Azure Database for MySQL to restore the server to a previous point in time. The restored data is copied to a new server, and the existing server is left as is. For example, if a table is accidentally dropped at noon today, you can restore to the time just before noon. Then, you can retrieve the missing table and data from the restored copy of the server. -To restore the server, use the Azure CLI [az mysql server restore](/cli/azure/mysql/server#restore) command. +To restore the server, use the Azure CLI [az mysql server restore](/cli/azure/mysql/server#az_mysql_server_restore) command. ### Run the restore command diff --git a/articles/mysql/quickstart-create-mysql-server-database-using-azure-cli.md b/articles/mysql/quickstart-create-mysql-server-database-using-azure-cli.md index 4b8a3cc167d66..4b342d30b09fc 100644 --- a/articles/mysql/quickstart-create-mysql-server-database-using-azure-cli.md +++ b/articles/mysql/quickstart-create-mysql-server-database-using-azure-cli.md @@ -9,7 +9,7 @@ editor: jasonwhowell ms.service: mysql-database ms.devlang: azure-cli ms.topic: hero-article -ms.date: 11/02/2017 +ms.date: 11/29/2017 ms.custom: mvc --- @@ -22,13 +22,13 @@ If you don't have an Azure subscription, create a [free](https://azure.microsoft If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI 2.0]( /cli/azure/install-azure-cli). -If you have multiple subscriptions, choose the appropriate subscription in which the resource exists or is billed for. Select a specific subscription ID under your account using [az account set](/cli/azure/account#set) command. +If you have multiple subscriptions, choose the appropriate subscription in which the resource exists or is billed for. Select a specific subscription ID under your account using [az account set](/cli/azure/account#az_account_set) command. ```azurecli-interactive az account set --subscription 00000000-0000-0000-0000-000000000000 ``` ## Create a resource group -Create an [Azure resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview) using the [az group create](https://docs.microsoft.com/cli/azure/group#az_group_create) command. A resource group is a logical container into which Azure resources are deployed and managed as a group. +Create an [Azure resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview) using the [az group create](/cli/azure/group#az_group_create) command. A resource group is a logical container into which Azure resources are deployed and managed as a group. 
The following example creates a resource group named `myresourcegroup` in the `westus` location. @@ -37,7 +37,7 @@ az group create --name myresourcegroup --location westus ``` ## Create an Azure Database for MySQL server -Create an Azure Database for MySQL server with the **az mysql server create** command. A server can manage multiple databases. Typically, a separate database is used for each project or for each user. +Create an Azure Database for MySQL server with the **[az mysql server create](/cli/azure/mysql/server#az_mysql_server_create)** command. A server can manage multiple databases. Typically, a separate database is used for each project or for each user. The following example creates an Azure Database for MySQL server located in `westus` in the resource group `myresourcegroup` with name `myserver4demo`. The server has an administrator log in named `myadmin` and password `Password01!`. The server is created with **Basic** performance tier and **50** compute units shared between all the databases in the server. You can scale compute and storage up or down depending on the application needs. @@ -46,7 +46,7 @@ az mysql server create --resource-group myresourcegroup --name myserver4demo --l ``` ## Configure firewall rule -Create an Azure Database for MySQL server-level firewall rule using the **az mysql server firewall-rule create** command. A server-level firewall rule allows an external application, such as the **mysql.exe** command-line tool or MySQL Workbench to connect to your server through the Azure MySQL service firewall. +Create an Azure Database for MySQL server-level firewall rule using the **[az mysql server firewall-rule create](/cli/azure/mysql/server/firewall-rule#az_mysql_server_firewall_rule_create)** command. A server-level firewall rule allows an external application, such as the **mysql.exe** command-line tool or MySQL Workbench to connect to your server through the Azure MySQL service firewall. The following example creates a firewall rule for a predefined address range, which in this example is the entire possible range of IP addresses. diff --git a/articles/mysql/scripts/sample-change-server-configuration.md b/articles/mysql/scripts/sample-change-server-configuration.md index 9f4828e5ca080..a226f187517b0 100644 --- a/articles/mysql/scripts/sample-change-server-configuration.md +++ b/articles/mysql/scripts/sample-change-server-configuration.md @@ -33,12 +33,12 @@ This script uses the following commands. Each command in the table links to comm | **Command** | **Notes** | |---|---| -| [az group create](/cli/azure/group#create) | Creates a resource group in which all resources are stored. | -| [az mysql server create](/cli/azure/mysql/server#create) | Creates a MySQL server that hosts the databases. | -| [az mysql server configuration list](/cli/azure/mysql/server/configuration#list) | List the configurations of an Azure Database for MySQL server. | -| [az mysql server configuration set](/cli/azure/mysql/server/configuration#set) | Update the configuration of an Azure Database for MySQL server. | -| [az mysql server configuration show](/cli/azure/mysql/server/configuration#show) | Show the configuration of an Azure Database for MySQL server. | -| [az group delete](/cli/azure/group#delete) | Deletes a resource group including all nested resources. | +| [az group create](/cli/azure/group#az_group_create) | Creates a resource group in which all resources are stored. 
| +| [az mysql server create](/cli/azure/mysql/server#az_mysql_server_create) | Creates a MySQL server that hosts the databases. | +| [az mysql server configuration list](/cli/azure/mysql/server/configuration#az_mysql_server_configuration_list) | List the configurations of an Azure Database for MySQL server. | +| [az mysql server configuration set](/cli/azure/mysql/server/configuration#az_mysql_server_configuration_set) | Update the configuration of an Azure Database for MySQL server. | +| [az mysql server configuration show](/cli/azure/mysql/server/configuration#az_mysql_server_configuration_show) | Show the configuration of an Azure Database for MySQL server. | +| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. | ## Next steps - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure/overview). diff --git a/articles/mysql/scripts/sample-create-server-and-firewall-rule.md b/articles/mysql/scripts/sample-create-server-and-firewall-rule.md index dd627de4fc83f..ff03fc1182591 100644 --- a/articles/mysql/scripts/sample-create-server-and-firewall-rule.md +++ b/articles/mysql/scripts/sample-create-server-and-firewall-rule.md @@ -33,10 +33,10 @@ This script uses the following commands. Each command in the table links to comm | **Command** | **Notes** | |---|---| -| [az group create](/cli/azure/group#create) | Creates a resource group in which all resources are stored. | -| [az mysql server create](/cli/azure/mysql/server#create) | Creates a MySQL server that hosts the databases. | -| [az mysql server firewall create](/cli/azure/mysql/server/firewall-rule#create) | Creates a firewall rule to allow access to the server and databases under it from the entered IP address range. | -| [az group delete](/cli/azure/group#delete) | Deletes a resource group including all nested resources. | +| [az group create](/cli/azure/group#az_group_create) | Creates a resource group in which all resources are stored. | +| [az mysql server create](/cli/azure/mysql/server#az_mysql_server_create) | Creates a MySQL server that hosts the databases. | +| [az mysql server firewall create](/cli/azure/mysql/server/firewall-rule#az_mysql_server_firewall_rule_create) | Creates a firewall rule to allow access to the server and databases under it from the entered IP address range. | +| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. | ## Next steps - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure/overview). diff --git a/articles/mysql/scripts/sample-scale-server.md b/articles/mysql/scripts/sample-scale-server.md index 713f0de71c30d..c02ed9b457f6d 100644 --- a/articles/mysql/scripts/sample-scale-server.md +++ b/articles/mysql/scripts/sample-scale-server.md @@ -33,10 +33,10 @@ This script uses the following commands. Each command in the table links to comm | **Command** | **Notes** | |---|---| -| [az group create](/cli/azure/group#create) | Creates a resource group in which all resources are stored. | -| [az mysql server create](/cli/azure/mysql/server#create) | Creates a MySQL server that hosts the databases. | -| [az monitor metrics list](/cli/azure/monitor/metrics#list) | List the metric value for the resources. | -| [az group delete](/cli/azure/group#delete) | Deletes a resource group including all nested resources. | +| [az group create](/cli/azure/group#az_group_create) | Creates a resource group in which all resources are stored.
| +| [az mysql server create](/cli/azure/mysql/server#az_mysql_server_create) | Creates a MySQL server that hosts the databases. | +| [az monitor metrics list](/cli/azure/monitor/metrics#az_monitor_metrics_list) | List the metric value for the resources. | +| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. | ## Next steps - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure/overview). diff --git a/articles/mysql/tutorial-design-database-using-cli.md b/articles/mysql/tutorial-design-database-using-cli.md index 9086f15e8cfc3..f1d6d0e559101 100644 --- a/articles/mysql/tutorial-design-database-using-cli.md +++ b/articles/mysql/tutorial-design-database-using-cli.md @@ -9,7 +9,7 @@ editor: jasonwhowell ms.service: mysql ms.devlang: azure-cli ms.topic: tutorial -ms.date: 11/03/2017 +ms.date: 11/28/2017 ms.custom: mvc --- @@ -32,7 +32,7 @@ You may use the Azure Cloud Shell in the browser, or [Install Azure CLI 2.0]( /c If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI 2.0]( /cli/azure/install-azure-cli). -If you have multiple subscriptions, choose the appropriate subscription in which the resource exists or is billed for. Select a specific subscription ID under your account using [az account set](/cli/azure/account#set) command. +If you have multiple subscriptions, choose the appropriate subscription in which the resource exists or is billed for. Select a specific subscription ID under your account using [az account set](/cli/azure/account#az_account_set) command. ```azurecli-interactive az account set --subscription 00000000-0000-0000-0000-000000000000 ``` @@ -52,9 +52,7 @@ Create an Azure Database for MySQL server with the az mysql server create comman The following example creates an Azure Database for MySQL server located in `westus` in the resource group `mycliresource` with name `mycliserver`. The server has an administrator log in named `myadmin` and password `Password01!`. The server is created with **Basic** performance tier and **50** compute units shared between all the databases in the server. You can scale compute and storage up or down depending on the application needs. ```azurecli-interactive -az mysql server create --resource-group mycliresource --name mycliserver ---location westus --user myadmin --password Password01! ---performance-tier Basic --compute-units 50 +az mysql server create --resource-group mycliresource --name mycliserver --location westus --admin-user myadmin --admin-password Password01! --performance-tier Basic --compute-units 50 ``` ## Configure firewall rule diff --git a/articles/postgresql/quickstart-create-server-database-azure-cli.md b/articles/postgresql/quickstart-create-server-database-azure-cli.md index b262651a59e4c..badcaa11c62bb 100644 --- a/articles/postgresql/quickstart-create-server-database-azure-cli.md +++ b/articles/postgresql/quickstart-create-server-database-azure-cli.md @@ -25,7 +25,7 @@ If you are running the CLI locally, you need to log in to your account using the az login ``` -If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. Select a specific subscription ID under your account using [az account set](/cli/azure/account#set) command. 
+If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. Select a specific subscription ID under your account using [az account set](/cli/azure/account#az_account_set) command. ```azurecli-interactive az account set --subscription 00000000-0000-0000-0000-000000000000 ``` @@ -39,7 +39,7 @@ az group create --name myresourcegroup --location westus ## Create an Azure Database for PostgreSQL server -Create an [Azure Database for PostgreSQL server](overview.md) using the [az postgres server create](/cli/azure/postgres/server#create) command. A server contains a group of databases managed as a group. +Create an [Azure Database for PostgreSQL server](overview.md) using the [az postgres server create](/cli/azure/postgres/server#az_postgres_server_create) command. A server contains a group of databases managed as a group. The following example creates a server named `mypgserver-20170401` in your resource group `myresourcegroup` with server admin login `mylogin`. The name of a server maps to DNS name and is thus required to be globally unique in Azure. Substitute the `` with your own value. ```azurecli-interactive diff --git a/articles/search/search-faq-frequently-asked-questions.md b/articles/search/search-faq-frequently-asked-questions.md index 3bc35763feafc..1bbdfe74bcec5 100644 --- a/articles/search/search-faq-frequently-asked-questions.md +++ b/articles/search/search-faq-frequently-asked-questions.md @@ -27,7 +27,7 @@ Azure Search supports multiple data sources, [linguistic analysis for many langu When comparing search technologies, customers frequently ask for specifics on how Azure Search compares with Elasticsearch. Customers who choose Azure Search over Elasticsearch for their search application projects typically do so because we've made a key task easier or they need the built-in integration with other Microsoft technologies: + Azure Search is a fully-managed cloud service with 99.9% service level agreements (SLA) when provisioned with sufficient redundancy (2 replicas for read access, 3 replicas for read-write). -+ Microsoft's [Natural language processors](https://docs.microsoft.com/rest/api/searchservice/language-support) offer leading edge inguistic analysis. ++ Microsoft's [Natural language processors](https://docs.microsoft.com/rest/api/searchservice/language-support) offer leading edge linguistic analysis. + [Azure Search indexers](search-indexer-overview.md) can crawl a variety of Azure data sources for initial and incremental indexing. + If you need rapid response to fluctuations in query or indexing volumes, you can use [slider controls](search-manage.md#scale-up-or-down) in the Azure portal, or run a [PowerShell script](search-manage-powershell.md), bypassing shard management directly. + [Scoring and tuning features](https://docs.microsoft.com/rest/api/searchservice/add-scoring-profiles-to-a-search-index) provide the means for influencing search rank scores beyond what the search engine alone can provide. @@ -86,4 +86,4 @@ Is your question about a missing feature or functionality? 
Request the feature o [How full text search works in Azure Search](search-lucene-query-architecture.md) [What is Azure Search?](search-what-is-azure-search.md) - \ No newline at end of file + diff --git a/articles/security/blueprints/payment-processing-blueprint.md b/articles/security/blueprints/payment-processing-blueprint.md index 504487e14e518..f4556ab9d8683 100644 --- a/articles/security/blueprints/payment-processing-blueprint.md +++ b/articles/security/blueprints/payment-processing-blueprint.md @@ -14,19 +14,21 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: na ms.workload: na -ms.date: 11/15/2017 +ms.date: 11/29/2017 ms.author: frasim --- -# Payment Processing Blueprint for PCI DSS-compliant environments +# Azure Blueprint Automation: Payment Processing for PCI DSS-compliant environments -The Payment Processing Blueprint for PCI DSS-Compliant Environments provides guidance for the deployment of a PCI DSS-compliant Platform-as-a-Service (PaaS) environment suitable for handling sensitive payment card data. It showcases a common reference architecture and is designed to simplify adoption of Microsoft Azure. This foundational architecture illustrates an end-to-end solution to meet the needs of organizations seeking a cloud-based approach to reducing the burden and cost of deployment. +## Overview -This foundational architecture meets the requirements of stringent Payment Card Industry Data Security Standards (PCI DSS 3.2) for the collection, storage, and retrieval of payment card data. It demonstrates the proper handling of credit card data (including card number, expiration, and verification data) in a secure, compliant multi-tier environment deployed as an end-to-end Azure-based solution. For more information about PCI DSS 3.2 requirements and this solution, see [PCI DSS Requirements - High-Level Overview](pci-dss-requirements-overview.md). +The Payment Processing for PCI DSS-Compliant Environments provides guidance for the deployment of a PCI DSS-compliant Platform-as-a-Service (PaaS) environment suitable for handling sensitive payment card data. It showcases a common reference architecture and is designed to simplify adoption of Microsoft Azure. This blueprint illustrates an end-to-end solution to meet the needs of organizations seeking a cloud-based approach to reducing the burden and cost of deployment. -This architecture is intended to serve as a foundation for customers to adjust to their specific requirements, and should not be used as-is in a production environment. Deploying an application into this environment without modification is not sufficient to completely meet the requirements of a PCI DSS-compliant solution. Please note the following: -- This foundational architecture provides a baseline to help customers use Microsoft Azure in a PCI DSS-compliant manner. +This blueprint is designed to help meet the requirements of stringent Payment Card Industry Data Security Standards (PCI DSS 3.2) for the collection, storage, and retrieval of payment card data. It demonstrates the proper handling of credit card data (including card number, expiration, and verification data) in a secure, compliant multi-tier environment deployed as an end-to-end Azure-based PaaS solution. For more information about PCI DSS 3.2 requirements and this solution, see [PCI DSS Requirements - High-Level Overview](pci-dss-requirements-overview.md). 
+ +This blueprint is intended to serve as a foundation for customers to better understand the specific requirements, and should not be used as-is in a production environment. Deploying an application into this environment without modification is not sufficient to completely meet the requirements of a PCI DSS-compliant custom solution. Please note the following: -- This foundational architecture provides a baseline to help customers use Microsoft Azure in a PCI DSS-compliant manner. +- This blueprint provides a baseline to help customers use Microsoft Azure in a PCI DSS-compliant manner. - Achieving PCI DSS-compliance requires that an accredited Qualified Security Assessor (QSA) certify a production customer solution. - Customers are responsible for conducting appropriate security and compliance reviews of any solution built using this foundational architecture, as requirements may vary based on the specifics of each customer’s implementation and geography. @@ -40,7 +42,7 @@ The foundational architecture is comprised of the following components: - **Deployment templates**. In this deployment, [Azure Resource Manager templates](/azure/azure-resource-manager/resource-group-overview#template-deployment) are used to automatically deploy the components of the architecture into Microsoft Azure by specifying configuration parameters during setup. - **Automated deployment scripts**. These scripts help deploy the end-to-end solution. The scripts consist of: - A module installation and [global administrator](/azure/active-directory/active-directory-assign-admin-roles-azure-portal) setup script is used to install and verify that required PowerShell modules and global administrator roles are configured correctly. - - An installation PowerShell script is used to deploy the end-to-end solution, provided via a .zip file and a .bacpac file that contain a pre-built demo web application with SQL database sample content. The source code for this solution is available for review [here](https://github.com/Microsoft/azure-sql-security-sample). + - An installation PowerShell script is used to deploy the end-to-end solution, provided via a .zip file and a .bacpac file that contain a pre-built demo web application with [SQL database sample](https://github.com/Microsoft/azure-sql-security-sample) content. The source code for this solution is available for review in the [Payment Processing Blueprint code repository][code-repo]. ## Architectural diagram @@ -48,9 +50,9 @@ The foundational architecture is comprised of the following components: ## User scenario -The foundational architecture addresses the use case below. +The blueprint addresses the use case below. -> This scenario illustrates how a fictitious webstore moved their payment card processing to an Azure-based solution. The solution handles collection of basic user information including payment data. The solution does not process payments with this cardholder data; once the data is collected, customers are responsible for initiating and completing transactions with a payment processor. For more information, see the "Review and Guidance for Implementation" document at the [Microsoft Service Trust Portal](http://aka.ms/stp). +> This scenario illustrates how a fictitious webstore moved their payment card processing to an Azure-based PaaS solution. The solution handles collection of basic user information including payment data. The solution does not process payments with this cardholder data; once the data is collected, customers are responsible for initiating and completing transactions with a payment processor.
For more information, see the ["Review and Guidance for Implementation"](https://aka.ms/pciblueprintprocessingoverview). ### Use case A small webstore called *Contoso Webstore* is ready to move their payment system to the cloud. They have selected Microsoft Azure to host the process for purchasing and to allow a clerk to collect credit card payments from their customers. @@ -75,9 +77,9 @@ User roles used to illustrate the use case, and provide insight into the user in | Name: |`Global Admin Azure PCI Samples`| |User type:| `Subscription Administrator and Azure Active Directory Global Administrator`| -* The admin account cannot read credit card information unmasked. All actions are logged. -* The admin account cannot manage or log into SQL Database. -* The admin account can manage Active Directory and subscription. +- The admin account cannot read credit card information unmasked. All actions are logged. +- The admin account cannot manage or log into SQL Database. +- The admin account can manage Active Directory and subscription. #### Role: SQL administrator @@ -89,8 +91,8 @@ User roles used to illustrate the use case, and provide insight into the user in |Last name: |`PCI Samples`| |User type:| `Administrator`| -* The sqladmin account cannot view unfiltered credit card information. All actions are logged. -* The sqladmin account can manage SQL database. +- The sqladmin account cannot view unfiltered credit card information. All actions are logged. +- The sqladmin account can manage SQL database. #### Role: Clerk @@ -112,13 +114,13 @@ Edna Benson is the receptionist and business manager. She is responsible for ens ### Contoso Webstore - Estimated pricing -This foundational architecture and example web application have a monthly fee structure and a usage cost per hour which must be considered when sizing the solution. These costs can be estimated using the [Azure costing calculator](https://azure.microsoft.com/pricing/calculator/). As of September 2017, the estimated monthly cost for this solution is ~$900. These costs will vary based on the usage amount and are subject to change. It is incumbent on the customer to calculate their estimated monthly costs at the time of deployment for a more accurate estimate. +This foundational architecture and example web application have a monthly fee structure and a usage cost per hour which must be considered when sizing the solution. These costs can be estimated using the [Azure costing calculator](https://azure.microsoft.com/pricing/calculator/). As of September 2017, the estimated monthly cost for this solution is ~$2500 this includes a $1000/mo usage charge for ASE v2. These costs will vary based on the usage amount and are subject to change. It is incumbent on the customer to calculate their estimated monthly costs at the time of deployment for a more accurate estimate. This solution used the following Azure services. Details of the deployment architecture are located in the [Deployment Architecture](#deployment-architecture) section. >- Application Gateway >- Azure Active Directory ->- App Service Environment +>- App Service Environment v2 >- OMS Log Analytics >- Azure Key Vault >- Network Security Groups @@ -234,7 +236,7 @@ To learn more about using the security features of Azure SQL Database, see the [ [Azure App Service](/azure/app-service/) is a managed service for deploying web apps. The Contoso Webstore application is deployed as an [App Service Web App](/azure/app-service-web/app-service-web-overview). 
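The blueprint provisions its web app automatically through the Resource Manager templates and installation scripts described earlier, so nothing needs to be created by hand. Purely for orientation, a standalone Web App (outside an ASE) could be created with the Azure CLI roughly as shown below; the resource group, plan, and app names are placeholders.

```azurecli-interactive
# Create an App Service plan and an empty Web App (placeholder names).
# The blueprint's own deployment scripts create the real application inside the ASE.
az appservice plan create --resource-group myresourcegroup --name contoso-demo-plan --sku S1
az webapp create --resource-group myresourcegroup --plan contoso-demo-plan --name contoso-webstore-demo
```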
-[Azure App Service Environment (ASE)](/azure/app-service/app-service-environment/intro) is an App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale. it is a Premium service plan used by this foundational architecture to enable PCI DSS compliance. +[Azure App Service Environment (ASE v2)](/azure/app-service/app-service-environment/intro) is an App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale. it is a Premium service plan used by this foundational architecture to enable PCI DSS compliance. ASEs are isolated to running only a single customer's applications, and are always deployed into a virtual network. Customers have fine-grained control over both inbound and outbound application network traffic, and applications can establish high-speed secure connections over virtual networks to on-premises corporate resources. @@ -283,7 +285,7 @@ Use [Application Insights](https://azure.microsoft.com/services/application-insi #### OMS solutions -The following OMS solutions are pre-installed as part of the foundational architecture: +These additional OMS solutions should be considered and configured: - [Activity Log Analytics](/azure/monitoring-and-diagnostics/monitoring-overview-activity-logs) - [Azure Networking Analytics](/azure/log-analytics/log-analytics-azure-networking-analytics?toc=%2fazure%2foperations-management-suite%2ftoc.json) - [Azure SQL Analytics](/azure/log-analytics/log-analytics-azure-sql) @@ -339,7 +341,7 @@ It is highly recommended that a clean installation of PowerShell be used to depl For detailed usage instructions, see [Script Instructions - Deploy and Configure Azure Resources](https://github.com/Azure/pci-paas-webapp-ase-sqldb-appgateway-keyvault-oms/blob/master/1-DeployAndConfigureAzureResources.md). -3. OMS logging and monitoring. Once the solution is deployed, a [Microsoft Operations Management Suite (OMS)](/azure/operations-management-suite/operations-management-suite-overview) workspace can be opened, and the sample templates provided in the solution repository can be used to illustrate how a monitoring dashboard can be configured. For the sample OMS templates refer to the [omsDashboards folder](https://github.com/Azure/pci-paas-webapp-ase-sqldb-appgateway-keyvault-oms/blob/master/1-DeployAndConfigureAzureResources.md). +3. OMS logging and monitoring. Once the solution is deployed, a [Microsoft Operations Management Suite (OMS)](/azure/operations-management-suite/operations-management-suite-overview) workspace can be opened, and the sample templates provided in the solution repository can be used to illustrate how a monitoring dashboard can be configured. For the sample OMS templates refer to the [omsDashboards folder](https://github.com/Azure/pci-paas-webapp-ase-sqldb-appgateway-keyvault-oms/blob/master/1-DeployAndConfigureAzureResources.md). Note that data must be collected in OMS for templates to deploy correctly. This can take up to an hour or more depending on site activity. When setting up your OMS logging, consider including these resources: @@ -356,11 +358,11 @@ It is highly recommended that a clean installation of PowerShell be used to depl ## Threat model -A data flow diagram (DFD) and sample threat model for the Contoso Webstore are available in the Documents section of the [code repository][code-repo]. 
+A data flow diagram (DFD) and sample threat model for the Contoso Webstore are available in the [Payment Processing Blueprint Threat Model](https://aka.ms/pciblueprintthreatmodel). ![](images/pci-threat-model.png) -For more information, see the [PCI Blueprint Threat Model](https://aka.ms/pciblueprintthreatmodel). + ## Customer responsibility matrix @@ -377,7 +379,10 @@ The solution was reviewed by Coalfire systems, Inc. (PCI-DSS Qualified Security - This document is for informational purposes only. MICROSOFT AND AVYAN MAKE NO WARRANTIES, EXPRESS, IMPLIED, OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT. This document is provided “as-is.” Information and views expressed in this document, including URL and other Internet website references, may change without notice. Customers reading this document bear the risk of using it. - This document does not provide customers with any legal rights to any intellectual property in any Microsoft or Avyan product or solutions. - Customers may copy and use this document for internal reference purposes. -- Certain recommendations in this paper may result in increased data, network, or compute resource usage in Azure, and may increase a customer’s Azure license or subscription costs. + + > [!NOTE] + > Certain recommendations in this paper may result in increased data, network, or compute resource usage in Azure, and may increase a customer’s Azure license or subscription costs. + - The solution in this document is intended as a foundational architecture and must not be used as-is for production purposes. Achieving PCI compliance requires that customers consult with their Qualified Security Assessor. - All customer names, transaction records, and any related data on this page are fictitious, created for the purpose of this foundational architecture and provided for illustration only. No real association or connection is intended, and none should be inferred. - This solution was developed jointly by Microsoft and Avyan Consulting, and is available under the [MIT License](https://opensource.org/licenses/MIT). @@ -385,8 +390,8 @@ The solution was reviewed by Coalfire systems, Inc. (PCI-DSS Qualified Security ### Document authors -* *Frank Simorjay (Microsoft)* -* *Gururaj Pandurangi (Avyan Consulting)* +- *Frank Simorjay (Microsoft)* +- *Gururaj Pandurangi (Avyan Consulting)* [code-repo]: https://github.com/Azure/pci-paas-webapp-ase-sqldb-appgateway-keyvault-oms "Code Repository" diff --git a/articles/service-fabric/TOC.md b/articles/service-fabric/TOC.md index 024a433f8ebd9..615f0d9ac078e 100644 --- a/articles/service-fabric/TOC.md +++ b/articles/service-fabric/TOC.md @@ -30,6 +30,7 @@ #### [1b- Create a Linux cluster](service-fabric-tutorial-create-vnet-and-linux-cluster.md) ### [2- Scale the cluster](service-fabric-tutorial-scale-cluster.md) ### [3- Deploy API Management with Service Fabric](service-fabric-tutorial-deploy-api-management.md) +### [4- Upgrade the cluster runtime](service-fabric-tutorial-upgrade-cluster.md) # Samples diff --git a/articles/service-fabric/scripts/service-fabric-powershell-add-nsg-rule.md b/articles/service-fabric/scripts/service-fabric-powershell-add-nsg-rule.md new file mode 100644 index 0000000000000..1a6f8c2b95a4b --- /dev/null +++ b/articles/service-fabric/scripts/service-fabric-powershell-add-nsg-rule.md @@ -0,0 +1,44 @@ +--- +title: Azure PowerShell Script Sample - Add a network security group rule | Microsoft Docs +description: Azure PowerShell Script Sample - Adds a network security group to allow inbound traffic on a specific port.
+services: service-fabric +documentationcenter: +author: rwike77 +manager: timlt +editor: +tags: azure-service-management + +ms.assetid: +ms.service: service-fabric +ms.workload: multiple +ms.devlang: na +ms.topic: sample +ms.date: 11/28/2017 +ms.author: ryanwi +ms.custom: mvc +--- + +# Add an inbound network security group rule + +This sample script creates a network security group rule to allow inbound traffic on port 8081. The script gets the `Microsoft.Network/networkSecurityGroups` resource that the cluster is located in, creates a new network security configuration rule, and updates the network security group. Customize the parameters as needed. + +If needed, install the Azure PowerShell using the instructions found in the [Azure PowerShell guide](/powershell/azure/overview). + +## Sample script + +[!code-powershell[main](../../../powershell_scripts/service-fabric/add-inbound-nsg-rule/add-inbound-nsg-rule.ps1 "Update the RDP port range values")] + +## Script explanation + +This script uses the following commands. Each command in the table links to command specific documentation. + +| Command | Notes | +|---|---| +| [Get-AzureRmResource](/powershell/module/azurerm.resources/get-azurermresource) | Gets the `Microsoft.Network/networkSecurityGroups` resource. | +|[Get-AzureRmNetworkSecurityGroup](/powershell/module/azurerm.network/get-azurermnetworksecuritygroup)| Gets the network security group by name.| +|[Add-AzureRmNetworkSecurityRuleConfig](/powershell/module/azurerm.network/add-azurermnetworksecurityruleconfig)| Adds a network security rule configuration to a network security group. | +|[Set-AzureRmNetworkSecurityGroup](/powershell/module/azurerm.network/set-azurermnetworksecuritygroup)| Sets the goal state for a network security group.| + +## Next steps + +For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/overview). diff --git a/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-port-range.md b/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-port-range.md index f0c4f5c83faa2..3571d4784d948 100644 --- a/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-port-range.md +++ b/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-port-range.md @@ -1,5 +1,5 @@ --- -title: Azure PowerShell Script Sample - Change the RDP port range. | Microsoft Docs +title: Azure PowerShell Script Sample - Change the RDP port range | Microsoft Docs description: Azure PowerShell Script Sample - Changes the RDP port range of a deployed cluster. services: service-fabric documentationcenter: diff --git a/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-user-and-pw.md b/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-user-and-pw.md index 88703cdad6077..95dc2b31122a3 100644 --- a/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-user-and-pw.md +++ b/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-user-and-pw.md @@ -1,5 +1,5 @@ --- -title: Azure PowerShell Script Sample - Update the RDP username and password| Microsoft Docs +title: Azure PowerShell Script Sample - Update the RDP username and password | Microsoft Docs description: Azure PowerShell Script Sample - Update the RDP username and password for all Service Fabric cluster nodes of a specific node type. 
services: service-fabric documentationcenter: diff --git a/articles/service-fabric/scripts/service-fabric-powershell-open-port-in-load-balancer.md b/articles/service-fabric/scripts/service-fabric-powershell-open-port-in-load-balancer.md index 4807b94ff944c..3cfa38be7e803 100644 --- a/articles/service-fabric/scripts/service-fabric-powershell-open-port-in-load-balancer.md +++ b/articles/service-fabric/scripts/service-fabric-powershell-open-port-in-load-balancer.md @@ -1,5 +1,5 @@ --- -title: Azure PowerShell Script Sample - Open application port in load balancer| Microsoft Docs +title: Azure PowerShell Script Sample - Open application port in load balancer | Microsoft Docs description: Azure PowerShell Script Sample - Open a port in the Azure load balancer for a Service Fabric application. services: service-fabric documentationcenter: diff --git a/articles/service-fabric/service-fabric-cicd-your-linux-applications-with-jenkins.md b/articles/service-fabric/service-fabric-cicd-your-linux-applications-with-jenkins.md index b8f0a6854fd16..fb0f78ea2ec7a 100644 --- a/articles/service-fabric/service-fabric-cicd-your-linux-applications-with-jenkins.md +++ b/articles/service-fabric/service-fabric-cicd-your-linux-applications-with-jenkins.md @@ -22,7 +22,7 @@ Jenkins is a popular tool for continuous integration and deployment of your apps ## General prerequisites - Have Git installed locally. You can install the appropriate Git version from [the Git downloads page](https://git-scm.com/downloads), based on your operating system. If you are new to Git, learn more about it from the [Git documentation](https://git-scm.com/docs). -- Have the Service Fabric Jenkins plug-in handy. You can download it from [Service Fabric downloads](https://servicefabricdownloads.blob.core.windows.net/jenkins/serviceFabric.hpi). +- Have the Service Fabric Jenkins plug-in handy. You can download it from [Service Fabric downloads](https://servicefabricdownloads.blob.core.windows.net/jenkins/serviceFabric.hpi). If you are using edge browser rename the extension of downloaded file from .zip to .hpi. ## Set up Jenkins inside a Service Fabric cluster @@ -127,8 +127,8 @@ You need to have Docker installed. The following commands can be used to install Now when you run ``docker info`` in the terminal, you should see in the output that the Docker service is running. ### Steps - 1. Pull the Service Fabric Jenkins container image: ``docker pull sayantancs/jenkins:v9`` - 2. Run the container image: ``docker run -itd -p 8080:8080 sayantancs/jenkins:v9`` + 1. Pull the Service Fabric Jenkins container image: ``docker pull rapatchi/jenkins:v9`` + 2. Run the container image: ``docker run -itd -p 8080:8080 rapatchi/jenkins:v9`` 3. Get the ID of the container image instance. You can list all the Docker containers with the command ``docker ps –a`` 4. Sign in to the Jenkins portal by using the following steps: diff --git a/articles/service-fabric/service-fabric-powershell-samples.md b/articles/service-fabric/service-fabric-powershell-samples.md index c012cf1a17151..fab9f7f62fc58 100644 --- a/articles/service-fabric/service-fabric-powershell-samples.md +++ b/articles/service-fabric/service-fabric-powershell-samples.md @@ -28,12 +28,13 @@ The following table includes links to PowerShell scripts samples that create and |-|-| | **Create cluster** || | [Create a cluster (Azure)](./scripts/service-fabric-powershell-create-secure-cluster-cert.md)| Creates an Azure Service Fabric cluster. 
| -| **Manage cluster and nodes** || +| **Manage cluster, nodes, and infrastructure** || | [Add an application certificate](./scripts/service-fabric-powershell-add-application-certificate.md)| Adds an application X.509 certificate to all nodes in a cluster. | -|[Change the RDP port range on cluster node VMs](./scripts/service-fabric-powershell-change-rdp-port-range.md)|Changes the RDP port range on cluster node VMs in a deployed cluster.| +| [Update the RDP port range on cluster VMs](./scripts/service-fabric-powershell-change-rdp-port-range.md)|Changes the RDP port range on cluster node VMs in a deployed cluster.| | [Update the admin user and password for cluster node VMs](./scripts/service-fabric-powershell-change-rdp-user-and-pw.md) | Updates the admin username and password for cluster node VMs. | +| [Open a port in the load balancer](./scripts/service-fabric-powershell-open-port-in-load-balancer.md) | Open an application port in the Azure load balancer to allow inbound traffic on a specific port. | +| [Create an inbound network security group rule](./scripts/service-fabric-powershell-add-nsg-rule.md) | Create an inbound network security group rule to allow inbound traffic to the cluster on a specific port. | | **Manage applications** || | [Deploy an application](./scripts/service-fabric-powershell-deploy-application.md)| Deploy an application to a cluster.| -| [Upgrade an application](./scripts/service-fabric-powershell-upgrade-application.md)| Upgrade an application | +| [Upgrade an application](./scripts/service-fabric-powershell-upgrade-application.md)| Upgrade an application.| | [Remove an application](./scripts/service-fabric-powershell-remove-application.md)| Remove an application from a cluster.| -| [Open a port in the load balancer](./scripts/service-fabric-powershell-open-port-in-load-balancer.md) | Open an application port in the Azure load balancer. | diff --git a/articles/service-fabric/service-fabric-reliable-actors-get-started.md b/articles/service-fabric/service-fabric-reliable-actors-get-started.md index 1e1d5ce4bd74e..30ec479c058a3 100644 --- a/articles/service-fabric/service-fabric-reliable-actors-get-started.md +++ b/articles/service-fabric/service-fabric-reliable-actors-get-started.md @@ -98,7 +98,7 @@ Create a simple console application to call the actor service. ![Add New Project dialog][6] > [!NOTE] - > A console application is not the type of app you would typically use as a client in Service Fabric, but it makes a convenient example for debugging and testing using the local Service Fabric emulator. + > A console application is not the type of app you would typically use as a client in Service Fabric, but it makes a convenient example for debugging and testing using the local Service Fabric cluster. 3. The console application must be a 64-bit application to maintain compatibility with the interface project and other dependencies. In Solution Explorer, right-click the **ActorClient** project, and then click **Properties**. On the **Build** tab, set **Platform target** to **x64**. 
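+
+As a quick sanity check before running the **ActorClient** console app, you can confirm that the actor application is deployed and visible on the local cluster. The following is only a sketch, assuming the Service Fabric SDK's PowerShell module is installed on the development machine; `fabric:/MyActorApp` is a placeholder name rather than the sample's actual application name.
+
+```powershell
+# Minimal sketch: verify the local cluster and the deployed actor application
+# before starting the ActorClient console app. Assumes the Service Fabric SDK
+# PowerShell module is installed; "fabric:/MyActorApp" is a placeholder name.
+Connect-ServiceFabricCluster        # with no parameters, connects to the local cluster
+
+# List deployed applications, or query the one you expect to be present.
+Get-ServiceFabricApplication
+Get-ServiceFabricApplication -ApplicationName "fabric:/MyActorApp"
+```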
diff --git a/articles/service-fabric/service-fabric-support.md b/articles/service-fabric/service-fabric-support.md index a55f50f142395..acef3eb929bc5 100644 --- a/articles/service-fabric/service-fabric-support.md +++ b/articles/service-fabric/service-fabric-support.md @@ -13,7 +13,7 @@ ms.devlang: dotnet ms.topic: article ms.tgt_pltfrm: NA ms.workload: NA -ms.date: 10/12/2017 +ms.date: 11/22/2017 ms.author: pkc --- @@ -66,15 +66,15 @@ Refer to the following documents on details on how to keep your cluster running Here are the list of the Service Fabric versions that are supported and their support end dates. -| **Service Fabric runtime cluster** | **Compatible SDK / NuGet Package Versions** | **End of Support Date** | -| --- | --- | --- | -| All cluster versions prior to 5.3.121 |Less than or equal to version 2.3 |January 20, 2017 | -| 5.3.* |Less than or equal to version 2.3 |February 24, 2017 | -| 5.4.* |Less than or equal to version 2.4 |May 10,2017 | -| 5.5.* |Less than or equal to version 2.5 |August 10,2017 | -| 5.6.* |Less than or equal to version 2.6 |October 13,2017 | -| 5.7.* |Less than or equal to version 2.7 |December 15,2017 | -| 6.0.* |Less than or equal to version 2.8 |Current version and so no end date +| **Service Fabric runtime in the cluster** | **Can upgrade directly from cluster version** |**Compatible SDK / NuGet Package Versions** | **End of Support Date** | +| --- | --- |--- | --- | +| All cluster versions prior to 5.3.121 | 5.1.158* |Less than or equal to version 2.3 |January 20, 2017 | +| 5.3.* | 5.1.158.* |Less than or equal to version 2.3 |February 24, 2017 | +| 5.4.* | 5.1.158.* |Less than or equal to version 2.4 |May 10,2017 | +| 5.5.* | 5.4.164.* |Less than or equal to version 2.5 |August 10,2017 | +| 5.6.* | 5.4.164.* |Less than or equal to version 2.6 |October 13,2017 | +| 5.7.* | 5.4.164.* |Less than or equal to version 2.7 |December 15,2017 | +| 6.0.* | 5.6.205.* |Less than or equal to version 2.8 |Current version and so no end date | ## Service Fabric Preview Versions - unsupported for production use. From time to time, we release versions that have significant features we want feedback on, which are released as previews. These preview versions should only be used for test purposes. Your production cluster should always be running a supported, stable, Service Fabric version. A preview version always begins with a major and minor version number of 255. For example, if you see a Service Fabric version 255.255.5703.949, that release version is only to be used in test clusters and is in preview. These preview releases are also announced on the [Service Fabric team blog](https://blogs.msdn.microsoft.com/azureservicefabric) and will have details on the features included. diff --git a/articles/service-fabric/service-fabric-tutorial-create-container-images.md b/articles/service-fabric/service-fabric-tutorial-create-container-images.md index 560281a23d1d8..393088f317c9f 100644 --- a/articles/service-fabric/service-fabric-tutorial-create-container-images.md +++ b/articles/service-fabric/service-fabric-tutorial-create-container-images.md @@ -50,9 +50,9 @@ The sample application used in this tutorial is a voting app. The application co Use git to download a copy of the application to your development environment. 
```bash -git clone https://github.com/Azure-Samples/service-fabric-dotnet-containers.git +git clone https://github.com/Azure-Samples/service-fabric-containers.git -cd service-fabric-dotnet-containers/Linux/container-tutorial/ +cd service-fabric-containers/Linux/container-tutorial/ ``` The 'container-tutorial' directory contains a folder named 'azure-vote'. This 'azure-vote' folder contains the front-end source code and a Dockerfile to build the front-end. The 'container-tutorial' directory also contains the 'redis' directory which has the Dockerfile to build the redis image. These directories contain the necessary assets for this tutorial set. diff --git a/articles/service-fabric/service-fabric-tutorial-deploy-api-management.md b/articles/service-fabric/service-fabric-tutorial-deploy-api-management.md index d8dacad5c8b89..e0fa628d21eab 100644 --- a/articles/service-fabric/service-fabric-tutorial-deploy-api-management.md +++ b/articles/service-fabric/service-fabric-tutorial-deploy-api-management.md @@ -325,7 +325,7 @@ ResourceGroupName="sfclustertutorialgroup" az group delete --name $ResourceGroupName ``` -## Conclusion +## Next steps In this tutorial, you learned how to: > [!div class="checklist"] @@ -335,6 +335,10 @@ In this tutorial, you learned how to: > * Configure a backend policy > * Add the API to a product +Next, advance to the following tutorial to learn how to upgrade the cluster runtime. +> [!div class="nextstepaction"] +> [Upgrade the Azure Service Fabric cluster runtime](service-fabric-tutorial-upgrade-cluster.md) + [azure-powershell]: https://azure.microsoft.com/documentation/articles/powershell-install-configure/ [apim-arm]:https://github.com/Azure-Samples/service-fabric-api-management/blob/master/apim.json diff --git a/articles/service-fabric/service-fabric-tutorial-upgrade-cluster.md b/articles/service-fabric/service-fabric-tutorial-upgrade-cluster.md new file mode 100644 index 0000000000000..8cba5c38403ea --- /dev/null +++ b/articles/service-fabric/service-fabric-tutorial-upgrade-cluster.md @@ -0,0 +1,190 @@ +--- +title: Upgrade Azure Service Fabric runtime | Microsoft Docs +description: Learn how to use PowerShell to upgrade the runtime of an Azure-hosted Service Fabric cluster. +services: service-fabric +documentationcenter: .net +author: Thraka +manager: timlt +editor: '' + +ms.assetid: +ms.service: service-fabric +ms.devlang: dotNet +ms.topic: tutorial +ms.tgt_pltfrm: NA +ms.workload: NA +ms.date: 11/28/2017 +ms.author: adegeo + +--- + +# Upgrade the runtime of a Service Fabric cluster + +This tutorial is part four of a series, and shows you how to upgrade the Service Fabric runtime on an Azure Service Fabric cluster. This tutorial part is written for Service Fabric clusters running on Azure and does not apply to self-hosted Service Fabric clusters. + +> [!WARNING] +> This part of the tutorial requires PowerShell. Support for upgrading the cluster runtime is not yet supported by the Azure CLI tools. Alternatively, a cluster can be upgraded in the portal. For more information, see [Upgrade an Azure Service Fabric cluster](service-fabric-cluster-upgrade.md). + +If your cluster is already running the latest Service Fabric runtime, you do not need to do this step. However, this article can be used to install any supported runtime on an Azure Service Fabric cluster. 
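+
+A quick way to tell whether an upgrade is needed is to list the runtime version of the clusters in your subscription up front. This is a sketch of the same cmdlet that is covered in more detail later in this article, and it assumes you have already signed in with `Login-AzureRmAccount`.
+
+```powershell
+# Quick check (covered in detail later in this article): list each cluster's runtime version.
+Get-AzureRmServiceFabricCluster | Select-Object Name, ClusterCodeVersion
+```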
+ +In this tutorial, you learn how to: + +> [!div class="checklist"] +> * Read the cluster version +> * Set the cluster version + +## Prerequisites +Before you begin this tutorial: +- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) +- Install the [Azure PowerShell module version 4.1 or higher](https://docs.microsoft.com/powershell/azure/install-azurerm-ps) or [Azure CLI 2.0](/cli/azure/install-azure-cli). +- Create a secure [Windows cluster](service-fabric-tutorial-create-vnet-and-windows-cluster.md) or [Linux cluster](service-fabric-tutorial-create-vnet-and-linux-cluster.md) on Azure +- If you deploy a Windows cluster, set up a Windows development environment. Install [Visual Studio 2017](http://www.visualstudio.com) and the **Azure development**, **ASP.NET and web development**, and **.NET Core cross-platform development** workloads. Then set up a [.NET development environment](service-fabric-get-started.md). +- If you deploy a Linux cluster, set up a Java development environment on [Linux](service-fabric-get-started-linux.md) or [MacOS](service-fabric-get-started-mac.md). Install the [Service Fabric CLI](service-fabric-cli.md). + +### Sign in to Azure +Sign in to your Azure account and select your subscription before you execute Azure commands. + +```powershell +Login-AzureRmAccount +Get-AzureRmSubscription +Set-AzureRmContext -SubscriptionId +``` + +## Get the runtime version + +After you have connected to Azure and selected the subscription containing the Service Fabric cluster, you can get the runtime version of the cluster. + +```powershell +Get-AzureRmServiceFabricCluster -ResourceGroupName SFCLUSTERTUTORIALGROUP -Name aztestcluster ` + | Select-Object ClusterCodeVersion +``` + +Or, just get a list of all clusters in your subscription with the following: + +```powershell +Get-AzureRmServiceFabricCluster | Select-Object Name, ClusterCodeVersion +``` + +Note the **ClusterCodeVersion** value. This value will be used in the next section. + +## Upgrade the runtime + +Use the value of **ClusterCodeVersion** from the previous section with the `Get-ServiceFabricRuntimeUpgradeVersion` cmdlet to discover what versions are available to upgrade to. This cmdlet can only be run from a computer connected to the internet. For example, if you wanted to see what runtime versions you could upgrade to from version `5.7.198.9494`, use the following command: + +```powershell +Get-ServiceFabricRuntimeUpgradeVersion -BaseVersion "5.7.198.9494" +``` + +With a list of versions, you can tell the Azure Service Fabric cluster to upgrade to a newer runtime. For example, if version `6.0.219.9494` is available to upgrade to, use the following command to upgrade your cluster. + +```powershell +Set-AzureRmServiceFabricUpgradeType -ResourceGroupName SFCLUSTERTUTORIALGROUP ` + -Name aztestcluster ` + -UpgradeMode Manual ` + -Version "6.0.219.9494" +``` + +> [!IMPORTANT] +> The cluster runtime upgrade may take a long time to complete. PowerShell is blocked while the upgrade is running. You can use another PowerShell session to check the status of the upgrade. + +The status of the upgrade can be monitored with either PowerShell or the `sfctl` CLI. + +First connect to the cluster with the SSL certificate created in the first part of the tutorial. Use the `Connect-ServiceFabricCluster` cmdlet or the `sfctl cluster select` command.
+ +```powershell +$endpoint = ".southcentralus.cloudapp.azure.com:19000" +$thumbprint = "63EB5BA4BC2A3BADC42CA6F93D6F45E5AD98A1E4" + +Connect-ServiceFabricCluster -ConnectionEndpoint $endpoint ` + -KeepAliveIntervalInSec 10 ` + -X509Credential -ServerCertThumbprint $thumbprint ` + -FindType FindByThumbprint -FindValue $thumbprint ` + -StoreLocation CurrentUser -StoreName My +``` + +```azurecli +sfctl cluster select --endpoint https://aztestcluster.southcentralus.cloudapp.azure.com:19080 \ +--pem ./aztestcluster201709151446.pem --no-verify +``` + +Next, use `Get-ServiceFabricClusterUpgrade` or `sfctl cluster upgrade-status` to display the status. Something similar to the following result is shown. + +```powershell +Get-ServiceFabricClusterUpgrade + +TargetCodeVersion : 6.0.219.9494 +TargetConfigVersion : 3 +StartTimestampUtc : 11/28/2017 3:09:48 AM +UpgradeState : RollingForwardPending +UpgradeDuration : 00:09:00 +CurrentUpgradeDomainDuration : 00:09:00 +NextUpgradeDomain : 1 +UpgradeDomainsStatus : { "0" = "Completed"; + "1" = "Pending"; + "2" = "Pending"; + "3" = "Pending"; + "4" = "Pending" } +UpgradeKind : Rolling +RollingUpgradeMode : Monitored +FailureAction : Rollback +ForceRestart : False +UpgradeReplicaSetCheckTimeout : 37201.09:59:01 +HealthCheckWaitDuration : 00:05:00 +HealthCheckStableDuration : 00:05:00 +HealthCheckRetryTimeout : 00:45:00 +UpgradeDomainTimeout : 02:00:00 +UpgradeTimeout : 12:00:00 +ConsiderWarningAsError : False +MaxPercentUnhealthyApplications : 0 +MaxPercentUnhealthyNodes : 100 +ApplicationTypeHealthPolicyMap : {} +EnableDeltaHealthEvaluation : True +MaxPercentDeltaUnhealthyNodes : 0 +MaxPercentUpgradeDomainDeltaUnhealthyNodes : 0 +ApplicationHealthPolicyMap : {} +``` + +```azurecli +sfctl cluster upgrade-status + +{ + "codeVersion": "6.0.219.9494", + "configVersion": "3", + +... item cut to save space ... + + }, + "upgradeDomains": [ + { + "name": "0", + "state": "Completed" + }, + { + "name": "1", + "state": "Pending" + }, + { + "name": "2", + "state": "Pending" + }, + { + "name": "3", + "state": "Pending" + }, + { + "name": "4", + "state": "Pending" + } + ], + "upgradeDurationInMilliseconds": "PT1H2M4.63889S", + "upgradeState": "RollingForwardPending" +} +``` + +## Conclusion +In this tutorial, you learned how to: + +> [!div class="checklist"] +> * Get the version of the cluster runtime +> * Upgrade the cluster runtime +> * Monitor the upgrade diff --git a/articles/site-recovery/azure-to-azure/azure-to-azure-quickstart.md b/articles/site-recovery/azure-to-azure/azure-to-azure-quickstart.md new file mode 100644 index 0000000000000..ce2c05c9149f3 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/azure-to-azure-quickstart.md @@ -0,0 +1,73 @@ +--- +title: Replicate an Azure VM to another Azure region (Preview) +description: This quickstart provides the steps required to replicate an Azure VM in one Azure region to a different region. +services: site-recovery +author: rajani-janaki-ram +manager: carmonm + +ms.service: site-recovery +ms.workload: storage-backup-recovery +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 11/01/2017 +ms.author: rajanaki +ms.custom: mvc +--- +# Replicate an Azure VM to another Azure region (Preview) + +The [Azure Site Recovery](../site-recovery-overview.md) service contributes to your business continuity and disaster recovery (BCDR) strategy by keeping your business apps up and running available during planned and unplanned outages. 
Site Recovery manages and orchestrates disaster recovery of on-premises machines and Azure virtual machines (VMs), including replication, failover, and recovery. + +This quickstart describes how to replicate an Azure VM to a different Azure region. + +If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. + +## Log in to Azure + +Log in to the Azure portal at http://portal.azure.com. + +## Enable replication for the Azure VM + +1. In the Azure portal, click **Virtual machines**, and select the VM you want to replicate. + +2. In **Settings**, click **Disaster recovery (preview)**. +3. In **Configure disaster recovery** > **Target region** select the target region to which you'll replicate. +4. For this Quickstart, accept the other default settings. +5. Click **Enable replication**. This starts a job to enable replication for the VM. + + ![enable replication](media/azure-to-azure-quickstart/enable-replication1.png) + + + +## Verify settings + +After the replication job has finished, you can check the replication status, modify replication settings +settings, and test the deployment. + +1. In the VM menu, click **Disaster recovery (preview)**. +2. You can verify replication health, recovery points that have been created, and source and target regions on the map. + + ![Replication status](media/azure-to-azure-quickstart/replication-status.png) + +## Clean up resources + +The VM in the primary region stops replicating when you disable replication for it: + +- The source replication settings are cleaned up automatically. +- Site Recovery billing for the +VM also stops. + +Stop replication as follows: + +1. Select the VM. +2. In **Disaster recovery (preview)**, click **More**. +3. Click **Disable Replication**. + + ![Disable replication](media/azure-to-azure-quickstart/disable2-replication.png) + +## Next steps + +In this quickstart, you replicated a single VM to a secondary region. + +> [!div class="nextstepaction"] +> [Configure disaster recovery for Azure VMs](azure-to-azure-tutorial-enable-replication.md) diff --git a/articles/site-recovery/azure-to-azure/azure-to-azure-tutorial-dr-drill.md b/articles/site-recovery/azure-to-azure/azure-to-azure-tutorial-dr-drill.md new file mode 100644 index 0000000000000..ad0fa753cfb01 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/azure-to-azure-tutorial-dr-drill.md @@ -0,0 +1,63 @@ +--- +title: Run a disaster recovery drill for Azure VMs to a secondary Azure region with Azure Site Recovery (Preview) +description: Learn how to run a disaster recovery drill for Azure VMs to a secondary Azure region using the Azure Site Recovery service. +services: site-recovery +author: rayne-wiselman +manager: carmonm + +ms.service: site-recovery +ms.workload: storage-backup-recovery +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 11/01/2017 +ms.author: raynew +ms.custom: mvc +--- + +# Run a disaster recovery drill for Azure VMs to a secondary Azure region (Preview) + +The [Azure Site Recovery](../site-recovery-overview.md) service contributes to your business continuity and disaster recovery (BCDR) strategy by keeping your business apps up and running available during planned and unplanned outages. Site Recovery manages and orchestrates disaster recovery of on-premises machines and Azure virtual machines (VMs), including replication, failover, and recovery. 
+ +This tutorial shows you how to run a disaster recovery drill for an Azure VM, from one Azure region to another, with a test failover. A drill validates your replication strategy without data loss or downtime, and doesn't affect your production environment. In this tutorial, you learn how to: + +> [!div class="checklist"] +> * Check the prerequisites +> * Run a test failover for a single VM + +## Prerequisites + +- Before you run a test failover, we recommend that you verify the VM properties to make sure everything's as expected. Access the VM properties in **Replicated items**. The **Essentials** blade shows information about machines settings and status. +- We recommend you use a separate Azure VM network for the test failover, and not the default network that was set up when you enabled replication. + + +## Run a test failover + +1. In **Settings** > **Replicated Items**, click the VM **+Test Failover** icon. + +2. In **Test Failover**, Select a recovery point to use for the failover: + + - **Latest processed**: Fails the VM over to the latest recovery point that was processed by the + Site Recovery service. The time stamp is shown. With this option, no time is spent processing + data, so it provides a low RTO (Recovery Time Objective) + - **Latest app-consistent**: This option fails over all VMs to the latest app-consistent + recovery point. The time stamp is shown. + - **Custom**: Select any recovery point. + +3. Select the target Azure virtual network to which Azure VMs in the secondary region will be + connected, after the failover occurs. + +4. To start the failover, click **OK**. To track progress, click the VM to open its properties. Or, + you can click the **Test Failover** job in the vault name > **Settings** > **Jobs** > **Site + Recovery jobs**. +5. After the failover finishes, the replica Azure VM appears in the Azure portal > **Virtual + Machines**. Make sure that the VM is running, sized appropriately, and connected to the + appropriate network. +6. To delete the VMs that were created during the test failover, click **Cleanup test failover** on + the replicated item or the recovery plan. In **Notes**, record and save any observations + associated with the test failover. + +## Next steps + +> [!div class="nextstepaction"] +> [Run a production failover](azure-to-azure-tutorial-failover-failback.md) diff --git a/articles/site-recovery/azure-to-azure/azure-to-azure-tutorial-enable-replication.md b/articles/site-recovery/azure-to-azure/azure-to-azure-tutorial-enable-replication.md new file mode 100644 index 0000000000000..64298c8f94c65 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/azure-to-azure-tutorial-enable-replication.md @@ -0,0 +1,207 @@ +--- +title: Set up disaster recovery for Azure VMs to a secondary Azure region with Azure Site Recovery (Preview) +description: Learn how to set up disaster recovery for Azure VMs to a different Azure region, using the Azure Site Recovery service. +services: site-recovery +author: rayne-wiselman +manager: carmonm + +ms.service: site-recovery +ms.devlang: na +ms.topic: article +ms.tgt_pltfrm: na +ms.workload: storage-backup-recovery +ms.date: 11/01/2017 +ms.author: raynew +ms.custom: mvc +--- +# Set up disaster recovery for Azure VMs to a secondary Azure region (Preview) + +The [Azure Site Recovery](../site-recovery-overview.md) service contributes to your disaster recovery strategy by managing and orchestrating replication, failover, and failback of on-premises machines, and Azure virtual machines (VMs). 
+ +This tutorial shows you how to set up disaster recovery to a secondary Azure region for Azure VMs. In this tutorial, you learn how to: + +> [!div class="checklist"] +> * Create a Recovery Services vault +> * Verify target resource settings +> * Set up outbound access for VMs +> * Enable replication for a VM + +## Prerequisites + +To complete this tutorial: + +- Make sure that you understand the [scenario architecture and components](concepts-azure-to-azure-architecture.md). +- Review the [support requirements](site-recovery-support-matrix-azure-to-azure.md) for all components. + +## Create a vault + +Create the vault in any region, except the source region. + +1. Sign in to the [Azure portal](https://portal.azure.com) > **Recovery Services**. +2. Click **New** > **Monitoring & Management** > **Backup and Site Recovery**. +3. In **Name**, specify a friendly name to identify the vault. If you have more than one + subscription, select the appropriate one. +4. Create a resource group or select an existing one. Specify an Azure region. To check supported + regions, see geographic availability in + [Azure Site Recovery Pricing Details](https://azure.microsoft.com/pricing/details/site-recovery/). +5. To quickly access the vault from the dashboard, click **Pin to dashboard** and then + click **Create**. + + ![New vault](./media/azure-to-azure-tutorial-enable-replication/new-vault-settings.png) + + The new vault is added to the **Dashboard** under **All resources**, and on the main **Recovery Services vaults** page. + +## Verify target resources + +1. Verify that your Azure subscription allows you to create VMs in the target region used for + disaster recovery. Contact support to enable the required quota. + +2. Make sure your subscription has enough resources to support VMs with sizes that match your source + VMs. Site Recovery picks the same size or the closest possible size for the target VM. + +## Configure outbound network connectivity + +For Site Recovery to work as expected, you need to make some changes in outbound network connectivity, +from VMs that you want to replicate. + +- Site Recovery doesn't support use of an authentication proxy to control network connectivity. +- If you have an authentication proxy, replication can't be enabled. + +### Outbound connectivity for URLs + +If you're using a URL-based firewall proxy to control outbound connectivity, allow access +to the following URLs used by Site Recovery. + +| **URL** | **Details** | +| ------- | ----------- | +| *.blob.core.windows.net | Allows data to be written from the VM to the cache storage account in the source region. | +| login.microsoftonline.com | Provides authorization and authentication to Site Recovery service URLs. | +| *.hypervrecoverymanager.windowsazure.com | Allows the VM to communicate with the Site Recovery service. | +| *.servicebus.windows.net | Allows the VM to write Site Recovery monitoring and diagnostics data. | + +### Outbound connectivity for IP address ranges + +When using any IP-based firewall, proxy, or NSG rules to control outbound connectivity, the +following IP address ranges need to be whitelisted. 
Download a list of ranges from the following links: + + - [Microsoft Azure Datacenter IP Ranges](http://www.microsoft.com/download/details.aspx?id=41653) + - [Windows Azure Datacenter IP Ranges in Germany](http://www.microsoft.com/download/details.aspx?id=54770) + - [Windows Azure Datacenter IP Ranges in China](http://www.microsoft.com/download/details.aspx?id=42064) + - [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2#bkmk_identity) + - [Site Recovery service endpoint IP addresses](https://aka.ms/site-recovery-public-ips) + +Use these lists to configure the network access controls in your network. You can use this +[script](https://gallery.technet.microsoft.com/Azure-Recovery-script-to-0c950702) to create +required NSG rules. + +## Verify Azure VM certificates + +Check that all the latest root certificates are present on the Windows or Linux VMs you want to +replicate. If the latest root certificates aren't present, the VM can't be registered with Site +Recovery, due to security constraints. + +- For Windows VMs, install all the latest Windows updates on the VM, so that all the trusted root + certificates are on the machine. In a disconnected environment, follow the standard Windows + Update and certificate update processes for your organization. + +- For Linux VMs, follow the guidance provided by your Linux distributor to get the latest trusted + root certificates and certificate revocation list on the VM. + +## Set permissions on the account + +Azure Site Recovery provides three built-in roles to control Site Recovery management operations. + +- **Site Recovery Contributor** - This role has all permissions required to manage Azure Site + Recovery operations in a Recovery Services vault. A user with this role, however, can't create or + delete a Recovery Services vault or assign access rights to other users. This role is best suited + for disaster recovery administrators who can enable and manage disaster recovery for applications + or entire organizations. + +- **Site Recovery Operator** - This role has permissions to execute and manage failover and + failback operations. A user with this role can't enable or disable replication, create or delete + vaults, register new infrastructure, or assign access rights to other users. This role is best + suited for a disaster recovery operator who can fail over virtual machines or applications when + instructed by application owners and IT administrators. After the disaster is resolved, the DR + operator can reprotect and fail back the virtual machines. + +- **Site Recovery Reader** - This role has permissions to view all Site Recovery management + operations. This role is best suited for an IT monitoring executive who can monitor the current + state of protection and raise support tickets. + +Learn more about [Azure RBAC built-in roles](../../active-directory/role-based-access-built-in-roles.md). + +## Enable replication + +### Select the source + +1. In Recovery Services vaults, click the vault name > **+Replicate**. +2. In **Source**, select **Azure - PREVIEW**. +3. In **Source location**, select the source Azure region where your VMs are currently running. +4. Select the **Azure virtual machine deployment model** for VMs: **Resource Manager** or + **Classic**. +5. Select the **Source resource group** for Resource Manager VMs, or **cloud service** for classic + VMs. +6. Click **OK** to save the settings.
+ +### Select the VMs + +Site Recovery retrieves a list of the VMs associated with the subscription and resource group/cloud service. + +1. In **Virtual Machines**, select the VMs you want to replicate. +2. Click **OK**. + +### Configure replication settings + +Site Recovery creates default settings and a replication policy for the target region. You can change the settings based on +your requirements. + +1. Click **Settings** to view the target settings. +2. To override the default target settings, click **Customize**. + +![Configure settings](./media/azure-to-azure-tutorial-enable-replication/settings.png) + + +- **Target location**: The target region used for disaster recovery. We recommend that the target + location matches the location of the Site Recovery vault. + +- **Target resource group**: The resource group in the target region that holds Azure VMs after + failover. By default, Site Recovery creates a new resource group in the target region with an + "asr" suffix. + +- **Target virtual network**: The network in the target region in which VMs are located after failover. + By default, Site Recovery creates a new virtual network (and subnets) in the target region with + an "asr" suffix. + +- **Cache storage accounts**: Site Recovery uses a storage account in the source region. Changes to + source VMs are sent to this account before replication to the target location. + +- **Target storage accounts**: By default, Site Recovery creates a new storage account in the + target region to mirror the source VM storage account. + +- **Target availability sets**: By default, Site Recovery creates a new availability set in the + target region with the "asr" suffix. You can only add availability sets if VMs are part of a set in the source region. + +- **Replication policy name**: The name of the replication policy. + +- **Recovery point retention**: By default, Site Recovery keeps recovery points for 24 hours. You + can configure a value between 1 and 72 hours. + +- **App-consistent snapshot frequency**: By default, Site Recovery takes an app-consistent snapshot + every 4 hours. You can configure any value between 1 and 12 hours. An app-consistent snapshot is a point-in-time snapshot of the application data inside the VM. Volume Shadow Copy Service (VSS) ensures that apps on the VM are in a consistent state when the snapshot is taken. + +### Track replication status + +1. In **Settings**, click **Refresh** to get the latest status. + +2. You can track progress of the **Enable protection** job in **Settings** > **Jobs** > **Site + Recovery Jobs**. + +3. In **Settings** > **Replicated Items**, you can view the status of VMs and the initial + replication progress. Click the VM to drill down into its settings. + +## Next steps + +In this tutorial, you configured disaster recovery for an Azure VM. The next step is to test your configuration.
+ +> [!div class="nextstepaction"] +> [Run a disaster recovery drill](azure-to-azure-tutorial-dr-drill.md) diff --git a/articles/site-recovery/azure-to-azure/azure-to-azure-tutorial-failover-failback.md b/articles/site-recovery/azure-to-azure/azure-to-azure-tutorial-failover-failback.md new file mode 100644 index 0000000000000..5a5572e45efbc --- /dev/null +++ b/articles/site-recovery/azure-to-azure/azure-to-azure-tutorial-failover-failback.md @@ -0,0 +1,80 @@ +--- +title: Fail over and fail back Azure VMs replicated to a secondary Azure region with Azure Site Recovery (Preview) +description: Learn how to fail over and fail back Azure VMs replication to a secondary Azure region with Azure Site Recovery +services: site-recovery +author: rayne-wiselman +manager: carmonm + +ms.service: site-recovery +ms.devlang: na +ms.topic: article +ms.tgt_pltfrm: na +ms.workload: storage-backup-recovery +ms.date: 11/01/2017 +ms.author: raynew +ms.custom: mvc +--- +# Fail over and fail back Azure VMs between Azure regions (Preview) + +The [Azure Site Recovery](../site-recovery-overview.md) service contributes to your disaster recovery strategy by managing and orchestrating replication, failover, and failback of on-premises machines, and Azure virtual machines (VMs). + +This tutorial describes how to fail over a single Azure VM to a secondary Azure region. After you've failed over, you fail back to the primary region when it's available. In this tutorial, you learn how to: + +> [!div class="checklist"] +> * Fail over the Azure VM +> * Reprotect the secondary Azure VM, so that it replicates to the primary region +> * Fail back the secondary VM +> * Reprotect the primary VM back to the secondary region + +## Prerequisites + +- Make sure that you've completed a [disaster recovery drill](azure-to-azure-tutorial-dr-drill.md) to check everything is working as +expected. +- Verify the VM properties before you run the test failover. The VM must comply with [Azure requirements](../site-recovery-support-matrix-to-azure.md#failed-over-azure-vm-requirements). + +## Run a failover to the secondary region + +1. In **Replicated items**, select the VM that you want to fail over > **Failover** + + ![Failover](./media/azure-to-azure-tutorial-failover-failback/failover.png) + +2. In **Failover**, select a **Recovery Point** to fail over to. You can use one of the + following options: + + * **Latest** (default): This option processes all the data in the Site Recovery service and + provides the lowest Recovery Point Objective (RPO). + * **Latest processed**: This option reverts the virtual machine to the latest recovery point that + has been processed by Site Recovery service. + * **Custom**: Use this option to fail over to a particular recovery point. This option is useful + for performing a test failover. + +3. Select **Shut down machine before beginning failover** if you want Site Recovery to attempt to + do a shutdown of source virtual machines before triggering the failover. Failover continues even + if shutdown fails. + +4. Follow the failover progress on the **Jobs** page. + +5. After the failover, validate the virtual machine by logging in to it. If you want to go another + recovery point for the virtual machine, then you can use **Change recovery point** option. + +6. Once you are satisfied with the failed over virtual machine, you can **Commit** the failover. + Committing deletes all the recovery points available with the service. The **Change recovery + point** option is no longer available. 
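+
+If you prefer not to watch the **Jobs** page in the portal (step 4 above), job progress can also be checked from PowerShell. The following is only a hypothetical sketch: it assumes the `AzureRM.RecoveryServices.SiteRecovery` cmdlets are installed and support the Azure-to-Azure (preview) scenario in your subscription, and `ContosoVault` is a placeholder vault name.
+
+```powershell
+# Hypothetical sketch: check Site Recovery job progress from PowerShell.
+# Assumes the AzureRM.RecoveryServices.SiteRecovery cmdlets are available and
+# support the Azure-to-Azure (preview) scenario; "ContosoVault" is a placeholder.
+$vault = Get-AzureRmRecoveryServicesVault -Name "ContosoVault"
+Set-AzureRmRecoveryServicesAsrVaultContext -Vault $vault
+
+# List recent Site Recovery jobs (such as the failover job) with their current state.
+Get-AzureRmRecoveryServicesAsrJob | Select-Object DisplayName, State, StateDescription
+```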
+ +## Reprotect the secondary VM + +After failover of the VM, you need to reprotect it so that it replicates back to the primary region. + +1. Make sure that the VM is in the **Failover committed** state, and check that the primary region is available, and you're able to create and access new resources in it. +2. In **Vault** > **Replicated items**, right-click the VM that's been failed over, and then select **Re-Protect**. + + ![Right-click to reprotect](./media/azure-to-azure-tutorial-failover-failback/reprotect.png) + +2. Notice that the direction of protection, secondary to primary region, is already selected. +3. Review the **Resource group, Network, Storage, and Availability sets** information. Any + resources marked (new) are created as part of the reprotect operation. +4. Click **OK** to trigger a reprotect job. This job seeds the target site with the latest data. Then, it replicates the deltas to the primary region. The VM is now in a protected state. + +## Fail back to the primary region + +After VMs are reprotected, you can fail back to the primary region as you need to. To do this, follow the [failover](#run-a-failover) instructions. diff --git a/articles/site-recovery/azure-to-azure/concepts-azure-to-azure-architecture.md b/articles/site-recovery/azure-to-azure/concepts-azure-to-azure-architecture.md new file mode 100644 index 0000000000000..e4f14c622f8e4 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/concepts-azure-to-azure-architecture.md @@ -0,0 +1,84 @@ +--- +title: Review the architecture for replication of Azure VMs between Azure regions | Microsoft Docs +description: This article provides an overview of components and architecture used when replicating Azure VMs between Azure regions using the Azure Site Recovery service. +services: site-recovery +documentationcenter: '' +author: rayne-wiselman +manager: carmonm +editor: '' + +ms.assetid: +ms.service: site-recovery +ms.devlang: na +ms.topic: article +ms.tgt_pltfrm: na +ms.workload: storage-backup-recovery +ms.date: 09/10/2017 +ms.author: raynew + +--- + +# Azure to Azure replication architecture + + +This article describes the architecture and processes used when you replicate, fail over, and recover Azure virtual machines (VMs) between Azure regions, using the [Azure Site Recovery](../site-recovery-overview.md) service. + +>[!NOTE] +>Azure VM replication with the Site Recovery service is currently in preview. + + + +## Architectural components + +The following graphic provides a high-level view of an Azure VM environment in a specific region (in this example, the East US location). In an Azure VM environment: +- Apps can be running on VMs with disks spread across storage accounts. +- The VMs can be included in one or more subnets within a virtual network. + + +**Azure to Azure replication** + +![customer-environment](./media/concepts-azure-to-azure-architecture/source-environment.png) + +## Replication process + +### Step 1 + +When you enable Azure VM replication, the resources shown below are automatically created in the target region, based on source region settings. You can customize target resources settings as required. + +![Enable replication process, step 1](./media/concepts-azure-to-azure-architecture/enable-replication-step-1.png) + +**Resource** | **Details** +--- | --- +**Target resource group** | The resource group to which replicated VMs belong after failover. +**Target virtual network** | The virtual network in which replicated VMs are located after failover. 
A network mapping is created between source and target virtual networks, and vice versa. +**Cache storage accounts** | Before source VMs changes are replicated to a target storage account, they are tracked and sent to the cache storage account in the target location. This ensures minimal impact on production apps running on the VM. +**Target storage accounts** | Storage accounts in the target location to which the data is replicated. +**Target availability sets** | Availability sets in which the replicated VMs are located after failover. + +### Step 2 + +As replication is enabled, the Site Recovery extension Mobility service is automatically installed on the VM: + +1. The VM is registered with Site Recovery. + +2. Continuous replication is configured for the VM. Data writes on the VM disks are continuously transferred to the cache storage account, in the source location. + + ![Enable replication process, step 2](./media/concepts-azure-to-azure-architecture/enable-replication-step-2.png) + + + Site Recovery never needs inbound connectivity to the VM. Only outbound connectivity is needed, to Site Recovery service URLs/IP addresses, Office 365 authentication URLs/IP addresses, and cache storage account IP addresses. + +### Step 3 + +After continuous replication is in progress, disk writes are immediately transferred to the cache storage account. Site Recovery processes the data, and sends it to the target storage account. After the data is processed, recovery points are generated in the target storage account every few minutes. + +## Failover process + +When you initiate a failover, the VMs are created in the target resource group, target virtual network, target subnet, and in the target availability set. During a failover, you can use any recovery point. + +![Failover process](./media/concepts-azure-to-azure-architecture/failover.png) + +## Next steps + +Follow the [quickstart](azure-to-azure-quickstart.md) to enable replication of an Azure VM to a secondary region. + diff --git a/articles/site-recovery/azure-to-azure/index.yml b/articles/site-recovery/azure-to-azure/index.yml new file mode 100644 index 0000000000000..8e5c4e93e81c2 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/index.yml @@ -0,0 +1,51 @@ +### YamlMime:YamlDocument +documentType: LandingData +title: Site Recovery documentation - Azure to Azure disaster recovery +metadata: + title: Azure Site Recovery documentation - Tutorials, API Reference | Microsoft Docs + meta.description: Learn how to set up disaster recovery for Azure VMs + services: site-recovery + author: rayne-wiselman + manager: carmonm + ms.service: site-recovery + ms.tgt_pltfrm: na + ms.devlang: na + ms.topic: landing-page + ms.date: 11/26/2017 + ms.author: raynew +abstract: + description: Azure Site Recovery orchestrates and manages disaster recovery for Azure VMs. Learn how to replicate to a secondary region, and run a failover and failback, with our quickstarts and tutorials. +sections: +- title: 5-Minute Quickstarts + items: + - type: paragraph + text: 'Quickly set up disaster recovery for an Azure VM' + - type: list + style: icon48 + items: + - image: + src: /azure/site-recovery/azure-to-azure/media/index/portal.svg + text: Azure portal + href: /azure/site-recovery/azure-to-azure-quickstart +- title: Step-by-step Tutorials + items: + - type: paragraph + text: 'Learn how to set up disaster recovery, failover, and failback for Azure VMs.' 
+ - type: list + style: ordered + items: + - html: Set up disaster recovery + - html: Run a disaster recovery drill + - html: Run failover and failback +- title: Reference + items: + - type: list + style: cards + className: cardsD + items: + - title: Command-Line + html:

Azure PowerShell + - title: REST + html: REST API Reference + +
a/articles/site-recovery/azure-to-azure/media/azure-to-azure-tutorial-enable-replication/new-vault-settings.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-tutorial-enable-replication/new-vault-settings.png new file mode 100644 index 0000000000000..0eb89f119fe79 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-tutorial-enable-replication/new-vault-settings.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-tutorial-enable-replication/settings.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-tutorial-enable-replication/settings.png new file mode 100644 index 0000000000000..57920d6b34a4c Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-tutorial-enable-replication/settings.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-tutorial-failover-failback/failover.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-tutorial-failover-failback/failover.png new file mode 100644 index 0000000000000..9fc979bd1f9a4 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-tutorial-failover-failback/failover.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-tutorial-failover-failback/reprotect.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-tutorial-failover-failback/reprotect.png new file mode 100644 index 0000000000000..b4251ecfb0b85 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-tutorial-failover-failback/reprotect.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/enable-replication-step-1.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/enable-replication-step-1.png new file mode 100644 index 0000000000000..aa37e0dc30eb0 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/enable-replication-step-1.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/enable-replication-step-2.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/enable-replication-step-2.png new file mode 100644 index 0000000000000..4116f3764fae1 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/enable-replication-step-2.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/failover.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/failover.png new file mode 100644 index 0000000000000..e07e35dcf9368 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/failover.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/source-environment-expressroute.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/source-environment-expressroute.png new file mode 100644 index 0000000000000..bb1e98060e84b Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/source-environment-expressroute.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/source-environment.png 
b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/source-environment.png new file mode 100644 index 0000000000000..3f5cfa93d10c4 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-architecture/source-environment.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/customize-policy.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/customize-policy.png new file mode 100644 index 0000000000000..4a4b5c3bf0111 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/customize-policy.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/customize-target.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/customize-target.png new file mode 100644 index 0000000000000..d2dbd406de51f Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/customize-target.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/settings.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/settings.png new file mode 100644 index 0000000000000..57920d6b34a4c Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/settings.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/source.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/source.png new file mode 100644 index 0000000000000..d28d6a4d8b568 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/source.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/vms.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/vms.png new file mode 100644 index 0000000000000..84cff1fa04882 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-enable-replication/vms.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-network/source-environment-expressroute.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-network/source-environment-expressroute.png new file mode 100644 index 0000000000000..bb1e98060e84b Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-network/source-environment-expressroute.png differ diff --git a/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-network/source-environment.png b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-network/source-environment.png new file mode 100644 index 0000000000000..3f5cfa93d10c4 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/azure-to-azure-walkthrough-network/source-environment.png differ diff --git a/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/enable-replication-step-1.png b/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/enable-replication-step-1.png new file mode 100644 index 
0000000000000..aa37e0dc30eb0 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/enable-replication-step-1.png differ diff --git a/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/enable-replication-step-2.png b/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/enable-replication-step-2.png new file mode 100644 index 0000000000000..4116f3764fae1 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/enable-replication-step-2.png differ diff --git a/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/failover.png b/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/failover.png new file mode 100644 index 0000000000000..e07e35dcf9368 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/failover.png differ diff --git a/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/source-environment-expressroute.png b/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/source-environment-expressroute.png new file mode 100644 index 0000000000000..bb1e98060e84b Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/source-environment-expressroute.png differ diff --git a/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/source-environment.png b/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/source-environment.png new file mode 100644 index 0000000000000..3f5cfa93d10c4 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/concepts-azure-to-azure-architecture/source-environment.png differ diff --git a/articles/site-recovery/azure-to-azure/media/index/article.svg b/articles/site-recovery/azure-to-azure/media/index/article.svg new file mode 100644 index 0000000000000..c1c4fc48502a2 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/media/index/article.svg @@ -0,0 +1,307 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/articles/site-recovery/azure-to-azure/media/index/azuredefaultblack.svg b/articles/site-recovery/azure-to-azure/media/index/azuredefaultblack.svg new file mode 100644 index 0000000000000..c7575a9aa6e93 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/media/index/azuredefaultblack.svg @@ -0,0 +1,31 @@ + + + + + + + + + + + + + + + + + + + + + + + diff --git a/articles/site-recovery/azure-to-azure/media/index/deploy.svg b/articles/site-recovery/azure-to-azure/media/index/deploy.svg new file mode 100644 index 0000000000000..92d9010ae30d9 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/media/index/deploy.svg @@ -0,0 +1,46 @@ + + + + + + + + + + + + + + + + + + + + diff --git a/articles/site-recovery/azure-to-azure/media/index/gear.svg b/articles/site-recovery/azure-to-azure/media/index/gear.svg new file mode 100644 index 0000000000000..419fbce9c9c37 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/media/index/gear.svg @@ -0,0 +1,63 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/articles/site-recovery/azure-to-azure/media/index/get-started.svg b/articles/site-recovery/azure-to-azure/media/index/get-started.svg new file mode 100644 index 0000000000000..03646c34c1dfe --- /dev/null +++ 
b/articles/site-recovery/azure-to-azure/media/index/get-started.svg @@ -0,0 +1,53 @@ + + + + + + + + + + + + + + + + + + + + + diff --git a/articles/site-recovery/azure-to-azure/media/index/guide.svg b/articles/site-recovery/azure-to-azure/media/index/guide.svg new file mode 100644 index 0000000000000..ba1f476a0193a --- /dev/null +++ b/articles/site-recovery/azure-to-azure/media/index/guide.svg @@ -0,0 +1,31 @@ + + + + + + + + + + + + + diff --git a/articles/site-recovery/azure-to-azure/media/index/placeholder.svg b/articles/site-recovery/azure-to-azure/media/index/placeholder.svg new file mode 100644 index 0000000000000..a482e7e3d44e1 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/media/index/placeholder.svg @@ -0,0 +1,9 @@ + + + + + + \ No newline at end of file diff --git a/articles/site-recovery/azure-to-azure/media/index/portal.svg b/articles/site-recovery/azure-to-azure/media/index/portal.svg new file mode 100644 index 0000000000000..0a34915a6746a --- /dev/null +++ b/articles/site-recovery/azure-to-azure/media/index/portal.svg @@ -0,0 +1 @@ +i_portal \ No newline at end of file diff --git a/articles/site-recovery/azure-to-azure/media/index/site-recovery.svg b/articles/site-recovery/azure-to-azure/media/index/site-recovery.svg new file mode 100644 index 0000000000000..532a369ccbd31 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/media/index/site-recovery.svg @@ -0,0 +1,12 @@ + + + + + + + diff --git a/articles/site-recovery/azure-to-azure/media/index/siterecovery.svg b/articles/site-recovery/azure-to-azure/media/index/siterecovery.svg new file mode 100644 index 0000000000000..532a369ccbd31 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/media/index/siterecovery.svg @@ -0,0 +1,12 @@ + + + + + + + diff --git a/articles/site-recovery/azure-to-azure/media/index/tutorial.svg b/articles/site-recovery/azure-to-azure/media/index/tutorial.svg new file mode 100644 index 0000000000000..fb824bf6365fc --- /dev/null +++ b/articles/site-recovery/azure-to-azure/media/index/tutorial.svg @@ -0,0 +1,54 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/articles/site-recovery/azure-to-azure/media/index/video-library.svg b/articles/site-recovery/azure-to-azure/media/index/video-library.svg new file mode 100644 index 0000000000000..45f0d2e45195e --- /dev/null +++ b/articles/site-recovery/azure-to-azure/media/index/video-library.svg @@ -0,0 +1,67 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/enable-replication-step-1.png b/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/enable-replication-step-1.png new file mode 100644 index 0000000000000..aa37e0dc30eb0 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/enable-replication-step-1.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/enable-replication-step-2.png b/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/enable-replication-step-2.png new file mode 100644 index 0000000000000..4116f3764fae1 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/enable-replication-step-2.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/failover.png 
b/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/failover.png new file mode 100644 index 0000000000000..e07e35dcf9368 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/failover.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/source-environment-expressroute.png b/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/source-environment-expressroute.png new file mode 100644 index 0000000000000..bb1e98060e84b Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/source-environment-expressroute.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/source-environment.png b/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/source-environment.png new file mode 100644 index 0000000000000..3f5cfa93d10c4 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-azure-to-azure-architecture/source-environment.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-how-to-reprotect-azure-to-azure/customize.png b/articles/site-recovery/azure-to-azure/media/site-recovery-how-to-reprotect-azure-to-azure/customize.png new file mode 100644 index 0000000000000..f5fe2a89e6912 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-how-to-reprotect-azure-to-azure/customize.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-how-to-reprotect-azure-to-azure/customizeblade.png b/articles/site-recovery/azure-to-azure/media/site-recovery-how-to-reprotect-azure-to-azure/customizeblade.png new file mode 100644 index 0000000000000..4e6cd6bf0b9a4 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-how-to-reprotect-azure-to-azure/customizeblade.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-how-to-reprotect-azure-to-azure/reprotect.png b/articles/site-recovery/azure-to-azure/media/site-recovery-how-to-reprotect-azure-to-azure/reprotect.png new file mode 100644 index 0000000000000..b4251ecfb0b85 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-how-to-reprotect-azure-to-azure/reprotect.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-how-to-reprotect-azure-to-azure/reprotectblade.png b/articles/site-recovery/azure-to-azure/media/site-recovery-how-to-reprotect-azure-to-azure/reprotectblade.png new file mode 100644 index 0000000000000..0a974fe96540e Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-how-to-reprotect-azure-to-azure/reprotectblade.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/modify-network-mapping.png b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/modify-network-mapping.png new file mode 100644 index 0000000000000..8d7aaa849a3e2 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/modify-network-mapping.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/modify-subnet.png 
b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/modify-subnet.png new file mode 100644 index 0000000000000..1501503dfbf8b Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/modify-subnet.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping1.png b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping1.png new file mode 100644 index 0000000000000..9a94d34fad717 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping1.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping2.png b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping2.png new file mode 100644 index 0000000000000..a98cfb9574f6e Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping2.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping3.png b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping3.png new file mode 100644 index 0000000000000..f24c6e843c092 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping3.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping4.png b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping4.png new file mode 100644 index 0000000000000..85e8356234736 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping4.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping5.png b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping5.png new file mode 100644 index 0000000000000..c3f43e477a4b9 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping5.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping6.png b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping6.png new file mode 100644 index 0000000000000..34040c8ac6d56 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-network-mapping-azure-to-azure/network-mapping6.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/customize.png b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/customize.png new file mode 100644 index 0000000000000..e97848854b56a Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/customize.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/enabledrwizard.png b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/enabledrwizard.png new 
file mode 100644 index 0000000000000..038bc9d9cb09f Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/enabledrwizard.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/enabledrwizard1.png b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/enabledrwizard1.png new file mode 100644 index 0000000000000..be54b286b3626 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/enabledrwizard1.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/enabledrwizard3.png b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/enabledrwizard3.png new file mode 100644 index 0000000000000..33b15749fc871 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/enabledrwizard3.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/plusreplicate.png b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/plusreplicate.png new file mode 100644 index 0000000000000..411514e31b066 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/plusreplicate.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/replicateditems.png b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/replicateditems.png new file mode 100644 index 0000000000000..395d23bbbc28d Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/replicateditems.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/virtualmachine_selection.png b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/virtualmachine_selection.png new file mode 100644 index 0000000000000..e6c11852c8d24 Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/virtualmachine_selection.png differ diff --git a/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/vmsettings_protection.png b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/vmsettings_protection.png new file mode 100644 index 0000000000000..138439173353e Binary files /dev/null and b/articles/site-recovery/azure-to-azure/media/site-recovery-replicate-azure-to-azure/vmsettings_protection.png differ diff --git a/articles/site-recovery/azure-to-azure/site-recovery-azure-to-azure-after-migration.md b/articles/site-recovery/azure-to-azure/site-recovery-azure-to-azure-after-migration.md new file mode 100644 index 0000000000000..7ea48f20371c6 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/site-recovery-azure-to-azure-after-migration.md @@ -0,0 +1,103 @@ +--- +title: Prepare machines to set up disaster recovery between Azure regions after migration to Azure by using Site Recovery | Microsoft Docs +description: This article describes how to prepare machines to set up disaster recovery between Azure regions after migration to Azure by using Azure Site Recovery. 
+services: site-recovery +documentationcenter: '' +author: ponatara +manager: abhemraj +editor: '' + +ms.assetid: 9126f5e8-e9ed-4c31-b6b4-bf969c12c184 +ms.service: site-recovery +ms.workload: storage-backup-recovery +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 05/22/2017 +ms.author: ponatara + +--- +# Replicate Azure VMs to another region after migration to Azure by using Azure Site Recovery + +>[!NOTE] +> Azure Site Recovery replication for Azure virtual machines (VMs) is currently in preview. + +## Overview + +This article helps you prepare Azure virtual machines for replication between two Azure regions after these machines have been migrated from an on-premises environment to Azure by using Azure Site Recovery. + +## Disaster recovery and compliance +Today, more and more enterprises are moving their workloads to Azure. With enterprises moving mission-critical on-premises production workloads to Azure, setting up disaster recovery for these workloads is mandatory for compliance and to safeguard against any disruptions in an Azure region. + +## Steps for preparing migrated machines for replication +To prepare migrated machines for setting up replication to another Azure region: + +1. Complete migration. +2. Install the Azure agent if needed. +3. Remove the Mobility service. +4. Restart the VM. + +These steps are described in more detail in the following sections. + +### Step 1: Migrate workloads running on Hyper-V VMs, VMware VMs, and physical servers to run on Azure VMs + +To set up replication and migrate your on-premises Hyper-V, VMware, and physical workloads to Azure, follow the steps in the [Migrate Azure IaaS virtual machines between Azure regions with Azure Site Recovery](site-recovery-migrate-azure-to-azure.md) article. + +After migration, you don't need to commit or delete a failover. Instead, select the **Complete Migration** option for each machine you want to migrate: +1. In **Replicated Items**, right-click the VM, and click **Complete Migration**. Click **OK** to complete the step. You can track progress in the VM properties by monitoring the Complete Migration job in **Site Recovery jobs**. +2. The **Complete Migration** action completes the migration process, removes replication for the machine, and stops Site Recovery billing for the machine. + +### Step 2: Install the Azure VM agent on the virtual machine +The Azure [VM agent](../../virtual-machines/windows/classic/agents-and-extensions.md#azure-vm-agents-for-windows-and-linux) must be installed on the virtual machine for the Site Recovery extension to work and to help protect the VM. + +>[!IMPORTANT] +>Beginning with version 9.7.0.0, on Windows virtual machines, the Mobility service installer also installs the latest available Azure VM agent. On migration, the virtual machine meets the +agent installation prerequisite for using any VM extension, including the Site Recovery extension. The Azure VM agent needs to be manually installed only if the Mobility service installed on the migrated machine is version 9.6 or earlier. + +The following table provides additional information about installing the VM agent and validating that it was installed: + +| **Operation** | **Windows** | **Linux** | +| --- | --- | --- | +| Installing the VM agent |Download and install the [agent MSI](http://go.microsoft.com/fwlink/?LinkID=394789&clcid=0x409). You need administrator privileges to complete the installation. |Install the latest [Linux agent](../../virtual-machines/linux/agent-user-guide.md). 
You need administrator privileges to complete the installation. We recommend installing the agent from your distribution repository. We *do not recommend* installing the Linux VM agent directly from GitHub. | +| Validating the VM agent installation |1. Browse to the C:\WindowsAzure\Packages folder in the Azure VM. You should see the WaAppAgent.exe file.
2. Right-click the file, go to **Properties**, and then select the **Details** tab. The **Product Version** field should be 2.6.1198.718 or higher. |N/A | + + +### Step 3: Remove the Mobility service from the migrated virtual machine + +If you have migrated your on-premises VMware machines or physical servers on Windows/Linux, you need to manually remove/uninstall the Mobility service from the migrated virtual machine. + +>[!IMPORTANT] +>This step is not required for Hyper-V VMs migrated to Azure. + +#### Uninstall the Mobility service on a Windows Server VM +Use one of the following methods to uninstall the Mobility service on a Windows Server computer. + +##### Uninstall by using the Windows UI +1. In the Control Panel, select **Programs**. +2. Select **Microsoft Azure Site Recovery Mobility Service/Master Target server**, and then select **Uninstall**. + +##### Uninstall at a command prompt +1. Open a Command Prompt window as an administrator. +2. To uninstall the Mobility service, run the following command: + + ``` + MsiExec.exe /qn /x {275197FC-14FD-4560-A5EB-38217F80CBD1} /L+*V "C:\ProgramData\ASRSetupLogs\UnifiedAgentMSIUninstall.log" + ``` + +#### Uninstall the Mobility service on a Linux computer +1. On your Linux server, sign in as a **root** user. +2. In a terminal, go to /user/local/ASR. +3. To uninstall the Mobility service, run the following command: + + ``` + uninstall.sh -Y + ``` + +### Step 4: Restart the VM + +After you uninstall the Mobility service, restart the VM before you set up replication to another Azure region. + + +## Next steps +- Start protecting your workloads by [replicating Azure virtual machines](azure-to-azure-quickstart.md). +- Learn more about [networking guidance for replicating Azure virtual machines](site-recovery-azure-to-azure-networking-guidance.md). diff --git a/articles/site-recovery/azure-to-azure/site-recovery-azure-to-azure-networking-guidance.md b/articles/site-recovery/azure-to-azure/site-recovery-azure-to-azure-networking-guidance.md new file mode 100644 index 0000000000000..2505604630d4b --- /dev/null +++ b/articles/site-recovery/azure-to-azure/site-recovery-azure-to-azure-networking-guidance.md @@ -0,0 +1,183 @@ +--- +title: Azure Site Recovery networking guidance for replicating virtual machines from Azure to Azure | Microsoft Docs +description: Networking guidance for replicating Azure virtual machines +services: site-recovery +documentationcenter: '' +author: sujayt +manager: rochakm +editor: '' + +ms.assetid: +ms.service: site-recovery +ms.devlang: na +ms.topic: article +ms.tgt_pltfrm: na +ms.workload: storage-backup-recovery +ms.date: 08/31/2017 +ms.author: sujayt + +--- +# Networking guidance for replicating Azure virtual machines + +>[!NOTE] +> Site Recovery replication for Azure virtual machines is currently in preview. + +This article details the networking guidance for Azure Site Recovery when you're replicating and recovering Azure virtual machines from one region to another region. For more about Azure Site Recovery requirements, see the [prerequisites](site-recovery-support-matrix-azure-to-azure.md) article. + +## Site Recovery architecture + +Site Recovery provides a simple and easy way to replicate applications running on Azure virtual machines to another Azure region so that they can be recovered if there is a disruption in the primary region. Learn more about [this scenario and Site Recovery architecture](concepts-azure-to-azure-architecture.md). 
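+
+As a quick sanity check before working through the network requirements below, you can probe outbound HTTPS connectivity from a source VM. The following is a minimal sketch, assuming a Linux VM with `curl` installed; `<cache-storage-account>` is a placeholder for the cache storage account name in your source region, not a value defined by Site Recovery. Any HTTP response indicates the endpoint is reachable, while a timeout suggests the URL or IP range is being blocked.
+
+```
+#!/bin/bash
+# Hedged sketch: check outbound HTTPS connectivity to endpoints used by Site Recovery.
+# Replace <cache-storage-account> with the name of the cache storage account in your source region.
+endpoints=(
+  "https://login.microsoftonline.com"
+  "https://<cache-storage-account>.blob.core.windows.net"
+)
+for url in "${endpoints[@]}"; do
+  # Any HTTP response (even 400/403) means the outbound connection was allowed;
+  # a timeout or connection error suggests the URL or IP range still needs to be whitelisted.
+  if curl --silent --show-error --output /dev/null --max-time 10 "$url"; then
+    echo "Reachable: $url"
+  else
+    echo "Blocked or unreachable: $url"
+  fi
+done
+```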
+ +## Your network infrastructure + +The following diagram depicts the typical Azure environment for an application running on Azure virtual machines: + +![customer-environment](./media/site-recovery-azure-to-azure-architecture/source-environment.png) + +If you are using Azure ExpressRoute or a VPN connection from an on-premises network to Azure, the environment looks like this: + +![customer-environment](./media/site-recovery-azure-to-azure-architecture/source-environment-expressroute.png) + +Typically, customers protect their networks using firewalls and/or network security groups (NSGs). The firewalls can use either URL-based or IP-based whitelisting for controlling network connectivity. NSGs allow rules for using IP ranges to control network connectivity. + +>[!IMPORTANT] +> If you are using an authenticated proxy to control network connectivity, it is not supported, and Site Recovery replication cannot be enabled. + +The following sections discuss the network outbound connectivity changes that are required from Azure virtual machines for Site Recovery replication to work. + +## Outbound connectivity for Azure Site Recovery URLs + +If you are using any URL-based firewall proxy to control outbound connectivity, be sure to whitelist these required Azure Site Recovery service URLs: + + +**URL** | **Purpose** +--- | --- +*.blob.core.windows.net | Required so that data can be written to the cache storage account in the source region from the VM. +login.microsoftonline.com | Required for authorization and authentication to the Site Recovery service URLs. +*.hypervrecoverymanager.windowsazure.com | Required so that the Site Recovery service communication can occur from the VM. +*.servicebus.windows.net | Required so that the Site Recovery monitoring and diagnostics data can be written from the VM. + +## Outbound connectivity for Azure Site Recovery IP ranges + +>[!NOTE] +> To automatically create the required NSG rules on the network security group, you can [download and use this script](https://gallery.technet.microsoft.com/Azure-Recovery-script-to-0c950702). + +>[!IMPORTANT] +> * We recommend that you create the required NSG rules on a test network security group and verify that there are no problems before you create the rules on a production network security group. +> * To create the required number of NSG rules, ensure that your subscription is whitelisted. Contact support to increase the NSG rule limit in your subscription. + +If you are using any IP-based firewall proxy or NSG rules to control outbound connectivity, the following IP ranges need to be whitelisted, depending on the source and target locations of the virtual machines: + +- All IP ranges that correspond to the source location. (You can download the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=41653).) Whitelisting is required so that data can be written to the cache storage account from the VM. + +- All IP ranges that correspond to Office 365 [authentication and identity IP V4 endpoints](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2#bkmk_identity). + + >[!NOTE] + > If new IPs get added to Office 365 IP ranges in the future, you need to create new NSG rules. 
+ +- Site Recovery service endpoint IPs ([available in an XML file](https://aka.ms/site-recovery-public-ips)), which depend on your target location: + + **Target location** | **Site Recovery service IPs** | **Site Recovery monitoring IP** + --- | --- | --- + East Asia | 52.175.17.132 | 13.94.47.61 + Southeast Asia | 52.187.58.193 | 13.76.179.223 + Central India | 52.172.187.37 | 104.211.98.185 + South India | 52.172.46.220 | 104.211.224.190 + North Central US | 23.96.195.247 | 168.62.249.226 + North Europe | 40.69.212.238 | 52.169.18.8 + West Europe | 52.166.13.64 | 40.68.93.145 + East US | 13.82.88.226 | 104.45.147.24 + West US | 40.83.179.48 | 104.40.26.199 + South Central US | 13.84.148.14 | 104.210.146.250 + Central US | 40.69.144.231 | 52.165.34.144 + East US 2 | 52.184.158.163 | 40.79.44.59 + Japan East | 52.185.150.140 | 138.91.1.105 + Japan West | 52.175.146.69 | 138.91.17.38 + Brazil South | 191.234.185.172 | 23.97.97.36 + Australia East | 104.210.113.114 | 191.239.64.144 + Australia Southeast | 13.70.159.158 | 191.239.160.45 + Canada Central | 52.228.36.192 | 40.85.226.62 + Canada East | 52.229.125.98 | 40.86.225.142 + West Central US | 52.161.20.168 | 13.78.149.209 + West US 2 | 52.183.45.166 | 13.66.228.204 + UK West | 51.141.3.203 | 51.141.14.113 + UK South | 51.140.43.158 | 51.140.189.52 + UK South 2 | 13.87.37.4| 13.87.34.139 + UK North | 51.142.209.167 | 13.87.102.68 + Korea Central | 52.231.28.253 | 52.231.32.85 + Korea South | 52.231.298.185 | 52.231.200.144 + +## Sample NSG configuration +This section explains the steps to configure NSG rules so that Site Recovery replication can work on a virtual machine. If you are using NSG rules to control outbound connectivity, use "Allow HTTPS outbound" rules for all the required IP ranges. + +>[!Note] +> To automatically create the required NSG rules on the network security group, you can [download and use this script](https://gallery.technet.microsoft.com/Azure-Recovery-script-to-0c950702). + +For example, if your VM's source location is "East US" and your replication target location is "Central US," follow the steps in the next two sections. + +>[!IMPORTANT] +> * We recommend that you create the required NSG rules on a test network security group and verify that there are no problems before you create the rules on a production network security group. +> * To create the required number of NSG rules, ensure that your subscription is whitelisted. Contact support to increase the NSG rule limit in your subscription. + +### NSG rules on the East US network security group + +* Create rules that correspond to [East US IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=41653). This is required so that data can be written to the cache storage account from the VM. + +* Create rules for all IP ranges that correspond to Office 365 [authentication and identity IP V4 endpoints](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2#bkmk_identity). + +* Create rules that correspond to the target location: + + **Location** | **Site Recovery service IPs** | **Site Recovery monitoring IP** + --- | --- | --- + Central US | 40.69.144.231 | 52.165.34.144 + +### NSG rules on the Central US network security group + +These rules are required so that replication can be enabled from the target region to the source region post-failover: + +* Rules that correspond to [Central US IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=41653). 
These are required so that data can be written to the cache storage account from the VM. + +* Rules for all IP ranges that correspond to Office 365 [authentication and identity IP V4 endpoints](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2#bkmk_identity). + +* Rules that correspond to the source location: + + **Location** | **Site Recovery service IPs** | **Site Recovery monitoring IP** + --- | --- | --- + East US | 13.82.88.226 | 104.45.147.24 + + +## Guidelines for existing Azure-to-on-premises ExpressRoute/VPN configuration + +If you have an ExpressRoute or VPN connection between on-premises and the source location in Azure, follow the guidelines in this section. + +### Forced tunneling configuration + +A common customer configuration is to define a default route (0.0.0.0/0) that forces outbound Internet traffic to flow through the on-premises location. We do not recommend this. The replication traffic and Site Recovery service communication should not leave the Azure boundary. The solution is to add user-defined routes (UDRs) for [these IP ranges](#outbound-connectivity-for-azure-site-recovery-ip-ranges) so that the replication traffic doesn’t go on-premises. + +### Connectivity between the target and on-premises location + +Follow these guidelines for connections between the target location and the on-premises location: +- If your application needs to connect to the on-premises machines or if there are clients that connect to the application from on-premises over VPN/ExpressRoute, ensure that you have at least a [site-to-site connection](../../vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-portal.md) between your target Azure region and the on-premises datacenter. + +- If you expect a lot of traffic to flow between your target Azure region and the on-premises datacenter, you should create another [ExpressRoute connection](../../expressroute/expressroute-introduction.md) between the target Azure region and the on-premises datacenter. + +- If you want to retain IPs for the virtual machines after they fail over, keep the target region's site-to-site/ExpressRoute connection in a disconnected state. This is to make sure there is no range clash between the source region's IP ranges and target region's IP ranges. + +### Best practices for ExpressRoute configuration +Follow these best practices for ExpressRoute configuration: + +- You need to create an ExpressRoute circuit in both the source and target regions. Then you need to create a connection between: + - The source virtual network and the ExpressRoute circuit. + - The target virtual network and the ExpressRoute circuit. + +- As part of ExpressRoute standard, you can create circuits in the same geopolitical region. To create ExpressRoute circuits in different geopolitical regions, Azure ExpressRoute Premium is required, which involves an incremental cost. (If you are already using ExpressRoute Premium, there is no extra cost.) For more details, see the [ExpressRoute locations document](../../expressroute/expressroute-locations.md#azure-regions-to-expressroute-locations-within-a-geopolitical-region) and [ExpressRoute pricing](https://azure.microsoft.com/pricing/details/expressroute/). + +- We recommend that you use different IP ranges in source and target regions. The ExpressRoute circuit won't be able to connect with two Azure virtual networks of the same IP ranges at the same time. 
+ +- You can create virtual networks with the same IP ranges in both regions and then create ExpressRoute circuits in both regions. In the case of a failover event, disconnect the circuit from the source virtual network, and connect the circuit in the target virtual network. + + >[!IMPORTANT] + > If the primary region is completely down, the disconnect operation can fail. That will prevent the target virtual network from getting ExpressRoute connectivity. + +## Next steps +Start protecting your workloads by [replicating Azure virtual machines](azure-to-azure-quickstart.md). diff --git a/articles/site-recovery/azure-to-azure/site-recovery-azure-to-azure-troubleshoot-errors.md b/articles/site-recovery/azure-to-azure/site-recovery-azure-to-azure-troubleshoot-errors.md new file mode 100644 index 0000000000000..1e03ee6dcd516 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/site-recovery-azure-to-azure-troubleshoot-errors.md @@ -0,0 +1,133 @@ +--- +title: Azure Site Recovery troubleshooting for Azure-to-Azure replication issues and errors| Microsoft Docs +description: Troubleshooting errors and issues when replicating Azure virtual machines for disaster recovery +services: site-recovery +documentationcenter: '' +author: sujayt +manager: rochakm +editor: '' + +ms.assetid: +ms.service: site-recovery +ms.devlang: na +ms.topic: article +ms.tgt_pltfrm: na +ms.workload: storage-backup-recovery +ms.date: 11/21/2017 +ms.author: sujayt + +--- +# Troubleshoot Azure-to-Azure VM replication issues + +This article describes the common issues in Azure Site Recovery when replicating and recovering Azure virtual machines from one region to another region and explains how to troubleshoot them. For more information about supported configurations, see the [support matrix for replicating Azure VMs](site-recovery-support-matrix-azure-to-azure.md). + +## Azure resource quota issues (error code 150097) +Your subscription should be enabled to create Azure VMs in the target region that you plan to use as your disaster recovery region. Also, your subscription should have sufficient quota enabled to create VMs of specific size. By default, Site Recovery picks the same size for the target VM as the source VM. If the matching size isn't available, the closest possible size is picked automatically. If there's no matching size that supports source VM configuration, this error message appears: + +**Error code** | **Possible causes** | **Recommendation** +--- | --- | --- +150097

**Message**: Replication couldn't be enabled for the virtual machine VmName. | - Your subscription ID might not be enabled to create any VMs in the target region location.

- Your subscription ID might not be enabled or doesn't have sufficient quota to create specific VM sizes in the target region location.

- A suitable target VM size that matches the source VM NIC count (2) isn't found for the subscription ID in the target region location.| Contact [Azure billing support](https://docs.microsoft.com/azure/azure-supportability/resource-manager-core-quotas-request) to enable VM creation for the required VM sizes in the target location for your subscription. After it's enabled, retry the failed operation. + +### Fix the problem +You can contact [Azure billing support](https://docs.microsoft.com/azure/azure-supportability/resource-manager-core-quotas-request) to enable your subscription to create VMs of required sizes in the target location. + +If the target location has a capacity constraint, disable replication and enable it to a different location where your subscription has sufficient quota to create VMs of the required sizes. + +## Trusted root certificates (error code 151066) + +If all the latest trusted root certificates aren't present on the VM, your "enable replication" job might fail. Without the certificates, the authentication and authorization of Site Recovery service calls from the VM fail. The error message for the failed "enable replication" Site Recovery job appears: + +**Error code** | **Possible cause** | **Recommendations** +--- | --- | --- +151066

**Message**: Site Recovery configuration failed. | The required trusted root certificates used for authorization and authentication aren't present on the machine. | - For a VM running the Windows operating system, ensure that the trusted root certificates are present on the machine. For information, see [Configure trusted roots and disallowed certificates](https://technet.microsoft.com/library/dn265983.aspx).

- For a VM running the Linux operating system, follow the guidance for trusted root certificates published by the Linux operating system version distributor. + +### Fix the problem +**Windows** + +Install all the latest Windows updates on the VM so that all the trusted root certificates are present on the machine. If you're in a disconnected environment, follow the standard Windows update process in your organization to get the certificates. If the required certificates aren't present on the VM, the calls to the Site Recovery service fail for security reasons. + +Follow the typical Windows update management or certificate update management process in your organization to get all the latest root certificates and the updated certificate revocation list on the VMs. + +To verify that the issue is resolved, go to login.microsoftonline.com from a browser in your VM. + +**Linux** + +Follow the guidance provided by your Linux distributor to get the latest trusted root certificates and the latest certificate revocation list on the VM. + +Because SuSE Linux uses symlinks to maintain a certificate list, follow these steps: + +1. Sign in as a root user. + +2. Run this command: + + ``# cd /etc/ssl/certs`` + +3. To see if the Symantec root CA certificate is present or not, run this command: + + ``# ls VeriSign_Class_3_Public_Primary_Certification_Authority_G5.pem`` + +4. If the file isn't found, run these commands: + + ``# wget https://www.symantec.com/content/dam/symantec/docs/other-resources/verisign-class-3-public-primary-certification-authority-g5-en.pem -O VeriSign_Class_3_Public_Primary_Certification_Authority_G5.pem`` + + ``# c_rehash`` + +5. To create a symlink with b204d74a.0 -> VeriSign_Class_3_Public_Primary_Certification_Authority_G5.pem, run this command: + + ``# ln -s VeriSign_Class_3_Public_Primary_Certification_Authority_G5.pem b204d74a.0`` + +6. Check to see if this command has the following output. If not, you have to create a symlink: + + ``# ls -l | grep Baltimore + -rw-r--r-- 1 root root 1303 Apr 7 2016 Baltimore_CyberTrust_Root.pem + lrwxrwxrwx 1 root root 29 May 30 04:47 3ad48a91.0 -> Baltimore_CyberTrust_Root.pem + lrwxrwxrwx 1 root root 29 May 30 05:01 653b494a.0 -> Baltimore_CyberTrust_Root.pem`` + +7. If symlink 653b494a.0 isn't present, use this command to create a symlink: + + ``# ln -s Baltimore_CyberTrust_Root.pem 653b494a.0`` + + +## Outbound connectivity for Site Recovery URLs or IP ranges (error code 151037 or 151072) + +For Site Recovery replication to work, outbound connectivity to specific URLs or IP ranges is required from the VM. If your VM is behind a firewall or uses network security group (NSG) rules to control outbound connectivity, you might see one of these error messages: + +**Error codes** | **Possible causes** | **Recommendations** +--- | --- | --- +151037

**Message**: Failed to register Azure virtual machine with Site Recovery. | - You're using an NSG to control outbound access on the VM, and the required IP ranges aren't whitelisted for outbound access.

- You're using third-party firewall tools, and the required IP ranges/URLs aren't whitelisted.
| - If you're using a firewall proxy to control outbound network connectivity on the VM, ensure that the prerequisite URLs or datacenter IP ranges are whitelisted. For information, see [firewall proxy guidance](https://aka.ms/a2a-firewall-proxy-guidance).

- If you're using NSG rules to control outbound network connectivity on the VM, ensure that the prerequisite datacenter IP ranges are whitelisted. For information, see [network security group guidance](https://aka.ms/a2a-nsg-guidance). +151072

**Message**: Site Recovery configuration failed. | A connection can't be established to the Site Recovery service endpoints. | - If you're using a firewall proxy to control outbound network connectivity on the VM, ensure that the prerequisite URLs or datacenter IP ranges are whitelisted. For information, see [firewall proxy guidance](https://aka.ms/a2a-firewall-proxy-guidance).

- If you're using NSG rules to control outbound network connectivity on the VM, ensure that the prerequisite datacenter IP ranges are whitelisted. For information, see [network security group guidance](https://aka.ms/a2a-nsg-guidance). + +### Fix the problem +To whitelist [the required URLs](site-recovery-azure-to-azure-networking-guidance.md#outbound-connectivity-for-azure-site-recovery-urls) or the [required IP ranges](site-recovery-azure-to-azure-networking-guidance.md#outbound-connectivity-for-azure-site-recovery-ip-ranges), follow the steps in the [networking guidance document](site-recovery-azure-to-azure-networking-guidance.md). + +## Disk not found in the machine (error code 150039) + +A new disk attached to the VM must be initialized. + +**Error code** | **Possible causes** | **Recommendations** +--- | --- | --- +150039

**Message**: Azure data disk (DiskName) (DiskURI) with logical unit number (LUN) (LUNValue) was not mapped to a corresponding disk being reported from within the VM that has the same LUN value. | - A new data disk was attached to the VM but it wasn't initialized.

- The data disk inside the VM is not correctly reporting the LUN value at which the disk was attached to the VM. | Ensure that the data disks are initialized, and then retry the operation.

For Windows: [Attach and initialize a new disk](https://docs.microsoft.com/azure/virtual-machines/windows/attach-disk-portal#option-1-attach-and-initialize-a-new-disk).

For Linux: [Initialize a new data disk in Linux](https://docs.microsoft.com/azure/virtual-machines/linux/classic/attach-disk#initialize-a-new-data-disk-in-linux). + +### Fix the problem +Ensure that the data disks have been initialized, and then retry the operation: + +- For Windows: [Attach and initialize a new disk](https://docs.microsoft.com/azure/virtual-machines/windows/attach-disk-portal#option-1-attach-and-initialize-a-new-disk). +- For Linux: [Initialize a new data disk in Linux](https://docs.microsoft.com/azure/virtual-machines/linux/classic/attach-disk#initialize-a-new-data-disk-in-linux). + +If the problem persists, contact support. + + +## Unable to see the Azure VM for selection in "enable replication" + +If you don't see your Azure VM for selection when you enable replication, this might be due to stale Site Recovery configuration left on the Azure VM. The stale configuration could be left on an Azure VM in the following cases: + +- You enabled replication for the Azure VM by using Site Recovery and then deleted the Site Recovery vault without explicitly disabling replication on the VM. +- You enabled replication for the Azure VM by using Site Recovery and then deleted the resource group containing the Site Recovery vault without explicitly disabling replication on the VM. + +### Fix the problem + +You can use [Remove stale ASR configuration script](https://gallery.technet.microsoft.com/Azure-Recovery-ASR-script-3a93f412) and remove the stale Site Recovery configuration on the Azure VM. You should see the VM when you enable replication, after removing the stale configuration. + + +## Next steps +[Replicate Azure virtual machines](azure-to-azure-quickstart.md) diff --git a/articles/site-recovery/azure-to-azure/site-recovery-how-to-reprotect-azure-to-azure.md b/articles/site-recovery/azure-to-azure/site-recovery-how-to-reprotect-azure-to-azure.md new file mode 100644 index 0000000000000..567764785f6e4 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/site-recovery-how-to-reprotect-azure-to-azure.md @@ -0,0 +1,105 @@ +--- +title: How to Reprotect from failed over Azure virtual machines back to primary Azure region | Microsoft Docs +description: After failover of VMs from one Azure region to another, you can use Azure Site Recovery to protect the machines in reverse direction. Learn the steps how to do a reprotect before a failover again. +services: site-recovery +documentationcenter: '' +author: ruturaj +manager: gauravd +editor: '' + +ms.assetid: 44813a48-c680-4581-a92e-cecc57cc3b1e +ms.service: site-recovery +ms.devlang: na +ms.topic: article +ms.tgt_pltfrm: na +ms.workload: storage-backup-recovery +ms.date: 08/11/2017 +ms.author: ruturajd + +--- +# Reprotect Azure VMs back to the primary region + + + +>[!NOTE] +> +> Site Recovery replication for Azure virtual machines (VMs) is currently in preview. + + +When you [fail over](../site-recovery-failover.md) VMs from one Azure region to another, the failed over VMs are in an unprotected state. If you want to bring them back to the primary region, you need to first start replicating the VMs, and then fail over again. There is no difference in failover in one direction or other. Similarly, after you enable VM replication, there is no difference between reprotection post-failover, or post-failback. + +To explain the reprotection process, as an example we assume that the primary site of the protected VMs is East Asia region, and the recovery site is South East Asia. 
During failover, you fail over the VMs to the South East Asia region. Before you fail back, you need to replicate the VMs from South East Asia, back to East Asia. This article describes the steps needed for reprotection. + +> [!WARNING] +> If you have completed migration and moved the virtual machine to another resource group (or deleted the Azure virtual machine) you cannot fail back. + +After reprotection finishes, and the VMs are replicating, you can initiate a failover on the VMs, to bring them back to the East Asia region. + +## Prerequisites +1. The VM should have been committed. +2. The target site (East Asia) should be available, and you should be able to access/create new resources in that region. + +## Reprotect + +Follow these steps to reprotect a VM using the default settings. + +1. In **Vault** > **Replicated items**, right-click the VM that failed over, and then select **Re-Protect**. You can also click the machine and select **Re-Protect** from the command buttons. + + ![Reprotection](./media/site-recovery-how-to-reprotect-azure-to-azure/reprotect.png) + +2. Verify that the replication direction **Southeast Asia to East Asia**, is selected. + + ![Reprotect](./media/site-recovery-how-to-reprotect-azure-to-azure/reprotectblade.png) + +3. Review the **Resource group, Network, Storage, and Availability sets** information, and click **OK**. If there are any resources marked (new), they are created during reprotection. + +This triggers a reprotection job that seeds the target site with the latest data, and once that completes, replicates the deltas before you fail back to Southeast Asia. + +### Reprotect customization +If you want to choose the extract storage account or the network during reprotect, you can do so using the customize option provided on the reprotection page. + +![Customize option](./media/site-recovery-how-to-reprotect-azure-to-azure/customize.png) + +You can customize the following properties of the target virtual machine during reprotection. + +![Customize](./media/site-recovery-how-to-reprotect-azure-to-azure/customizeblade.png) + +|Property |Notes | +|---------|---------| +|Target resource group | You can choose to change the target resource group in which th virtual machine will be created. As the part of reprotect, the target virtual machine will be deleted, hence you can choose a new resource group under which you can create the VM post failover | +|Target Virtual Network | Network cannot be changed during reprotection. To change the network, redo the network mapping. | +|Target Storage | You can change the storage account to which the virtual machine will be created post failover. | +|Cache Storage | You can specify a cache storage account which will be used during replication. If you go with the defaults, a new cache storage account will be created, if it does not already exist. | +|Availability Set |If the virtual machine in East Asia is part of an availability set, you can choose an availability set for the target virtual machine in Southeast Asia. Defaults will find the existing SEA availability set and try to use it. During customization, you can specify a completely new AV set. | + + +### What happens during reprotect? + +Just like after the first enable protection, following are the artifacts that get created if you use the defaults. +1. A cache storage account gets created in the East Asia region. +2. If the target storage account (the original storage account of the Southeast Asia VM) does not exist, a new one is created. 
The name is the East Asia virtual machine's storage account suffixed with "asr". +3. If the target AV set does not exist, and the defaults detect that it needs to create a new AV set, then it will be created as part of the reprotection job. If you have customized reprotection, then the selected AV set will be used. + + +The following are the list of steps that happen when you trigger a reprotection job. This is in the case the target side virtual machine exists. + +1. The required artifacts are created as part of reprotect. If they already exist, then they are reused. +2. The target side (Southeast Asia) virtual machine is first turned off, if it is running. +3. The target side virtual machine's disk is copied by Azure Site Recovery into a container as a seed blob. +4. The target side virtual machine is then deleted. +5. The seed blob is used by the current source side (East Asia) virtual machine to replicate. This ensures that only deltas are replicated. +6. The major changes between the source disk and the seed blob are synchronized. This can take some time to complete. +7. Once the reprotection job completes, the delta replication begins that creates a recovery point as per the policy. + +> [!NOTE] +> You cannot protect at a recovery plan level. You can only reprotect at a per VM level. + +After the reprotection succeeds, the virtual machine will enter a protected state. + +## Next steps + +After the virtual machine has entered a protected state, you can initiate a failover. The failover will shut down the virtual machine in East Asia Azure region and then create and boot the Southeast Asia region virtual machine. Hence there is a small downtime for the application. So, choose the time for failover when your application can tolerate a downtime. It is recommended that you run a test failover of the virtual machine first, to make sure it is coming up correctly, before initiating a failover. + +- [Steps to initiate test failover of the virtual machine](../site-recovery-test-failover-to-azure.md) + +- [Steps to initiate failover of the virtual machine](../site-recovery-failover.md) diff --git a/articles/site-recovery/azure-to-azure/site-recovery-migrate-azure-to-azure.md b/articles/site-recovery/azure-to-azure/site-recovery-migrate-azure-to-azure.md new file mode 100644 index 0000000000000..a5a1625894f42 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/site-recovery-migrate-azure-to-azure.md @@ -0,0 +1,35 @@ +--- +title: Migrate Azure IaaS VMs between Azure regions | Microsoft Docs +description: Use Azure Site Recovery to migrate Azure IaaS virtual machines from one Azure region to another. +services: site-recovery +documentationcenter: '' +author: rayne-wiselman +manager: jwhit +editor: tysonn + +ms.assetid: 8a29e0d9-0010-4739-972f-02b8bdf360f6 +ms.service: site-recovery +ms.workload: storage-backup-recovery +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 08/31/2017 +ms.author: raynew + +--- +# Migrate Azure IaaS virtual machines between Azure regions with Azure Site Recovery + +Use this article if you want to migrate Azure virtual machines (VMs) between Azure regions, using the [Site Recovery](../site-recovery-overview.md) service. + +## Prerequisites + +You need IaaS VMs you want to migrate. + +## Deployment steps + +1. [Create a vault](azure-to-azure-tutorial-enable-replication.md#create-a-vault). +2. [Enable replication](azure-to-azure-tutorial-enable-replication.md#enable-replication) for the VMs you want to migrate, and choose Azure as the source. 
Currently, native replication of Azure VMs using managed disks isn't supported. +3. [Run a failover](../site-recovery-failover.md). After initial replication is complete, you can run a failover from one Azure region to another. Optionally, you can create a recovery plan and run a failover, to migrate multiple virtual machines between regions. [Learn more](../site-recovery-create-recovery-plans.md) about recovery plans. + +## Next steps +Learn about other replication scenarios in [What is Azure Site Recovery?](../site-recovery-overview.md) diff --git a/articles/site-recovery/azure-to-azure/site-recovery-network-mapping-azure-to-azure.md b/articles/site-recovery/azure-to-azure/site-recovery-network-mapping-azure-to-azure.md new file mode 100644 index 0000000000000..e1666ac606c16 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/site-recovery-network-mapping-azure-to-azure.md @@ -0,0 +1,97 @@ +--- +title: Network mapping between two Azure regions in Azure Site Recovery | Microsoft Docs +description: Azure Site Recovery coordinates the replication, failover and recovery of virtual machines and physical servers. Learn about failover to Azure or a secondary datacenter. +services: site-recovery +documentationcenter: '' +author: prateek9us +manager: gauravd +editor: '' + +ms.assetid: 44813a48-c680-4581-a92e-cecc57cc3b1e +ms.service: site-recovery +ms.devlang: na +ms.topic: article +ms.tgt_pltfrm: na +ms.workload: storage-backup-recovery +ms.date: 11/21/2017 +ms.author: pratshar + +--- +# Network mapping between two Azure regions + + +This article describes how to map Azure virtual networks of two Azure regions with each other. Network mapping ensures that when replicated virtual machine is created in the target Azure region, it is created on the virtual network that is mapped to virtual network of the source virtual machine. + +## Prerequisites +Before you map networks make sure, you have created [Azure virtual networks](../../virtual-network/virtual-networks-overview.md) in both source and target Azure regions. + +## Map networks + +To map an Azure virtual network in one Azure region to another virtual network in another region, go to Site Recovery Infrastructure -> Network Mapping (For Azure Virtual Machines) and create a network mapping. + +![Network Mapping](./media/site-recovery-network-mapping-azure-to-azure/network-mapping1.png) + + +In the example below my virtual machine is running in East Asia region and is being replicated to Southeast Asia. + +Select the source and target network and then click OK to create a network mapping from East Asia to Southeast Asia. + +![Network Mapping](./media/site-recovery-network-mapping-azure-to-azure/network-mapping2.png) + + +Do the same thing to create a network mapping from Southeast Asia to East Asia. +![Network Mapping](./media/site-recovery-network-mapping-azure-to-azure/network-mapping3.png) + + +## Mapping network when enabling replication + +If network mapping is not done when you are replicating a virtual machine for the first time from one Azure region to another, then you can choose target network as part of the same process. Site Recovery creates network mappings from source region to target region and from target region to source region based on this selection. + +![Network Mapping](./media/site-recovery-network-mapping-azure-to-azure/network-mapping4.png) + +By default, Site Recovery creates a network in the target region that is identical to the source network and by adding '-asr' as a suffix to the name of the source network. 
You can choose an already created network by clicking Customize. + +![Network Mapping](./media/site-recovery-network-mapping-azure-to-azure/network-mapping5.png) + + +If the network mapping is already done, you can't change the target virtual network while enabling replication. To change it, modify existing network mapping. + +![Network Mapping](./media/site-recovery-network-mapping-azure-to-azure/network-mapping6.png) + +![Network Mapping](./media/site-recovery-network-mapping-azure-to-azure/modify-network-mapping.png) + +> [!IMPORTANT] +> If you modify a network mapping from region-1 to region-2, make sure you modify the network mapping from region-2 to region-1 as well. +> +> + + +## Subnet selection +Subnet of the target virtual machine is selected based on the name of the subnet of the source virtual machine. If there is a subnet of the same name as that of the source virtual machine available in the target network, then that is chosen for the target virtual machine. If there is no subnet with the same name in the target network, then alphabetically first subnet is chosen as the target subnet. You can modify this subnet by going to Compute and Network settings of the virtual machine. + +![Modify Subnet](./media/site-recovery-network-mapping-azure-to-azure/modify-subnet.png) + + +## IP address + +IP address for each of the network interface of the target virtual machine is chosen as follows: + +### DHCP +If the network interface of the source virtual machine is using DHCP, then the network interface of the target virtual machine is also set as DHCP. + +### Static IP +If the network interface of the source virtual machine is using Static IP, then the network interface of the target virtual machine is also set to use Static IP. Static IP is chosen as follows: + +#### Same address space + +If the source subnet and the target subnet have the same address space, then the target IP is set same as the IP of the network interface of the source virtual machine. If same IP is not available, then some other available IP is set as the target IP. + +#### Different address space + +If the source subnet and the target subnet have different address space, then the target IP is set as any available IP in the target subnet. + +You can modify the target IP on each network interface by going to Compute and Network settings of the virtual machine. + +## Next steps + +- Learn about [networking guidance for replicating Azure VMs](site-recovery-azure-to-azure-networking-guidance.md). diff --git a/articles/site-recovery/azure-to-azure/site-recovery-replicate-azure-to-azure.md b/articles/site-recovery/azure-to-azure/site-recovery-replicate-azure-to-azure.md new file mode 100644 index 0000000000000..ac15df3e841cc --- /dev/null +++ b/articles/site-recovery/azure-to-azure/site-recovery-replicate-azure-to-azure.md @@ -0,0 +1,89 @@ +--- +title: Replicate Azure VMs to a secondary region with Azure Site Recovery | Microsoft Docs +description: This article describes how to replicate Azure VMs running in one Azure region to another region, using the Azure Site Recovery service. 
+## Next steps
+
+- Learn about [networking guidance for replicating Azure VMs](site-recovery-azure-to-azure-networking-guidance.md).
diff --git a/articles/site-recovery/azure-to-azure/site-recovery-replicate-azure-to-azure.md b/articles/site-recovery/azure-to-azure/site-recovery-replicate-azure-to-azure.md
new file mode 100644
index 0000000000000..ac15df3e841cc
--- /dev/null
+++ b/articles/site-recovery/azure-to-azure/site-recovery-replicate-azure-to-azure.md
@@ -0,0 +1,89 @@
+---
+title: Replicate Azure VMs to a secondary region with Azure Site Recovery | Microsoft Docs
+description: This article describes how to replicate Azure VMs running in one Azure region to another region, using the Azure Site Recovery service.
+services: site-recovery
+documentationcenter: ''
+author: asgang
+manager: rochakm
+editor: ''
+
+ms.assetid:
+ms.service: site-recovery
+ms.devlang: na
+ms.topic: article
+ms.tgt_pltfrm: na
+ms.workload: storage-backup-recovery
+ms.date: 11/29/2017
+ms.author: asgang
+
+---
+
+# Replicate Azure virtual machines to another Azure region
+
+This article describes how to replicate Azure virtual machines (VMs) in one Azure region to a secondary Azure region, using the Azure Site Recovery service.
+
+>[!NOTE]
+>
+> Azure VM replication with Site Recovery is currently in preview.
+
+## Prerequisites
+
+* You should have a Recovery Services vault in place. We recommend that you create the vault in the target region to which you want your VMs to replicate.
+* If you are using Network Security Group (NSG) rules or a firewall proxy to control access to outbound internet connectivity on the Azure VMs, make sure that you allow the required URLs or IPs. [Learn more](./site-recovery-azure-to-azure-networking-guidance.md).
+* If you have an ExpressRoute or a VPN connection between on-premises and the source location in Azure, [learn how to set them up](site-recovery-azure-to-azure-networking-guidance.md#guidelines-for-existing-azure-to-on-premises-expressroutevpn-configuration).
+* Your Azure user account needs [specific permissions](../site-recovery-role-based-linked-access-control.md#permissions-required-to-enable-replication-for-new-virtual-machines) to enable replication of an Azure VM.
+* Your Azure subscription should be enabled to create VMs in the target location that you want to use as a disaster recovery region. Contact support to enable the required quota.
+
+## Enable replication
+
+In this procedure, East Asia is used as the source location, and Southeast Asia as the target.
+
+1. Click **+Replicate** in the vault to enable replication for the virtual machines.
+2. Verify that **Source:** is set to **Azure**.
+3. Set **Source location** to East Asia.
+4. In **Deployment model**, select **Classic** or **Resource Manager**.
+5. In **Resource Group**, select the group to which your source VMs belong. All the VMs under the selected resource group are listed.
+
+   ![Enable replication](./media/site-recovery-replicate-azure-to-azure/enabledrwizard1.png)
+
+6. In **Virtual Machines > Select virtual machines**, click and select each VM that you want to replicate. You can only select machines for which replication can be enabled. Then click **OK**.
+
+   ![Enable replication](./media/site-recovery-replicate-azure-to-azure/virtualmachine_selection.png)
+
+7. Under **Settings** > **Target Location**, specify where the source VM data replicates to. Site Recovery provides a list of suitable target regions, depending on the region of the selected VMs.
+8. Site Recovery sets default target settings. These can be modified as required.
+
+   - **Target resource group**. By default, Site Recovery creates a new resource group in the target region with the "asr" suffix. If a resource group with that name already exists, it's reused.
+   - **Target virtual network**. By default, Site Recovery creates a new virtual network in the target region, with the "asr" suffix. This network is mapped to your source network. [Learn more](site-recovery-network-mapping-azure-to-azure.md) about network mapping.
+   - **Target storage accounts**. By default, Site Recovery creates a new target storage account that matches the source VM storage configuration. If a storage account with that name already exists, it's reused.
+   - **Cache storage accounts**. Azure Site Recovery creates an extra cache storage account in the source region. All changes on the source VMs are tracked and sent to the cache storage account before being replicated to the target location.
+   - **Availability set**. By default, Site Recovery creates a new availability set in the target region, with an "asr" suffix. If an availability set with that name already exists, it's reused.
+   - **Replication policy**. Site Recovery defines the settings for recovery point retention history and app-consistent snapshot frequency. By default, Site Recovery creates a new replication policy with default settings of 24 hours for recovery point retention, and 60 minutes for app-consistent snapshot frequency.
+
+   ![Enable replication](./media/site-recovery-replicate-azure-to-azure/enabledrwizard3.PNG)
+9. Click **Enable Replication** to start protecting VMs.
+
+## Customize target resources
+
+1. Modify any of these target defaults:
+
+   - **Target resource group**. Select any resource group from the list of all the resource groups in the target location, within the subscription.
+   - **Target virtual network**. Select from the list of all the virtual networks in the target location.
+   - **Availability set**. You can only add availability set settings to VMs that are located in an availability set in the source region.
+   - **Target storage accounts**. Add any account that's available.
+
+   ![Enable replication](./media/site-recovery-replicate-azure-to-azure/customize.PNG)
+
+2. Click **Create target resource** > **Enable Replication**. During initial replication, VM status might take some time to refresh. Click **Refresh** to get the latest status.
+
+   ![Enable replication](./media/site-recovery-replicate-azure-to-azure/replicateditems.PNG)
+
+3. After VMs are protected, check VM health in **Replicated items**.
+
+## Next steps
+[Learn](../azure-to-azure-tutorial-dr-drill.md) how to run a test failover.
diff --git a/articles/site-recovery/azure-to-azure/site-recovery-support-matrix-azure-to-azure.md b/articles/site-recovery/azure-to-azure/site-recovery-support-matrix-azure-to-azure.md
new file mode 100644
index 0000000000000..c390a25cd71b7
--- /dev/null
+++ b/articles/site-recovery/azure-to-azure/site-recovery-support-matrix-azure-to-azure.md
@@ -0,0 +1,192 @@
+---
+title: Azure Site Recovery support matrix for replicating from Azure to Azure | Microsoft Docs
+description: Summarizes the supported operating systems and configurations for Azure Site Recovery replication of Azure virtual machines (VMs) from one region to another for disaster recovery (DR) needs.
+services: site-recovery
+documentationcenter: ''
+author: sujayt
+manager: rochakm
+editor: ''
+
+ms.assetid:
+ms.service: site-recovery
+ms.devlang: na
+ms.topic: article
+ms.tgt_pltfrm: na
+ms.workload: storage-backup-recovery
+ms.date: 08/31/2017
+ms.author: sujayt
+
+---
+# Azure Site Recovery support matrix for replicating from Azure to Azure
+
+>[!NOTE]
+>
+> Site Recovery replication for Azure virtual machines is currently in preview.
+
+This article summarizes the supported configurations and components for Azure Site Recovery when replicating and recovering Azure virtual machines from one region to another.
+ +## User interface options + +**User interface** | **Supported / Not supported** +--- | --- +**Azure portal** | Supported +**Classic portal** | Not supported +**PowerShell** | Not currently supported +**REST API** | Not currently supported +**CLI** | Not currently supported + + +## Resource move support + +**Resource move type** | **Supported / Not supported** | **Remarks** +--- | --- | --- +**Move vault across resource groups** | Not supported |You cannot move the Recovery services vault across resource groups. +**Move Compute, Storage and Network across resource groups** | Not supported |If you move a virtual machine (or its associated components such as storage and network) after enabling replication, you need to disable replication and enable replication for the virtual machine again. + + + +## Support for deployment models + +**Deployment model** | **Supported / Not supported** | **Remarks** +--- | --- | --- +**Classic** | Supported | You can only replicate a classic virtual machine and recover it as a classic virtual machine. You cannot recover it as a Resource Manager virtual machine. If you deploy a classic VM without a virtual network and directly to an Azure region, it is not supported. +**Resource Manager** | Supported | + +>[!NOTE] +> +> - Replicating Azure virtual machines from one subscription to another for disaster recovery scenarios is not supported. +> - Migrating Azure virtual machines across subscriptions is not supported. +> - Migrating Azure virtual machines within the same region is not supported. +> - Migrating Azure virtual machines from Classic deployment model to Resource manager deployment model is not supported. + +## Support for replicated machine OS versions + +The below support is applicable for any workload running on the mentioned OS. + +#### Windows + +- Windows Server 2016 (Server Core, Server with Desktop Experience)* +- Windows Server 2012 R2 +- Windows Server 2012 +- Windows Server 2008 R2 with at least SP1 + +>[!NOTE] +> +> \* Windows Server 2016 Nano Server is not supported. + +#### Linux + +- Red Hat Enterprise Linux 6.7, 6.8, 6.9, 7.0, 7.1, 7.2, 7.3 +- CentOS 6.5, 6.6, 6.7, 6.8, 6.9, 7.0, 7.1, 7.2, 7.3 +- Ubuntu 14.04 LTS Server [ (supported kernel versions)](#supported-ubuntu-kernel-versions-for-azure-virtual-machines) +- Ubuntu 16.04 LTS Server [ (supported kernel versions)](#supported-ubuntu-kernel-versions-for-azure-virtual-machines) +- Debian 7 +- Debian 8 +- Oracle Enterprise Linux 6.4, 6.5 running either the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3 (UEK3) +- SUSE Linux Enterprise Server 11 SP3 +- SUSE Linux Enterprise Server 11 SP4 + +(Upgrade of replicating machines from SLES 11 SP3 to SLES 11 SP4 is not supported. If a replicated machine has been upgraded from SLES 11SP3 to SLES 11 SP4, you'll need to disable replication and protect the machine again post the upgrade.) + +>[!NOTE] +> +> Ubuntu servers using password based authentication and login, and using the cloud-init package to configure cloud virtual machines, may have password based login disabled upon failover (depending on the cloudinit configuration.) Password based login can be re-enabled on the virtual machine by resetting the password from the settings menu (under the SUPPORT + TROUBLESHOOTING section) of the failed over virtual machine on the Azure portal. 
+ +### Supported Ubuntu kernel versions for Azure virtual machines + +**Release** | **Mobility service version** | **Kernel version** | +--- | --- | --- | +14.04 LTS | 9.9 | 3.13.0-24-generic to 3.13.0-117-generic,
3.16.0-25-generic to 3.16.0-77-generic,
3.19.0-18-generic to 3.19.0-80-generic,
4.2.0-18-generic to 4.2.0-42-generic,
4.4.0-21-generic to 4.4.0-75-generic | +14.04 LTS | 9.10 | 3.13.0-24-generic to 3.13.0-121-generic,
3.16.0-25-generic to 3.16.0-77-generic,
3.19.0-18-generic to 3.19.0-80-generic,
4.2.0-18-generic to 4.2.0-42-generic,
4.4.0-21-generic to 4.4.0-81-generic | +14.04 LTS | 9.11 | 3.13.0-24-generic to 3.13.0-125-generic,
3.16.0-25-generic to 3.16.0-77-generic,
3.19.0-18-generic to 3.19.0-80-generic,
4.2.0-18-generic to 4.2.0-42-generic,
4.4.0-21-generic to 4.4.0-83-generic | +16.04 LTS | 9.10 | 4.4.0-21-generic to 4.4.0-81-generic,
4.8.0-34-generic to 4.8.0-56-generic,
4.10.0-14-generic to 4.10.0-24-generic | +16.04 LTS | 9.11 | 4.4.0-21-generic to 4.4.0-83-generic,
4.8.0-34-generic to 4.8.0-58-generic,
4.10.0-14-generic to 4.10.0-27-generic |
+
+## Supported file systems and guest storage configurations on Azure virtual machines running Linux OS
+
+* File systems: ext3, ext4, ReiserFS (Suse Linux Enterprise Server only), XFS
+* Volume manager: LVM2
+* Multipath software: Device Mapper
+
+## Region support
+
+You can replicate and recover VMs between any two regions within the same geographic cluster.
+
+**Geographic cluster** | **Azure regions**
+-- | --
+America | Canada East, Canada Central, South Central US, West Central US, East US, East US 2, West US, West US 2, Central US, North Central US
+Europe | UK West, UK South, North Europe, West Europe
+Asia | South India, Central India, Southeast Asia, East Asia, Japan East, Japan West, Korea Central, Korea South
+Australia | Australia East, Australia Southeast
+
+>[!NOTE]
+>
+> For the Brazil South region, you can only replicate and fail over to one of the South Central US, West Central US, East US, East US 2, West US, West US 2, and North Central US regions, and fail back.
+
+
+## Support for Compute configuration
+
+**Configuration** | **Supported/Not supported** | **Remarks**
+--- | --- | ---
+Size | Any Azure VM size with at least 2 CPU cores and 1 GB of RAM | Refer to [Azure virtual machine sizes](../../virtual-machines/windows/sizes.md)
+Availability sets | Supported | If you use the default option during the 'Enable replication' step in the portal, the availability set is auto-created based on the source region configuration. You can change the target availability set in 'Replicated item > Settings > Compute and Network > Availability set' at any time.
+Hybrid Use Benefit (HUB) VMs | Supported | If the source VM has a HUB license enabled, the Test failover or Failover VM also uses the HUB license.
+Virtual machine scale sets | Not supported |
+Azure Gallery images - Microsoft published | Supported | Supported as long as the VM runs on an operating system supported by Site Recovery.
+Azure Gallery images - Third party published | Supported | Supported as long as the VM runs on an operating system supported by Site Recovery.
+Custom images - Third party published | Supported | Supported as long as the VM runs on an operating system supported by Site Recovery.
+VMs migrated using Site Recovery | Supported | If it is a VMware/physical machine migrated to Azure using Site Recovery, you need to uninstall the older version of the Mobility service and restart the machine before replicating it to another Azure region.
+
+## Support for Storage configuration
+
+**Configuration** | **Supported/Not supported** | **Remarks**
+--- | --- | ---
+Maximum OS disk size | 2048 GB | Refer to [Disks used by VMs.](../../virtual-machines/windows/about-disks-and-vhds.md#disks-used-by-vms)
+Maximum data disk size | 4095 GB | Refer to [Disks used by VMs.](../../virtual-machines/windows/about-disks-and-vhds.md#disks-used-by-vms)
+Number of data disks | Up to 64, as supported by a specific Azure VM size | Refer to [Azure virtual machine sizes](../../virtual-machines/windows/sizes.md)
+Temporary disk | Always excluded from replication | The temporary disk is always excluded from replication. You should not put any persistent data on the temporary disk, as per Azure guidance. Refer to [Temporary disk on Azure VMs](../../virtual-machines/windows/about-disks-and-vhds.md#temporary-disk) for more details.
+Data change rate on the disk | Maximum of 6 MBps per disk | If the average data change rate on the disk is continuously beyond 6 MBps, replication will not catch up.<br/>
However, if it is an occasional data burst and the data change rate is greater than 6 MBps for some time and comes down, replication will catch up. In this case, you might see slightly delayed recovery points. +Disks on standard storage accounts | Supported | +Disks on premium storage accounts | Supported | If a VM has disks spread across premium and standard storage accounts, you can select a different target storage account for each disk to ensure you have the same storage configuration in target region +Standard Managed disks | Not supported | +Premium Managed disks | Not supported | +Storage spaces | Supported | +Encryption at rest (SSE) | Supported | For cache and target storage accounts, you can select an SSE enabled storage account. +Azure Disk Encryption (ADE) | Not supported | +Hot add/remove disk | Not supported | If you add or remove data disk on the VM, you need to disable replication and enable replication again for the VM. +Exclude disk | Not supported| Temporary disk is excluded by default. +LRS | Supported | +GRS | Supported | +RA-GRS | Supported | +ZRS | Not supported | +Cool and Hot Storage | Not supported | Virtual machine disks are not supported on cool and hot storage +Virtual Network Service Endpoints (Azure Storage firewalls and Virtual networks) | No | Allowing access to specific Azure virtual networks on cache storage accounts used to store replicated data is not supported. + +>[!IMPORTANT] +> Ensure that you observe the VM disk scalability and performance targets for [Linux](../../virtual-machines/linux/disk-scalability-targets.md) or [Windows](../../virtual-machines/windows/disk-scalability-targets.md) virtual machines to avoid any performance issues. If you follow the default settings, Site Recovery will create the required disks and storage accounts based on the source configuration. If you customize and select your own settings, ensure that you follow the disk scalability and performance targets for your source VMs. + +## Support for Network configuration +**Configuration** | **Supported/Not supported** | **Remarks** +--- | --- | --- +Network interface (NIC) | Upto maximum number of NICs supported by a specific Azure VM size | NICs are created when the VM is created as part of Test failover or Failover operation. The number of NICs on the failover VM depends on the number of NICs the source VM has at the time of enabling replication. If you add/remove NIC after enabling replication, it does not impact NIC count on the failover VM. +Internet Load Balancer | Supported | You need to associate the pre-configured load balancer using an azure automation script in a recovery plan. +Internal Load balancer | Supported | You need to associate the pre-configured load balancer using an azure automation script in a recovery plan. +Public IP| Supported | You need to associate an already existing public IP to the NIC or create one and associate to the NIC using an azure automation script in a recovery plan. +NSG on NIC (Resource Manager)| Supported | You need to associate the NSG to the NIC using an azure automation script in a recovery plan. +NSG on subnet (Resource Manager and Classic)| Supported | You need to associate the NSG to the NIC using an azure automation script in a recovery plan. +NSG on VM (Classic)| Supported | You need to associate the NSG to the NIC using an azure automation script in a recovery plan. 
+Reserved IP (Static IP) / Retain source IP | Supported | If the NIC on the source VM has static IP configuration and the target subnet has the same IP available, it is assigned to the failover VM. If the target subnet does not have the same IP available, one of the available IPs in the subnet is reserved for this VM. You can specify a fixed IP of your choice in 'Replicated item > Settings > Compute and Network > Network interfaces'. You can select the NIC and specify the subnet and IP of your choice. +Dynamic IP| Supported | If the NIC on the source VM has dynamic IP configuration, the NIC on the failover VM is also Dynamic by default. You can specify a fixed IP of your choice in 'Replicated item > Settings > Compute and Network > Network interfaces'. You can select the NIC and specify the subnet and IP of your choice. +Traffic Manager integration | Supported | You can pre-configure your traffic manager in such a way that the traffic is routed to the endpoint in source region on a regular basis and to the endpoint in target region in case of failover. +Azure managed DNS | Supported | +Custom DNS | Supported | +Unauthenticated Proxy | Supported | Refer to [networking guidance document.](site-recovery-azure-to-azure-networking-guidance.md) +Authenticated Proxy | Not supported | If the VM is using an authenticated proxy for outbound connectivity, it cannot be replicated using Azure Site Recovery. +Site to Site VPN with on-premises (with or without ExpressRoute)| Supported | Ensure that the UDRs and NSGs are configured in such a way that the Site recovery traffic is not routed to on-premises. Refer to [networking guidance document.](site-recovery-azure-to-azure-networking-guidance.md) +VNET to VNET connection | Supported | Refer to [networking guidance document.](site-recovery-azure-to-azure-networking-guidance.md) + + +## Next steps +- Learn more about [networking guidance for replicating Azure VMs](site-recovery-azure-to-azure-networking-guidance.md) +- Start protecting your workloads by [replicating Azure VMs](azure-to-azure-quickstart.md) diff --git a/articles/site-recovery/azure-to-azure/toc.yml b/articles/site-recovery/azure-to-azure/toc.yml new file mode 100644 index 0000000000000..83d1bbb8a6601 --- /dev/null +++ b/articles/site-recovery/azure-to-azure/toc.yml @@ -0,0 +1,113 @@ +- name: Site Recovery Documentation + href: index.yml +- name: Overview + items: + - name: What is Site Recovery? 
+ href: ../site-recovery-overview.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json +- name: Quickstarts + expanded: true + items: + - name: Replicate an Azure VM to a secondary region + href: azure-to-azure-quickstart.md +- name: Tutorials + items: + - name: Set up disaster recovery + href: azure-to-azure-tutorial-enable-replication.md + - name: Run a disaster recovery drill + href: azure-to-azure-tutorial-dr-drill.md + - name: Run a failover and failback + href: azure-to-azure-tutorial-failover-failback.md +- name: Concepts + items: + - name: Azure to Azure architecture + href: concepts-azure-to-azure-architecture.md + - name: Azure to Azure support matrix + href: site-recovery-support-matrix-azure-to-azure.md + - name: Role-based access permissions + href: ../site-recovery-role-based-linked-access-control.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: FAQ + href: ../site-recovery-faq.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json +- name: How-to guides + items: + - name: Networking + items: + - name: Set up networking for Azure VM disaster recovery (preview) + href: site-recovery-azure-to-azure-networking-guidance.md + - name: Set up network mapping for Azure VM disaster recovery (preview) + href: site-recovery-network-mapping-azure-to-azure.md + - name: Configure + items: + - name: Enable replication between Azure regions + href: site-recovery-replicate-azure-to-azure.md + - name: Failover and failback + items: + - name: Set up recovery plans + href: ../site-recovery-create-recovery-plans.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: Add Azure runbooks to recovery plans + href: ../site-recovery-runbook-automation.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: Reprotect machines after failover + href: site-recovery-how-to-reprotect-azure-to-azure.md + - name: Migrate + items: + - name: Migrate Azure VMs to another Azure region + href: ../site-recovery-migrate-azure-to-azure.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: Replicate machines to another Azure region after migration + href: ../site-recovery-azure-to-azure-after-migration.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: Workloads + items: + - name: Active Directory and DNS + href: ../site-recovery-active-directory.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: Replicate SQL Server + href: ../site-recovery-sql.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: SharePoint + href: ../site-recovery-sharepoint.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: Dynamics AX + href: ../site-recovery-dynamicsax.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: RDS + href: ../site-recovery-workload.md#protect-rds?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: Exchange + href: ../site-recovery-workload.md#protect-exchange?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: SAP + href: ../site-recovery-sap.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: File Server + href: ../file-server-disaster-recovery.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: IIS based web applications + href: ../site-recovery-iis.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: Citrix XenApp and XenDesktop + href: ../site-recovery-citrix-xenapp-and-xendesktop.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: Other workloads + 
href: ../site-recovery-workload.md#workload-summary?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: Manage + items: + - name: Remove servers and disable protection + href: ../site-recovery-manage-registration-and-protection.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.jsos + - name: Delete a vault + href: ../delete-vault.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: Troubleshoot + items: + - name: Azure to Azure replication issues + href: site-recovery-azure-to-azure-troubleshoot-errors.md +- name: Reference + items: + - name: Azure PowerShell + href: /powershell/module/azurerm.siterecovery + - name: Azure PowerShell classic + href: /powershell/module/azure/?view=azuresmps-3.7.0 + - name: REST + href: https://msdn.microsoft.com/library/mt750497 +- name: Resources + items: + - name: Azure Roadmap + href: https://azure.microsoft.com/roadmap/ + - name: Blog + href: http://azure.microsoft.com/blog/tag/azure-site-recovery/ + - name: Forum + href: https://social.msdn.microsoft.com/Forums/azure/en-US/home?forum=hypervrecovmgr + - name: Learning path + href: https://azure.microsoft.com/documentation/learning-paths/site-recovery/ + - name: Pricing + href: https://azure.microsoft.com/pricing/details/site-recovery/ + - name: Pricing calculator + href: https://azure.microsoft.com/pricing/calculator/ + - name: Service updates + href: https://azure.microsoft.com/updates/?product=site-recovery diff --git a/articles/site-recovery/migrate/index.yml b/articles/site-recovery/migrate/index.yml new file mode 100644 index 0000000000000..c07673fb9f1d9 --- /dev/null +++ b/articles/site-recovery/migrate/index.yml @@ -0,0 +1,38 @@ +### YamlMime:YamlDocument +documentType: LandingData +title: Azure Site Recovery documentation +metadata: + title: Azure Site Recovery documentation - tutorials, API reference | Microsoft Docs + meta.description: Learn how to migrate to Azure with Azure Site Recovery + services: site-recovery + author: rayne-wiselman + manager: carmonm + ms.service: site-recovery + ms.tgt_pltfrm: na + ms.devlang: na + ms.topic: landing-page + ms.date: 11/26/2017 + ms.author: raynew +abstract: + description: In addition to orchestrating disaster recovery of Azure VMs and on-premises machines, Azure Site Recovery orchestrates and manages migration of on-premises machines to Azure, and migration of Azure VMs between Azure regions. Learn how to set up migration with our tutorials. +sections: +- title: Step-by-step tutorials + items: + - type: paragraph + text: 'Learn how to migrate on-premises VMs.' + - type: list + style: ordered + items: + - html: Prepare Azure + - html: Migrate on-premises machines to Azure + - html: Migrate AWS instances to Azure +- title: Reference + items: + - type: list + style: cards + className: cardsD + items: + - title: Command-Line + html:

Azure PowerShell
+  - title: REST
+    html: REST API Reference
diff --git a/articles/site-recovery/migrate/media/index/article.svg b/articles/site-recovery/migrate/media/index/article.svg new file mode 100644 index 0000000000000..c1c4fc48502a2 --- /dev/null +++ b/articles/site-recovery/migrate/media/index/article.svg @@ -0,0 +1,307 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/articles/site-recovery/migrate/media/index/azuredefaultblack.svg b/articles/site-recovery/migrate/media/index/azuredefaultblack.svg new file mode 100644 index 0000000000000..c7575a9aa6e93 --- /dev/null +++ b/articles/site-recovery/migrate/media/index/azuredefaultblack.svg @@ -0,0 +1,31 @@ + + + + + + + + + + + + + + + + + + + + + + + diff --git a/articles/site-recovery/migrate/media/index/deploy.svg b/articles/site-recovery/migrate/media/index/deploy.svg new file mode 100644 index 0000000000000..92d9010ae30d9 --- /dev/null +++ b/articles/site-recovery/migrate/media/index/deploy.svg @@ -0,0 +1,46 @@ + + + + + + + + + + + + + + + + + + + + diff --git a/articles/site-recovery/migrate/media/index/gear.svg b/articles/site-recovery/migrate/media/index/gear.svg new file mode 100644 index 0000000000000..419fbce9c9c37 --- /dev/null +++ b/articles/site-recovery/migrate/media/index/gear.svg @@ -0,0 +1,63 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/articles/site-recovery/migrate/media/index/get-started.svg b/articles/site-recovery/migrate/media/index/get-started.svg new file mode 100644 index 0000000000000..03646c34c1dfe --- /dev/null +++ b/articles/site-recovery/migrate/media/index/get-started.svg @@ -0,0 +1,53 @@ + + + + + + + + + + + + + + + + + + + + + diff --git a/articles/site-recovery/migrate/media/index/guide.svg b/articles/site-recovery/migrate/media/index/guide.svg new file mode 100644 index 0000000000000..ba1f476a0193a --- /dev/null +++ b/articles/site-recovery/migrate/media/index/guide.svg @@ -0,0 +1,31 @@ + + + + + + + + + + + + + diff --git a/articles/site-recovery/migrate/media/index/placeholder.svg b/articles/site-recovery/migrate/media/index/placeholder.svg new file mode 100644 index 0000000000000..a482e7e3d44e1 --- /dev/null +++ b/articles/site-recovery/migrate/media/index/placeholder.svg @@ -0,0 +1,9 @@ + + + + + + \ No newline at end of file diff --git a/articles/site-recovery/migrate/media/index/portal.svg b/articles/site-recovery/migrate/media/index/portal.svg new file mode 100644 index 0000000000000..0a34915a6746a --- /dev/null +++ b/articles/site-recovery/migrate/media/index/portal.svg @@ -0,0 +1 @@ +i_portal \ No newline at end of file diff --git a/articles/site-recovery/migrate/media/index/site-recovery.svg b/articles/site-recovery/migrate/media/index/site-recovery.svg new file mode 100644 index 0000000000000..532a369ccbd31 --- /dev/null +++ b/articles/site-recovery/migrate/media/index/site-recovery.svg @@ -0,0 +1,12 @@ + + + + + + + diff --git a/articles/site-recovery/migrate/media/index/siterecovery.svg b/articles/site-recovery/migrate/media/index/siterecovery.svg new file mode 100644 index 0000000000000..532a369ccbd31 --- /dev/null +++ b/articles/site-recovery/migrate/media/index/siterecovery.svg @@ -0,0 +1,12 @@ + + + + + + + diff --git a/articles/site-recovery/migrate/media/index/tutorial.svg b/articles/site-recovery/migrate/media/index/tutorial.svg new file mode 100644 index 0000000000000..fb824bf6365fc --- /dev/null +++ b/articles/site-recovery/migrate/media/index/tutorial.svg @@ -0,0 +1,54 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff 
--git a/articles/site-recovery/migrate/media/index/video-library.svg b/articles/site-recovery/migrate/media/index/video-library.svg new file mode 100644 index 0000000000000..45f0d2e45195e --- /dev/null +++ b/articles/site-recovery/migrate/media/index/video-library.svg @@ -0,0 +1,67 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/articles/site-recovery/migrate/media/tutorial-migrate-aws-to-azure/complete-migration.png b/articles/site-recovery/migrate/media/tutorial-migrate-aws-to-azure/complete-migration.png new file mode 100644 index 0000000000000..7863093e8d774 Binary files /dev/null and b/articles/site-recovery/migrate/media/tutorial-migrate-aws-to-azure/complete-migration.png differ diff --git a/articles/site-recovery/migrate/media/tutorial-migrate-aws-to-azure/configuration-server.png b/articles/site-recovery/migrate/media/tutorial-migrate-aws-to-azure/configuration-server.png new file mode 100644 index 0000000000000..8ceb52f9bcc9a Binary files /dev/null and b/articles/site-recovery/migrate/media/tutorial-migrate-aws-to-azure/configuration-server.png differ diff --git a/articles/site-recovery/migrate/media/tutorial-migrate-on-premises-to-azure/complete-migration.png b/articles/site-recovery/migrate/media/tutorial-migrate-on-premises-to-azure/complete-migration.png new file mode 100644 index 0000000000000..7863093e8d774 Binary files /dev/null and b/articles/site-recovery/migrate/media/tutorial-migrate-on-premises-to-azure/complete-migration.png differ diff --git a/articles/site-recovery/migrate/site-recovery-azure-to-azure-after-migration.md b/articles/site-recovery/migrate/site-recovery-azure-to-azure-after-migration.md new file mode 100644 index 0000000000000..7294a99fb02dd --- /dev/null +++ b/articles/site-recovery/migrate/site-recovery-azure-to-azure-after-migration.md @@ -0,0 +1,103 @@ +--- +title: Prepare machines to set up disaster recovery between Azure regions after migration to Azure by using Site Recovery | Microsoft Docs +description: This article describes how to prepare machines to set up disaster recovery between Azure regions after migration to Azure by using Azure Site Recovery. +services: site-recovery +documentationcenter: '' +author: ponatara +manager: abhemraj +editor: '' + +ms.assetid: 9126f5e8-e9ed-4c31-b6b4-bf969c12c184 +ms.service: site-recovery +ms.workload: storage-backup-recovery +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 11/27/2017 +ms.author: ponatara + +--- +# Replicate Azure VMs to another region after migration to Azure by using Azure Site Recovery + +>[!NOTE] +> Azure Site Recovery replication for Azure virtual machines (VMs) is currently in preview. + +## Overview + +This article helps you prepare Azure virtual machines for replication between two Azure regions after these machines have been migrated from an on-premises environment to Azure by using Azure Site Recovery. + +## Disaster recovery and compliance +Today, more and more enterprises are moving their workloads to Azure. With enterprises moving mission-critical on-premises production workloads to Azure, setting up disaster recovery for these workloads is mandatory for compliance and to safeguard against any disruptions in an Azure region. + +## Steps for preparing migrated machines for replication +To prepare migrated machines for setting up replication to another Azure region: + +1. Complete migration. +2. Install the Azure agent if needed. +3. Remove the Mobility service. +4. Restart the VM. 
+
+These steps are described in more detail in the following sections.
+
+### Step 1: Migrate workloads running on Hyper-V VMs, VMware VMs, and physical servers to run on Azure VMs
+
+To set up replication and migrate your on-premises Hyper-V, VMware, and physical workloads to Azure, follow the steps in the [Migrate Azure IaaS virtual machines between Azure regions with Azure Site Recovery](site-recovery-migrate-azure-to-azure.md) article.
+
+After migration, you don't need to commit or delete a failover. Instead, select the **Complete Migration** option for each machine you want to migrate:
+1. In **Replicated Items**, right-click the VM, and click **Complete Migration**. Click **OK** to complete the step. You can track progress in the VM properties by monitoring the Complete Migration job in **Site Recovery jobs**.
+2. The **Complete Migration** action completes the migration process, removes replication for the machine, and stops Site Recovery billing for the machine.
+
+### Step 2: Install the Azure VM agent on the virtual machine
+The Azure [VM agent](../../virtual-machines/windows/classic/agents-and-extensions.md#azure-vm-agents-for-windows-and-linux) must be installed on the virtual machine for the Site Recovery extension to work and to help protect the VM.
+
+>[!IMPORTANT]
+>Beginning with version 9.7.0.0, on Windows virtual machines, the Mobility service installer also installs the latest available Azure VM agent. On migration, the virtual machine meets the agent installation prerequisite for using any VM extension, including the Site Recovery extension. The Azure VM agent needs to be manually installed only if the Mobility service installed on the migrated machine is version 9.6 or earlier.
+
+The following table provides additional information about installing the VM agent and validating that it was installed:
+
+| **Operation** | **Windows** | **Linux** |
+| --- | --- | --- |
+| Installing the VM agent |Download and install the [agent MSI](http://go.microsoft.com/fwlink/?LinkID=394789&clcid=0x409). You need administrator privileges to complete the installation. |Install the latest [Linux agent](../../virtual-machines/linux/agent-user-guide.md). You need administrator privileges to complete the installation. We recommend installing the agent from your distribution repository. We *do not recommend* installing the Linux VM agent directly from GitHub. |
+| Validating the VM agent installation |1. Browse to the C:\WindowsAzure\Packages folder in the Azure VM. You should see the WaAppAgent.exe file.<br/>2. Right-click the file, go to **Properties**, and then select the **Details** tab. The **Product Version** field should be 2.6.1198.718 or higher. |N/A |
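For example, on a Windows VM you can read the agent version directly from the file described in the table. This is a small sketch that assumes the default installation path:

```powershell
# Check the Azure VM agent version on a Windows VM (default install path assumed).
(Get-Item "C:\WindowsAzure\Packages\WaAppAgent.exe").VersionInfo.ProductVersion
```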
+### Step 3: Remove the Mobility service from the migrated virtual machine
+
+If you have migrated your on-premises VMware machines or physical servers running Windows or Linux, you need to manually uninstall the Mobility service from the migrated virtual machine.
+
+>[!IMPORTANT]
+>This step is not required for Hyper-V VMs migrated to Azure.
+
+#### Uninstall the Mobility service on a Windows Server VM
+Use one of the following methods to uninstall the Mobility service on a Windows Server computer.
+
+##### Uninstall by using the Windows UI
+1. In the Control Panel, select **Programs**.
+2. Select **Microsoft Azure Site Recovery Mobility Service/Master Target server**, and then select **Uninstall**.
+
+##### Uninstall at a command prompt
+1. Open a Command Prompt window as an administrator.
+2. To uninstall the Mobility service, run the following command:
+
+   ```
+   MsiExec.exe /qn /x {275197FC-14FD-4560-A5EB-38217F80CBD1} /L+*V "C:\ProgramData\ASRSetupLogs\UnifiedAgentMSIUninstall.log"
+   ```
+
+#### Uninstall the Mobility service on a Linux computer
+1. On your Linux server, sign in as the **root** user.
+2. In a terminal, go to /usr/local/ASR.
+3. To uninstall the Mobility service, run the following command:
+
+   ```
+   uninstall.sh -Y
+   ```
+
+### Step 4: Restart the VM
+
+After you uninstall the Mobility service, restart the VM before you set up replication to another Azure region.
+
+
+## Next steps
+- Start protecting your workloads by [replicating Azure virtual machines](../azure-to-azure-quickstart.md).
+- Learn more about [networking guidance for replicating Azure virtual machines](../site-recovery-azure-to-azure-networking-guidance.md).
diff --git a/articles/site-recovery/migrate/site-recovery-migrate-azure-to-azure.md b/articles/site-recovery/migrate/site-recovery-migrate-azure-to-azure.md
new file mode 100644
index 0000000000000..9f3a75318ba17
--- /dev/null
+++ b/articles/site-recovery/migrate/site-recovery-migrate-azure-to-azure.md
@@ -0,0 +1,44 @@
+---
+title: Migrate Azure IaaS VMs between Azure regions | Microsoft Docs
+description: Use Azure Site Recovery to migrate Azure IaaS virtual machines from one Azure region to another.
+services: site-recovery
+documentationcenter: ''
+author: rayne-wiselman
+manager: jwhit
+editor: tysonn
+
+ms.assetid: 8a29e0d9-0010-4739-972f-02b8bdf360f6
+ms.service: site-recovery
+ms.workload: storage-backup-recovery
+ms.tgt_pltfrm: na
+ms.devlang: na
+ms.topic: article
+ms.date: 11/27/2017
+ms.author: raynew
+
+---
+# Migrate Azure IaaS virtual machines between Azure regions with Azure Site Recovery
+
+Welcome to Azure Site Recovery! Use this article if you want to migrate Azure VMs between Azure regions.
+>[!NOTE]
+>
+> For replicating Azure VMs to another region for disaster recovery and migration needs, refer to [this document](../site-recovery-azure-to-azure.md). Site Recovery replication for Azure virtual machines is currently in preview.
+
+## Prerequisites
+Here's what you need for this deployment:
+
+* **IaaS virtual machines**: The VMs you want to migrate. You migrate these VMs by treating them as physical machines.
+
+## Deployment steps
+This section describes the deployment steps in the new Azure portal.
+
+1. [Create a vault](../site-recovery-azure-to-azure.md#create-a-recovery-services-vault).
+2. [Enable replication](../site-recovery-azure-to-azure.md) for the VMs you want to migrate, and choose Azure as the source.
+   >[!NOTE]
+   >
+   > Currently, native replication of Azure VMs using managed disks is not supported. You can use the "Physical to Azure" option in [this document](../site-recovery-vmware-to-azure.md) to migrate VMs with managed disks.
+3. [Run a failover](../site-recovery-failover.md). After initial replication is complete, you can run a failover from one Azure region to another. Optionally, you can create a recovery plan and run a failover, to migrate multiple virtual machines between regions. [Learn more](../site-recovery-create-recovery-plans.md) about recovery plans.
+
+## Next steps
+Learn more about other replication scenarios in [What is Azure Site Recovery?](../site-recovery-overview.md)
diff --git a/articles/site-recovery/migrate/toc.yml b/articles/site-recovery/migrate/toc.yml
new file mode 100644
index 0000000000000..3d34a80378b02
--- /dev/null
+++ b/articles/site-recovery/migrate/toc.yml
@@ -0,0 +1,166 @@
+- name: Site Recovery Documentation
+  href: index.yml
+- name: Overview
+  items:
+  - name: What is Site Recovery?
+    href: ../site-recovery-overview.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json
+- name: Tutorials
+  items:
+  - name: Set up Azure
+    href: ../tutorial-prepare-azure.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json
+  - name: Prepare VMware VMs for migration
+    href: ../tutorial-prepare-on-premises-vmware.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json
+  - name: Migrate on-premises machines to Azure
+    href: tutorial-migrate-on-premises-to-azure.md
+  - name: Migrate AWS instances to Azure
+    href: tutorial-migrate-aws-to-azure.md
+
+- name: Concepts
+  items:
+  - name: Migration to Azure
+    href: ../site-recovery-migrate-to-azure.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json
+  - name: On-premises to Azure support matrix
+    href: ../site-recovery-support-matrix-to-azure.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json
+  - name: Azure to Azure support matrix
+    href: ../site-recovery-support-matrix-azure-to-azure.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json
+  - name: Azure to Azure architecture
+    href: ../concepts-azure-to-azure-architecture.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json
+  - name: VMware to Azure architecture
+    href: ../concepts-vmware-to-azure-architecture.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json
+  - name: Hyper-V to Azure architecture
+    href: ../concepts-hyper-v-to-azure-architecture.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json
+  - name: Physical to Azure architecture
+    href: ../concepts-hyper-v-to-azure-architecture.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json
+  - name: On-premises to Azure support matrix
+    href: ../site-recovery-support-matrix-azure-to-azure.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json
+  - name: Role-based access permissions
+    href: ../site-recovery-role-based-linked-access-control.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json
+  - name: FAQ
+    href: ../site-recovery-faq.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json
+- name: How-to Guides
+  items:
+  - name: Migration
+    items:
+    - name: Migrate Azure VMs to another region
+      href: site-recovery-migrate-azure-to-azure.md
+    - name: Replicate machines after migration
+      href: site-recovery-azure-to-azure-after-migration.md
+  - name: Disaster recovery
+    items:
+    - name: Hyper-V VMs to Azure
+      href: ../tutorial-hyper-v-to-azure.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json
+    - name: 
Physical servers to Azure + href: ../tutorial-physical-to-azure.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Failover and failback + items: + - name: Run a test failover to Azure + href: ../tutorial-dr-drill-azure.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Set up recovery plans + href: ../site-recovery-create-recovery-plans.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Add Azure runbooks to recovery plans + href: ../site-recovery-runbook-automation.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Run a failover + href: ../site-recovery-failover.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Configure + items: + - name: Set up the source environment + items: + - name: Set up the source environment for VMware + href: ../site-recovery-set-up-vmware-to-azure.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Set up the source environment for physical servers + href: ../site-recovery-set-up-physical-to-azure.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Set up the target environment + items: + - name: Set up the target environment for VMware + href: ../site-recovery-prepare-target-vmware-to-azure.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Set up the target environment for physical servers + href: ../site-recovery-prepare-target-physical-to-azure.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Configure replication settings for VMware + href: ../site-recovery-setup-replication-settings-vmware.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Deploy the Mobility service for VMware replication + href: ../site-recovery-vmware-to-azure-install-mob-svc.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + items: + - name: Deploy the Mobility service with System Center Configuration Manager + href: ../site-recovery-install-mobility-service-using-sccm.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Deploy the Mobility service with Azure Automation DSC + href: ../site-recovery-automate-mobility-service-install.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Enable replication + items: + - name: Enable Azure to Azure replication + href: ../site-recovery-replicate-azure-to-azure.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Enable VMware to Azure replication + href: ../site-recovery-replicate-vmware-to-azure.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Workloads + items: + - name: Active Directory and DNS + href: ../site-recovery-active-directory.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Replicate SQL Server + href: ../site-recovery-sql.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: SharePoint + href: ../site-recovery-sharepoint.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Dynamics AX + href: ../site-recovery-dynamicsax.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: RDS + href: ../site-recovery-workload.md#protect-rds?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Exchange + href: ../site-recovery-workload.md#protect-exchange?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: SAP + href: ../site-recovery-sap.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: File Server + href: ../file-server-disaster-recovery.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: IIS based web applications + href: 
../site-recovery-iis.md?toc=%2fazure%2fsite-recovery%2fazure-to-azure%2ftoc.json + - name: Citrix XenApp and XenDesktop + href: ../site-recovery-citrix-xenapp-and-xendesktop.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Other workloads + href: ../site-recovery-workload.md#workload-summary?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Manage + items: + - name: Upgrade to a Recovery Services vault + href: ../upgrade-site-recovery-vaults.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Manage process servers in Azure + href: ../site-recovery-vmware-setup-azure-ps-resource-manager.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Manage the configuration server + href: ../site-recovery-vmware-to-azure-manage-configuration-server.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Manage vCenter servers + href: ../site-recovery-vmware-to-azure-manage-vCenter.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Remove servers and disable protection + href: ../site-recovery-manage-registration-and-protection.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Delete a vault + href: ../delete-vault.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Troubleshoot + items: + - name: Azure to Azure replication issues + href: ../site-recovery-azure-to-azure-troubleshoot-errors.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: On-premises to Azure replication issues + href: ../site-recovery-vmware-to-azure-protection-troubleshoot.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Mobility service issues + href: ../site-recovery-vmware-to-azure-push-install-error-codes.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Failover to Azure issues + href: ../site-recovery-failover-to-azure-troubleshoot.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json + - name: Collect logs and troubleshoot on-premises issues + href: ../site-recovery-monitoring-and-troubleshooting.md?toc=%2fazure%2fsite-recovery%2fmigrate%2ftoc.json +- name: Reference + items: + - name: Azure PowerShell + href: /powershell/module/azurerm.siterecovery + - name: Azure PowerShell classic + href: /powershell/module/azure/?view=azuresmps-3.7.0 + - name: REST + href: https://msdn.microsoft.com/en-us/library/mt750497 +- name: Resources + items: + - name: Azure Roadmap + href: https://azure.microsoft.com/roadmap/ + - name: Blog + href: http://azure.microsoft.com/blog/tag/azure-site-recovery/ + - name: Forum + href: https://social.msdn.microsoft.com/Forums/azure/en-US/home?forum=hypervrecovmgr + - name: Learning path + href: https://azure.microsoft.com/documentation/learning-paths/site-recovery/ + - name: Pricing + href: https://azure.microsoft.com/pricing/details/site-recovery/ + - name: Pricing calculator + href: https://azure.microsoft.com/pricing/calculator/ + - name: Service updates + href: https://azure.microsoft.com/updates/?product=site-recovery diff --git a/articles/site-recovery/migrate/tutorial-migrate-aws-to-azure.md b/articles/site-recovery/migrate/tutorial-migrate-aws-to-azure.md new file mode 100644 index 0000000000000..83d4a9cc3181f --- /dev/null +++ b/articles/site-recovery/migrate/tutorial-migrate-aws-to-azure.md @@ -0,0 +1,279 @@ +--- +title: Migrate VMs from AWS to Azure with Azure Site Recovery | Microsoft Docs +description: This article describes how to migrate VMs running in Amazon Web Services (AWS) to Azure, using Azure Site Recovery. 
+services: site-recovery
+documentationcenter: ''
+author: rayne-wiselman
+manager: carmonm
+editor: ''
+
+ms.assetid: ddb412fd-32a8-4afa-9e39-738b11b91118
+ms.service: site-recovery
+ms.devlang: na
+ms.topic: article
+ms.tgt_pltfrm: na
+ms.workload: storage-backup-recovery
+ms.date: 11/27/2017
+ms.author: raynew
+ms.custom: MVC
+
+---
+# Migrate Amazon Web Services (AWS) VMs to Azure
+
+This tutorial teaches you how to migrate Amazon Web Services (AWS) virtual machines (VMs) to Azure VMs by using Site Recovery. When you migrate EC2 instances to Azure, the VMs are treated as if they are physical, on-premises computers. In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Prepare Azure resources
+> * Prepare the AWS EC2 instances for migration
+> * Deploy a configuration server
+> * Enable replication for VMs
+> * Test the failover to make sure everything's working
+> * Run a one-time failover to Azure
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
+
+
+## Prepare Azure resources
+
+You need to have a few resources ready in Azure for the migrated EC2 instances to use. These include a storage account, a vault, and a virtual network. The following sections create them in the portal; an equivalent Azure PowerShell sketch follows these steps.
+
+### Create a storage account
+
+Images of replicated machines are held in Azure storage. Azure VMs are created from the storage when you fail over from on-premises to Azure.
+
+1. In the [Azure portal](https://portal.azure.com) menu, click **New** -> **Storage** -> **Storage account**.
+2. Enter a name for your storage account. For these tutorials, we use the name **awsmigrated2017**. The name must be unique within Azure, and must be between 3 and 24 characters, using only numbers and lowercase letters.
+3. Keep the defaults for **Deployment model**, **Account kind**, **Performance**, and **Secure transfer required**.
+4. Select the default **RA-GRS** for **Replication**.
+5. Select the subscription you want to use for this tutorial.
+6. For **Resource group**, select **Create new**. In this example, we use **migrationRG** as the name.
+7. Select **West Europe** as the location.
+8. Click **Create** to create the storage account.
+
+### Create a vault
+
+1. In the [Azure portal](https://portal.azure.com), in the left navigation, click **More services**, and then search for and select **Recovery Services vaults**.
+2. On the **Recovery Services vaults** page, click **+ Add** in the upper left of the page.
+3. For **Name**, type *myVault*.
+4. For **Subscription**, select the appropriate subscription.
+5. For **Resource Group**, select **Use existing** and select *migrationRG*.
+6. In **Location**, select *West Europe*.
+7. To quickly access the new vault from the dashboard, select **Pin to dashboard**.
+8. When you are done, click **Create**.
+
+The new vault appears on the **Dashboard** > **All resources**, and on the main **Recovery Services vaults** page.
+
+### Set up an Azure network
+
+When the Azure VMs are created after the migration (failover), they're joined to this network.
+
+1. In the [Azure portal](https://portal.azure.com), click **New** > **Networking** > **Virtual network**.
+2. For **Name**, type *myMigrationNetwork*.
+3. Leave the default value for **Address space**.
+4. For **Subscription**, select the appropriate subscription.
+5. For **Resource group**, select **Use existing** and choose *migrationRG* from the drop-down.
+6. For **Location**, select **West Europe**.
+7. Leave the defaults for **Subnet**, both the **Name** and **IP range**.
+8. Leave **Service Endpoints** disabled.
+9. When you are done, click **Create**.
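If you prefer to script these resources instead of clicking through the portal, the following Azure PowerShell sketch creates the same resource group, storage account, vault, and virtual network. It assumes the AzureRM modules are installed and that you have already signed in (for example, with Login-AzureRmAccount); the address ranges are example values, not values taken from this tutorial.

```powershell
# Sketch: create the Azure resources used in this tutorial (AzureRM module assumed, already signed in).
New-AzureRmResourceGroup -Name "migrationRG" -Location "West Europe"

# Storage account that holds the replicated data (RA-GRS, as selected in the portal steps).
New-AzureRmStorageAccount -ResourceGroupName "migrationRG" -Name "awsmigrated2017" `
    -Location "West Europe" -SkuName "Standard_RAGRS"

# Recovery Services vault.
New-AzureRmRecoveryServicesVault -Name "myVault" -ResourceGroupName "migrationRG" -Location "West Europe"

# Virtual network that failed-over VMs are joined to; the address ranges are example values.
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.0.0.0/24"
New-AzureRmVirtualNetwork -Name "myMigrationNetwork" -ResourceGroupName "migrationRG" `
    -Location "West Europe" -AddressPrefix "10.0.0.0/16" -Subnet $subnet
```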
+## Prepare the EC2 instances
+
+You need one or more VMs that you want to migrate. These EC2 instances should be running the 64-bit version of Windows Server 2008 R2 SP1 or later, Windows Server 2012, Windows Server 2012 R2, or Red Hat Enterprise Linux 6.7 (HVM virtualized instances only). The server must have only Citrix PV or AWS PV drivers. Instances running RedHat PV drivers aren't supported.
+
+The Mobility service must be installed on each VM you want to replicate. Site Recovery installs this service automatically when you enable replication for the VM. For automatic installation, you need to prepare an account on the EC2 instances that Site Recovery will use to access the VM.
+
+You can use a domain or local account. For Linux VMs, the account should be root on the source Linux server. For Windows VMs, if you're not using a domain account, disable Remote User Access control on the local machine:
+
+ - In the registry, under **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System**, add the DWORD entry **LocalAccountTokenFilterPolicy** and set the value to 1.
+
+You also need a separate EC2 instance that you can use as the Site Recovery configuration server. This instance must be running Windows Server 2012 R2.
+
+
+## Prepare the infrastructure
+
+On the portal page for your vault, select **Site Recovery** from the **Getting Started** section, and then click **Prepare Infrastructure**.
+
+### 1 Protection goal
+
+Select the following values on the **Protection Goal** page:
+
+| | |
+|---------|-----------|
+| Where are your machines located? | **On-premises**|
+| Where do you want to replicate your machines? |**To Azure**|
+| Are your machines virtualized? | **Not virtualized / Other**|
+
+When you are done, click **OK** to move to the next section.
+
+### 2 Source Prepare
+
+On the **Prepare source** page, click **+ Configuration Server**.
+
+1. Use an EC2 instance running Windows Server 2012 R2 to create a configuration server and register it with your recovery vault.
+
+2. Configure the proxy on the EC2 instance VM you are using as the configuration server so that it can access the [Service URLs](../site-recovery-support-matrix-to-azure.md).
+
+3. Download the [Microsoft Azure Site Recovery Unified Setup](http://aka.ms/unifiedinstaller_wus) program. You can download it to your local machine and then copy it over to the VM you are using as the configuration server.
+
+4. Click the **Download** button to download the vault registration key. Copy the downloaded file over to the VM you are using as the configuration server.
+
+5. On the VM, right-click the installer you downloaded for the **Microsoft Azure Site Recovery Unified Setup**, and select **Run as administrator**.
+
+    1. In **Before You Begin**, select **Install the configuration server and process server**, and then click **Next**.
+    2. In **Third-Party Software License**, select **I accept the third-party license agreement**, and then click **Next**.
+    3. In **Registration**, click browse and navigate to where you put the vault registration key file, and then click **Next**.
+    4. In **Internet Settings**, select **Connect to Azure Site Recovery without a proxy server**, and then click **Next**.
+    5. On the **Prerequisites Check** page, setup runs checks for several items. When it is complete, click **Next**.
+    6. In **MySQL Configuration**, provide the required passwords, and then click **Next**.
+
+You also need a separate EC2 instance that you can use as the Site Recovery configuration server. This instance must be running Windows Server 2012 R2.
+
+
+## Prepare the infrastructure
+
+On the portal page for your vault, select **Site Recovery** from the **Getting Started** section, and then click **Prepare Infrastructure**.
+
+### 1 Protection goal
+
+Select the following values on the **Protection Goal** page:
+
+| | |
+|---------|-----------|
+| Where are your machines located? | **On-premises**|
+| Where do you want to replicate your machines? |**To Azure**|
+| Are your machines virtualized? | **Not virtualized / Other**|
+
+When you are done, click **OK** to move to the next section.
+
+### 2 Prepare the source
+
+On the **Prepare source** page, click **+ Configuration Server**.
+
+1. Use an EC2 instance running Windows Server 2012 R2 to create a configuration server and register it with your recovery vault.
+
+2. Configure the proxy on the EC2 instance you are using as the configuration server so that it can access the [Service URLs](../site-recovery-support-matrix-to-azure.md).
+
+3. Download the [Microsoft Azure Site Recovery Unified Setup](http://aka.ms/unifiedinstaller_wus) program. You can download it to your local machine and then copy it over to the VM you are using as the configuration server.
+
+4. Click the **Download** button to download the vault registration key. Copy the downloaded file over to the VM you are using as the configuration server.
+
+5. On the VM, right-click the installer you downloaded for the **Microsoft Azure Site Recovery Unified Setup**, and select **Run as administrator**.
+
+    1. In **Before You Begin**, select **Install the configuration server and process server**, and then click **Next**.
+    2. In **Third-Party Software License**, select **I accept the third-party license agreement**, and then click **Next**.
+    3. In **Registration**, click **Browse** and navigate to where you put the vault registration key file, and then click **Next**.
+    4. In **Internet Settings**, select **Connect to Azure Site Recovery without a proxy server**, and then click **Next**.
+    5. On the **Prerequisites Check** page, setup runs checks for several items. When the checks are complete, click **Next**.
+    6. In **MySQL Configuration**, provide the required passwords, and then click **Next**.
+    7. In **Environment Details**, select **No** to indicate that you don't need to protect VMware machines, and then click **Next**.
+    8. In **Install Location**, click **Next** to accept the default.
+    9. In **Network Selection**, click **Next** to accept the default.
+    10. In **Summary**, click **Install**.
+    11. **Installation Progress** shows you where you are in the installation process. When it is complete, click **Finish**. A pop-up warns that a reboot might be needed; click **OK**. Another pop-up shows the configuration server connection passphrase; copy the passphrase to your clipboard and save it somewhere safe.
+
+6. On the VM, run **cspsconfigtool.exe** to create one or more management accounts on the configuration server. Make sure the management accounts have administrator permissions on the EC2 instances that you want to migrate.
+
+When you are done setting up the configuration server, go back to the portal, select the server you just created for **Configuration Server**, and click **OK** to move on to step 3, preparing the target.
+
+### 3 Prepare the target
+
+In this section, you enter information about the resources you created when you went through the [Prepare Azure resources](#prepare-azure-resources) section, earlier in this tutorial.
+
+1. In **Subscription**, select the Azure subscription that you used for the [Prepare Azure](../tutorial-prepare-azure.md) tutorial.
+2. Select **Resource Manager** as the deployment model.
+3. Site Recovery checks that you have one or more compatible Azure storage accounts and networks. These should be the resources you created when you went through the [Prepare Azure resources](#prepare-azure-resources) section, earlier in this tutorial.
+4. When you are done, click **OK**.
+
+
+### 4 Set up replication settings
+
+You need to create a replication policy before you can enable replication.
+
+1. Click **+ Replicate and Associate**.
+2. In **Name**, type **myReplicationPolicy**.
+3. Leave the rest of the default settings and click **OK** to create the policy. The new policy is automatically associated with the configuration server.
+
+### 5 Deployment planning
+
+In **Have you completed deployment planning?**, select **I will do it later** from the drop-down, and then click **OK**.
+
+When you are done with all five sections of **Prepare infrastructure**, click **OK**.
+
+
+## Enable replication
+
+Enable replication for each VM you want to migrate. When replication is enabled, Site Recovery installs the Mobility service automatically.
+
+1. Open the [Azure portal](https://portal.azure.com).
+2. On the page for your vault, under **Getting Started**, click **Site Recovery**.
+3. Under **For on-premises machines and Azure VMs**, click **Step 1: Replicate application**. Complete the wizard pages with the following information, and click **OK** on each page when finished:
+    - 1 Configure the source:
+
+      | | |
+      |-----|-----|
+      | Source: | **On-premises**|
+      | Source location:| The name of your configuration server EC2 instance.|
+      | Machine type: | **Physical machines**|
+      | Process server: | Select the configuration server from the drop-down list.|
+
+    - 2 Configure the target:
+
+      | | |
+      |-----|-----|
+      | Target: | Leave the default.|
+      | Subscription: | Select the subscription you have been using.|
+      | Post-failover resource group:| Use the resource group you created in the [Prepare Azure resources](#prepare-azure-resources) section.|
+      | Post-failover deployment model: | Choose **Resource Manager**.|
+      | Storage account: | Choose the storage account you created in the [Prepare Azure resources](#prepare-azure-resources) section.|
+      | Azure network: | Choose **Configure now for selected machines**.|
+      | Post-failover Azure network: | Choose the network you created in the [Prepare Azure resources](#prepare-azure-resources) section.|
+      | Subnet: | Select the **default** subnet from the drop-down.|
+
+    - 3 Select the physical machines:
+
+      Click **+ Physical machine**, enter the **Name**, **IP Address**, and **OS Type** of the EC2 instance that you want to migrate, and then click **OK**.
+
+    - 4 Configure the properties:
+
+      Select the account that you created on the configuration server from the drop-down, and click **OK**.
+
+    - 5 Configure the replication settings:
+
+      Make sure the replication policy selected in the drop-down is **myReplicationPolicy**, and then click **OK**.
+
+4. When the wizard is complete, click **Enable replication**.
+
+
+You can track progress of the **Enable Protection** job in **Monitoring and reports** > **Jobs** > **Site Recovery jobs**. After the **Finalize Protection** job runs, the machine is ready for failover.
+
+When you enable replication for a VM, it can take 15 minutes or longer for changes to take effect and appear in the portal.
+
+## Run a test failover
+
+When you run a test failover, the following happens:
+
+1. A prerequisites check runs to make sure all of the conditions required for failover are in place.
+2. Failover processes the data, so that an Azure VM can be created. If you select the latest recovery point, a recovery point is created from the data.
+3. An Azure VM is created using the data processed in the previous step.
+
+In the portal, run the test failover as follows:
+
+1. On the page for your vault, go to **Protected items** > **Replicated Items** > click the VM > **+ Test Failover**.
+
+2. Select a recovery point to use for the failover:
+    - **Latest processed**: Fails the VM over to the latest recovery point that was processed by Site Recovery. The time stamp is shown. With this option, no time is spent processing data, so it provides a low RTO (recovery time objective).
+    - **Latest app-consistent**: This option fails over all VMs to the latest app-consistent recovery point. The time stamp is shown.
+    - **Custom**: Select any recovery point.
+3. In **Test Failover**, select the target Azure network to which Azure VMs will be connected after failover occurs. This should be the network you created in the [Prepare Azure resources](#prepare-azure-resources) section.
+4. Click **OK** to begin the failover. You can track progress by clicking the VM to open its properties. Or you can click the **Test Failover** job on the page for your vault in **Monitoring and reports** > **Jobs** > **Site Recovery jobs**.
+5. After the failover finishes, the replica Azure VM appears in the Azure portal > **Virtual Machines**. Check that the VM is the appropriate size, that it's connected to the right network, and that it's running.
+6. You should now be able to connect to the replicated VM in Azure.
+7. To delete Azure VMs created during the test failover, click **Cleanup test failover** on the recovery plan. In **Notes**, record and save any observations associated with the test failover.
+
+In some scenarios, failover requires additional processing that takes around eight to ten minutes to complete.
+
+
+## Migrate to Azure
+
+Run an actual failover for the EC2 instances to migrate them to Azure VMs.
+
+1. In **Protected items** > **Replicated items**, click the AWS instances > **Failover**.
+2. In **Failover**, select a **Recovery Point** to fail over to. Select the latest recovery point.
+3. Select **Shut down machine before beginning failover** if you want Site Recovery to attempt to shut down the source virtual machines before triggering the failover. Failover continues even if shutdown fails. You can follow the failover progress on the **Jobs** page.
+4. Check that the VM appears in **Replicated items**.
+5. Right-click each VM > **Complete Migration**. This finishes the migration process, stops replication for the AWS VM, and stops Site Recovery billing for the VM.
+
+   ![Complete migration](./media/tutorial-migrate-aws-to-azure/complete-migration.png)
+
+> [!WARNING]
+> **Don't cancel a failover in progress**: Before failover is started, VM replication is stopped. If you cancel a failover in progress, failover stops, but the VM won't replicate again.
+
+
+## Next steps
+
+In this tutorial, you learned how to migrate AWS EC2 instances to Azure VMs. To learn more about Azure VMs, continue to the tutorials for Windows VMs.
+
+> [!div class="nextstepaction"]
+> [Azure Windows virtual machine tutorials](../../virtual-machines/windows/tutorial-manage-vm.md)
diff --git a/articles/site-recovery/migrate/tutorial-migrate-on-premises-to-azure.md b/articles/site-recovery/migrate/tutorial-migrate-on-premises-to-azure.md
new file mode 100644
index 0000000000000..8b4bf1ab1d39d
--- /dev/null
+++ b/articles/site-recovery/migrate/tutorial-migrate-on-premises-to-azure.md
@@ -0,0 +1,115 @@
+---
+title: Migrate on-premises machines to Azure with Azure Site Recovery | Microsoft Docs
+description: This article describes how to migrate on-premises machines to Azure, using Azure Site Recovery.
+services: site-recovery
+documentationcenter: ''
+author: rayne-wiselman
+manager: jwhit
+editor: ''
+
+ms.assetid: ddb412fd-32a8-4afa-9e39-738b11b91118
+ms.service: site-recovery
+ms.devlang: na
+ms.topic: article
+ms.tgt_pltfrm: na
+ms.workload: storage-backup-recovery
+ms.date: 11/27/2017
+ms.author: raynew
+ms.custom: MVC
+---
+# Migrate on-premises machines to Azure
+
+The [Azure Site Recovery](../site-recovery-overview.md) service manages and orchestrates replication, failover, and failback of on-premises machines and Azure virtual machines (VMs).
+
+This tutorial shows you how to migrate on-premises VMs and physical servers to Azure with Site Recovery. In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Set up prerequisites for the deployment
+> * Create a Recovery Services vault for Site Recovery
+> * Deploy on-premises management servers
+> * Set up a replication policy and enable replication
+> * Run a disaster recovery drill to make sure everything's working
+> * Run a one-time failover to Azure
+
+## Overview
+
+You migrate a machine by enabling replication for it and failing it over to Azure.
+
+
+## Prerequisites
+
+Here's what you need to do for this tutorial.
+
+- [Prepare](../tutorial-prepare-azure.md) Azure resources, including an Azure subscription, an Azure virtual network, and a storage account.
+- [Prepare](../tutorial-prepare-on-premises-vmware.md) on-premises VMware servers and VMs.
+- Note that devices exported by paravirtualized drivers aren't supported.
+
+
+## Create a Recovery Services vault
+
+[!INCLUDE [site-recovery-create-vault](../../../includes/site-recovery-create-vault.md)]
+
+## Select a protection goal
+
+Select what you want to replicate, and where you want to replicate to.
+1. Click **Recovery Services vaults** > vault.
+2. In the Resource Menu, click **Site Recovery** > **Prepare Infrastructure** > **Protection goal**.
+3. In **Protection goal**, select:
+   - **VMware**: Select **To Azure** > **Yes, with VMware vSphere Hypervisor**.
+   - **Physical machine**: Select **To Azure** > **Not virtualized/Other**.
+   - **Hyper-V**: Select **To Azure** > **Yes, with Hyper-V**.
+
+
+## Set up the source environment
+
+- [Set up](../tutorial-vmware-to-azure.md#set-up-the-source-environment) the source environment for VMware VMs.
+- [Set up](../tutorial-physical-to-azure.md#set-up-the-source-environment) the source environment for physical servers.
+- [Set up](../tutorial-hyper-v-to-azure.md#set-up-the-source-environment) the source environment for Hyper-V VMs.
+
+## Set up the target environment
+
+Select and verify target resources.
+
+1. Click **Prepare infrastructure** > **Target**, and select the Azure subscription you want to use.
+2. Specify the target deployment model.
+3. Site Recovery checks that you have one or more compatible Azure storage accounts and networks.
+
+## Create a replication policy
+
+- [Set up a replication policy](../tutorial-vmware-to-azure.md#create-a-replication-policy) for VMware VMs.
+
+
+## Enable replication
+
+- [Enable replication](../tutorial-vmware-to-azure.md#enable-replication) for VMware VMs.
+
+
+## Run a test migration
+
+Run a [test failover](../tutorial-dr-drill-azure.md) to Azure, to make sure everything's working as expected.
+
+
+## Migrate to Azure
+
+Run a failover for the machines you want to migrate.
+
+1. In **Settings** > **Replicated items**, click the machine > **Failover**.
+2. In **Failover**, select a **Recovery Point** to fail over to. Select the latest recovery point.
+3. The encryption key setting isn't relevant for this scenario.
+4. Select **Shut down machine before beginning failover** if you want Site Recovery to attempt to shut down the source virtual machines before triggering the failover. Failover continues even if shutdown fails. You can follow the failover progress on the **Jobs** page.
+5. Check that the Azure VM appears in Azure as expected.
+6. In **Replicated items**, right-click the VM > **Complete Migration**. This finishes the migration process, stops replication for the VM, and stops Site Recovery billing for the VM.
+
+   ![Complete migration](./media/tutorial-migrate-on-premises-to-azure/complete-migration.png)
+
+
+> [!WARNING]
+> **Don't cancel a failover in progress**: VM replication is stopped before failover starts. If you cancel a failover in progress, failover stops, but the VM won't replicate again.
+
+In some scenarios, failover requires additional processing that takes around eight to ten minutes to complete. You might notice longer test failover times for physical servers, VMware Linux machines, VMware VMs that don't have the DHCP service enabled, and VMware VMs that don't have the following boot drivers: storvsc, vmbus, storflt, intelide, atapi.
+
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Replicating Azure VMs to another region after migration to Azure](site-recovery-azure-to-azure-after-migration.md)
diff --git a/articles/sql-data-warehouse/sql-data-warehouse-migrate-migration-utility.md b/articles/sql-data-warehouse/sql-data-warehouse-migrate-migration-utility.md
index 03c9b65140736..3e7fbcd4dfd5e 100644
--- a/articles/sql-data-warehouse/sql-data-warehouse-migrate-migration-utility.md
+++ b/articles/sql-data-warehouse/sql-data-warehouse-migrate-migration-utility.md
@@ -63,4 +63,4 @@
 Now that you've migrated some data, check out how to [develop][develop].
 
 [develop]: sql-data-warehouse-overview-develop.md
-[Download Migration Utility]: https://migrhoststorage.blob.core.windows.net/sqldwsample/DataWarehouseMigrationUtility.zip
+[Download Migration Utility]: https://www.microsoft.com/en-us/download/details.aspx?id=49100
diff --git a/articles/sql-data-warehouse/sql-data-warehouse-tables-data-types.md b/articles/sql-data-warehouse/sql-data-warehouse-tables-data-types.md
index 3dd4391f49df8..e127afde37dd9 100644
--- a/articles/sql-data-warehouse/sql-data-warehouse-tables-data-types.md
+++ b/articles/sql-data-warehouse/sql-data-warehouse-tables-data-types.md
@@ -21,7 +21,7 @@ ms.author: shigu;barbkess
 # Guidance for defining data types for tables in SQL Data Warehouse
 Use these recommendations to define table data types that are compatible with SQL Data Warehouse. In addition to compatibility, minimizing the size of data types improves query performance.
 
-SQL Data Warehouse supports the most commonly used data types. For a list of the supported data types, see [data types](/sql/docs/t-sql/statements/create-table-azure-sql-data-warehouse.md#datatypes) in the CREATE TABLE statement.
+SQL Data Warehouse supports the most commonly used data types. For a list of the supported data types, see [data types](https://docs.microsoft.com/sql/t-sql/statements/create-table-azure-sql-data-warehouse#DataTypes) in the CREATE TABLE statement.
 
 ## Minimize row length
diff --git a/articles/sql-database/saas-dbpertenant-wingtip-app-guidance-tips.md b/articles/sql-database/saas-dbpertenant-wingtip-app-guidance-tips.md
deleted file mode 100644
index 8a1e1830ac958..0000000000000
--- a/articles/sql-database/saas-dbpertenant-wingtip-app-guidance-tips.md
+++ /dev/null
@@ -1,96 +0,0 @@
----
--title: "Guidance for SQL Database multi-tenant app example - Wingtip SaaS | Microsoft Docs"
--description: "Provides steps and guidance for installing and running the sample multi-tenant application that uses Azure SQL Database, the Wingtip SaaS example."
-keywords: "sql database tutorial" -services: "sql-database" -author: "ajlam" -manager: "craigg" - -ms.service: "sql-database" -ms.custom: "scale out apps" -ms.workload: "On Demand" -ms.tgt_pltfrm: "na" -ms.devlang: "na" -ms.topic: "article" -ms.date: "11/12/2017" -ms.author: "andrela" ---- -# Guidance and tips for Azure SQL Database multi-tenant SaaS app example - - -## Download and unblock the Wingtip Tickets SaaS Database per Tenant scripts - -Executable contents (scripts, dlls) may be blocked by Windows when zip files are downloaded from an external source and extracted. When extracting the scripts from a zip file, ***follow the steps below to unblock the .zip file before extracting***. This ensures the scripts are allowed to run. - -1. Browse to [the Wingtip Tickets SaaS Database per Tenant GitHub repo](https://github.com/Microsoft/WingtipTicketsSaaS-DbPerTenant). -2. Click **Clone or download**. -3. Click **Download ZIP** and save the file. -4. Right-click the **WingtipTicketsSaaS-DbPerTenant-master.zip** file, and select **Properties**. -5. On the **General** tab, select **Unblock**. -6. Click **OK**. -7. Extract the files. - -Scripts are located in the *..\\Learning Modules* folder. - - -## Working with the Wingtip Tickets SaaS Database per Tenant PowerShell Scripts - -To get the most out of the sample you need to dive into the provided scripts. Use breakpoints and step through the scripts, examining the details of how the different SaaS patterns are implemented. To easily step through the provided scripts and modules for the best understanding, we recommend using the [PowerShell ISE](https://msdn.microsoft.com/powershell/scripting/core-powershell/ise/introducing-the-windows-powershell-ise). - -### Update the configuration file for your deployment - -Edit the **UserConfig.psm1** file with the resource group and user value that you set during deployment: - -1. Open the *PowerShell ISE* and load ...\\Learning Modules\\*UserConfig.psm1* -2. Update *ResourceGroupName* and *Name* with the specific values for your deployment (on lines 10 and 11 only). -3. Save the changes! - -Setting these values here simply keeps you from having to update these deployment-specific values in every script. - -### Execute Scripts by pressing F5 - -Several scripts use *$PSScriptRoot* to navigate folders, and *$PSScriptRoot* is only evaluated when scripts are executed by pressing **F5**.  Highlighting and running a selection (**F8**) can result in errors, so press **F5** when running scripts. - -### Step through the scripts to examine the implementation - -The best way to understand the scripts is by stepping through them to see what they do. Check out the included **Demo-** scripts that present an easy to follow high-level workflow. The **Demo-** scripts show the steps required to accomplish each task, so set breakpoints and drill deeper into the individual calls to see implementation details for the different SaaS patterns. - -Tips for exploring and stepping through PowerShell scripts: - -- Open **Demo-** scripts in the PowerShell ISE. -- Execute or continue with **F5** (using **F8** is not advised because *$PSScriptRoot* is not evaluated when running selections of a script). -- Place breakpoints by clicking or selecting a line and pressing **F9**. -- Step over a function or script call using **F10**. -- Step into a function or script call using **F11**. -- Step out of the current function or script call using **Shift + F11**. 
- - -## Explore database schema and execute SQL queries using SSMS - -Use [SQL Server Management Studio (SSMS)](https://docs.microsoft.com/sql/ssms/download-sql-server-management-studio-ssms) to connect and browse the application servers and databases. - -The deployment initially has two SQL Database servers to connect to - the *tenants1-dpt-<User>* server, and the *catalog-dpt-<User>* server. To ensure a successful demo connection, both servers have a [firewall rule](sql-database-firewall-configure.md) allowing all IPs through. - - -1. Open *SSMS* and connect to the *tenants1-dpt-<User>.database.windows.net* server. -2. Click **Connect** > **Database Engine...**: - - ![catalog server](media/saas-dbpertenant-wingtip-app-guidance-tips/connect.png) - -3. Demo credentials are: Login = *developer*, Password = *P@ssword1* - - ![connection](media/saas-dbpertenant-wingtip-app-guidance-tips/tenants1-connect.png) - -4. Repeat steps 2-3 and connect to the *catalog-dpt-<User>.database.windows.net* server. - - -After successfully connecting you should see both servers. Your list of databases might be different, depending on the tenants you have provisioned. - -![object explorer](media/saas-dbpertenant-wingtip-app-guidance-tips/object-explorer.png) - - - -## Next steps - -[Deploy the Wingtip Tickets SaaS Database per Tenant application](saas-dbpertenant-get-started-deploy.md) - diff --git a/articles/sql-database/saas-standaloneapp-get-started-deploy.md b/articles/sql-database/saas-standaloneapp-get-started-deploy.md index f66451d73a176..a04a80458fce0 100644 --- a/articles/sql-database/saas-standaloneapp-get-started-deploy.md +++ b/articles/sql-database/saas-standaloneapp-get-started-deploy.md @@ -14,20 +14,18 @@ ms.workload: "Inactive" ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 11/14/2017 -ms.author: sstein +ms.date: 11/29/2017 +ms.author: genemi --- - - # Deploy and explore a standalone single-tenant application that uses Azure SQL Database In this tutorial, you deploy and explore the Wingtip Tickets SaaS Standalone Application. The application is designed to showcase features of Azure SQL Database that simplify enabling SaaS scenarios. The Standalone Application pattern deploys an Azure resource group containing a single-tenant application and a single-tenant database for each tenant. Multiple instances of the application can be provisioned to provide a multi-tenant solution. -While in this tutorial you will deploy resource groups for several tenants into your Azure subscription, this pattern allows the resource groups to be deployed into a tenant’s Azure subscription. Azure has partner programs that allow these resource groups to be managed by a service provider on the tenant’s behalf as an admin in the tenant’s subscription. +In this tutorial, you deploy resource groups for several tenants into your Azure subscription. This pattern allows the resource groups to be deployed into a tenant’s Azure subscription. Azure has partner programs that allow these resource groups to be managed by a service provider on the tenant’s behalf. The service provider is an admin in the tenant’s subscription. -In the deployment section below there are three Deploy to Azure buttons, each of which deploys a different instance of the application customized for a specific tenant. When each button is pressed the corresponding application is fully deployed five minutes later. The apps are deployed in your Azure subscription. You have full access to explore and work with the individual application components. 
+In the later deployment section, there are three blue **Deploy to Azure** buttons. Each button deploys a different instance of the application. Each instance is customized for a specific tenant. When each button is pressed, the corresponding application is fully deployed within five minutes. The apps are deployed in your Azure subscription. You have full access to explore and work with the individual application components. The application source code and management scripts are available in the [WingtipTicketsSaaS-StandaloneApp](https://github.com/Microsoft/WingtipTicketsSaaS-StandaloneApp) GitHub repo. @@ -35,91 +33,93 @@ The application source code and management scripts are available in the [Wingtip In this tutorial you learn: > [!div class="checklist"] +> * How to deploy the Wingtip Tickets SaaS Standalone Application. +> * Where to get the application source code, and management scripts. +> * About the servers and databases that make up the app. -> * How to deploy the Wingtip Tickets SaaS Standalone Application -> * Where to get the application source code, and management scripts -> * About the servers and databases that make up the app - -Additional tutorials will be released in due course that will allow you to explore a range of management scenarios based on this application pattern. +Additional tutorials will be released. They will allow you to explore a range of management scenarios based on this application pattern. ## Deploy the Wingtip Tickets SaaS Standalone Application Deploy the app for the three provided tenants: -1. Click each **Deploy to Azure** button to open the deployment template in the Azure portal. Each template requires two parameter values; a name for a new resource group, and a user name that distinguishes this deployment from other deployments of the app. The next step provides details for setting these values.

-   **Contoso Concert Hall** +1. Click each blue **Deploy to Azure** button to open the deployment template in the [Azure portal](https://portal.azure.com). Each template requires two parameter values; a name for a new resource group, and a user name that distinguishes this deployment from other deployments of the app. The next step provides details for setting these values.

+   **Contoso Concert Hall**

-   **Dogwood Dojo** +   **Dogwood Dojo**

-   **Fabrikam Jazz Club** +   **Fabrikam Jazz Club** 2. Enter required parameter values for each deployment. > [!IMPORTANT] - > Some authentication, and server firewalls are intentionally unsecured for demonstration purposes. **Create a new resource group** for each application deployment. Do not use an existing resource group. Do not use this application, or any resources it creates, for production. Delete all the resource groups when you are finished with the applications to stop related billing. + > Some authentication and server firewalls are intentionally unsecured for demonstration purposes. **Create a new resource group** for each application deployment. Do not use an existing resource group. Do not use this application, or any resources it creates, for production. Delete all the resource groups when you are finished with the applications to stop related billing. It is best to use only lowercase letters, numbers, and hyphens in your resource names. - * For **Resource group** - Select Create new, and then provide a Name for the resource group (case sensitive). - * We recommend that all letters in your resource group name be lowercase. - * We recommend that you append a dash, followed by your initials, followed by a digit: for example, _wingtip-sa-af1_. - * Select a Location from the drop-down list. + * For **Resource group** - Select **Create new**, and then provide a lowercase **Name** for the resource group. + * We recommend that you append a dash, followed by your initials, followed by a digit: for example, *wingtip-sa-af1*. + * Select a **Location** from the drop-down list. - * For **User** - We recommend that you choose a short User value, such as your initials plus a digit: for example, _af1_. + * For **User** - We recommend that you choose a short user value, such as your initials plus a digit: for example, *af1*. -1. **Deploy the application**. +3. **Deploy the application**. * Click to agree to the terms and conditions. * Click **Purchase**. -1. Monitor deployment status of all three deployments by clicking **Notifications** (the bell icon right of the search box). Deploying the app takes approximately five minutes. +4. Monitor deployment status of all three deployments by clicking **Notifications** (the bell icon to the right of the search box). Deploying the app takes five minutes. ## Run the application -The app showcases venues, such as concert halls, jazz clubs, and sports clubs, that host events. Venues register as customers (or tenants) of Wingtip Tickets , for an easy way to list events and sell tickets. Each venue gets a personalized web site to manage and list their events and sell tickets, independent and isolated from other tenants. Under the covers, each tenant gets a separate application instance and standalone SQL database. +The app showcases venues that host events. Venue types include concert halls, jazz clubs, and sports clubs. Venues are the customers of the Wingtip Tickets app. In Wingtip Tickets, venues are registered as *tenants*. Being a tenant gives a venue an easy way to list events and to sell tickets to their customers. Each venue gets a personalized web site to list their events and to sell tickets. Each tenant is isolated from other tenants, and is independent from them. Under the covers, each tenant gets a separate application instance with its own standalone SQL database. 1. Open the events page for each of the three tenants in separate browser tabs: - http://events.contosoconcerthall.<USER>.trafficmanager.net
- http://events.dogwooddojo.<USER>.trafficmanager.net
- http://events.fabrikamjazzclub.<USER>.trafficmanager.net + - http://events.contosoconcerthall.<USER>.trafficmanager.net + - http://events.dogwooddojo.<USER>.trafficmanager.net + - http://events.fabrikamjazzclub.<USER>.trafficmanager.net - (replace <USER> with your deployment's User value). + (In each URL, replace <USER> with your deployment's user value.) ![Events](./media/saas-standaloneapp-get-started-deploy/fabrikam.png) +To control the distribution of incoming requests, the app uses [*Azure Traffic Manager*](../traffic-manager/traffic-manager-overview.md). Each tenant-specific app instance includes the tenant name as part of the domain name in the URL. All the tenant URLs include your specific **User** value. The URLs follow the following format: +- http://events.<venuename>.<USER>.trafficmanager.net -To control the distribution of incoming requests, the app uses [*Azure Traffic Manager*](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-overview). Each tenant-specific app includes the tenant name as part of the domain name in the URL. All the tenant URLs include your specific *User* value and follow this format: http://events.<venuename>.<USER>.trafficmanager.net. Each tenant's database location is included in the app settings of the corresponding deployed app. +Each tenant's database **Location** is included in the app settings of the corresponding deployed app. -In a production environment, you would typically create a CNAME DNS record to [*point a company internet domain*](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-point-internet-domain) to the URL of the traffic manager profile. +In a production environment, typically you create a CNAME DNS record to [*point a company internet domain*](../traffic-manager/traffic-manager-point-internet-domain.md) to the URL of the traffic manager profile. ## Explore the servers and tenant databases Let’s look at some of the resources that were deployed: -1. In the [Azure portal](http://portal.azure.com), browse to the list of resource groups. You should see the **wingtip-sa-catalog-<USER>** resource group in which **catalog-sa-<USER>** server is deployed with the **tenantcatalog** database. You should also see the three tenant resource groups. - -1. Open the **wingtip-sa-fabrikam-<USER>** resource group, which contains the resources for the Fabrikam Jazz Club deployment. The **fabrikamjazzclub-<USER>** server contains the **fabrikamjazzclub** database. +1. In the [Azure portal](http://portal.azure.com), browse to the list of resource groups. +2. See the **wingtip-sa-catalog-<USER>** resource group. + - In this resource group, the **catalog-sa-<USER>** server is deployed. The server contains the **tenantcatalog** database. + - You should also see the three tenant resource groups. +3. Open the **wingtip-sa-fabrikam-<USER>** resource group, which contains the resources for the Fabrikam Jazz Club deployment. The **fabrikamjazzclub-<USER>** server contains the **fabrikamjazzclub** database. - -Each tenant database is a 50 DTU _Standalone_ database. +Each tenant database is a 50 DTU *Standalone* database. ## Next steps In this tutorial you learned: > [!div class="checklist"] - -> * How to deploy the Wingtip Tickets SaaS Standalone Application -> * About the servers and databases that make up the app -> * How to delete sample resources to stop related billing +> * How to deploy the Wingtip Tickets SaaS Standalone Application. +> * About the servers and databases that make up the app. 
+> * How to delete sample resources to stop related billing. ## Additional resources -* To learn about multi-tenant SaaS applications, see [*Design patterns for multi-tenant SaaS applications*](saas-tenancy-app-design-patterns.md) +* To learn about elastic jobs, see [*Managing scaled-out cloud databases*](https://docs.microsoft.com/azure/sql-database/sql-database-elastic-jobs-overview) +--> + +- To learn about multi-tenant SaaS applications, see [Design patterns for multi-tenant SaaS applications](saas-tenancy-app-design-patterns.md). diff --git a/articles/sql-database/scripts/sql-database-setup-geodr-and-failover-pool-powershell.md b/articles/sql-database/scripts/sql-database-setup-geodr-and-failover-pool-powershell.md index 8c9eb14b09800..0ea5886fa5e4b 100644 --- a/articles/sql-database/scripts/sql-database-setup-geodr-and-failover-pool-powershell.md +++ b/articles/sql-database/scripts/sql-database-setup-geodr-and-failover-pool-powershell.md @@ -15,7 +15,7 @@ ms.devlang: PowerShell ms.topic: sample ms.tgt_pltfrm: sql-database ms.workload: database -ms.date: 07/25/2017 +ms.date: 11/29/2017 ms.author: carlrab --- diff --git a/articles/sql-database/scripts/sql-database-sync-data-between-azure-onprem.md b/articles/sql-database/scripts/sql-database-sync-data-between-azure-onprem.md index 94f8310754ba2..359e5e61f3076 100644 --- a/articles/sql-database/scripts/sql-database-sync-data-between-azure-onprem.md +++ b/articles/sql-database/scripts/sql-database-sync-data-between-azure-onprem.md @@ -23,7 +23,7 @@ ms.reviewer: douglasl This PowerShell example configures Data Sync to sync between an Azure SQL Database and a SQL Server on-premises database. -This sample requires the Azure PowerShell module version 4.2 or later. Run `Get-Module -ListAvailable AzureRM` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](https://docs.microsoft.com/en-us/powershell/azure/install-azurerm-ps). +This sample requires the Azure PowerShell module version 4.2 or later. Run `Get-Module -ListAvailable AzureRM` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](https://docs.microsoft.com/powershell/azure/install-azurerm-ps). Run `Login-AzureRmAccount` to create a connection with Azure. diff --git a/articles/sql-database/sql-database-connect-query-dotnet-core.md b/articles/sql-database/sql-database-connect-query-dotnet-core.md index 405e6159eb37f..757def134b473 100644 --- a/articles/sql-database/sql-database-connect-query-dotnet-core.md +++ b/articles/sql-database/sql-database-connect-query-dotnet-core.md @@ -14,42 +14,32 @@ ms.workload: "On Demand" ms.tgt_pltfrm: na ms.devlang: dotnet ms.topic: quickstart -ms.date: 07/05/2017 +ms.date: 07/07/2017 ms.author: carlrab - --- # Use .NET Core (C#) to query an Azure SQL database -This quick start tutorial demonstrates how to use [.NET Core](https://www.microsoft.com/net/) on Windows/Linux/macOS to create a C# program to connect to an Azure SQL database and use Transact-SQL statements to query data. +This quickstart tutorial demonstrates how to use [.NET Core](https://www.microsoft.com/net/) on Windows/Linux/macOS to create a C# program to connect to an Azure SQL database and use Transact-SQL statements to query data. ## Prerequisites -To complete this quick start tutorial, make sure you have the following: +To complete this quickstart tutorial, make sure you have the following: -- An Azure SQL database. 
This quick start uses the resources created in one of these quick starts: +[!INCLUDE [prerequisites-create-db](../../includes/sql-database-connect-query-prerequisites-create-db-includes.md)] - - [Create DB - Portal](sql-database-get-started-portal.md) - - [Create DB - CLI](sql-database-get-started-cli.md) - - [Create DB - PowerShell](sql-database-get-started-powershell.md) +- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quickstart tutorial. -- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quick start tutorial. - You have installed [.NET Core for your operating system](https://www.microsoft.com/net/core). ## SQL server connection information -Get the connection information needed to connect to the Azure SQL database. You will need the fully qualified server name, database name, and login information in the next procedures. - -1. Log in to the [Azure portal](https://portal.azure.com/). -2. Select **SQL Databases** from the left-hand menu, and click your database on the **SQL databases** page. -3. On the **Overview** page for your database, review the fully qualified server name as shown in the following image. You can hover over the server name to bring up the **Click to copy** option. - - ![server-name](./media/sql-database-connect-query-dotnet/server-name.png) +[!INCLUDE [prerequisites-server-connection-info](../../includes/sql-database-connect-query-prerequisites-server-connection-info-includes.md)] -4. If you forget your Azure SQL Database server login information, navigate to the SQL Database server page to view the server admin name. You can reset the password if necessary. +#### For ADO.NET -5. Click **Show database connection strings**. +1. Continue by clicking **Show database connection strings**. -6. Review the complete **ADO.NET** connection string. +2. Review the complete **ADO.NET** connection string. ![ADO.NET connection string](./media/sql-database-connect-query-dotnet/adonet-connection-string.png) diff --git a/articles/sql-database/sql-database-connect-query-dotnet-visual-studio.md b/articles/sql-database/sql-database-connect-query-dotnet-visual-studio.md index d682490499481..6eccc1ce6f33a 100644 --- a/articles/sql-database/sql-database-connect-query-dotnet-visual-studio.md +++ b/articles/sql-database/sql-database-connect-query-dotnet-visual-studio.md @@ -14,43 +14,33 @@ ms.workload: "Active" ms.tgt_pltfrm: na ms.devlang: dotnet ms.topic: quickstart -ms.date: 07/05/2017 +ms.date: 11/29/2017 ms.author: carlrab ms.custom: devcenter - --- # Use .NET (C#) with Visual Studio to connect and query an Azure SQL database -This quick start tutorial demonstrates how to use the [.NET framework](https://www.microsoft.com/net/) to create a C# program with Visual Studio to connect to an Azure SQL database and use Transact-SQL statements to query data. +This quickstart tutorial demonstrates how to use the [.NET framework](https://www.microsoft.com/net/) to create a C# program with Visual Studio to connect to an Azure SQL database and use Transact-SQL statements to query data. ## Prerequisites -To complete this quick start tutorial, make sure you have the following: +To complete this quickstart tutorial, make sure you have the following: -- An Azure SQL database. 
This quick start uses the resources created in one of these quick starts: +[!INCLUDE [prerequisites-create-db](../../includes/sql-database-connect-query-prerequisites-create-db-includes.md)] - - [Create DB - Portal](sql-database-get-started-portal.md) - - [Create DB - CLI](sql-database-get-started-cli.md) - - [Create DB - PowerShell](sql-database-get-started-powershell.md) +- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quickstart tutorial. -- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quick start tutorial. - An installation of [Visual Studio Community 2017, Visual Studio Professional 2017, or Visual Studio Enterprise 2017](https://www.visualstudio.com/downloads/). ## SQL server connection information -Get the connection information needed to connect to the Azure SQL database. You will need the fully qualified server name, database name, and login information in the next procedures. - -1. Log in to the [Azure portal](https://portal.azure.com/). -2. Select **SQL Databases** from the left-hand menu, and click your database on the **SQL databases** page. -3. On the **Overview** page for your database, review the fully qualified server name as shown in the following image. You can hover over the server name to bring up the **Click to copy** option. +[!INCLUDE [prerequisites-server-connection-info](../../includes/sql-database-connect-query-prerequisites-server-connection-info-includes.md)] - ![server-name](./media/sql-database-connect-query-dotnet/server-name.png) +#### For ADO.NET -4. If you forget your Azure SQL Database server login information, navigate to the SQL Database server page to view the server admin name. You can reset the password if necessary. +1. Continue by clicking **Show database connection strings**. -5. Click **Show database connection strings**. - -6. Review the complete **ADO.NET** connection string. +2. Review the complete **ADO.NET** connection string. ![ADO.NET connection string](./media/sql-database-connect-query-dotnet/adonet-connection-string.png) @@ -140,3 +130,10 @@ namespace sqltest - Learn about [Getting started with .NET Core on Windows/Linux/macOS using the command line](/dotnet/core/tutorials/using-with-xplat-cli). - Learn how to [Design your first Azure SQL database using SSMS](sql-database-design-first-database.md) or [Design your first Azure SQL database using .NET](sql-database-design-first-database-csharp.md). - For more information about .NET, see [.NET documentation](https://docs.microsoft.com/dotnet/). +- [Retry logic example: Connect resiliently to SQL with ADO.NET][step-4-connect-resiliently-to-sql-with-ado-net-a78n] + + + + +[step-4-connect-resiliently-to-sql-with-ado-net-a78n]: https://docs.microsoft.com/sql/connect/ado-net/step-4-connect-resiliently-to-sql-with-ado-net + diff --git a/articles/sql-database/sql-database-connect-query-go.md b/articles/sql-database/sql-database-connect-query-go.md new file mode 100644 index 0000000000000..9b7ea3f4efa84 --- /dev/null +++ b/articles/sql-database/sql-database-connect-query-go.md @@ -0,0 +1,293 @@ +--- +title: Use Go to query Azure SQL Database | Microsoft Docs +description: Use Go to create a program that connects to an Azure SQL Database, and use Transact-SQL statements to query and modify data. 
+services: sql-database +documentationcenter: '' +author: David-Engel +manager: craigg +editor: MightyPen + +ms.assetid: +ms.service: sql-database +ms.custom: mvc,develop apps +ms.workload: "On Demand" +ms.tgt_pltfrm: na +ms.devlang: go +ms.topic: quickstart +ms.date: 11/28/2017 +ms.author: v-daveng +--- +# Use Go to query an Azure SQL database + +This quickstart demonstrates how to use [Go](https://godoc.org/github.com/denisenkom/go-mssqldb) to connect to an Azure SQL database. Transact-SQL statements to query and modify data are also demonstrated. + +## Prerequisites + +To complete this quickstart tutorial, make sure you have the following prerequisites: + +[!INCLUDE [prerequisites-create-db](../../includes/sql-database-connect-query-prerequisites-create-db-includes.md)] + +- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quickstart tutorial. + +- You have installed Go and related software for your operating system: + + - **MacOS**: Install Homebrew and GoLang. See [Step 1.2](https://www.microsoft.com/sql-server/developer-get-started/go/mac/). + - **Ubuntu**: Install GoLang. See [Step 1.2](https://www.microsoft.com/sql-server/developer-get-started/go/ubuntu/). + - **Windows**: Install GoLang. See [Step 1.2](https://www.microsoft.com/sql-server/developer-get-started/go/windows/). + +## SQL server connection information + +[!INCLUDE [prerequisites-server-connection-info](../../includes/sql-database-connect-query-prerequisites-server-connection-info-includes.md)] + +## Create Go project and dependencies + +1. From the terminal, create a new project folder called **SqlServerSample**. + + ```bash + mkdir SqlServerSample + ``` + +2. Change directory to **SqlServerSample** and get and install the SQL Server driver for Go: + + ```bash + cd SqlServerSample + go get github.com/denisenkom/go-mssqldb + go install github.com/denisenkom/go-mssqldb + ``` + +## Create sample data + +1. Using your favorite text editor, create a file called **CreateTestData.sql** in the **SqlServerSample** folder. Copy and paste the following the T-SQL code inside it. This code creates a schema, table, and inserts a few rows. + + ```sql + CREATE SCHEMA TestSchema; + GO + + CREATE TABLE TestSchema.Employees ( + Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY, + Name NVARCHAR(50), + Location NVARCHAR(50) + ); + GO + + INSERT INTO TestSchema.Employees (Name, Location) VALUES + (N'Jared', N'Australia'), + (N'Nikita', N'India'), + (N'Tom', N'Germany'); + GO + + SELECT * FROM TestSchema.Employees; + GO + ``` + +2. Connect to the database using sqlcmd and run the SQL script to create the schema, table, and insert some rows. Replace the appropriate values for your server, database, username, and password. + + ```bash + sqlcmd -S your_server.database.windows.net -U your_username -P your_password -d your_database -i ./CreateTestData.sql + ``` + +## Insert code to query SQL database + +1. Create a file named **sample.go** in the **SqlServerSample** folder. + +2. Open the file and replace its contents with the following code. Add the appropriate values for your server, database, username, and password. This example uses the GoLang Context methods to ensure that there is an active connection to the database server. 
+ + ```go + package main + + import ( + _ "github.com/denisenkom/go-mssqldb" + "database/sql" + "context" + "log" + "fmt" + ) + + var db *sql.DB + + var server = "your_server.database.windows.net" + var port = 1433 + var user = "your_username" + var password = "your_password" + var database = "your_database" + + func main() { + // Build connection string + connString := fmt.Sprintf("server=%s;user id=%s;password=%s;port=%d;database=%s;", + server, user, password, port, database) + + var err error + + // Create connection pool + db, err = sql.Open("sqlserver", connString) + if err != nil { + log.Fatal("Error creating connection pool:", err.Error()) + } + fmt.Printf("Connected!\n") + + // Create employee + createId, err := CreateEmployee("Jake", "United States") + fmt.Printf("Inserted ID: %d successfully.\n", createId) + + // Read employees + count, err := ReadEmployees() + fmt.Printf("Read %d rows successfully.\n", count) + + // Update from database + updateId, err := UpdateEmployee("Jake", "Poland") + fmt.Printf("Updated row with ID: %d successfully.\n", updateId) + + // Delete from database + rows, err := DeleteEmployee("Jake") + fmt.Printf("Deleted %d rows successfully.\n", rows) + } + + func CreateEmployee(name string, location string) (int64, error) { + ctx := context.Background() + var err error + + if db == nil { + log.Fatal("What?") + } + + // Check if database is alive. + err = db.PingContext(ctx) + if err != nil { + log.Fatal("Error pinging database: " + err.Error()) + } + + tsql := fmt.Sprintf("INSERT INTO TestSchema.Employees (Name, Location) VALUES (@Name,@Location);") + + // Execute non-query with named parameters + result, err := db.ExecContext( + ctx, + tsql, + sql.Named("Location", location), + sql.Named("Name", name)) + + if err != nil { + log.Fatal("Error inserting new row: " + err.Error()) + return -1, err + } + + return result.LastInsertId() + } + + func ReadEmployees() (int, error) { + ctx := context.Background() + + // Check if database is alive. + err := db.PingContext(ctx) + if err != nil { + log.Fatal("Error pinging database: " + err.Error()) + } + + tsql := fmt.Sprintf("SELECT Id, Name, Location FROM TestSchema.Employees;") + + // Execute query + rows, err := db.QueryContext(ctx, tsql) + if err != nil { + log.Fatal("Error reading rows: " + err.Error()) + return -1, err + } + + defer rows.Close() + + var count int = 0 + + // Iterate through the result set. + for rows.Next() { + var name, location string + var id int + + // Get values from row. + err := rows.Scan(&id, &name, &location) + if err != nil { + log.Fatal("Error reading rows: " + err.Error()) + return -1, err + } + + fmt.Printf("ID: %d, Name: %s, Location: %s\n", id, name, location) + count++ + } + + return count, nil + } + + // Update an employee's information + func UpdateEmployee(name string, location string) (int64, error) { + ctx := context.Background() + + // Check if database is alive. 
+ err := db.PingContext(ctx) + if err != nil { + log.Fatal("Error pinging database: " + err.Error()) + } + + tsql := fmt.Sprintf("UPDATE TestSchema.Employees SET Location = @Location WHERE Name= @Name") + + // Execute non-query with named parameters + result, err := db.ExecContext( + ctx, + tsql, + sql.Named("Location", location), + sql.Named("Name", name)) + if err != nil { + log.Fatal("Error updating row: " + err.Error()) + return -1, err + } + + return result.LastInsertId() + } + + // Delete an employee from database + func DeleteEmployee(name string) (int64, error) { + ctx := context.Background() + + // Check if database is alive. + err := db.PingContext(ctx) + if err != nil { + log.Fatal("Error pinging database: " + err.Error()) + } + + tsql := fmt.Sprintf("DELETE FROM TestSchema.Employees WHERE Name=@Name;") + + // Execute non-query with named parameters + result, err := db.ExecContext(ctx, tsql, sql.Named("Name", name)) + if err != nil { + fmt.Println("Error deleting row: " + err.Error()) + return -1, err + } + + return result.RowsAffected() + } + ``` + +## Run the code + +1. At the command prompt, run the following commands: + + ```bash + go run sample.go + ``` + +2. Verify the output: + + ```text + Connected! + Inserted ID: 4 successfully. + ID: 1, Name: Jared, Location: Australia + ID: 2, Name: Nikita, Location: India + ID: 3, Name: Tom, Location: Germany + ID: 4, Name: Jake, Location: United States + Read 4 rows successfully. + Updated row with ID: 4 successfully. + Deleted 1 rows successfully. + ``` + +## Next steps + +- [Design your first Azure SQL database](sql-database-design-first-database.md) +- [Go Driver for Microsoft SQL Server](https://github.com/denisenkom/go-mssqldb) +- [Report issues or ask questions](https://github.com/denisenkom/go-mssqldb/issues) + diff --git a/articles/sql-database/sql-database-connect-query-java.md b/articles/sql-database/sql-database-connect-query-java.md index 4b9461ba405b4..5b343e82d5caf 100644 --- a/articles/sql-database/sql-database-connect-query-java.md +++ b/articles/sql-database/sql-database-connect-query-java.md @@ -14,27 +14,22 @@ ms.workload: "On Demand" ms.tgt_pltfrm: na ms.devlang: java ms.topic: quickstart -ms.date: 07/10/2017 +ms.date: 07/11/2017 ms.author: andrela - --- # Use Java to query an Azure SQL database -This quick start demonstrates how to use [Java](https://docs.microsoft.com/sql/connect/jdbc/microsoft-jdbc-driver-for-sql-server) to connect to an Azure SQL database and then use Transact-SQL statements to query data. +This quickstart demonstrates how to use [Java](https://docs.microsoft.com/sql/connect/jdbc/microsoft-jdbc-driver-for-sql-server) to connect to an Azure SQL database and then use Transact-SQL statements to query data. ## Prerequisites -To complete this quick start tutorial, make sure you have the following prerequisites: - -- An Azure SQL database. This quick start uses the resources created in one of these quick starts: +To complete this quickstart tutorial, make sure you have the following prerequisites: - - [Create DB - Portal](sql-database-get-started-portal.md) - - [Create DB - CLI](sql-database-get-started-cli.md) - - [Create DB - PowerShell](sql-database-get-started-powershell.md) +[!INCLUDE [prerequisites-create-db](../../includes/sql-database-connect-query-prerequisites-create-db-includes.md)] -- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quick start tutorial. 
+- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quickstart tutorial. -- You have installed Java and related software for your operating system. +- You have installed Java and related software for your operating system: - **MacOS**: Install Homebrew and Java, and then install Maven. See [Step 1.2 and 1.3](https://www.microsoft.com/sql-server/developer-get-started/java/mac/). - **Ubuntu**: Install the Java Development Kit, and install Maven. See [Step 1.2, 1.3, and 1.4](https://www.microsoft.com/sql-server/developer-get-started/java/ubuntu/). @@ -42,15 +37,7 @@ To complete this quick start tutorial, make sure you have the following prerequi ## SQL server connection information -Get the connection information needed to connect to the Azure SQL database. You will need the fully qualified server name, database name, and login information in the next procedures. - -1. Log in to the [Azure portal](https://portal.azure.com/). -2. Select **SQL Databases** from the left-hand menu, and click your database on the **SQL databases** page. -3. On the **Overview** page for your database, review the fully qualified server name as shown in the following image: You can hover over the server name to bring up the **Click to copy** option. - - ![server-name](./media/sql-database-connect-query-dotnet/server-name.png) - -4. If you forget your server login information, navigate to the SQL Database server page to view the server admin name. If necessary, reset the password. +[!INCLUDE [prerequisites-server-connection-info](../../includes/sql-database-connect-query-prerequisites-server-connection-info-includes.md)] ## **Create Maven project and dependencies** 1. From the terminal, create a new Maven project called **sqltest**. diff --git a/articles/sql-database/sql-database-connect-query-nodejs.md b/articles/sql-database/sql-database-connect-query-nodejs.md index 06a31b1cae42e..f3e58c49b6e79 100644 --- a/articles/sql-database/sql-database-connect-query-nodejs.md +++ b/articles/sql-database/sql-database-connect-query-nodejs.md @@ -14,41 +14,29 @@ ms.workload: "On Demand" ms.tgt_pltfrm: na ms.devlang: nodejs ms.topic: quickstart -ms.date: 07/05/2017 +ms.date: 07/06/2017 ms.author: carlrab - --- # Use Node.js to query an Azure SQL database -This quick start tutorial demonstrates how to use [Node.js](https://nodejs.org/en/) to create a program to connect to an Azure SQL database and use Transact-SQL statements to query data. +This quickstart tutorial demonstrates how to use [Node.js](https://nodejs.org/en/) to create a program to connect to an Azure SQL database and use Transact-SQL statements to query data. ## Prerequisites -To complete this quick start tutorial, make sure you have the following: +To complete this quickstart tutorial, make sure you have the following: -- An Azure SQL database. This quick start uses the resources created in one of these quick starts: +[!INCLUDE [prerequisites-create-db](../../includes/sql-database-connect-query-prerequisites-create-db-includes.md)] - - [Create DB - Portal](sql-database-get-started-portal.md) - - [Create DB - CLI](sql-database-get-started-cli.md) - - [Create DB - PowerShell](sql-database-get-started-powershell.md) +- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quickstart tutorial. 
-- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quick start tutorial. -- You have installed Node.js and related software for your operating system. +- You have installed Node.js and related software for your operating system: - **MacOS**: Install Homebrew and Node.js, and then install the ODBC driver and SQLCMD. See [Step 1.2 and 1.3](https://www.microsoft.com/sql-server/developer-get-started/node/mac/). - **Ubuntu**: Install Node.js, and then install the ODBC driver and SQLCMD. See [Step 1.2 and 1.3](https://www.microsoft.com/sql-server/developer-get-started/node/ubuntu/) . - **Windows**: Install Chocolatey and Node.js, and then install the ODBC driver and SQL CMD. See [Step 1.2 and 1.3](https://www.microsoft.com/sql-server/developer-get-started/node/windows/). ## SQL server connection information -Get the connection information needed to connect to the Azure SQL database. You will need the fully qualified server name, database name, and login information in the next procedures. - -1. Log in to the [Azure portal](https://portal.azure.com/). -2. Select **SQL Databases** from the left-hand menu, and click your database on the **SQL databases** page. -3. On the **Overview** page for your database, review the fully qualified server name as shown in the following image. You can hover over the server name to bring up the **Click to copy** option. - - ![server-name](./media/sql-database-connect-query-dotnet/server-name.png) - -4. If you have forgotten the login information for your Azure SQL Database server, navigate to the SQL Database server page to view the server admin name and, if necessary, reset the password. +[!INCLUDE [prerequisites-server-connection-info](../../includes/sql-database-connect-query-prerequisites-server-connection-info-includes.md)] > [!IMPORTANT] > You must have a firewall rule in place for the public IP address of the computer on which you perform this tutorial. If you are on a different computer or have a different public IP address, create a [server-level firewall rule using the Azure portal](sql-database-get-started-portal.md#create-a-server-level-firewall-rule). @@ -142,4 +130,3 @@ Open a command prompt and create a folder named *sqltest*. Navigate to the folde - Learn how to [Connect and query with SSMS](sql-database-connect-query-ssms.md) - Learn how to [Connect and query with Visual Studio Code](sql-database-connect-query-vscode.md). - diff --git a/articles/sql-database/sql-database-connect-query-php.md b/articles/sql-database/sql-database-connect-query-php.md index 895b455329ed8..861fe3865c8b0 100644 --- a/articles/sql-database/sql-database-connect-query-php.md +++ b/articles/sql-database/sql-database-connect-query-php.md @@ -14,27 +14,22 @@ ms.workload: "On Demand" ms.tgt_pltfrm: na ms.devlang: php ms.topic: quickstart -ms.date: 08/08/2017 +ms.date: 11/29/2017 ms.author: carlrab - --- # Use PHP to query an Azure SQL database -This quick start tutorial demonstrates how to use [PHP](http://php.net/manual/en/intro-whatis.php) to create a program to connect to an Azure SQL database and use Transact-SQL statements to query data. +This quickstart tutorial demonstrates how to use [PHP](http://php.net/manual/en/intro-whatis.php) to create a program to connect to an Azure SQL database and use Transact-SQL statements to query data. ## Prerequisites -To complete this quick start tutorial, make sure you have the following: - -- An Azure SQL database. 
This quick start uses the resources created in one of these quick starts: +To complete this quickstart tutorial, make sure you have the following: - - [Create DB - Portal](sql-database-get-started-portal.md) - - [Create DB - CLI](sql-database-get-started-cli.md) - - [Create DB - PowerShell](sql-database-get-started-powershell.md) +[!INCLUDE [prerequisites-create-db](../../includes/sql-database-connect-query-prerequisites-create-db-includes.md)] -- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quick start tutorial. +- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quickstart tutorial. -- You have installed PHP and related software for your operating system. +- You have installed PHP and related software for your operating system: - **MacOS**: Install Homebrew and PHP, install the ODBC driver and SQLCMD, and then install the PHP Driver for SQL Server. See [Steps 1.2, 1.3, and 2.1](https://www.microsoft.com/en-us/sql-server/developer-get-started/php/mac/). - **Ubuntu**: Install PHP and other required packages, and then install the PHP Driver for SQL Server. See [Steps 1.2 and 2.1](https://www.microsoft.com/sql-server/developer-get-started/php/ubuntu/). @@ -42,15 +37,7 @@ To complete this quick start tutorial, make sure you have the following: ## SQL server connection information -Get the connection information needed to connect to the Azure SQL database. You will need the fully qualified server name, database name, and login information in the next procedures. - -1. Log in to the [Azure portal](https://portal.azure.com/). -2. Select **SQL Databases** from the left-hand menu, and click your database on the **SQL databases** page. -3. On the **Overview** page for your database, review the fully qualified server name as shown in the following image. You can hover over the server name to bring up the **Click to copy** option. - - ![server-name](./media/sql-database-connect-query-dotnet/server-name.png) - -4. If you forget your server login information, navigate to the SQL Database server page to view the server admin name and, if necessary, reset the password. +[!INCLUDE [prerequisites-server-connection-info](../../includes/sql-database-connect-query-prerequisites-server-connection-info-includes.md)] ## Insert code to query SQL database @@ -97,3 +84,10 @@ Get the connection information needed to connect to the Azure SQL database. 
You - [Design your first Azure SQL database](sql-database-design-first-database.md) - [Microsoft PHP Drivers for SQL Server](https://github.com/Microsoft/msphpsql/) - [Report issues or ask questions](https://github.com/Microsoft/msphpsql/issues) +- [Retry logic example: Connect resiliently to SQL with PHP][step-4-connect-resiliently-to-sql-with-php-p42h] + + + + +[step-4-connect-resiliently-to-sql-with-php-p42h]: https://docs.microsoft.com/sql/connect/php/step-4-connect-resiliently-to-sql-with-php + diff --git a/articles/sql-database/sql-database-connect-query-portal.md b/articles/sql-database/sql-database-connect-query-portal.md index f215ca572d5ff..f21cec4908fe8 100644 --- a/articles/sql-database/sql-database-connect-query-portal.md +++ b/articles/sql-database/sql-database-connect-query-portal.md @@ -16,21 +16,18 @@ ms.workload: data-management ms.tgt_pltfrm: na ms.devlang: na ms.topic: hero-article -ms.date: 08/01/2017 +ms.date: 08/02/2017 ms.author: ayolubek --- # Azure portal: Use the SQL Query Editor to connect and query data -The SQL Query Editor is a browser query tool that provides an efficient and lightweight way to execute SQL queries on your Azure SQL Database or Azure SQL Data Warehouse without leaving the Azure portal. This quick start demonstrates how to use the Query Editor to connect to a SQL database, and then use Transact-SQL statements to query, insert, update, and delete data in the database. +The SQL Query Editor is a browser query tool that provides an efficient and lightweight way to execute SQL queries on your Azure SQL Database or Azure SQL Data Warehouse without leaving the Azure portal. This quickstart demonstrates how to use the Query Editor to connect to a SQL database, and then use Transact-SQL statements to query, insert, update, and delete data in the database. ## Prerequisites -This quick start uses as its starting point the resources created in one of these quick starts: - -- [Create DB - Portal](sql-database-get-started-portal.md) -- [Create DB - CLI](sql-database-get-started-cli.md) -- [Create DB - PowerShell](sql-database-get-started-powershell.md) +This quickstart uses as its starting point the resources created in one of these quickstarts: +[!INCLUDE [prerequisites-create-db](../../includes/sql-database-connect-query-prerequisites-create-db-includes.md)] ## Log in to the Azure portal diff --git a/articles/sql-database/sql-database-connect-query-python.md b/articles/sql-database/sql-database-connect-query-python.md index 35aa910023492..304ba2ef04f48 100644 --- a/articles/sql-database/sql-database-connect-query-python.md +++ b/articles/sql-database/sql-database-connect-query-python.md @@ -14,26 +14,22 @@ ms.workload: "On Demand" ms.tgt_pltfrm: n ms.devlang: python ms.topic: quickstart -ms.date: 08/08/2017 +ms.date: 08/09/2017 ms.author: carlrab --- # Use Python to query an Azure SQL database - This quick start demonstrates how to use [Python](https://python.org) to connect to an Azure SQL database and use Transact-SQL statements to query data. + This quickstart demonstrates how to use [Python](https://python.org) to connect to an Azure SQL database and use Transact-SQL statements to query data. ## Prerequisites -To complete this quick start tutorial, make sure you have the following: +To complete this quickstart tutorial, make sure you have the following: -- An Azure SQL database. 
This quick start uses the resources created in one of these quick starts: +[!INCLUDE [prerequisites-create-db](../../includes/sql-database-connect-query-prerequisites-create-db-includes.md)] - - [Create DB - Portal](sql-database-get-started-portal.md) - - [Create DB - CLI](sql-database-get-started-cli.md) - - [Create DB - PowerShell](sql-database-get-started-powershell.md) +- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quickstart tutorial. -- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quick start tutorial. - -- You have installed Python and related software for your operating system. +- You have installed Python and related software for your operating system: - **MacOS**: Install Homebrew and Python, install the ODBC driver and SQLCMD, and then install the Python Driver for SQL Server. See [Steps 1.2, 1.3, and 2.1](https://www.microsoft.com/sql-server/developer-get-started/python/mac/). - **Ubuntu**: Install Python and other required packages, and then install the Python Driver for SQL Server. See [Steps 1.2, 1.3, and 2.1](https://www.microsoft.com/sql-server/developer-get-started/python/ubuntu/). @@ -41,15 +37,7 @@ To complete this quick start tutorial, make sure you have the following: ## SQL server connection information -Get the connection information needed to connect to the Azure SQL database. You will need the fully qualified server name, database name, and login information in the next procedures. - -1. Log in to the [Azure portal](https://portal.azure.com/). -2. Select **SQL Databases** from the left-hand menu, and click your database on the **SQL databases** page. -3. On the **Overview** page for your database, review the fully qualified server name as shown in the following image. You can hover over the server name to bring up the **Click to copy** option. - - ![server-name](./media/sql-database-connect-query-dotnet/server-name.png) - -4. If you forget your server login information, navigate to the SQL Database server page to view the server admin name and, if necessary, reset the password. +[!INCLUDE [prerequisites-server-connection-info](../../includes/sql-database-connect-query-prerequisites-server-connection-info-includes.md)] ## Insert code to query SQL database diff --git a/articles/sql-database/sql-database-connect-query-ruby.md b/articles/sql-database/sql-database-connect-query-ruby.md index b31d66e6c8740..364e729830cb4 100644 --- a/articles/sql-database/sql-database-connect-query-ruby.md +++ b/articles/sql-database/sql-database-connect-query-ruby.md @@ -14,40 +14,28 @@ ms.workload: "Inactive" ms.tgt_pltfrm: na ms.devlang: ruby ms.topic: quickstart -ms.date: 07/14/2017 +ms.date: 07/15/2017 ms.author: carlrab --- - # Use Ruby to query an Azure SQL database -This quick start tutorial demonstrates how to use [Ruby](https://www.ruby-lang.org) to create a program to connect to an Azure SQL database and use Transact-SQL statements to query data. +This quickstart tutorial demonstrates how to use [Ruby](https://www.ruby-lang.org) to create a program to connect to an Azure SQL database and use Transact-SQL statements to query data. ## Prerequisites -To complete this quick start tutorial, make sure you have the following prerequisites: +To complete this quickstart tutorial, make sure you have the following prerequisites: -- An Azure SQL database. 
This quick start uses the resources created in one of these quick starts: +[!INCLUDE [prerequisites-create-db](../../includes/sql-database-connect-query-prerequisites-create-db-includes.md)] - - [Create DB - Portal](sql-database-get-started-portal.md) - - [Create DB - CLI](sql-database-get-started-cli.md) - - [Create DB - PowerShell](sql-database-get-started-powershell.md) +- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quickstart tutorial. -- A [server-level firewall rule](sql-database-get-started-portal.md#create-a-server-level-firewall-rule) for the public IP address of the computer you use for this quick start tutorial. -- You have installed Ruby and related software for your operating system. +- You have installed Ruby and related software for your operating system: - **MacOS**: Install Homebrew, install rbenv and ruby-build, install Ruby, and then install FreeTDS. See [Step 1.2, 1.3, 1.4, and 1.5](https://www.microsoft.com/sql-server/developer-get-started/ruby/mac/). - **Ubuntu**: Install prerequisites for Ruby, install rbenv and ruby-build, install Ruby, and then install FreeTDS. See [Step 1.2, 1.3, 1.4, and 1.5](https://www.microsoft.com/sql-server/developer-get-started/ruby/ubuntu/). ## SQL server connection information -Get the connection information needed to connect to the Azure SQL database. You will need the fully qualified server name, database name, and login information in the next procedures. - -1. Log in to the [Azure portal](https://portal.azure.com/). -2. Select **SQL Databases** from the left-hand menu, and click your database on the **SQL databases** page. -3. On the **Overview** page for your database, review the fully qualified server name. You can hover over the server name to bring up the **Click to copy** option, as shown in the following image: - - ![server-name](./media/sql-database-connect-query-dotnet/server-name.png) - -4. If you have forgotten the login information for your Azure SQL Database server, navigate to the SQL Database server page to view the server admin name and, if necessary, reset the password. +[!INCLUDE [prerequisites-server-connection-info](../../includes/sql-database-connect-query-prerequisites-server-connection-info-includes.md)] > [!IMPORTANT] > You must have a firewall rule in place for the public IP address of the computer on which you perform this tutorial. If you are on a different computer or have a different public IP address, create a [server-level firewall rule using the Azure portal](sql-database-get-started-portal.md#create-a-server-level-firewall-rule). diff --git a/articles/sql-database/sql-database-connect-query-ssms.md b/articles/sql-database/sql-database-connect-query-ssms.md index 8f87682cd013c..19b63b9c6fb32 100644 --- a/articles/sql-database/sql-database-connect-query-ssms.md +++ b/articles/sql-database/sql-database-connect-query-ssms.md @@ -1,6 +1,6 @@ --- title: 'SSMS: Connect and query data in Azure SQL Database | Microsoft Docs' -description: Learn how to connect to SQL Database on Azure by using SQL Server Management Studio (SSMS). Then, run Transact-SQL (T-SQL) statements to query and edit data. +description: Learn how to connect to SQL Database on Azure by using SQL Server Management Studio (SSMS). Then run Transact-SQL (T-SQL) statements to query and edit data. 
metacanonical: '' keywords: connect to sql database,sql server management studio services: sql-database @@ -16,35 +16,26 @@ ms.workload: "Active" ms.tgt_pltfrm: na ms.devlang: na ms.topic: quickstart -ms.date: 05/26/2017 +ms.date: 11/28/2017 ms.author: carlrab - --- # Azure SQL Database: Use SQL Server Management Studio to connect and query data -[SQL Server Management Studio](https://msdn.microsoft.com/library/ms174173.aspx) (SSMS) is an integrated environment for managing any SQL infrastructure, from SQL Server to SQL Database for Microsoft Windows. This quick start demonstrates how to use SSMS to connect to an Azure SQL database, and then use Transact-SQL statements to query, insert, update, and delete data in the database. +[SQL Server Management Studio][ssms-install-latest-84g] (SSMS) is an integrated environment for managing any SQL infrastructure, from SQL Server to SQL Database for Microsoft Windows. This quickstart demonstrates how to use SSMS to connect to an Azure SQL database, and then use Transact-SQL statements to query, insert, update, and delete data in the database. ## Prerequisites -This quick start uses as its starting point the resources created in one of these quick starts: - -- [Create DB - Portal](sql-database-get-started-portal.md) -- [Create DB - CLI](sql-database-get-started-cli.md) -- [Create DB - PowerShell](sql-database-get-started-powershell.md) +This quickstart uses as its starting point the resources created in one of these quickstarts: -Before you start, make sure you have installed the newest version of [SSMS](https://msdn.microsoft.com/library/mt238290.aspx). +[!INCLUDE [prerequisites-create-db](../../includes/sql-database-connect-query-prerequisites-create-db-includes.md)] -## SQL server connection information +#### Install the latest SSMS -Get the connection information needed to connect to the Azure SQL database. You will need the fully qualified server name, database name, and login information in the next procedures. +Before you start, make sure you have installed the newest version of [SSMS][ssms-install-latest-84g]. -1. Log in to the [Azure portal](https://portal.azure.com/). -2. Select **SQL Databases** from the left-hand menu, and click your database on the **SQL databases** page. -3. On the **Overview** page for your database, review the fully qualified server name as shown in the image below. You can hover over the server name to bring up the **Click to copy** option. - - ![connection information](./media/sql-database-connect-query-dotnet/server-name.png) +## SQL server connection information -4. If you have forgotten the login information for your Azure SQL Database server, navigate to the SQL Database server page to view the server admin name and, if necessary, reset the password. +[!INCLUDE [prerequisites-server-connection-info](../../includes/sql-database-connect-query-prerequisites-server-connection-info-includes.md)] ## Connect to your database @@ -58,13 +49,14 @@ Use SQL Server Management Studio to establish a connection to your Azure SQL Dat 2. In the **Connect to Server** dialog box, enter the following information: - | Setting       | Suggested value | Description | - | ------------ | ------------------ | ------------------------------------------------- | + | Setting     | Suggested value | Description | + | ------------ | ------------------ | ----------- | | **Server type** | Database engine | This value is required. 
| | **Server name** | The fully qualified server name | The name should be something like this: **mynewserver20170313.database.windows.net**. | | **Authentication** | SQL Server Authentication | SQL Authentication is the only authentication type that we have configured in this tutorial. | | **Login** | The server admin account | This is the account that you specified when you created the server. | | **Password** | The password for your server admin account | This is the password that you specified when you created the server. | + |||| ![connect to server](./media/sql-database-connect-query-ssms/connect.png) @@ -169,3 +161,9 @@ Use the following code to delete the new product that you previously added using - To connect and query using Java, see [Connect and query with Java](sql-database-connect-query-java.md). - To connect and query using Python, see [Connect and query with Python](sql-database-connect-query-python.md). - To connect and query using Ruby, see [Connect and query with Ruby](sql-database-connect-query-ruby.md). + + + + +[ssms-install-latest-84g]: https://docs.microsoft.com/en-us/sql/ssms/sql-server-management-studio-ssms + diff --git a/articles/sql-database/sql-database-connect-query-vscode.md b/articles/sql-database/sql-database-connect-query-vscode.md index ec32231779b32..2a863bc245f57 100644 --- a/articles/sql-database/sql-database-connect-query-vscode.md +++ b/articles/sql-database/sql-database-connect-query-vscode.md @@ -16,21 +16,20 @@ ms.workload: "On Demand" ms.tgt_pltfrm: na ms.devlang: na ms.topic: quickstart -ms.date: 06/20/2017 +ms.date: 06/22/2017 ms.author: carlrab - --- # Azure SQL Database: Use Visual Studio Code to connect and query data -[Visual Studio Code](https://code.visualstudio.com/docs) is a graphical code editor for Linux, macOS, and Windows that supports extensions, including the [mssql extension](https://aka.ms/mssql-marketplace) for querying Microsoft SQL Server, Azure SQL Database, and SQL Data Warehouse. This quick start demonstrates how to use Visual Studio Code to connect to an Azure SQL database, and then use Transact-SQL statements to query, insert, update, and delete data in the database. +[Visual Studio Code](https://code.visualstudio.com/docs) is a graphical code editor for Linux, macOS, and Windows that supports extensions, including the [mssql extension](https://aka.ms/mssql-marketplace) for querying Microsoft SQL Server, Azure SQL Database, and SQL Data Warehouse. This quickstart demonstrates how to use Visual Studio Code to connect to an Azure SQL database, and then use Transact-SQL statements to query, insert, update, and delete data in the database. ## Prerequisites -This quick start uses as its starting point the resources created in one of these quick starts: +This quickstart uses as its starting point the resources created in one of these quickstarts: + +[!INCLUDE [prerequisites-create-db](../../includes/sql-database-connect-query-prerequisites-create-db-includes.md)] -- [Create DB - Portal](sql-database-get-started-portal.md) -- [Create DB - CLI](sql-database-get-started-cli.md) -- [Create DB - PowerShell](sql-database-get-started-powershell.md) +#### Install VS Code Before you start, make sure you have installed the newest version of [Visual Studio Code](https://code.visualstudio.com/Download) and loaded the [mssql extension](https://aka.ms/mssql-marketplace). 
For installation guidance for the mssql extension, see [Install VS Code](https://docs.microsoft.com/sql/linux/sql-server-linux-develop-use-vscode#install-vs-code) and see [mssql for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-mssql.mssql). @@ -60,13 +59,7 @@ No special configuration needed. Get the connection information needed to connect to the Azure SQL database. You will need the fully qualified server name, database name, and login information in the next procedures. -1. Log in to the [Azure portal](https://portal.azure.com/). -2. Select **SQL Databases** from the left-hand menu, and click your database on the **SQL databases** page. -3. On the **Overview** page for your database, review the fully qualified server name as shown in the following image. You can hover over the server name to bring up the **Click to copy** option. - - ![connection information](./media/sql-database-connect-query-dotnet/server-name.png) - -4. If you have forgotten the login information for your Azure SQL Database server, navigate to the SQL Database server page to view the server admin name and, if necessary, reset the password. +[!INCLUDE [prerequisites-server-connection-info](../../includes/sql-database-connect-query-prerequisites-server-connection-info-includes.md)] ## Set language mode to SQL @@ -172,7 +165,7 @@ Use the following code to update the new product that you previously added using ## Delete data -Use the following code to delete the new product that you previously added using the [DELETE](https://msdn.microsoft.com/library/ms189835.aspx) Transact-SQL statement. +Use the following code to delete the new product that you previously added using the [DELETE](https://docs.microsoft.com/sql/t-sql/statements/delete-transact-sql) Transact-SQL statement. 1. In the **Editor** window, delete the previous query and enter the following query: diff --git a/articles/sql-database/sql-database-connectivity-architecture.md b/articles/sql-database/sql-database-connectivity-architecture.md index b937da95f28d2..d95a024ba7083 100644 --- a/articles/sql-database/sql-database-connectivity-architecture.md +++ b/articles/sql-database/sql-database-connectivity-architecture.md @@ -1,6 +1,6 @@ --- title: Azure SQL Database connectivity architecture | Microsoft Docs -description: This document explains the Azure SQLDB connectivity architecture from within Azure or from outside of Azure. +description: This document explains the Azure SQLDB connectivity architecture from within Azure or from outside of Azure. services: sql-database documentationcenter: '' author: CarlRabeler @@ -15,15 +15,14 @@ ms.tgt_pltfrm: na ms.workload: "On Demand" ms.date: 06/05/2017 ms.author: carlrab - --- # Azure SQL Database Connectivity Architecture -This article explains the Azure SQL Database connectivity architecture and explains how the different components function to direct traffic to your instance of Azure SQL Database. These Azure SQL Database connectivity components function to direct network traffic to the Azure database with clients connecting from within Azure and with clients connecting from outside of Azure. This article also provides script samples to change how connectivity occurs, and the considerations related to changing the default connectivity settings. If there are any questions after reading this article, please contact Dhruv at dmalik@microsoft.com. 
+This article explains the Azure SQL Database connectivity architecture and explains how the different components function to direct traffic to your instance of Azure SQL Database. These Azure SQL Database connectivity components function to direct network traffic to the Azure database with clients connecting from within Azure and with clients connecting from outside of Azure. This article also provides script samples to change how connectivity occurs, and the considerations related to changing the default connectivity settings. ## Connectivity architecture -The following diagram provides a high-level overview of the Azure SQL Database connectivity architecture. +The following diagram provides a high-level overview of the Azure SQL Database connectivity architecture. ![architecture overview](./media/sql-database-connectivity-architecture/architecture-overview.png) @@ -61,14 +60,14 @@ The following table lists the primary and secondary IPs of the Azure SQL Databas | --- | --- |--- | | Australia East | 191.238.66.109 | 13.75.149.87 | | Australia South East | 191.239.192.109 | 13.73.109.251 | -| Brazil South | 104.41.11.5 | | -| Canada Central | 40.85.224.249 | | +| Brazil South | 104.41.11.5 | | +| Canada Central | 40.85.224.249 | | | Canada East | 40.86.226.166 | | | Central US | 23.99.160.139 | 13.67.215.62 | | East Asia | 191.234.2.139 | 52.175.33.150 | | East US 1 | 191.238.6.43 | 40.121.158.30 | | East US 2 | 191.239.224.107 | 40.79.84.180 | -| India Central | 104.211.96.159 | | +| India Central | 104.211.96.159 | | | India South | 104.211.224.146 | | | India West | 104.211.160.80 | | | Japan East | 191.237.240.43 | 13.78.61.196 | @@ -80,7 +79,7 @@ The following table lists the primary and secondary IPs of the Azure SQL Databas | South Central US | 23.98.162.75 | 13.66.62.124 | | South East Asia | 23.100.117.95 | 104.43.15.0 | | UK North | 13.87.97.210 | | -| UK South 1 | 51.140.184.11 | | +| UK South 1 | 51.140.184.11 | | | UK South 2 | 13.87.34.7 | | | UK West | 51.141.8.11 | | | West Central US | 13.78.145.25 | | @@ -91,12 +90,12 @@ The following table lists the primary and secondary IPs of the Azure SQL Databas ## Change Azure SQL Database connection policy -To change the Azure SQL Database connection policy for an Azure SQL Database server, use the [REST API](https://msdn.microsoft.com/library/azure/mt604439.aspx). +To change the Azure SQL Database connection policy for an Azure SQL Database server, use the [REST API](https://msdn.microsoft.com/library/azure/mt604439.aspx). -- If your connection policy is set to **Proxy**, all network packets flow via the Azure SQL Database gateway. For this setting, you need to allow outbound to only the Azure SQL Database gateway IP. Using a setting of **Proxy** has more latency than a setting of **Redirect**. -- If your connection policy is setting **Redirect**, all network packets flow directly to the middleware proxy. For this setting, you need to allow outbound to multiple IPs. +- If your connection policy is set to **Proxy**, all network packets flow via the Azure SQL Database gateway. For this setting, you need to allow outbound to only the Azure SQL Database gateway IP. Using a setting of **Proxy** has more latency than a setting of **Redirect**. +- If your connection policy is setting **Redirect**, all network packets flow directly to the middleware proxy. For this setting, you need to allow outbound to multiple IPs. 
-## Script to change connection settings via PowerShell +## Script to change connection settings via PowerShell > [!IMPORTANT] > This script requires the [Azure PowerShell module](/powershell/azure/install-azurerm-ps). @@ -136,7 +135,7 @@ $AuthContext = [Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationCo $result = $AuthContext.AcquireToken( "https://management.core.windows.net/", $clientId, -[Uri]$uri, +[Uri]$uri, [Microsoft.IdentityModel.Clients.ActiveDirectory.PromptBehavior]::Auto ) @@ -156,7 +155,7 @@ $body = @{properties=@{connectionType=$connectionType}} | ConvertTo-Json Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Sql/servers/$serverName/connectionPolicies/Default?api-version=2014-04-01-preview" -Method PUT -Headers $authHeader -Body $body -ContentType "application/json" ``` -## Script to change connection settings via Azure CLI 2.0 +## Script to change connection settings via Azure CLI 2.0 > [!IMPORTANT] > This script requires the [Azure CLI 2.0](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest). @@ -165,20 +164,17 @@ Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/$subscription The following CLI script shows how to change the connection policy.
- # Get SQL Server ID
- sqlserverid=$(az sql server show -n sql-server-name -g sql-server-group --query 'id' -o tsv)
+# Get SQL Server ID
+sqlserverid=$(az sql server show -n sql-server-name -g sql-server-group --query 'id' -o tsv)
 
-# Set URI
+# Set resource ID
-uri="https://management.azure.com/$sqlserverid/connectionPolicies/Default?api-version=2014-04-01-preview"
-
-# Get Access Token 
-accessToken=$(az account get-access-token --query 'accessToken' -o tsv)
+id="$sqlserverid/connectionPolicies/Default"
 
 # Get current connection policy 
-curl -H "authorization: Bearer $accessToken" -X GET $uri
+az resource show --ids $id
 
-#Update connection policy 
-curl -H "authorization: Bearer $accessToken" -H "Content-Type: application/json" -d '{"properties":{"connectionType":"Proxy"}}' -X PUT $uri
+# Update connection policy 
+az resource update --ids $id --set properties.connectionType=Proxy
 
 
diff --git a/articles/sql-database/sql-database-connectivity-issues.md b/articles/sql-database/sql-database-connectivity-issues.md index 2d39756a48b88..20009a122e281 100644 --- a/articles/sql-database/sql-database-connectivity-issues.md +++ b/articles/sql-database/sql-database-connectivity-issues.md @@ -15,9 +15,8 @@ ms.workload: "On Demand" ms.tgt_pltfrm: na ms.devlang: na ms.topic: troubleshooting -ms.date: 11/03/2017 +ms.date: 11/29/2017 ms.author: daleche - --- # Troubleshoot, diagnose, and prevent SQL connection errors and transient errors for SQL Database This article describes how to prevent, troubleshoot, diagnose, and mitigate connection errors and transient errors that your client application encounters when it interacts with Azure SQL Database. Learn how to configure retry logic, build the connection string, and adjust other connection settings. @@ -38,16 +37,17 @@ You'll retry the SQL connection or establish it again, depending on the followin * **A transient error occurs during a connection try**: The connection should be retried after delaying for several seconds. * **A transient error occurs during a SQL query command**: The command should not be immediately retried. Instead, after a delay, the connection should be freshly established. Then the command can be retried. + -### Retry logic for transient errors +## Retry logic for transient errors Client programs that occasionally encounter a transient error are more robust when they contain retry logic. When your program communicates with Azure SQL Database through a 3rd party middleware, inquire with the vendor whether the middleware contains retry logic for transient errors. -#### Principles for retry +### Principles for retry * An attempt to open a connection should be retried if the error is transient. * A SQL SELECT statement that fails with a transient error should not be retried directly. @@ -56,30 +56,31 @@ When your program communicates with Azure SQL Database through a 3rd party middl * The retry logic must ensure that either the entire database transaction completed, or that the entire transaction is rolled back. -#### Other considerations for retry +### Other considerations for retry * A batch program that is automatically started after work hours, and which will complete before morning, can afford to very patient with long time intervals between its retry attempts. * A user interface program should account for the human tendency to give up after too long a wait. * However, the solution must not be to retry every few seconds, because that policy can flood the system with requests. -#### Interval increase between retries +### Interval increase between retries We recommend that you delay for 5 seconds before your first retry. Retrying after a delay shorter than 5 seconds risks overwhelming the cloud service. For each subsequent retry the delay should grow exponentially, up to a maximum of 60 seconds. A discussion of the *blocking period* for clients that use ADO.NET is available in [SQL Server Connection Pooling (ADO.NET)](http://msdn.microsoft.com/library/8xx3tyca.aspx). You might also want to set a maximum number of retries before the program self-terminates. 
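As a minimal illustration of these recommendations (not one of the official samples linked in the next section), the following C# sketch uses a hypothetical `RunWithRetry` helper: it waits 5 seconds before the first retry, doubles the delay after each transient failure up to a 60-second cap, and stops after a fixed number of attempts. The `isTransient` check is assumed to come from your own error-detection logic, such as the EntLib60 strategy shown later in this article.

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

static class RetryHelper
{
    // Hypothetical helper: retries an action when a transient SQL error occurs,
    // starting with a 5-second delay that grows exponentially up to 60 seconds.
    public static void RunWithRetry(Action action, Func<SqlException, bool> isTransient, int maxRetries = 5)
    {
        TimeSpan delay = TimeSpan.FromSeconds(5);          // recommended first delay
        TimeSpan maxDelay = TimeSpan.FromSeconds(60);      // recommended upper bound

        for (int attempt = 0; ; attempt++)
        {
            try
            {
                action();
                return;                                    // success, stop retrying
            }
            catch (SqlException ex) when (attempt < maxRetries && isTransient(ex))
            {
                Thread.Sleep(delay);                       // back off before the next try
                delay = TimeSpan.FromSeconds(
                    Math.Min(delay.TotalSeconds * 2, maxDelay.TotalSeconds));
            }
        }
    }
}
```

A caller would typically wrap both the connection open and the command in the retried action, in keeping with the guidance above to establish the connection freshly before retrying a failed command.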
-#### Code samples with retry logic -Code samples with retry logic, in a variety of programming languages, are available at: +### Code samples with retry logic +Code examples with retry logic are available at: -* [Connection libraries for SQL Database and SQL Server](sql-database-libraries.md) +- [Connect resiliently to SQL with ADO.NET][step-4-connect-resiliently-to-sql-with-ado-net-a78n] +- [Connect resiliently to SQL with PHP][step-4-connect-resiliently-to-sql-with-php-p42h] -#### Test your retry logic +### Test your retry logic To test your retry logic, you must simulate or cause an error than can be corrected while your program is still running. -##### Test by disconnecting from the network +#### Test by disconnecting from the network One way you can test your retry logic is to disconnect your client computer from the network while the program is running. The error will be: * **SqlException.Number** = 11001 @@ -96,7 +97,7 @@ To make this practical, you unplug your computer from the network before you sta * Pause further execution by using either the **Console.ReadLine** method or a dialog with an OK button. The user presses the Enter key after the computer plugged into the network. 5. Attempt again to connect, expecting success. -##### Test by misspelling the database name when connecting +#### Test by misspelling the database name when connecting Your program can purposely misspell the user name before the first connection attempt. The error will be: * **SqlException.Number** = 18456 @@ -112,16 +113,16 @@ To make this practical, your program could recognize a run time parameter that c 4. Remove 'WRONG_' from the user name. 5. Attempt again to connect, expecting success. + -### .NET SqlConnection parameters for connection retry +## .NET SqlConnection parameters for connection retry If your client program connects to to Azure SQL Database by using the .NET Framework class **System.Data.SqlClient.SqlConnection**, you should use .NET 4.6.1 or later (or .NET Core) so you can leverage its connection retry feature. Details of the feature are [here](http://go.microsoft.com/fwlink/?linkid=393996). - When you build the [connection string](http://msdn.microsoft.com/library/System.Data.SqlClient.SqlConnection.connectionstring.aspx) for your **SqlConnection** object, you should coordinate the values among the following parameters: * ConnectRetryCount   *(Default is 1. Range is 0 through 255.)* @@ -136,7 +137,7 @@ For example, if the count = 3, and interval = 10 seconds, a timeout of only 29 s -### Connection versus command +## Connection versus command The **ConnectRetryCount** and **ConnectRetryInterval** parameters let your **SqlConnection** object retry the connect operation without telling or bothering your program, such as returning control to your program. The retries can occur in the following situations: * mySqlConnection.Open method call @@ -144,9 +145,10 @@ The **ConnectRetryCount** and **ConnectRetryInterval** parameters let your **Sql There is a subtlety. If a transient error occurs while your *query* is being executed, your **SqlConnection** object does not retry the connect operation, and it certainly does not retry your query. However, **SqlConnection** very quickly checks the connection before sending your query for execution. If the quick check detects a connection problem, **SqlConnection** retries the connect operation. If the retry succeeds, you query is sent for execution. -#### Should ConnectRetryCount be combined with application retry logic? 
+### Should ConnectRetryCount be combined with application retry logic? Suppose your application has robust custom retry logic. It might retry the connect operation 4 times. If you add **ConnectRetryInterval** and **ConnectRetryCount** =3 to your connection string, you will increase the retry count to 4 * 3 = 12 retries. You might not intend such a high number of retries. + ## Connections to Azure SQL Database @@ -374,9 +376,7 @@ For details see: ### EntLib60 IsTransient method source code Next, from the **SqlDatabaseTransientErrorDetectionStrategy** class, is the C# source code for the **IsTransient** method. The source code clarifies which errors were considered to be transient and worthy of retry, as of April 2013. -Numerous **//comment** lines have been removed from this copy to emphasize readability. - -``` +```csharp public bool IsTransient(Exception ex) { if (ex != null) @@ -445,6 +445,14 @@ public bool IsTransient(Exception ex) ## Next steps * For troubleshooting other common Azure SQL Database connection issues, visit [Troubleshoot connection issues to Azure SQL Database](sql-database-troubleshoot-common-connection-issues.md). -* [SQL Server Connection Pooling (ADO.NET)](http://msdn.microsoft.com/library/8xx3tyca.aspx) +* [Connection libraries for SQL Database and SQL Server](sql-database-libraries.md) +* [SQL Server Connection Pooling (ADO.NET)](https://docs.microsoft.com/dotnet/framework/data/adonet/sql-server-connection-pooling) * [*Retrying* is an Apache 2.0 licensed general-purpose retrying library, written in **Python**, to simplify the task of adding retry behavior to just about anything.](https://pypi.python.org/pypi/retrying) + + + +[step-4-connect-resiliently-to-sql-with-ado-net-a78n]: https://docs.microsoft.com/sql/connect/ado-net/step-4-connect-resiliently-to-sql-with-ado-net + +[step-4-connect-resiliently-to-sql-with-php-p42h]: https://docs.microsoft.com/sql/connect/php/step-4-connect-resiliently-to-sql-with-php + diff --git a/articles/sql-database/sql-database-elastic-scale-add-a-shard.md b/articles/sql-database/sql-database-elastic-scale-add-a-shard.md index 009458ef107c7..b4931acf3eed3 100644 --- a/articles/sql-database/sql-database-elastic-scale-add-a-shard.md +++ b/articles/sql-database/sql-database-elastic-scale-add-a-shard.md @@ -14,64 +14,66 @@ ms.workload: "Inactive" ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 10/24/2016 +ms.date: 11/28/2017 ms.author: ddove --- # Adding a shard using Elastic Database tools ## To add a shard for a new range or key -Applications often need to simply add new shards to handle data that is expected from new keys or key ranges, for a shard map that already exists. For example, an application sharded by Tenant ID may need to provision a new shard for a new tenant, or data sharded monthly may need a new shard provisioned before the start of each new month. +Applications often need to add new shards to handle data that is expected from new keys or key ranges, for a shard map that already exists. For example, an application sharded by Tenant ID may need to provision a new shard for a new tenant, or data sharded monthly may need a new shard provisioned before the start of each new month. -If the new range of key values is not already part of an existing mapping, it is very simple to add the new shard and associate the new key or range to that shard. +If the new range of key values is not already part of an existing mapping, it is simple to add the new shard and associate the new key or range to that shard. 
### Example: adding a shard and its range to an existing shard map -This sample uses the [TryGetShard](https://msdn.microsoft.com/library/azure/dn823929.aspx) the [CreateShard](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.createshard.aspx), [CreateRangeMapping](https://msdn.microsoft.com/library/azure/dn807221.aspx#M:Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement.RangeShardMap`1.CreateRangeMapping\(Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement.RangeMappingCreationInfo{`0}\)) methods, and creates an instance of the [ShardLocation](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardlocation.shardlocation.aspx#M:Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement.ShardLocation.) class. In the sample below, a database named **sample_shard_2** and all necessary schema objects inside of it have been created to hold range [300, 400). +This sample uses the TryGetShard ([Java](/java/api/com.microsoft.azure.elasticdb.shard.map._shard_map.trygetshard), [.NET](https://msdn.microsoft.com/library/azure/dn823929.aspx)) the CreateShard ([Java](/java/api/com.microsoft.azure.elasticdb.shard.map._shard_map.createshard), [.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.createshard.aspx)), CreateRangeMapping ([Java](/java/api/com.microsoft.azure.elasticdb.shard.map._range_shard_map.createrangemapping), [.NET](https://msdn.microsoft.com/library/azure/dn807221.aspx#M:Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement.RangeShardMap`1.CreateRangeMapping\(Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement.RangeMappingCreationInfo{`0}\))) methods, and creates an instance of the ShardLocation ([Java](/java/api/com.microsoft.azure.elasticdb.shard.base._shard_location), [.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardlocation.shardlocation.aspx#M:Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement.ShardLocation.)) class. In the sample below, a database named **sample_shard_2** and all necessary schema objects inside of it have been created to hold range [300, 400). - // sm is a RangeShardMap object. - // Add a new shard to hold the range being added. - Shard shard2 = null; +```csharp +// sm is a RangeShardMap object. +// Add a new shard to hold the range being added. +Shard shard2 = null; - if (!sm.TryGetShard(new ShardLocation(shardServer, "sample_shard_2"),out shard2)) - { - shard2 = sm.CreateShard(new ShardLocation(shardServer, "sample_shard_2")); - } +if (!sm.TryGetShard(new ShardLocation(shardServer, "sample_shard_2"),out shard2)) +{ + shard2 = sm.CreateShard(new ShardLocation(shardServer, "sample_shard_2")); +} - // Create the mapping and associate it with the new shard - sm.CreateRangeMapping(new RangeMappingCreationInfo +// Create the mapping and associate it with the new shard +sm.CreateRangeMapping(new RangeMappingCreationInfo (new Range(300, 400), shard2, MappingStatus.Online)); +``` - -As an alternative, you can use Powershell to create a new Shard Map Manager. An example is available [here](https://gallery.technet.microsoft.com/scriptcenter/Azure-SQL-DB-Elastic-731883db). +For the .NET version, you can also use PowerShell as an alternative to create a new Shard Map Manager. An example is available [here](https://gallery.technet.microsoft.com/scriptcenter/Azure-SQL-DB-Elastic-731883db). 
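The same pattern applies to the tenant-per-shard case mentioned at the start of this article, using a list shard map instead of a range shard map. The following sketch is illustrative only: it assumes a `ListShardMap<int>` named **tenantShardMap** keyed by Tenant ID, the same `shardServer` variable as the preceding sample, and a hypothetical database named **sample_shard_3**.

```csharp
// tenantShardMap is a ListShardMap<int> keyed by Tenant ID.
// Add (or reuse) the shard that will hold the new tenant's data.
Shard tenantShard = null;

if (!tenantShardMap.TryGetShard(new ShardLocation(shardServer, "sample_shard_3"), out tenantShard))
{
    tenantShard = tenantShardMap.CreateShard(new ShardLocation(shardServer, "sample_shard_3"));
}

// Map the new tenant's key to that shard.
int newTenantId = 42;
tenantShardMap.CreatePointMapping(newTenantId, tenantShard);
```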
## To add a shard for an empty part of an existing range -In some circumstances, you may have already mapped a range to a shard and partially filled it with data, but you now want upcoming data to be directed to a different shard. For example, you shard by day range and have already allocated 50 days to a shard, but on day 24, you want future data to land in a different shard. The elastic database [split-merge tool](sql-database-elastic-scale-overview-split-and-merge.md) can perform this operation, but if data movement is not necessary (for example, data for the range of days [25, 50), i.e., day 25 inclusive to 50 exclusive, does not yet exist) you can perform this entirely using the Shard Map Management APIs directly. +In some circumstances, you may have already mapped a range to a shard and partially filled it with data, but you now want upcoming data to be directed to a different shard. For example, you shard by day range and have already allocated 50 days to a shard, but on day 24, you want future data to land in a different shard. The elastic database [split-merge tool](sql-database-elastic-scale-overview-split-and-merge.md) can perform this operation, but if data movement is not necessary (for example, data for the range of days [25, 50), that is, day 25 inclusive to 50 exclusive, does not yet exist) you can perform this entirely using the Shard Map Management APIs directly. -### Example: splitting a range and assigning the empty portion to a newly-added shard +### Example: splitting a range and assigning the empty portion to a newly added shard A database named “sample_shard_2” and all necessary schema objects inside of it have been created. - // sm is a RangeShardMap object. - // Add a new shard to hold the range we will move - Shard shard2 = null; - - if (!sm.TryGetShard(new ShardLocation(shardServer, "sample_shard_2"),out shard2)) - { +```csharp +// sm is a RangeShardMap object. +// Add a new shard to hold the range we will move +Shard shard2 = null; - shard2 = sm.CreateShard(new ShardLocation(shardServer, "sample_shard_2")); - } +if (!sm.TryGetShard(new ShardLocation(shardServer, "sample_shard_2"),out shard2)) +{ + shard2 = sm.CreateShard(new ShardLocation(shardServer,"sample_shard_2")); +} - // Split the Range holding Key 25 +// Split the Range holding Key 25 +sm.SplitMapping(sm.GetMappingForKey(25), 25); - sm.SplitMapping(sm.GetMappingForKey(25), 25); +// Map new range holding [25-50) to different shard: +// first take existing mapping offline +sm.MarkMappingOffline(sm.GetMappingForKey(25)); - // Map new range holding [25-50) to different shard: - // first take existing mapping offline - sm.MarkMappingOffline(sm.GetMappingForKey(25)); - // now map while offline to a different shard and take online - RangeMappingUpdate upd = new RangeMappingUpdate(); - upd.Shard = shard2; - sm.MarkMappingOnline(sm.UpdateMapping(sm.GetMappingForKey(25), upd)); +// now map while offline to a different shard and take online +RangeMappingUpdate upd = new RangeMappingUpdate(); +upd.Shard = shard2; +sm.MarkMappingOnline(sm.UpdateMapping(sm.GetMappingForKey(25), upd)); +``` -**Important**: Use this technique only if you are certain that the range for the updated mapping is empty. The methods above do not check data for the range being moved, so it is best to include checks in your code. If rows exist in the range being moved, the actual data distribution will not match the updated shard map. 
Use the [split-merge tool](sql-database-elastic-scale-overview-split-and-merge.md) to perform the operation instead in these cases. +**Important**: Use this technique only if you are certain that the range for the updated mapping is empty. The preceding methods do not check data for the range being moved, so it is best to include checks in your code. If rows exist in the range being moved, the actual data distribution will not match the updated shard map. Use the [split-merge tool](sql-database-elastic-scale-overview-split-and-merge.md) to perform the operation instead in these cases. [!INCLUDE [elastic-scale-include](../../includes/elastic-scale-include.md)] diff --git a/articles/sql-database/sql-database-elastic-scale-data-dependent-routing.md b/articles/sql-database/sql-database-elastic-scale-data-dependent-routing.md index e97203f4945b7..f6dca69c82859 100644 --- a/articles/sql-database/sql-database-elastic-scale-data-dependent-routing.md +++ b/articles/sql-database/sql-database-elastic-scale-data-dependent-routing.md @@ -14,44 +14,44 @@ ms.workload: "Inactive" ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 03/27/2017 +ms.date: 11/28/2017 ms.author: ddove --- # Data dependent routing -**Data dependent routing** is the ability to use the data in a query to route the request to an appropriate database. This is a fundamental pattern when working with sharded databases. The request context may also be used to route the request, especially if the sharding key is not part of the query. Each specific query or transaction in an application using data dependent routing is restricted to accessing a single database per request. For the Azure SQL Database Elastic tools, this routing is accomplished with the **[ShardMapManager class](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.aspx)** in ADO.NET applications. +**Data dependent routing** is the ability to use the data in a query to route the request to an appropriate database. This is a fundamental pattern when working with sharded databases. The request context may also be used to route the request, especially if the sharding key is not part of the query. Each specific query or transaction in an application using data dependent routing is restricted to accessing a single database per request. For the Azure SQL Database Elastic tools, this routing is accomplished with the **ShardMapManager** ([.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager)) class. The application does not need to track various connection strings or DB locations associated with different slices of data in the sharded environment. Instead, the [Shard Map Manager](sql-database-elastic-scale-shard-map-management.md) opens connections to the correct databases when needed, based on the data in the shard map and the value of the sharding key that is the target of the application’s request. The key is typically the *customer_id*, *tenant_id*, *date_key*, or some other specific identifier that is a fundamental parameter of the database request). For more information, see [Scaling Out SQL Server with Data Dependent Routing](https://technet.microsoft.com/library/cc966448.aspx). ## Download the client library -To get the class, install the [Elastic Database Client Library](http://www.nuget.org/packages/Microsoft.Azure.SqlDatabase.ElasticScale.Client/). 
+To download: +* the .NET version of the library, see [NuGet](https://www.nuget.org/packages/Microsoft.Azure.SqlDatabase.ElasticScale.Client/). +* the Java version of the library, see [Maven Central Repository](https://search.maven.org/#search%7Cga%7C1%7Celastic-db-tools). ## Using a ShardMapManager in a data dependent routing application -Applications should instantiate the **ShardMapManager** during initialization, using the factory call **[GetSQLShardMapManager](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanagerfactory.getsqlshardmapmanager.aspx)**. In this example, both a **ShardMapManager** and a specific **ShardMap** that it contains are initialized. This example shows the GetSqlShardMapManager and [GetRangeShardMap](https://msdn.microsoft.com/library/azure/dn824173.aspx) methods. +Applications should instantiate the **ShardMapManager** during initialization, using the factory call **GetSQLShardMapManager** ([.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanagerfactory.getsqlshardmapmanager.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager_factory.getsqlshardmapmanager)). In this example, both a **ShardMapManager** and a specific **ShardMap** that it contains are initialized. This example shows the GetSqlShardMapManager and GetRangeShardMap ([.NET](https://msdn.microsoft.com/library/azure/dn824173.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager.getrangeshardmap)) methods. - ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(smmConnnectionString, - ShardMapManagerLoadPolicy.Lazy); - RangeShardMap customerShardMap = smm.GetRangeShardMap("customerMap"); +```csharp +ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(smmConnnectionString, ShardMapManagerLoadPolicy.Lazy); +RangeShardMap customerShardMap = smm.GetRangeShardMap("customerMap"); +``` ### Use lowest privilege credentials possible for getting the shard map If an application is not manipulating the shard map itself, the credentials used in the factory method should have just read-only permissions on the **Global Shard Map** database. These credentials are typically different from credentials used to open connections to the shard map manager. See also [Credentials used to access the Elastic Database client library](sql-database-elastic-scale-manage-credentials.md). ## Call the OpenConnectionForKey method -The **[ShardMap.OpenConnectionForKey method](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.openconnectionforkey.aspx)** returns an ADO.Net connection ready for issuing commands to the appropriate database based on the value of the **key** parameter. Shard information is cached in the application by the **ShardMapManager**, so these requests do not typically involve a database lookup against the **Global Shard Map** database. 
- - // Syntax: - public SqlConnection OpenConnectionForKey( - TKey key, - string connectionString, - ConnectionOptions options - ) +The **ShardMap.OpenConnectionForKey method** ([.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.openconnectionforkey.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.mapper._list_shard_mapper.openconnectionforkey)) returns a connection ready for issuing commands to the appropriate database based on the value of the **key** parameter. Shard information is cached in the application by the **ShardMapManager**, so these requests do not typically involve a database lookup against the **Global Shard Map** database. +```csharp +// Syntax: +public SqlConnection OpenConnectionForKey(TKey key, string connectionString, ConnectionOptions options) +``` * The **key** parameter is used as a lookup key into the shard map to determine the appropriate database for the request. * The **connectionString** is used to pass only the user credentials for the desired connection. No database name or server name are included in this *connectionString* since the method will determine the database and server using the **ShardMap**. -* The **[connectionOptions](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.connectionoptions.aspx)** should be set to **ConnectionOptions.Validate** if an environment where shard maps may change and rows may move to other databases as a result of split or merge operations. This involves a brief query to the local shard map on the target database (not to the global shard map) before the connection is delivered to the application. +* The **connectionOptions** ([.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.connectionoptions.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.mapper._connection_options)) should be set to **ConnectionOptions.Validate** if an environment where shard maps may change and rows may move to other databases as a result of split or merge operations. This involves a brief query to the local shard map on the target database (not to the global shard map) before the connection is delivered to the application. If the validation against the local shard map fails (indicating that the cache is incorrect), the Shard Map Manager will query the global shard map to obtain the new correct value for the lookup, update the cache, and obtain and return the appropriate database connection. @@ -59,62 +59,61 @@ Use **ConnectionOptions.None** only when shard mapping changes are not expected This example uses the value of an integer key **CustomerID**, using a **ShardMap** object named **customerShardMap**. - int customerId = 12345; - int newPersonId = 4321; +```csharp +int customerId = 12345; +int newPersonId = 4321; - // Connect to the shard for that customer ID. No need to call a SqlConnection - // constructor followed by the Open method. - using (SqlConnection conn = customerShardMap.OpenConnectionForKey(customerId, - Configuration.GetCredentialsConnectionString(), ConnectionOptions.Validate)) - { - // Execute a simple command. - SqlCommand cmd = conn.CreateCommand(); - cmd.CommandText = @"UPDATE Sales.Customer - SET PersonID = @newPersonID - WHERE CustomerID = @customerID"; +// Connect to the shard for that customer ID. No need to call a SqlConnection +// constructor followed by the Open method. 
+using (SqlConnection conn = customerShardMap.OpenConnectionForKey(customerId, Configuration.GetCredentialsConnectionString(), ConnectionOptions.Validate)) +{ + // Execute a simple command. + SqlCommand cmd = conn.CreateCommand(); + cmd.CommandText = @"UPDATE Sales.Customer + SET PersonID = @newPersonID WHERE CustomerID = @customerID"; - cmd.Parameters.AddWithValue("@customerID", customerId); - cmd.Parameters.AddWithValue("@newPersonID", newPersonId); - cmd.ExecuteNonQuery(); - } + cmd.Parameters.AddWithValue("@customerID", customerId);cmd.Parameters.AddWithValue("@newPersonID", newPersonId); + cmd.ExecuteNonQuery(); +} +``` -The **OpenConnectionForKey** method returns a new already-open connection to the correct database. Connections utilized in this way still take full advantage of ADO.Net connection pooling. As long as transactions and requests can be satisfied by one shard at a time, this should be the only modification necessary in an application already using ADO.Net. +The **OpenConnectionForKey** method returns a new already-open connection to the correct database. Connections utilized in this way still take full advantage of connection pooling. -The **[OpenConnectionForKeyAsync method](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.openconnectionforkeyasync.aspx)** is also available if your application makes use asynchronous programming with ADO.Net. Its behavior is the data dependent routing equivalent of ADO.Net's **[Connection.OpenAsync](https://msdn.microsoft.com/library/hh223688\(v=vs.110\).aspx)** method. +The **OpenConnectionForKeyAsync method** ([.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.openconnectionforkeyasync.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.mapper._list_shard_mapper.openconnectionforkeyasync)) is also available if your application makes use asynchronous programming with ADO.Net. ## Integrating with transient fault handling -A best practice in developing data access applications in the cloud is to ensure that transient faults are caught by the app, and that the operations are retried several times before throwing an error. Transient fault handling for cloud applications is discussed at [Transient Fault Handling](https://msdn.microsoft.com/library/dn440719\(v=pandp.60\).aspx). +A best practice in developing data access applications in the cloud is to ensure that transient faults are caught by the app, and that the operations are retried several times before throwing an error. Transient fault handling for cloud applications is discussed at Transient Fault Handling ([.NET](https://msdn.microsoft.com/library/dn440719\(v=pandp.60\).aspx), [Java](/java/api/com.microsoft.azure.elasticdb.core.commons.transientfaulthandling)). Transient fault handling can coexist naturally with the Data Dependent Routing pattern. The key requirement is to retry the entire data access request including the **using** block that obtained the data-dependent routing connection. The example above could be rewritten as follows (note highlighted change). ### Example - data dependent routing with transient fault handling -
int customerId = 12345; 
+```csharp
+int customerId = 12345; 
 int newPersonId = 4321; 
 
-Configuration.SqlRetryPolicy.ExecuteAction(() =>  
-    { 
-        // Connect to the shard for a customer ID. 
-        using (SqlConnection conn = customerShardMap.OpenConnectionForKey(customerId,  
-        Configuration.GetCredentialsConnectionString(), ConnectionOptions.Validate)) 
-        { 
-            // Execute a simple command 
-            SqlCommand cmd = conn.CreateCommand(); 
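+// The retry policy re-executes this entire block, including reopening the routed connection, when a transient fault occurs.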
+Configuration.SqlRetryPolicy.ExecuteAction(() => 
+{
+    // Connect to the shard for a customer ID. 
+    using (SqlConnection conn = customerShardMap.OpenConnectionForKey(customerId, Configuration.GetCredentialsConnectionString(), ConnectionOptions.Validate)) 
+    { 
+        // Execute a simple command 
+        SqlCommand cmd = conn.CreateCommand(); 
 
-            cmd.CommandText = @"UPDATE Sales.Customer 
+        cmd.CommandText = @"UPDATE Sales.Customer 
                             SET PersonID = @newPersonID 
-                            WHERE CustomerID = @customerID"; 
+                            WHERE CustomerID = @customerID"; 
 
-            cmd.Parameters.AddWithValue("@customerID", customerId); 
-            cmd.Parameters.AddWithValue("@newPersonID", newPersonId); 
-            cmd.ExecuteNonQuery(); 
+        cmd.Parameters.AddWithValue("@customerID", customerId); 
+        cmd.Parameters.AddWithValue("@newPersonID", newPersonId); 
+        cmd.ExecuteNonQuery(); 
 
-            Console.WriteLine("Update completed"); 
-        } 
-    });  
-
+ Console.WriteLine("Update completed"); + } +}); +``` -Packages necessary to implement transient fault handling are downloaded automatically when you build the elastic database sample application. Packages are also available separately at [Enterprise Library - Transient Fault Handling Application Block](http://www.nuget.org/packages/EnterpriseLibrary.TransientFaultHandling/). Use version 6.0 or later. +Packages necessary to implement transient fault handling are downloaded automatically when you build the elastic database sample application. ## Transactional consistency Transactional properties are guaranteed for all operations local to a shard. For example, transactions submitted through data-dependent routing execute within the scope of the target shard for the connection. At this time, there are no capabilities provided for enlisting multiple connections into a transaction, and therefore there are no transactional guarantees for operations performed across shards. diff --git a/articles/sql-database/sql-database-elastic-scale-manage-credentials.md b/articles/sql-database/sql-database-elastic-scale-manage-credentials.md index c835ee0c3bf4a..739e5ba71b1cb 100644 --- a/articles/sql-database/sql-database-elastic-scale-manage-credentials.md +++ b/articles/sql-database/sql-database-elastic-scale-manage-credentials.md @@ -14,12 +14,12 @@ ms.workload: "Inactive" ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 10/24/2016 +ms.date: 11/29/2017 ms.author: ddove --- # Credentials used to access the Elastic Database client library -The [Elastic Database client library](http://www.nuget.org/packages/Microsoft.Azure.SqlDatabase.ElasticScale.Client/) uses three different kinds of credentials to access the [shard map manager](sql-database-elastic-scale-shard-map-management.md). Depending on the need, use the credential with the lowest level of access possible. +The [Elastic Database client library](sql-database-elastic-database-client-library.md) uses three different kinds of credentials to access the [shard map manager](sql-database-elastic-scale-shard-map-management.md). Depending on the need, use the credential with the lowest level of access possible. * **Management credentials**: for creating or manipulating a shard map manager. (See the [glossary](sql-database-elastic-scale-glossary.md).) * **Access credentials**: to access an existing shard map manager to obtain information about shards. @@ -28,40 +28,43 @@ The [Elastic Database client library](http://www.nuget.org/packages/Microsoft.Az See also [Managing databases and logins in Azure SQL Database](sql-database-manage-logins.md). ## About management credentials -Management credentials are used to create a [**ShardMapManager**](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.aspx) object for applications that manipulate shard maps. (For example, see [Adding a shard using Elastic Database tools](sql-database-elastic-scale-add-a-shard.md) and [data-dependent routing](sql-database-elastic-scale-data-dependent-routing.md)). The user of the elastic scale client library creates the SQL users and SQL logins and makes sure each is granted the read/write permissions on the global shard map database and all shard databases as well. These credentials are used to maintain the global shard map and the local shard maps when changes to the shard map are performed. 
For instance, use the management credentials to create the shard map manager object (using [**GetSqlShardMapManager**](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanagerfactory.getsqlshardmapmanager.aspx): +Management credentials are used to create a **ShardMapManager** ([Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager), [.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.aspx)) object for applications that manipulate shard maps. (For example, see [Adding a shard using Elastic Database tools](sql-database-elastic-scale-add-a-shard.md) and [data-dependent routing](sql-database-elastic-scale-data-dependent-routing.md)). The user of the elastic scale client library creates the SQL users and SQL logins and makes sure each is granted the read/write permissions on the global shard map database and all shard databases as well. These credentials are used to maintain the global shard map and the local shard maps when changes to the shard map are performed. For instance, use the management credentials to create the shard map manager object (using **GetSqlShardMapManager** ([Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager_factory.getsqlshardmapmanager), [.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanagerfactory.getsqlshardmapmanager.aspx)): - // Obtain a shard map manager. - ShardMapManager shardMapManager = ShardMapManagerFactory.GetSqlShardMapManager( - smmAdminConnectionString, - ShardMapManagerLoadPolicy.Lazy - ); +``` +// Obtain a shard map manager. +ShardMapManager shardMapManager = ShardMapManagerFactory.GetSqlShardMapManager(smmAdminConnectionString,ShardMapManagerLoadPolicy.Lazy); +``` The variable **smmAdminConnectionString** is a connection string that contains the management credentials. The user ID and password provide read/write access to both shard map database and individual shards. The management connection string also includes the server name and database name to identify the global shard map database. Here is a typical connection string for that purpose: - "Server=.database.windows.net;Database=;User ID=;Password=;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;” +``` +"Server=.database.windows.net;Database=;User ID=;Password=;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;” +``` Do not use values in the form of "username@server"—instead just use the "username" value. This is because credentials must work against both the shard map manager database and individual shards, which may be on different servers. ## Access credentials When creating a shard map manager in an application that does not administer shard maps, use credentials that have read-only permissions on the global shard map. The information retrieved from the global shard map under these credentials is used for [data-dependent routing](sql-database-elastic-scale-data-dependent-routing.md) and to populate the shard map cache on the client. The credentials are provided through the same call pattern to **GetSqlShardMapManager**: - // Obtain shard map manager. - ShardMapManager shardMapManager = ShardMapManagerFactory.GetSqlShardMapManager( - smmReadOnlyConnectionString, - ShardMapManagerLoadPolicy.Lazy - ); +``` +// Obtain shard map manager. 
+ShardMapManager shardMapManager = ShardMapManagerFactory.GetSqlShardMapManager(smmReadOnlyConnectionString, ShardMapManagerLoadPolicy.Lazy); +``` Note the use of the **smmReadOnlyConnectionString** to reflect the use of different credentials for this access on behalf of **non-admin** users: these credentials should not provide write permissions on the global shard map. ## Connection credentials -Additional credentials are needed when using the [**OpenConnectionForKey**](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.openconnectionforkey.aspx) method to access a shard associated with a sharding key. These credentials need to provide permissions for read-only access to the local shard map tables residing on the shard. This is needed to perform connection validation for data-dependent routing on the shard. This code snippet allows data access in the context of data-dependent routing: +Additional credentials are needed when using the **OpenConnectionForKey** ([Java](/java/api/com.microsoft.azure.elasticdb.shard.mapper._list_shard_mapper.openconnectionforkey), [.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.openconnectionforkey.aspx)) method to access a shard associated with a sharding key. These credentials need to provide permissions for read-only access to the local shard map tables residing on the shard. This is needed to perform connection validation for data-dependent routing on the shard. This code snippet allows data access in the context of data-dependent routing: - using (SqlConnection conn = rangeMap.OpenConnectionForKey( - targetWarehouse, smmUserConnectionString, ConnectionOptions.Validate)) +```csharp +using (SqlConnection conn = rangeMap.OpenConnectionForKey(targetWarehouse, smmUserConnectionString, ConnectionOptions.Validate)) +``` In this example, **smmUserConnectionString** holds the connection string for the user credentials. For Azure SQL DB, here is a typical connection string for user credentials: - "User ID=; Password=; Trusted_Connection=False; Encrypt=True; Connection Timeout=30;” +``` +"User ID=; Password=; Trusted_Connection=False; Encrypt=True; Connection Timeout=30;” +``` As with the admin credentials, do not use values in the form of "username@server". Instead, just use "username". Also note that the connection string does not contain a server name and database name. That is because the **OpenConnectionForKey** call automatically directs the connection to the correct shard based on the key. Hence, the database name and server name are not provided. diff --git a/articles/sql-database/sql-database-elastic-scale-multishard-querying.md b/articles/sql-database/sql-database-elastic-scale-multishard-querying.md index b8764c01bf0e9..86c4abd8d4238 100644 --- a/articles/sql-database/sql-database-elastic-scale-multishard-querying.md +++ b/articles/sql-database/sql-database-elastic-scale-multishard-querying.md @@ -14,7 +14,7 @@ ms.workload: "Inactive" ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 04/12/2016 +ms.date: 11/28/2017 ms.author: torsteng --- @@ -22,40 +22,38 @@ ms.author: torsteng ## Overview With the [Elastic Database tools](sql-database-elastic-scale-introduction.md), you can create sharded database solutions. **Multi-shard querying** is used for tasks such as data collection/reporting that require running a query that stretches across several shards. 
(Contrast this to [data-dependent routing](sql-database-elastic-scale-data-dependent-routing.md), which performs all work on a single shard.) -1. Get a [**RangeShardMap**](https://msdn.microsoft.com/library/azure/dn807318.aspx) or [**ListShardMap**](https://msdn.microsoft.com/library/azure/dn807370.aspx) using the [**TryGetRangeShardMap**](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.trygetrangeshardmap.aspx), the [**TryGetListShardMap**](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.trygetlistshardmap.aspx), or the [**GetShardMap**](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.getshardmap.aspx) method. See [**Constructing a ShardMapManager**](sql-database-elastic-scale-shard-map-management.md#constructing-a-shardmapmanager) and [**Get a RangeShardMap or ListShardMap**](sql-database-elastic-scale-shard-map-management.md#get-a-rangeshardmap-or-listshardmap). -2. Create a **[MultiShardConnection](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.query.multishardconnection.aspx)** object. -3. Create a **[MultiShardCommand](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.query.multishardcommand.aspx)**. -4. Set the **[CommandText property](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.query.multishardcommand.commandtext.aspx#P:Microsoft.Azure.SqlDatabase.ElasticScale.Query.MultiShardCommand.CommandText)** to a T-SQL command. -5. Execute the command by calling the **[ExecuteReader method](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.query.multishardcommand.executereader.aspx)**. -6. View the results using the **[MultiShardDataReader class](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.query.multisharddatareader.aspx)**. +1. Get a **RangeShardMap** ([Java](/java/api/com.microsoft.azure.elasticdb.shard.map._range_shard_map), [.NET](https://msdn.microsoft.com/library/azure/dn807318.aspx)) or **ListShardMap** ([Java](/java/api/com.microsoft.azure.elasticdb.shard.map._list_shard_map), [.NET](https://msdn.microsoft.com/library/azure/dn807370.aspx)) using the **TryGetRangeShardMap** ([Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager.trygetrangeshardmap), [.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.trygetrangeshardmap.aspx)), the **TryGetListShardMap** ([Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager.trygetlistshardmap), [.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.trygetlistshardmap.aspx)), or the **GetShardMap** ([Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager.getshardmap), [.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.getshardmap.aspx)) method. See **[Constructing a ShardMapManager](sql-database-elastic-scale-shard-map-management.md#constructing-a-shardmapmanager)** and **[Get a RangeShardMap or ListShardMap](sql-database-elastic-scale-shard-map-management.md#get-a-rangeshardmap-or-listshardmap)**. +2. 
Create a **MultiShardConnection** ([Java](/java/api/com.microsoft.azure.elasticdb.query.multishard._multi_shard_connection), [.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.query.multishardconnection.aspx)) object. +3. Create a **MultiShardStatement or MultiShardCommand** ([Java](/java/api/com.microsoft.azure.elasticdb.query.multishard._multi_shard_statement), [.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.query.multishardcommand.aspx)). +4. Set the **CommandText property** ([Java](/java/api/com.microsoft.azure.elasticdb.query.multishard._multi_shard_statement), [.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.query.multishardcommand.commandtext.aspx#P:Microsoft.Azure.SqlDatabase.ElasticScale.Query.MultiShardCommand.CommandText)) to a T-SQL command. +5. Execute the command by calling the **ExecuteQueryAsync or ExecuteReader** ([Java](), [.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.query.multishardcommand.executereader.aspx)) method. +6. View the results using the **MultiShardResultSet or MultiShardDataReader** ([Java](/java/api/com.microsoft.azure.elasticdb.query.multishard._multi_shard_result_set), [.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.query.multisharddatareader.aspx)) class. ## Example The following code illustrates the usage of multi-shard querying using a given **ShardMap** named *myShardMap*. - using (MultiShardConnection conn = new MultiShardConnection( - myShardMap.GetShards(), - myShardConnectionString) - ) - { +```csharp +using (MultiShardConnection conn = new MultiShardConnection(myShardMap.GetShards(), myShardConnectionString)) +{ using (MultiShardCommand cmd = conn.CreateCommand()) - { - cmd.CommandText = "SELECT c1, c2, c3 FROM ShardedTable"; - cmd.CommandType = CommandType.Text; - cmd.ExecutionOptions = MultiShardExecutionOptions.IncludeShardNameColumn; - cmd.ExecutionPolicy = MultiShardExecutionPolicy.PartialResults; + { + cmd.CommandText = "SELECT c1, c2, c3 FROM ShardedTable"; + cmd.CommandType = CommandType.Text; + cmd.ExecutionOptions = MultiShardExecutionOptions.IncludeShardNameColumn; + cmd.ExecutionPolicy = MultiShardExecutionPolicy.PartialResults; - using (MultiShardDataReader sdr = cmd.ExecuteReader()) - { - while (sdr.Read()) - { - var c1Field = sdr.GetString(0); - var c2Field = sdr.GetFieldValue(1); - var c3Field = sdr.GetFieldValue(2); - } - } - } + using (MultiShardDataReader sdr = cmd.ExecuteReader()) + { + while (sdr.Read()) + { + var c1Field = sdr.GetString(0); + var c2Field = sdr.GetFieldValue(1); + var c3Field = sdr.GetFieldValue(2); + } + } } - +} +``` A key difference is the construction of multi-shard connections. Where **SqlConnection** operates on a single database, the **MultiShardConnection** takes a ***collection of shards*** as its input. Populate the collection of shards from a shard map. The query is then executed on the collection of shards using **UNION ALL** semantics to assemble a single overall result. Optionally, the name of the shard where the row originates from can be added to the output using the **ExecutionOptions** property on command. 
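+
+For example, a minimal sketch of consuming that extra column is shown below. It is illustrative only: it assumes the appended column is named **$ShardName** and reuses the *cmd* object from the preceding example.
+
+```csharp
+// Sketch: read the shard name column appended when IncludeShardNameColumn is set.
+// Assumes the appended column is named "$ShardName".
+using (MultiShardDataReader sdr = cmd.ExecuteReader())
+{
+    int shardNameOrdinal = sdr.GetOrdinal("$ShardName");
+    while (sdr.Read())
+    {
+        var c1Field = sdr.GetString(0);
+        var shardName = sdr.GetString(shardNameOrdinal);
+        Console.WriteLine("{0} (from shard {1})", c1Field, shardName);
+    }
+}
+```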
@@ -68,8 +66,4 @@ Multi-shard queries do not verify whether shardlets on the queried database are [!INCLUDE [elastic-scale-include](../../includes/elastic-scale-include.md)] -## See also -**[System.Data.SqlClient](http://msdn.microsoft.com/library/System.Data.SqlClient.aspx)** classes and methods. - -Manage shards using the [Elastic Database client library](sql-database-elastic-database-client-library.md). Includes a namespace called [Microsoft.Azure.SqlDatabase.ElasticScale.Query](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.query.aspx) that provides the ability to query multiple shards using a single query and result. It provides a querying abstraction over a collection of shards. It also provides alternative execution policies, in particular partial results, to deal with failures when querying over many shards. diff --git a/articles/sql-database/sql-database-elastic-scale-shard-map-management.md b/articles/sql-database/sql-database-elastic-scale-shard-map-management.md index 4c4ecaa7fe9b7..737c796945c25 100644 --- a/articles/sql-database/sql-database-elastic-scale-shard-map-management.md +++ b/articles/sql-database/sql-database-elastic-scale-shard-map-management.md @@ -14,7 +14,7 @@ ms.workload: "On Demand" ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 10/24/2016 +ms.date: 11/28/2017 ms.author: ddove --- @@ -23,7 +23,7 @@ To easily scale out databases on SQL Azure, use a shard map manager. The shard m ![Shard map management](./media/sql-database-elastic-scale-shard-map-management/glossary.png) -Understanding how these maps are constructed is essential to shard map management. This is done using the [ShardMapManager class](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.aspx), found in the [Elastic Database client library](sql-database-elastic-database-client-library.md) to manage shard maps. +Understanding how these maps are constructed is essential to shard map management. This is done using the ShardMapManager class ([.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.aspx), [Java](https://docs.microsoft.com/en-us/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager)), found in the [Elastic Database client library](sql-database-elastic-database-client-library.md) to manage shard maps. ## Shard maps and shard mappings For each shard, you must select the type of shard map to create. The choice depends on the database architecture: @@ -37,24 +37,26 @@ For a single-tenant model, create a **list mapping** shard map. The single-tenan ![List mapping][1] -The multi-tenant model assigns several tenants to a single database (and you can distribute groups of tenants across multiple databases). Use this model when you expect each tenant to have small data needs. In this model, we assign a range of tenants to a database using **range mapping**. +The multi-tenant model assigns several tenants to a single database (and you can distribute groups of tenants across multiple databases). Use this model when you expect each tenant to have small data needs. In this model, assign a range of tenants to a database using **range mapping**. ![Range mapping][2] -Or you can implement a multi-tenant database model using a *list mapping* to assign multiple tenants to a single database. For example, DB1 is used to store information about tenant id 1 and 5, and DB2 stores data for tenant 7 and tenant 10. 
+Or you can implement a multi-tenant database model using a *list mapping* to assign multiple tenants to a single database. For example, DB1 is used to store information about tenant ID 1 and 5, and DB2 stores data for tenant 7 and tenant 10. -![Muliple tenants on single DB][3] +![Multiple tenants on single DB][3] -### Supported .Net types for sharding keys -Elastic Scale support the following .Net Framework types as sharding keys: +### Supported types for sharding keys +Elastic Scale support the following types as sharding keys: -* integer -* long -* guid -* byte[] -* datetime -* timespan -* datetimeoffset +| .NET | Java | +| --- | --- | +| integer |integer | +| long |long | +| guid |uuid | +| byte[] |byte[] | +| datetime | timestamp | +| timespan | duration| +| datetimeoffset |offsetdatetime | ### List and range shard maps Shard maps can be constructed using **lists of individual sharding key values**, or they can be constructed using **ranges of sharding key values**. @@ -73,7 +75,7 @@ Shard maps can be constructed using **lists of individual sharding key values**, ### Range shard maps In a **range shard map**, the key range is described by a pair **[Low Value, High Value)** where the *Low Value* is the minimum key in the range, and the *High Value* is the first value higher than the range. -For example, **[0, 100)** includes all integers greater than or equal 0 and less than 100. Note that multiple ranges can point to the same database, and disjoint ranges are supported (e.g., [100,200) and [400,600) both point to Database C in the example below.) +For example, **[0, 100)** includes all integers greater than or equal 0 and less than 100. Note that multiple ranges can point to the same database, and disjoint ranges are supported (for example, [100,200) and [400,600) both point to Database C in the following example.) | Key | Shard Location | | --- | --- | @@ -93,66 +95,111 @@ In the client library, the shard map manager is a collection of shard maps. The 3. **Application cache**: Each application instance accessing a **ShardMapManager** object maintains a local in-memory cache of its mappings. It stores routing information that has recently been retrieved. ## Constructing a ShardMapManager -A **ShardMapManager** object is constructed using a [factory](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanagerfactory.aspx) pattern. The **[ShardMapManagerFactory.GetSqlShardMapManager](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanagerfactory.getsqlshardmapmanager.aspx)** method takes credentials (including the server name and database name holding the GSM) in the form of a **ConnectionString** and returns an instance of a **ShardMapManager**. +A **ShardMapManager** object is constructed using a factory ([.NET](/dotnet/api/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanagerfactory), [Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager_factory)) pattern. The **ShardMapManagerFactory.GetSqlShardMapManager** ([.NET](/dotnet/api/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanagerfactory.getsqlshardmapmanager), [Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager_factory.getsqlshardmapmanager)) method takes credentials (including the server name and database name holding the GSM) in the form of a **ConnectionString** and returns an instance of a **ShardMapManager**. 
-**Please Note:** The **ShardMapManager** should be instantiated only once per app domain, within the initialization code for an application. Creation of additional instances of ShardMapManager in the same appdomain, will result in increased memory and CPU utilization of the application. A **ShardMapManager** can contain any number of shard maps. While a single shard map may be sufficient for many applications, there are times when different sets of databases are used for different schema or for unique purposes; in those cases multiple shard maps may be preferable. +**Please Note:** The **ShardMapManager** should be instantiated only once per app domain, within the initialization code for an application. Creation of additional instances of ShardMapManager in the same app domain results in increased memory and CPU utilization of the application. A **ShardMapManager** can contain any number of shard maps. While a single shard map may be sufficient for many applications, there are times when different sets of databases are used for different schema or for unique purposes; in those cases multiple shard maps may be preferable. -In this code, an application tries to open an existing **ShardMapManager** with the [TryGetSqlShardMapManager method](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanagerfactory.trygetsqlshardmapmanager.aspx). If objects representing a Global **ShardMapManager** (GSM) do not yet exist inside the database, the client library creates them there using the [CreateSqlShardMapManager method](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanagerfactory.createsqlshardmapmanager.aspx). +In this code, an application tries to open an existing **ShardMapManager** with the TryGetSqlShardMapManager ([.NET](/dotnet/api/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanagerfactory.trygetsqlshardmapmanager.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager_factory.trygetsqlshardmapmanager)) method. If objects representing a Global **ShardMapManager** (GSM) do not yet exist inside the database, the client library creates them there using the CreateSqlShardMapManager ([.NET](/dotnet/api/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanagerfactory.createsqlshardmapmanager), [Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager_factory.createsqlshardmapmanager)) method. - // Try to get a reference to the Shard Map Manager - // via the Shard Map Manager database. - // If it doesn't already exist, then create it. - ShardMapManager shardMapManager; - bool shardMapManagerExists = ShardMapManagerFactory.TryGetSqlShardMapManager( +```csharp +// Try to get a reference to the Shard Map Manager via the Shard Map Manager database. +// If it doesn't already exist, then create it. +ShardMapManager shardMapManager; +bool shardMapManagerExists = ShardMapManagerFactory.TryGetSqlShardMapManager( connectionString, ShardMapManagerLoadPolicy.Lazy, out shardMapManager); - if (shardMapManagerExists) - { - Console.WriteLine("Shard Map Manager already exists"); - } - else - { - // Create the Shard Map Manager. 
- ShardMapManagerFactory.CreateSqlShardMapManager(connectionString); - Console.WriteLine("Created SqlShardMapManager"); - - shardMapManager = ShardMapManagerFactory.GetSqlShardMapManager( +if (shardMapManagerExists) +{ + Console.WriteLine("Shard Map Manager already exists"); +} +else +{ + // Create the Shard Map Manager. + ShardMapManagerFactory.CreateSqlShardMapManager(connectionString); + Console.WriteLine("Created SqlShardMapManager"); + + shardMapManager = ShardMapManagerFactory.GetSqlShardMapManager( connectionString, ShardMapManagerLoadPolicy.Lazy); - // The connectionString contains server name, database name, and admin credentials - // for privileges on both the GSM and the shards themselves. - } - -As an alternative, you can use Powershell to create a new Shard Map Manager. An example is available [here](https://gallery.technet.microsoft.com/scriptcenter/Azure-SQL-DB-Elastic-731883db). +// The connectionString contains server name, database name, and admin credentials for privileges on both the GSM and the shards themselves. +} +``` + +```Java +// Try to get a reference to the Shard Map Manager in the shardMapManager database. +// If it doesn't already exist, then create it. +ShardMapManager shardMapManager = null; +boolean shardMapManagerExists = ShardMapManagerFactory.tryGetSqlShardMapManager(shardMapManagerConnectionString,ShardMapManagerLoadPolicy.Lazy, refShardMapManager); +shardMapManager = refShardMapManager.argValue; + +if (shardMapManagerExists) { + ConsoleUtils.writeInfo("Shard Map %s already exists", shardMapManager); +} +else { + // The Shard Map Manager does not exist, so create it + shardMapManager = ShardMapManagerFactory.createSqlShardMapManager(shardMapManagerConnectionString); + ConsoleUtils.writeInfo("Created Shard Map %s", shardMapManager); +} +``` + +For the .NET version, you can use Powershell to create a new Shard Map Manager. An example is available [here](https://gallery.technet.microsoft.com/scriptcenter/Azure-SQL-DB-Elastic-731883db). ## Get a RangeShardMap or ListShardMap -After creating a shard map manager, you can get the [RangeShardMap](https://msdn.microsoft.com/library/azure/dn807318.aspx) or [ListShardMap](https://msdn.microsoft.com/library/azure/dn807370.aspx) using the [TryGetRangeShardMap](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.trygetrangeshardmap.aspx), the [TryGetListShardMap](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.trygetlistshardmap.aspx), or the [GetShardMap](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.getshardmap.aspx) method. 
+After creating a shard map manager, you can get the RangeShardMap ([.NET](https://msdn.microsoft.com/library/azure/dn807318.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.map._range_shard_map)) or ListShardMap ([.NET](https://msdn.microsoft.com/library/azure/dn807370.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.map._list_shard_map)) using the TryGetRangeShardMap ([.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.trygetrangeshardmap.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager.trygetrangeshardmap)), the TryGetListShardMap ([.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.trygetlistshardmap.aspx), [Java](https://docs.microsoft.com/en-us/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager.trygetlistshardmap)), or the GetShardMap ([.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmapmanager.getshardmap.aspx), [Java](https://docs.microsoft.com/en-us/java/api/com.microsoft.azure.elasticdb.shard.mapmanager._shard_map_manager.getshardmap)) method. - /// - /// Creates a new Range Shard Map with the specified name, or gets the Range Shard Map if it already exists. - /// - public static RangeShardMap CreateOrGetRangeShardMap(ShardMapManager shardMapManager, string shardMapName) - { - // Try to get a reference to the Shard Map. - RangeShardMap shardMap; - bool shardMapExists = shardMapManager.TryGetRangeShardMap(shardMapName, out shardMap); +```csharp +// Creates a new Range Shard Map with the specified name, or gets the Range Shard Map if it already exists. +public static RangeShardMap CreateOrGetRangeShardMap(ShardMapManager shardMapManager, string shardMapName) +{ + // Try to get a reference to the Shard Map. + RangeShardMap shardMap; + bool shardMapExists = shardMapManager.TryGetRangeShardMap(shardMapName, out shardMap); - if (shardMapExists) - { - ConsoleUtils.WriteInfo("Shard Map {0} already exists", shardMap.Name); + if (shardMapExists) + { + ConsoleUtils.WriteInfo("Shard Map {0} already exists", shardMap.Name); + } + else + { + // The Shard Map does not exist, so create it + shardMap = shardMapManager.CreateRangeShardMap(shardMapName); + ConsoleUtils.WriteInfo("Created Shard Map {0}", shardMap.Name); + } + + return shardMap; +} +``` + +```Java +// Creates a new Range Shard Map with the specified name, or gets the Range Shard Map if it already exists. +static RangeShardMap createOrGetRangeShardMap(ShardMapManager shardMapManager, + String shardMapName, + ShardKeyType keyType) { + // Try to get a reference to the Shard Map. 
+ ReferenceObjectHelper> refRangeShardMap = new ReferenceObjectHelper<>(null); + boolean isGetSuccess = shardMapManager.tryGetRangeShardMap(shardMapName, keyType, refRangeShardMap); + RangeShardMap shardMap = refRangeShardMap.argValue; + + if (isGetSuccess && shardMap != null) { + ConsoleUtils.writeInfo("Shard Map %1$s already exists", shardMap.getName()); + } + else { + // The Shard Map does not exist, so create it + try { + shardMap = shardMapManager.createRangeShardMap(shardMapName, keyType); } - else - { - // The Shard Map does not exist, so create it - shardMap = shardMapManager.CreateRangeShardMap(shardMapName); - ConsoleUtils.WriteInfo("Created Shard Map {0}", shardMap.Name); + catch (Exception e) { + e.printStackTrace(); } + ConsoleUtils.writeInfo("Created Shard Map %1$s", shardMap.getName()); + } - return shardMap; - } + return shardMap; +} +``` ### Shard map administration credentials Applications that administer and manipulate shard maps are different from those that use the shard maps to route connections. @@ -164,154 +211,46 @@ See [Credentials used to access the Elastic Database client library](sql-databas ### Only metadata affected Methods used for populating or changing the **ShardMapManager** data do not alter the user data stored in the shards themselves. For example, methods such as **CreateShard**, **DeleteShard**, **UpdateMapping**, etc. affect the shard map metadata only. They do not remove, add, or alter user data contained in the shards. Instead, these methods are designed to be used in conjunction with separate operations you perform to create or remove actual databases, or that move rows from one shard to another to rebalance a sharded environment. (The **split-merge** tool included with elastic database tools makes use of these APIs along with orchestrating actual data movement between shards.) See [Scaling using the Elastic Database split-merge tool](sql-database-elastic-scale-overview-split-and-merge.md). -## Populating a shard map example -An example sequence of operations to populate a specific shard map is shown below. The code performs these steps: - -1. A new shard map is created within a shard map manager. -2. The metadata for two different shards is added to the shard map. -3. A variety of key range mappings are added, and the overall contents of the shard map are displayed. - -The code is written so that the method can be rerun if an error occurs. Each request tests whether a shard or mapping already exists, before attempting to create it. The code assumes that databases named **sample_shard_0**, **sample_shard_1** and **sample_shard_2** have already been created in the server referenced by string **shardServer**. 
- - public void CreatePopulatedRangeMap(ShardMapManager smm, string mapName) - { - RangeShardMap sm = null; - - // check if shardmap exists and if not, create it - if (!smm.TryGetRangeShardMap(mapName, out sm)) - { - sm = smm.CreateRangeShardMap(mapName); - } - - Shard shard0 = null, shard1=null; - // Check if shard exists and if not, - // create it (Idempotent / tolerant of re-execute) - if (!sm.TryGetShard(new ShardLocation( - shardServer, - "sample_shard_0"), - out shard0)) - { - Shard0 = sm.CreateShard(new ShardLocation( - shardServer, - "sample_shard_0")); - } - - if (!sm.TryGetShard(new ShardLocation( - shardServer, - "sample_shard_1"), - out shard1)) - { - Shard1 = sm.CreateShard(new ShardLocation( - shardServer, - "sample_shard_1")); - } - - RangeMapping rmpg=null; - - // Check if mapping exists and if not, - // create it (Idempotent / tolerant of re-execute) - if (!sm.TryGetMappingForKey(0, out rmpg)) - { - sm.CreateRangeMapping( - new RangeMappingCreationInfo - (new Range(0, 50), - shard0, - MappingStatus.Online)); - } - - if (!sm.TryGetMappingForKey(50, out rmpg)) - { - sm.CreateRangeMapping( - new RangeMappingCreationInfo - (new Range(50, 100), - shard1, - MappingStatus.Online)); - } - - if (!sm.TryGetMappingForKey(100, out rmpg)) - { - sm.CreateRangeMapping( - new RangeMappingCreationInfo - (new Range(100, 150), - shard0, - MappingStatus.Online)); - } - - if (!sm.TryGetMappingForKey(150, out rmpg)) - { - sm.CreateRangeMapping( - new RangeMappingCreationInfo - (new Range(150, 200), - shard1, - MappingStatus.Online)); - } - - if (!sm.TryGetMappingForKey(200, out rmpg)) - { - sm.CreateRangeMapping( - new RangeMappingCreationInfo - (new Range(200, 300), - shard0, - MappingStatus.Online)); - } - - // List the shards and mappings - foreach (Shard s in sm.GetShards() - .OrderBy(s => s.Location.DataSource) - .ThenBy(s => s.Location.Database)) - { - Console.WriteLine("shard: "+ s.Location); - } - - foreach (RangeMapping rm in sm.GetMappings()) - { - Console.WriteLine("range: [" + rm.Value.Low.ToString() + ":" - + rm.Value.High.ToString()+ ") ==>" +rm.Shard.Location); - } - } - -As an alternative you can use PowerShell scripts to achieve the same result. Some of the sample PowerShell examples are available [here](https://gallery.technet.microsoft.com/scriptcenter/Azure-SQL-DB-Elastic-731883db). - -Once shard maps have been populated, data access applications can be created or adapted to work with the maps. Populating or manipulating the maps need not occur again until **map layout** needs to change. - ## Data dependent routing -The shard map manager will be most used in applications that require database connections to perform the app-specific data operations. Those connections must be associated with the correct database. This is known as **Data Dependent Routing**. For these applications, instantiate a shard map manager object from the factory using credentials that have read-only access on the GSM database. Individual requests for later connections supply credentials necessary for connecting to the appropriate shard database. +The shard map manager is used in applications that require database connections to perform the app-specific data operations. Those connections must be associated with the correct database. This is known as **Data Dependent Routing**. For these applications, instantiate a shard map manager object from the factory using credentials that have read-only access on the GSM database. 
Individual requests for later connections supply credentials necessary for connecting to the appropriate shard database. Note that these applications (using **ShardMapManager** opened with read-only credentials) cannot make changes to the maps or mappings. For those needs, create administrative-specific applications or PowerShell scripts that supply higher-privileged credentials as discussed earlier. See [Credentials used to access the Elastic Database client library](sql-database-elastic-scale-manage-credentials.md). -For more details, see [Data dependent routing](sql-database-elastic-scale-data-dependent-routing.md). +For more information, see [Data dependent routing](sql-database-elastic-scale-data-dependent-routing.md). ## Modifying a shard map A shard map can be changed in different ways. All of the following methods modify the metadata describing the shards and their mappings, but they do not physically modify data within the shards, nor do they create or delete the actual databases. Some of the operations on the shard map described below may need to be coordinated with administrative actions that physically move data or that add and remove databases serving as shards. These methods work together as the building blocks available for modifying the overall distribution of data in your sharded database environment. -* To add or remove shards: use **[CreateShard](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.createshard.aspx)** and **[DeleteShard](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.deleteshard.aspx)** of the [Shardmap class](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.aspx). +* To add or remove shards: use **CreateShard** ([.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.createshard.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.map._shard_map.createshard)) and **DeleteShard** ([.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.deleteshard.aspx), [Java](https://docs.microsoft.com/en-us/java/api/com.microsoft.azure.elasticdb.shard.map._shard_map.deleteshard)) of the Shardmap ([.NET](https://msdn.microsoft.com/library/azure/microsoft.azure.sqldatabase.elasticscale.shardmanagement.shardmap.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.map._shard_map)) class. The server and database representing the target shard must already exist for these operations to execute. These methods do not have any impact on the databases themselves, only on metadata in the shard map. 
-* To create or remove points or ranges that are mapped to the shards: use **[CreateRangeMapping](https://msdn.microsoft.com/library/azure/dn841993.aspx)**, **[DeleteMapping](https://msdn.microsoft.com/library/azure/dn824200.aspx)** of the [RangeShardMapping class](https://msdn.microsoft.com/library/azure/dn807318.aspx), and **[CreatePointMapping](https://msdn.microsoft.com/library/azure/dn807218.aspx)** of the [ListShardMap](https://msdn.microsoft.com/library/azure/dn842123.aspx) +* To create or remove points or ranges that are mapped to the shards: use **CreateRangeMapping** ([.NET](https://msdn.microsoft.com/library/azure/dn841993.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.map._range_shard_map.createrangemapping)), **DeleteMapping** ([.NET](https://msdn.microsoft.com/library/azure/dn824200.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.map._range_shard_map.deletemapping)) of the RangeShardMapping ([.NET](https://msdn.microsoft.com/library/azure/dn807318.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.map._range_shard_map)) class, and **CreatePointMapping** ([.NET](https://msdn.microsoft.com/library/azure/dn807218.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.map._list_shard_map.createpointmapping)) of the ListShardMap ([.NET](https://msdn.microsoft.com/library/azure/dn842123.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.map._list_shard_map)) class. - Many different points or ranges can be mapped to the same shard. These methods only affect metadata - they do not affect any data that may already be present in shards. If data needs to be removed from the database in order to be consistent with **DeleteMapping** operations, you will need to perform those operations separately but in conjunction with using these methods. -* To split existing ranges into two, or merge adjacent ranges into one: use **[SplitMapping](https://msdn.microsoft.com/library/azure/dn824205.aspx)** and **[MergeMappings](https://msdn.microsoft.com/library/azure/dn824201.aspx)**. + Many different points or ranges can be mapped to the same shard. These methods only affect metadata - they do not affect any data that may already be present in shards. If data needs to be removed from the database in order to be consistent with **DeleteMapping** operations, you perform those operations separately but in conjunction with using these methods. +* To split existing ranges into two, or merge adjacent ranges into one: use **SplitMapping** ([.NET](https://msdn.microsoft.com/library/azure/dn824205.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.map._range_shard_map.splitmapping)) and **MergeMappings** ([.NET](https://msdn.microsoft.com/library/azure/dn824201.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.map._range_shard_map.mergemappings)). Note that split and merge operations **do not change the shard to which key values are mapped**. A split breaks an existing range into two parts, but leaves both as mapped to the same shard. A merge operates on two adjacent ranges that are already mapped to the same shard, coalescing them into a single range. The movement of points or ranges themselves between shards needs to be coordinated by using **UpdateMapping** in conjunction with actual data movement. You can use the **Split/Merge** service that is part of elastic database tools to coordinate shard map changes with data movement, when movement is needed. 
-* To re-map (or move) individual points or ranges to different shards: use **[UpdateMapping](https://msdn.microsoft.com/library/azure/dn824207.aspx)**. +* To re-map (or move) individual points or ranges to different shards: use **UpdateMapping** ([.NET](https://msdn.microsoft.com/library/azure/dn824207.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.map._range_shard_map.updatemapping)). - Since data may need to be moved from one shard to another in order to be consistent with **UpdateMapping** operations, you will need to perform that movement separately but in conjunction with using these methods. -* To take mappings online and offline: use **[MarkMappingOffline](https://msdn.microsoft.com/library/azure/dn824202.aspx)** and **[MarkMappingOnline](https://msdn.microsoft.com/library/azure/dn807225.aspx)** to control the online state of a mapping. + Since data may need to be moved from one shard to another in order to be consistent with **UpdateMapping** operations, you need to perform that movement separately but in conjunction with using these methods. +* To take mappings online and offline: use **MarkMappingOffline** ([.NET](https://msdn.microsoft.com/library/azure/dn824202.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.map._range_shard_map.markmappingoffline)) and **MarkMappingOnline** ([.NET](https://msdn.microsoft.com/library/azure/dn807225.aspx), [Java](/java/api/com.microsoft.azure.elasticdb.shard.map._range_shard_map.markmappingonline)) to control the online state of a mapping. - Certain operations on shard mappings are only allowed when a mapping is in an “offline” state, including **UpdateMapping** and **DeleteMapping**. When a mapping is offline, a data-dependent request based on a key included in that mapping will return an error. In addition, when a range is first taken offline, all connections to the affected shard are automatically killed in order to prevent inconsistent or incomplete results for queries directed against ranges being changed. + Certain operations on shard mappings are only allowed when a mapping is in an “offline” state, including **UpdateMapping** and **DeleteMapping**. When a mapping is offline, a data-dependent request based on a key included in that mapping returns an error. In addition, when a range is first taken offline, all connections to the affected shard are automatically killed in order to prevent inconsistent or incomplete results for queries directed against ranges being changed. Mappings are immutable objects in .Net. All of the methods above that change mappings also invalidate any references to them in your code. To make it easier to perform sequences of operations that change a mapping’s state, all of the methods that change a mapping return a new mapping reference, so operations can be chained. For example, to delete an existing mapping in shardmap sm that contains the key 25, you can execute the following: - sm.DeleteMapping(sm.MarkMappingOffline(sm.GetMappingForKey(25))); +``` + sm.DeleteMapping(sm.MarkMappingOffline(sm.GetMappingForKey(25))); +``` ## Adding a shard -Applications often need to simply add new shards to handle data that is expected from new keys or key ranges, for a shard map that already exists. For example, an application sharded by Tenant ID may need to provision a new shard for a new tenant, or data sharded monthly may need a new shard provisioned before the start of each new month. 
+Applications often need to add new shards to handle data that is expected from new keys or key ranges, for a shard map that already exists. For example, an application sharded by Tenant ID may need to provision a new shard for a new tenant, or data sharded monthly may need a new shard provisioned before the start of each new month. -If the new range of key values is not already part of an existing mapping and no data movement is necessary, it is very simple to add the new shard and associate the new key or range to that shard. For details on adding new shards, see [Adding a new shard](sql-database-elastic-scale-add-a-shard.md). +If the new range of key values is not already part of an existing mapping and no data movement is necessary, it is simple to add the new shard and associate the new key or range to that shard. For details on adding new shards, see [Adding a new shard](sql-database-elastic-scale-add-a-shard.md). -For scenarios that require data movement, however, the split-merge tool is needed to orchestrate the data movement between shards in combination with the necessary shard map updates. For details on using the split-merge yool, see [Overview of split-merge](sql-database-elastic-scale-overview-split-and-merge.md) +For scenarios that require data movement, however, the split-merge tool is needed to orchestrate the data movement between shards in combination with the necessary shard map updates. For details on using the split-merge tool, see [Overview of split-merge](sql-database-elastic-scale-overview-split-and-merge.md) [!INCLUDE [elastic-scale-include](../../includes/elastic-scale-include.md)] diff --git a/articles/sql-database/sql-database-libraries.md b/articles/sql-database/sql-database-libraries.md index 65a55ed81dfbc..09ddf1d2ede69 100644 --- a/articles/sql-database/sql-database-libraries.md +++ b/articles/sql-database/sql-database-libraries.md @@ -14,9 +14,8 @@ ms.workload: "On Demand" ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 07/05/2017 +ms.date: 11/29/2017 ms.author: genemi - --- # Connectivity libraries and frameworks for Microsoft SQL Server @@ -44,12 +43,25 @@ The table below lists a few examples of Object Relational Mapping (ORM) framewor | Node.js | Windows, Linux, macOS | [Sequelize ORM](http://docs.sequelizejs.com) | | Python | Windows, Linux, macOS |[Django](https://www.djangoproject.com/) | | Ruby | Windows, Linux, macOS | [Ruby on Rails](http://rubyonrails.org/) | +|||| ## Related links - [SQL Server Drivers](http://msdn.microsoft.com/library/mt654049.aspx) for connecting from client applications -- [Connect to SQL Database by using .NET (C#)](sql-database-connect-query-dotnet.md) -- [Connect to SQL Database by using PHP](sql-database-connect-query-php.md) -- [Connect to SQL Database by using Node.js](sql-database-connect-query-nodejs.md) -- [Connect to SQL Database by using Java](sql-database-connect-query-java.md) -- [Connect to SQL Database by using Python](sql-database-connect-query-python.md) -- [Connect to SQL Database by using Ruby](sql-database-connect-query-ruby.md) +- Connect to SQL Database: + - [Connect to SQL Database by using .NET (C#)](sql-database-connect-query-dotnet.md) + - [Connect to SQL Database by using PHP](sql-database-connect-query-php.md) + - [Connect to SQL Database by using Node.js](sql-database-connect-query-nodejs.md) + - [Connect to SQL Database by using Java](sql-database-connect-query-java.md) + - [Connect to SQL Database by using Python](sql-database-connect-query-python.md) + - [Connect to SQL Database 
by using Ruby](sql-database-connect-query-ruby.md) +- Retry logic code examples: + - [Connect resiliently to SQL with ADO.NET][step-4-connect-resiliently-to-sql-with-ado-net-a78n] + - [Connect resiliently to SQL with PHP][step-4-connect-resiliently-to-sql-with-php-p42h] + + + + +[step-4-connect-resiliently-to-sql-with-ado-net-a78n]: https://docs.microsoft.com/sql/connect/ado-net/step-4-connect-resiliently-to-sql-with-ado-net + +[step-4-connect-resiliently-to-sql-with-php-p42h]: https://docs.microsoft.com/sql/connect/php/step-4-connect-resiliently-to-sql-with-php + diff --git a/articles/sql-database/sql-database-metrics-diag-logging.md b/articles/sql-database/sql-database-metrics-diag-logging.md index 98c390bf536b6..671285d51bf4e 100644 --- a/articles/sql-database/sql-database-metrics-diag-logging.md +++ b/articles/sql-database/sql-database-metrics-diag-logging.md @@ -45,7 +45,7 @@ When you enable metrics and diagnostics logging, you need to specify the Azure r You can provision a new Azure resource or select an existing resource. After selecting the storage resource, you need to specify which data to collect. Options available include: -- [1-minute metrics](sql-database-metrics-diag-logging.md#1-minute-metrics): Contains DTU percentage, DTU limit, CPU percentage, physical data read percentage, log write percentage, Successful/Failed/Blocked by firewall connections, sessions percentage, workers percentage, storage, storage percentage, and XTP storage percentage. +- [All metrics](sql-database-metrics-diag-logging.md#all-metrics): Contains DTU percentage, DTU limit, CPU percentage, physical data read percentage, log write percentage, Successful/Failed/Blocked by firewall connections, sessions percentage, workers percentage, storage, storage percentage, and XTP storage percentage. - [QueryStoreRuntimeStatistics](sql-database-metrics-diag-logging.md#query-store-runtime-statistics): Contains information about the query runtime statistics, such as CPU usage and query duration. - [QueryStoreWaitStatistics](sql-database-metrics-diag-logging.md#query-store-wait-statistics): Contains information about the query wait statistics, which tells you what your queries waited on, such as CPU, LOG, and LOCKING. - [Errors](sql-database-metrics-diag-logging.md#errors-dataset): Contains information about SQL errors that happened on this database. 
@@ -240,7 +240,7 @@ Or, more simply: insights-{metrics|logs}-{category name}/resourceId=/{resource Id}/y={four-digit numeric year}/m={two-digit numeric month}/d={two-digit numeric day}/h={two-digit 24-hour clock hour}/m=00/PT1H.json ``` -For example, a blob name for 1-minute metrics might be: +For example, a blob name for all metrics might be: ```powershell insights-metrics-minute/resourceId=/SUBSCRIPTIONS/s1id1234-5679-0123-4567-890123456789/RESOURCEGROUPS/TESTRESOURCEGROUP/PROVIDERS/MICROSOFT.SQL/ servers/Server1/databases/database1/y=2016/m=08/d=22/h=18/m=00/PT1H.json @@ -258,7 +258,7 @@ Learn how to [download metrics and diagnostics logs from Storage](../storage/blo ## Metrics and logs available -### 1-minute metrics +### All metrics |**Resource**|**Metrics**| |---|---| diff --git a/articles/sql-database/toc.yml b/articles/sql-database/toc.yml index 7c0946fed3971..f253b14b7f676 100644 --- a/articles/sql-database/toc.yml +++ b/articles/sql-database/toc.yml @@ -25,12 +25,14 @@ href: sql-database-connect-query-dotnet-visual-studio.md - name: .NET core href: sql-database-connect-query-dotnet-core.md - - name: PHP - href: sql-database-connect-query-php.md - - name: Node.js - href: sql-database-connect-query-nodejs.md + - name: Go + href: sql-database-connect-query-go.md - name: Java href: sql-database-connect-query-java.md + - name: Node.js + href: sql-database-connect-query-nodejs.md + - name: PHP + href: sql-database-connect-query-php.md - name: Python href: sql-database-connect-query-python.md - name: Ruby @@ -350,8 +352,6 @@ items: - name: Tutorial intro href: saas-dbpertenant-wingtip-app-overview.md - - name: App guidance - href: saas-dbpertenant-wingtip-app-guidance-tips.md - name: Deploy example app href: saas-dbpertenant-get-started-deploy.md - name: Provision tenants diff --git a/articles/storage/blobs/storage-how-to-use-blobs-cli.md b/articles/storage/blobs/storage-how-to-use-blobs-cli.md index 7bd7dc5e4bfb5..55efb61426e1e 100644 --- a/articles/storage/blobs/storage-how-to-use-blobs-cli.md +++ b/articles/storage/blobs/storage-how-to-use-blobs-cli.md @@ -3,7 +3,7 @@ title: Perform operations on Azure Blob storage (object storage) with the Azure description: Learn how to upload and download blobs in Azure Blob storage, as well as construct a shared access signature (SAS) to manage access to a blob in your storage account. services: storage documentationcenter: na -author: mmacy +author: tamram manager: timlt editor: tysonn @@ -14,7 +14,7 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: hero-article ms.date: 06/15/2017 -ms.author: marsma +ms.author: tamram --- # Perform Blob storage operations with Azure CLI diff --git a/articles/storage/blobs/storage-python-how-to-use-blob-storage.md b/articles/storage/blobs/storage-python-how-to-use-blob-storage.md index f9cd7f37b3897..87bd9c6987b88 100644 --- a/articles/storage/blobs/storage-python-how-to-use-blob-storage.md +++ b/articles/storage/blobs/storage-python-how-to-use-blob-storage.md @@ -3,7 +3,7 @@ title: How to use Azure Blob storage (object storage) from Python | Microsoft Do description: Store unstructured data in the cloud with Azure Blob storage (object storage). 
services: storage documentationcenter: python -author: mmacy +author: tamram manager: timlt editor: tysonn diff --git a/articles/storage/blobs/storage-quickstart-blobs-cli.md b/articles/storage/blobs/storage-quickstart-blobs-cli.md index 0750c3fda5f57..7af6fca9d191c 100644 --- a/articles/storage/blobs/storage-quickstart-blobs-cli.md +++ b/articles/storage/blobs/storage-quickstart-blobs-cli.md @@ -3,7 +3,7 @@ title: Azure Quickstart - Transfer objects to/from Azure Blob storage using the description: Quickly learn to transfer objects to/from Azure Blob storage using the Azure CLI services: storage documentationcenter: na -author: mmacy +author: tamram manager: timlt editor: tysonn @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: quickstart ms.date: 07/19/2017 -ms.author: marsma +ms.author: tamram --- # Transfer objects to/from Azure Blob storage using the Azure CLI diff --git a/articles/storage/blobs/storage-quickstart-blobs-powershell.md b/articles/storage/blobs/storage-quickstart-blobs-powershell.md index 7bf84671c7fdd..2fa442d29f340 100644 --- a/articles/storage/blobs/storage-quickstart-blobs-powershell.md +++ b/articles/storage/blobs/storage-quickstart-blobs-powershell.md @@ -3,7 +3,7 @@ title: Azure Quickstart - Transfer objects to/from Azure Blob storage using Powe description: Quickly learn to transfer objects to/from Azure Blob storage using PowerShell services: storage documentationcenter: storage -author: robinsh +author: tamram manager: timlt editor: tysonn @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: quickstart ms.date: 07/19/2017 -ms.author: robinsh +ms.author: tamram --- # Transfer objects to/from Azure Blob storage using Azure PowerShell diff --git a/articles/storage/blobs/storage-samples-blobs-cli.md b/articles/storage/blobs/storage-samples-blobs-cli.md index bab254145e0a8..e3472934f0b01 100644 --- a/articles/storage/blobs/storage-samples-blobs-cli.md +++ b/articles/storage/blobs/storage-samples-blobs-cli.md @@ -3,7 +3,7 @@ title: Azure CLI samples for Blob storage | Microsoft Docs description: Azure CLI samples for working with Azure Blob Storage services: storage documentationcenter: na -author: mmacy +author: tamram manager: timlt editor: tysonn @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: sample ms.date: 06/13/2017 -ms.author: marsma +ms.author: tamram --- # Azure CLI samples for Azure Blob storage diff --git a/articles/storage/blobs/storage-samples-blobs-powershell.md b/articles/storage/blobs/storage-samples-blobs-powershell.md index 2552bf5194ff9..733b2d5ef5361 100644 --- a/articles/storage/blobs/storage-samples-blobs-powershell.md +++ b/articles/storage/blobs/storage-samples-blobs-powershell.md @@ -3,7 +3,7 @@ title: Azure PowerShell samples for Azure Blob storage | Microsoft Docs description: Azure PowerShell samples for working with Azure Blob storage services: storage documentationcenter: na -author: robinsh +author: tamram manager: timlt editor: tysonn @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: sample ms.date: 11/07/2017 -ms.author: robinsh +ms.author: tamram --- # Azure PowerShell samples for Azure Blob storage diff --git a/articles/storage/common/storage-quickstart-create-storage-account-cli.md b/articles/storage/common/storage-quickstart-create-storage-account-cli.md index 9c23cf41fa1e8..e402efc339120 100644 --- a/articles/storage/common/storage-quickstart-create-storage-account-cli.md +++ b/articles/storage/common/storage-quickstart-create-storage-account-cli.md @@ -3,7 +3,7 @@ title: Azure 
Quickstart - Create a storage account using the Azure CLI | Microso description: Quickly learn to create a new storage account using the Azure CLI. services: storage documentationcenter: na -author: mmacy +author: tamram manager: timlt editor: tysonn @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: quickstart ms.date: 06/28/2017 -ms.author: marsma +ms.author: tamram --- # Create a storage account using the Azure CLI diff --git a/articles/storage/common/storage-quickstart-create-storage-account-powershell.md b/articles/storage/common/storage-quickstart-create-storage-account-powershell.md index f527c3547b977..7e0b8193f70b1 100644 --- a/articles/storage/common/storage-quickstart-create-storage-account-powershell.md +++ b/articles/storage/common/storage-quickstart-create-storage-account-powershell.md @@ -3,7 +3,7 @@ title: Azure Quickstart - Create a storage account using PowerShell | Microsoft description: Quickly learn to create a new storage account with PowerShell services: storage documentationcenter: '' -author: robinsh +author: tamram manager: timlt editor: tysonn @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: quickstart ms.date: 06/29/2017 -ms.author: robinsh +ms.author: tamram --- # Create a storage account using PowerShell diff --git a/articles/storage/files/storage-python-how-to-use-file-storage.md b/articles/storage/files/storage-python-how-to-use-file-storage.md index 6a94884acaa20..5931d3d804c4e 100644 --- a/articles/storage/files/storage-python-how-to-use-file-storage.md +++ b/articles/storage/files/storage-python-how-to-use-file-storage.md @@ -3,7 +3,7 @@ title: Develop for Azure Files with Python | Microsoft Docs description: Learn how to develop Python applications and services that use Azure Files to store file data. services: storage documentationcenter: python -author: robinsh +author: tamram manager: timlt editor: tysonn @@ -14,7 +14,7 @@ ms.tgt_pltfrm: na ms.devlang: python ms.topic: article ms.date: 09/19/2017 -ms.author: robinsh +ms.author: tamram --- # Develop for Azure Files with Python diff --git a/articles/storage/queues/storage-python-how-to-use-queue-storage.md b/articles/storage/queues/storage-python-how-to-use-queue-storage.md index 8db0bb0658ea7..dcebe2dab6bfc 100644 --- a/articles/storage/queues/storage-python-how-to-use-queue-storage.md +++ b/articles/storage/queues/storage-python-how-to-use-queue-storage.md @@ -3,7 +3,7 @@ title: How to use Queue storage from Python | Microsoft Docs description: Learn how to use the Azure Queue service from Python to create and delete queues, and insert, get, and delete messages. services: storage documentationcenter: python -author: robinsh +author: tamram manager: timlt editor: tysonn @@ -14,7 +14,7 @@ ms.tgt_pltfrm: na ms.devlang: python ms.topic: article ms.date: 12/08/2016 -ms.author: robinsh +ms.author: tamram --- # How to use Queue storage from Python diff --git a/articles/storage/scripts/storage-blobs-container-calculate-size-cli.md b/articles/storage/scripts/storage-blobs-container-calculate-size-cli.md index 9234193a39b1b..026a928771834 100644 --- a/articles/storage/scripts/storage-blobs-container-calculate-size-cli.md +++ b/articles/storage/scripts/storage-blobs-container-calculate-size-cli.md @@ -3,7 +3,7 @@ title: Azure CLI Script Sample - Calculate blob container size | Microsoft Docs description: Calculate the size of a container in Azure Blob storage by totaling the size of the blobs in the container. 
services: storage documentationcenter: na -author: mmacy +author: tamram manager: timlt editor: tysonn @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.devlang: azurecli ms.topic: sample ms.date: 06/28/2017 -ms.author: marsma +ms.author: tamram --- # Calculate the size of a Blob storage container diff --git a/articles/storage/scripts/storage-blobs-container-delete-by-prefix-cli.md b/articles/storage/scripts/storage-blobs-container-delete-by-prefix-cli.md index 8c01760a22279..cf7ce4be91178 100644 --- a/articles/storage/scripts/storage-blobs-container-delete-by-prefix-cli.md +++ b/articles/storage/scripts/storage-blobs-container-delete-by-prefix-cli.md @@ -3,7 +3,7 @@ title: Azure CLI Script Sample - Delete containers by prefix | Microsoft Docs description: Delete Azure Storage blob containers based on a container name prefix. services: storage documentationcenter: na -author: mmacy +author: tamram manager: timlt editor: tysonn @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.devlang: azurecli ms.topic: sample ms.date: 06/22/2017 -ms.author: marsma +ms.author: tamram --- # Delete containers based on container name prefix diff --git a/articles/storage/scripts/storage-blobs-container-delete-by-prefix-powershell.md b/articles/storage/scripts/storage-blobs-container-delete-by-prefix-powershell.md index 0466b2a654182..f626f26ad15d4 100644 --- a/articles/storage/scripts/storage-blobs-container-delete-by-prefix-powershell.md +++ b/articles/storage/scripts/storage-blobs-container-delete-by-prefix-powershell.md @@ -3,7 +3,7 @@ title: Azure PowerShell Script Sample - Delete containers by prefix | Microsoft description: Delete Azure Storage blob containers based on a container name prefix. services: storage documentationcenter: na -author: robinsh +author: tamram manager: timlt editor: tysonn @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.devlang: azurecli ms.topic: sample ms.date: 06/13/2017 -ms.author: robinsh +ms.author: tamram --- # Delete containers based on container name prefix diff --git a/articles/storage/scripts/storage-common-rotate-account-keys-cli.md b/articles/storage/scripts/storage-common-rotate-account-keys-cli.md index 4336ad7a0152f..a1edb3f3c6e56 100644 --- a/articles/storage/scripts/storage-common-rotate-account-keys-cli.md +++ b/articles/storage/scripts/storage-common-rotate-account-keys-cli.md @@ -3,7 +3,7 @@ title: Azure CLI Script Sample - Rotate storage account access keys | Microsoft description: Create an Azure Storage account, then retrieve and rotate its account access keys. services: storage documentationcenter: na -author: mmacy +author: tamram manager: timlt editor: tysonn @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.devlang: azurecli ms.topic: sample ms.date: 06/22/2017 -ms.author: marsma +ms.author: tamram --- # Create a storage account and rotate its account access keys diff --git a/articles/storage/scripts/storage-common-rotate-account-keys-powershell.md b/articles/storage/scripts/storage-common-rotate-account-keys-powershell.md index 2078099ec68f4..f876cc7683850 100644 --- a/articles/storage/scripts/storage-common-rotate-account-keys-powershell.md +++ b/articles/storage/scripts/storage-common-rotate-account-keys-powershell.md @@ -3,7 +3,7 @@ title: Azure PowerShell Script Sample - Rotate storage account access key | Micr description: Create an Azure Storage account, then retrieve and rotate one of its account access keys. 
services: storage documentationcenter: na -author: robinsh +author: tamram manager: timlt editor: tysonn @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.devlang: azurecli ms.topic: sample ms.date: 06/13/2017 -ms.author: robinsh +ms.author: tamram --- # Create a storage account and rotate its account access keys diff --git a/articles/stream-analytics/stream-analytics-edge.md b/articles/stream-analytics/stream-analytics-edge.md index 97bb4e63e3fa9..f24777bc3d5c0 100644 --- a/articles/stream-analytics/stream-analytics-edge.md +++ b/articles/stream-analytics/stream-analytics-edge.md @@ -33,7 +33,7 @@ This feature is in preview, if you have any question or feedback you can use [th * **Low-latency command and control**: For example, manufacturing safety systems must respond to operational data with ultra-low latency. With ASA on IoT Edge, you can analyze sensor data in near real-time, and issue commands when you detect anomalies to stop a machine or trigger alerts. * **Limited connectivity to the cloud**: Mission critical systems, such as remote mining equipment, connected vessels or offshore drilling, need to analyze and react to data even when cloud connectivity is intermittent. With ASA, your streaming logic runs independently of the network connectivity and you can choose what you send to the cloud for further processing or storage. * **Limited bandwidth**: The volume of data produced by jet engines or connected cars can be so large that data must be filtered or pre-processed before sending it to the cloud. Using ASA, you can filter or aggregate the data that need to be sent to the cloud. -* **Compliance**: Regulatory compliance may require some data to be locally anonymized or aggregated before being sent to the cloud. With ASA, you +* **Compliance**: Regulatory compliance may require some data to be locally anonymized or aggregated before being sent to the cloud. ## Edge jobs in Azure Stream Analytics ### What is an "edge" job? diff --git a/articles/stream-analytics/stream-analytics-high-frequency-trading.md b/articles/stream-analytics/stream-analytics-high-frequency-trading.md index b0f76e38c2e19..ce6d234f210c2 100644 --- a/articles/stream-analytics/stream-analytics-high-frequency-trading.md +++ b/articles/stream-analytics/stream-analytics-high-frequency-trading.md @@ -1,5 +1,5 @@ --- -title: High Frequency Trading Simulation With Stream Analytics | Microsoft Docs +title: High-frequency trading simulation With Stream Analytics | Microsoft Docs description: How to perform linear regression model training and scoring in the same Stream Analytics job keywords: 'machine learning, advanced analytics, linear regression, simulation, UDA, user defined function' documentationcenter: '' @@ -18,17 +18,22 @@ ms.date: 11/05/2017 ms.author: zhongc --- -# High frequency trading simulation with Stream Analytics -The combination of Azure Stream Analytics' SQL language and JavaScript UDF and UDA is a powerful combination that allows users to perform advanced analytics, including online machine learning training and scoring, as well as stateful process simulation. This article describes how to perform linear regression in an Azure Stream Analytics job that does continuous training and scoring in a high frequency trading scenario. +# High-frequency trading simulation with Stream Analytics +The combination of SQL language and JavaScript user-defined functions (UDFs) and user-defined aggregates (UDAs) in Azure Stream Analytics enables users to perform advanced analytics. 
Advanced analytics might include online machine learning training and scoring, as well as stateful process simulation. This article describes how to perform linear regression in an Azure Stream Analytics job that does continuous training and scoring in a high-frequency trading scenario. -## High frequency trading -The logical flow of high frequency trading is about getting real-time quotes from a security exchange, build a predictive model around the quotes, so we can anticipate the price movement, and place buy or sell orders accordingly in order to make money off the successful prediction of the price movements. As a result, we need the following -* Real-time quote feed -* A predictive model that can operate on the real-time quotes -* A trading simulation that demonstrates the profit/loss of the trading algorithm +## High-frequency trading +The logical flow of high-frequency trading is about: +1. Getting real-time quotes from a security exchange. +2. Building a predictive model around the quotes, so we can anticipate the price movement. +3. Placing buy or sell orders to make money from the successful prediction of the price movements. + +As a result, we need: +* A real-time quote feed. +* A predictive model that can operate on the real-time quotes. +* A trading simulation that demonstrates the profit or loss of the trading algorithm. ### Real-time quote feed -IEX offers free real-time bid and ask quotes using socket.io, https://iextrading.com/developer/docs/#websockets. A simple console program can be written to receive real-time quotes, and push to Event Hub as a data source. The skeleton of the program is shown below. Error handling is omitted for brevity. You will also need to include SocketIoClientDotNet and WindowsAzure.ServiceBus nuget packages in your project. +IEX offers free [real-time bid and ask quotes](https://iextrading.com/developer/docs/#websockets) by using socket.io. A simple console program can be written to receive real-time quotes and push to Azure Event Hubs as a data source. The following code is a skeleton of the program. The code omits error handling for brevity. You also need to include SocketIoClientDotNet and WindowsAzure.ServiceBus NuGet packages in your project. using Quobject.SocketIoClientDotNet.Client; @@ -48,7 +53,7 @@ IEX offers free real-time bid and ask quotes using socket.io, https://iextrading socket.Emit("subscribe", symbols); }); -Here are some sample events generated. +Here are some generated sample events: {"symbol":"MSFT","marketPercent":0.03246,"bidSize":100,"bidPrice":74.8,"askSize":300,"askPrice":74.83,"volume":70572,"lastSalePrice":74.825,"lastSaleSize":100,"lastSaleTime":1506953355123,"lastUpdated":1506953357170,"sector":"softwareservices","securityType":"commonstock"} {"symbol":"GOOG","marketPercent":0.04825,"bidSize":114,"bidPrice":870,"askSize":0,"askPrice":0,"volume":11240,"lastSalePrice":959.47,"lastSaleSize":60,"lastSaleTime":1506953317571,"lastUpdated":1506953357633,"sector":"softwareservices","securityType":"commonstock"} @@ -59,18 +64,20 @@ Here are some sample events generated. {"symbol":"GOOG","marketPercent":0.04795,"bidSize":114,"bidPrice":870,"askSize":0,"askPrice":0,"volume":11240,"lastSalePrice":959.47,"lastSaleSize":60,"lastSaleTime":1506953317571,"lastUpdated":1506953362629,"sector":"softwareservices","securityType":"commonstock"} >[!NOTE] ->The timestamp of the event is **lastUpdated**, in epoch time. +>The time stamp of the event is **lastUpdated**, in epoch time. 
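
Because the quote payload is plain JSON, it can help to sanity-check a captured tick locally before pushing it to Event Hubs. The following Python sketch (illustrative only, not part of the console program above) parses one of the sample events and converts the epoch-time `lastUpdated` field to a UTC datetime, assuming the 13-digit values are milliseconds:

```python
import json
from datetime import datetime, timezone

# One of the sample quote events shown above (verbatim)
raw = '{"symbol":"MSFT","marketPercent":0.03246,"bidSize":100,"bidPrice":74.8,"askSize":300,"askPrice":74.83,"volume":70572,"lastSalePrice":74.825,"lastSaleSize":100,"lastSaleTime":1506953355123,"lastUpdated":1506953357170,"sector":"softwareservices","securityType":"commonstock"}'

quote = json.loads(raw)

# lastUpdated is epoch time; the 13-digit sample values suggest milliseconds
last_updated = datetime.fromtimestamp(quote["lastUpdated"] / 1000, tz=timezone.utc)

print(quote["symbol"], quote["bidPrice"], quote["askPrice"], last_updated.isoformat())
```

The Stream Analytics query shown later performs the same conversion with **DATEADD**.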
+ +### Predictive model for high-frequency trading +For the purpose of demonstration, we use a linear model described by Darryl Shen in [his paper](http://eprints.maths.ox.ac.uk/1895/1/Darryl%20Shen%20%28for%20archive%29.pdf). -### Predictive model for high frequency trading -For the purpose of demonstration, we use a linear model described by Darryl Shen in his paper. http://eprints.maths.ox.ac.uk/1895/1/Darryl%20Shen%20%28for%20archive%29.pdf. +Volume order imbalance (VOI) is a function of current bid/ask price and volume, and bid/ask price and volume from the last tick. The paper identifies the correlation between VOI and future price movement. It builds a linear model between the past 5 VOI values and the price change in the next 10 ticks. The model is trained by using previous day's data with linear regression. -Volume Order Imbalance (VOI) is a function of current bid/ask price and volume, and bid/ask price/volume from the last tick. The paper identifies correlation between VOI and future price movement, and builds a linear model between the past 5 VOI values and the price change in the next 10 ticks. The model is trained using previous day's data with linear regression. The trained model is then used to make price change predictions on quotes in the current trading day in real time. When a large enough price change is predicted, a trade is executed. Depending on the threshold setting, thousands of trades can be expected for a single stock during a trading day. +The trained model is then used to make price change predictions on quotes in the current trading day in real time. When a large enough price change is predicted, a trade is executed. Depending on the threshold setting, thousands of trades can be expected for a single stock during a trading day. ![VOI definition](./media/stream-analytics-high-frequency-trading/voi-formula.png) Now, let's express the training and prediction operations in an Azure Stream Analytics job. -First, the inputs are cleaned up. Epoch time is converted to datetime using **DATEADD**. **TRY_CAST** is used to coerce data types without failing the query. It's always a good practice to cast input fields to the expected data types, so there is no unexpected behavior when it comes to manipulation or comparison of the fields. +First, the inputs are cleaned up. Epoch time is converted to datetime via **DATEADD**. **TRY_CAST** is used to coerce data types without failing the query. It's always a good practice to cast input fields to the expected data types, so there is no unexpected behavior in manipulation or comparison of the fields. WITH typeconvertedquotes AS ( @@ -90,12 +97,12 @@ First, the inputs are cleaned up. Epoch time is converted to datetime using **DA ), timefilteredquotes AS ( /* filter between 7am and 1pm PST, 14:00 to 20:00 UTC */ - /* cleanup invalid data points */ + /* clean up invalid data points */ SELECT * FROM typeconvertedquotes WHERE DATEPART(hour, lastUpdated) >= 14 AND DATEPART(hour, lastUpdated) < 20 AND bidSize > 0 AND askSize > 0 AND bidPrice > 0 AND askPrice > 0 ), -Next, we use the **LAG** function to get values from the last tick. One hour of **LIMIT DURATION** value is arbitrarily chosen. Given the quote frequency, it's safe to assume you can find the previous tick looking back for one hour. +Next, we use the **LAG** function to get values from the last tick. One hour of **LIMIT DURATION** value is arbitrarily chosen. Given the quote frequency, it's safe to assume that you can find the previous tick by looking back one hour. 
shiftedquotes AS ( /* get previous bid/ask price and size in order to calculate VOI */ @@ -113,7 +120,7 @@ Next, we use the **LAG** function to get values from the last tick. One hour of FROM timefilteredquotes ), -We can then compute VOI value. Note, we filter out the null values if the previous tick doesn't exist, just in case. +We can then compute VOI value. We filter out the null values if the previous tick doesn't exist, just in case. currentPriceAndVOI AS ( /* calculate VOI */ @@ -160,7 +167,7 @@ Now, we use **LAG** again to create a sequence with 2 consecutive VOI values, fo FROM currentPriceAndVOI ), -We then reshape the data into inputs for a two variable linear model. Again filter out the events where we don't have all the data. +We then reshape the data into inputs for a two-variable linear model. Again, we filter out the events where we don't have all the data. modelInput AS ( /* create feature vector, x being VOI, y being delta price */ @@ -227,7 +234,7 @@ Because Azure Stream Analytics doesn't have a built-in linear regression functio FROM modelparambs ), -In order to use previous day's model for current event's scoring, we want to join the quotes with the model. However, here, instead of using **JOIN**, we **UNION** the model events and quote events, and then use **LAG** to pair the events with previous day's model, so we can get exactly one match. Because of the weekend, we have to look back three days. If using a straightforward **JOIN**, we would get three models for every quote event. +To use the previous day's model for current event's scoring, we want to join the quotes with the model. But instead of using **JOIN**, we **UNION** the model events and quote events. Then we use **LAG** to pair the events with previous day's model, so we can get exactly one match. Because of the weekend, we have to look back three days. If we used a straightforward **JOIN**, we would get three models for every quote event. shiftedVOI AS ( /* get two consecutive VOIs */ @@ -263,7 +270,7 @@ In order to use previous day's model for current event's scoring, we want to joi FROM model ), VOIANDModelJoined AS ( - /* match VOIs with the latest model within 3 days (72 hours, to take weekend into account) */ + /* match VOIs with the latest model within 3 days (72 hours, to take the weekend into account) */ SELECT symbol, midPrice, @@ -276,7 +283,7 @@ In order to use previous day's model for current event's scoring, we want to joi WHERE type = 'voi' ), -Now, we can make predictions and generate buy/sell signals based on the model, with a 0.02 threshold value. Trade value of 10 is buy; trade value of -10 is sell. +Now, we can make predictions and generate buy/sell signals based on the model, with a 0.02 threshold value. A trade value of 10 is buy. A trade value of -10 is sell. prediction AS ( /* make prediction if there is a model */ @@ -305,11 +312,13 @@ Now, we can make predictions and generate buy/sell signals based on the model, w ), ### Trading simulation -Once we have the trading signals, we would like to test how effective the trading strategy is, without trading for real. This is achieved with a user defined aggregate (UDA), with a hopping windows, hopping every one minute. The additional grouping on date, and the having clause allow the window only accounts for events belong to the same day. For a hopping window going across two days, **GROUP BY** date, separates the grouping into previous day and current day. 
The **HAVING** clause filters out the windows ending on the current day, but grouping on the previous day. +After we have the trading signals, we want to test how effective the trading strategy is, without trading for real. + +We achieve this test by using a UDA, with a hopping window, hopping every one minute. The additional grouping on date and the having clause allow the window only accounts for events that belong to the same day. For a hopping window across two days, the **GROUP BY** date separates the grouping into previous day and current day. The **HAVING** clause filters out the windows that are ending on the current day but grouping on the previous day. simulation AS ( - /* perform trade simulation for the past 7 hours to cover an entire trading day, generate output every minute */ + /* perform trade simulation for the past 7 hours to cover an entire trading day, and generate output every minute */ SELECT DateAdd(hour, -7, System.Timestamp) AS time, symbol, @@ -320,7 +329,13 @@ Once we have the trading signals, we would like to test how effective the tradin Having DateDiff(day, date, time) < 1 AND DATEPART(hour, time) < 13 ) -The JavaScript UDA initializes all accumulators in the init function, compute the state transition with every event added to the window, and returns the simulation results at the end of the window. The general trading process is to buy stock when a buy signal is received and there is no stocking holding; sell stock when a sell signal is received and there is stock holding, or short if there is no stock holding. If there is short position, and a buy signal is received, buy to cover. We never hold or short 10 shares of a given stock in this simulation, and transaction cost is a flat $8. +The JavaScript UDA initializes all accumulators in the `init` function, computes the state transition with every event added to the window, and returns the simulation results at the end of the window. The general trading process is to: + +- Buy stock when a buy signal is received and there is no stocking holding. +- Sell stock when a sell signal is received and there is stock holding. +- Short if there is no stock holding. + +If there's a short position, and a buy signal is received, we buy to cover. We never hold or short 10 shares of a stock in this simulation. The transaction cost is a flat $8. function main() { @@ -408,7 +423,7 @@ The JavaScript UDA initializes all accumulators in the init function, compute th } } -Finally we output to Power BI dashboard for visualization. +Finally, we output to the Power BI dashboard for visualization. SELECT * INTO tradeSignalDashboard FROM tradeSignal /* output tradeSignal to PBI */ SELECT @@ -429,6 +444,10 @@ Finally we output to Power BI dashboard for visualization. ## Summary -As you can see, a realistic high frequency trading model can be implemented with a moderately complex query in Azure Stream Analytics. We have to simplify the model from five input variables to two, because of the lack of built-in linear regression function. However, for a determined user, algorithms with higher dimensions and sophistication can possibly be implemented as JavaScript UDA as well. What's worth noting is that most of the query, other than the JavaScript UDA, can be tested and debugged within Visual Studio with [Azure Stream Analytics Tool for Visual Studio](stream-analytics-tools-for-visual-studio.md). After the initial query was written, the author spent less than 30 minutes testing and debugging the query in Visual Studio. 
Currently, UDA cannot be debugged in Visual Studio. We are working on enabling that with the ability to step through JavaScript code. In addition, please note the fields reaching the UDA have field names all lower cased. This was not an obvious behavior during query testing. However, with Azure Stream Analytics compatibility level 1.1, we allow the field name casing to be preserved, so the behavior is more natural. +We can implement a realistic high-frequency trading model with a moderately complex query in Azure Stream Analytics. We have to simplify the model from five input variables to two, because of the lack of a built-in linear regression function. But for a determined user, algorithms with higher dimensions and sophistication can possibly be implemented as JavaScript UDA as well. + +It's worth noting that most of the query, other than the JavaScript UDA, can be tested and debugged in Visual Studio through [Azure Stream Analytics tools for Visual Studio](stream-analytics-tools-for-visual-studio.md). After the initial query was written, the author spent less than 30 minutes testing and debugging the query in Visual Studio. + +Currently, the UDA cannot be debugged in Visual Studio. We are working on enabling that with the ability to step through JavaScript code. In addition, note that the fields reaching the UDA have lowercase names. This was not an obvious behavior during query testing. But with Azure Stream Analytics compatibility level 1.1, we preserve the field name casing so the behavior is more natural. -I hope this article serves as an inspiration for all Azure Stream Analytics users, who can use our service to perform advanced analytics in near real time, continuously. Let us know any feedback you have to make it easier to implement queries for advance analytics scenarios. +I hope this article serves as an inspiration for all Azure Stream Analytics users, who can use our service to perform advanced analytics in near real time, continuously. Let us know any feedback you have to make it easier to implement queries for advanced analytics scenarios. diff --git a/articles/traffic-manager/traffic-manager-FAQs.md b/articles/traffic-manager/traffic-manager-FAQs.md index fb725f6f4221b..c31ac7bbb0e7b 100644 --- a/articles/traffic-manager/traffic-manager-FAQs.md +++ b/articles/traffic-manager/traffic-manager-FAQs.md @@ -276,7 +276,7 @@ Azure Resource Manager requires all resource groups to specify a location, which The current monitoring status of each endpoint, in addition to the overall profile, is displayed in the Azure portal. This information also is available via the Traffic Monitor [REST API](https://msdn.microsoft.com/library/azure/mt163667.aspx), [PowerShell cmdlets](https://msdn.microsoft.com/library/mt125941.aspx), and [cross-platform Azure CLI](../cli-install-nodejs.md). -Azure does not provide historical information about past endpoint health or the ability to raise alerts about changes to endpoint health. +You can also use Azure Monitor to track the health of your endpoints and see a visual representation of them. For more about using Azure Monitor, see the [Azure Monitoring documentation](https://docs.microsoft.com/azure/monitoring-and-diagnostics/monitoring-overview-metrics). ### Can I monitor HTTPS endpoints? 
@@ -288,6 +288,10 @@ Traffic manager cannot provide any certificate validation, including: * SNI server-side certificates are not supported * Client certificates are not supported +### I stopped an Azure cloud service / web application endpoint in my Traffic Manager profile but I am not receiving any traffic even after I restarted it. How can I fix this? + +When an Azure cloud service / web application endpoint is stopped Traffic Manager stops checking its health and restarts the health checks only after it detects that the endpoint has restarted. To prevent this delay, disable and then reenable that endpoint in the Traffic Manager profile after you restart the endpoint. + ### Can I use Traffic Manager even if my application does not have support for HTTP or HTTPS? Yes. You can specify TCP as the monitoring protocol and Traffic Manager can initiate a TCP connection and wait for a response from the endpoint. If the endpoint replies to the connection request with a response to establish the connection, within the timeout period, then that endpoint is marked as healthy. diff --git a/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-portal.md b/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-portal.md index 76d528994c126..7ce534d32fc45 100644 --- a/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-portal.md +++ b/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-portal.md @@ -119,7 +119,7 @@ The previous examples automatically scaled a scale set in or out with basic host ![Create autoscale rules that scale on a schedule](media/virtual-machine-scale-sets-autoscale-portal/schedule-autoscale.PNG) -To see how your autoscale rules are applied, select **Run history** across the top of the **Scaling** window. The graph and events list shows when the autoscale rules trigger and the number of VM instances in your scale increases or decreases. +To see how your autoscale rules are applied, select **Run history** across the top of the **Scaling** window. The graph and events list shows when the autoscale rules trigger and the number of VM instances in your scale set increases or decreases. ## Next steps diff --git a/articles/virtual-machines/linux/scheduled-events.md b/articles/virtual-machines/linux/scheduled-events.md index 5567bb7bf8ff0..6ceb6efa0626a 100644 --- a/articles/virtual-machines/linux/scheduled-events.md +++ b/articles/virtual-machines/linux/scheduled-events.md @@ -59,27 +59,43 @@ Scheduled events are delivered to: As a result, you should check the `Resources` field in the event to identify which VMs are going to be impacted. ### Discovering the endpoint +For VNET enabled VMs, the full endpoint for the latest version of Scheduled Events is: + + > `http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01` + In the case where a Virtual Machine is created within a Virtual Network (VNet), the metadata service is available from a static non-routable IP, `169.254.169.254`. -If the Virtual Machine is not created within a Virtual Network, the default cases for cloud services and classic VMs, additional logic is required to discover the endpoint to use. +If the Virtual Machine is not created within a Virtual Network, the default cases for cloud services and classic VMs, additional logic is required to discover the IP address to use. 
Refer to this sample to learn how to [discover the host endpoint](https://github.com/azure-samples/virtual-machines-python-scheduled-events-discover-endpoint-for-non-vnet-vm). ### Versioning -The Instance Metadata Service is versioned. Versions are mandatory and the current version is `2017-03-01`. +The Scheduled Events Service is versioned. Versions are mandatory and the current version is `2017-08-01`. + +| Version | Release Notes | +| - | - | +| 2017-08-01 |
  • Removed prepended underscore from resource names for IaaS VMs
  • Metadata Header requirement enforced for all requests | +| 2017-03-01 |
  • Public Preview Version + > [!NOTE] > Previous preview releases of scheduled events supported {latest} as the api-version. This format is no longer supported and will be deprecated in the future. ### Using headers -When you query the Metadata Service, you must provide the header `Metadata: true` to ensure the request was not unintentionally redirected. +When you query the Metadata Service, you must provide the header `Metadata:true` to ensure the request was not unintentionally redirected. The `Metadata:true` header is required for all scheduled events requests. Failure to include the header in the request will result in a Bad Request response from the Metadata Service. ### Enabling Scheduled Events The first time you make a request for scheduled events, Azure implicitly enables the feature on your Virtual Machine. As a result, you should expect a delayed response in your first call of up to two minutes. +> [!NOTE] +> Scheduled Events is automatically disabled for your service if your service doesn't call the end point for 1 day. Once Scheduled Events is disabled for your service, there will not be events created for user initiated maintenance. + ### User initiated maintenance User initiated virtual machine maintenance via the Azure portal, API, CLI, or PowerShell results in a scheduled event. This allows you to test the maintenance preparation logic in your application and allows your application to prepare for user initiated maintenance. Restarting a virtual machine schedules an event with type `Reboot`. Redeploying a virtual machine schedules an event with type `Redeploy`. +> [!NOTE] +> Currently a maximum of 100 user initiated maintenance operations can be simultaneously scheduled. + > [!NOTE] > Currently user initiated maintenance resulting in Scheduled Events is not configurable. Configurability is planned for a future release. @@ -88,8 +104,9 @@ Restarting a virtual machine schedules an event with type `Reboot`. Redeploying ### Query for events You can query for Scheduled Events simply by making the following call: +#### Bash ``` -curl -H Metadata:true http://169.254.169.254/metadata/scheduledevents?api-version=2017-03-01 +curl -H Metadata:true http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01 ``` A response contains an array of scheduled events. An empty array means that there are currently no events scheduled. @@ -133,16 +150,25 @@ Each event is scheduled a minimum amount of time in the future based on event ty Once you have learned of an upcoming event and completed your logic for graceful shutdown, you can approve the outstanding event by making a `POST` call to the metadata service with the `EventId`. This indicates to Azure that it can shorten the minimum notification time (when possible). +The following is the json expected in the `POST` request body. The request should contain a list of `StartRequests`. 
Each `StartRequest` contains the `EventId` for the event you want to expedite: +``` +{ + "StartRequests" : [ + { + "EventId": {EventId} + } + ] +} +``` + +#### Bash Sample ``` -curl -H Metadata:true -X POST -d '{"DocumentIncarnation":"5", "StartRequests": [{"EventId": "f020ba2e-3bc0-4c40-a10b-86575a9eabd5"}]}' http://169.254.169.254/metadata/scheduledevents?api-version=2017-03-01 +curl -H Metadata:true -X POST -d '{"DocumentIncarnation":"5", "StartRequests": [{"EventId": "f020ba2e-3bc0-4c40-a10b-86575a9eabd5"}]}' http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01 ``` > [!NOTE] > Acknowledging an event allows the event to proceed for all `Resources` in the event, not just the virtual machine that acknowledges the event. You may therefore choose to elect a leader to coordinate the acknowledgement, which may be as simple as the first machine in the `Resources` field. - - - ## Python sample The following sample queries the metadata service for scheduled events and approves each outstanding event. @@ -188,6 +214,6 @@ if __name__ == '__main__': ``` ## Next steps - +- Review the Scheduled Events code samples in the [Azure Instance Metadata Scheduled Events Github Repository](https://github.com/Azure-Samples/virtual-machines-scheduled-events-discover-endpoint-for-non-vnet-vm) - Read more about the APIs available in the [Instance Metadata service](instance-metadata-service.md). - Learn about [planned maintenance for Linux virtual machines in Azure](planned-maintenance.md). diff --git a/articles/virtual-machines/linux/sizes-storage.md b/articles/virtual-machines/linux/sizes-storage.md index 596f8c929cf5f..6c6840aaaf21c 100644 --- a/articles/virtual-machines/linux/sizes-storage.md +++ b/articles/virtual-machines/linux/sizes-storage.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: vm-linux ms.workload: infrastructure-services -ms.date: 11/08/2017 +ms.date: 11/28/2017 ms.author: jonbeck --- diff --git a/articles/virtual-machines/linux/tutorial-secure-web-server.md b/articles/virtual-machines/linux/tutorial-secure-web-server.md index bdd207f96fabc..eab7881e52016 100644 --- a/articles/virtual-machines/linux/tutorial-secure-web-server.md +++ b/articles/virtual-machines/linux/tutorial-secure-web-server.md @@ -57,7 +57,7 @@ az keyvault create \ ``` ## Generate a certificate and store in Key Vault -For production use, you should import a valid certificate signed by trusted provider with [az keyvault certificate import](/cli/azure/certificate#import). For this tutorial, the following example shows how you can generate a self-signed certificate with [az keyvault certificate create](/cli/azure/certificate#create) that uses the default certificate policy: +For production use, you should import a valid certificate signed by trusted provider with [az keyvault certificate import](/cli/azure/keyvault/certificate#az_keyvault_certificate_import). 
For this tutorial, the following example shows how you can generate a self-signed certificate with [az keyvault certificate create](/cli/azure/keyvault/certificate#az_keyvault_certificate_create) that uses the default certificate policy: ```azurecli-interactive az keyvault certificate create \ diff --git a/articles/virtual-machines/windows/publish-web-app-from-visual-studio.md b/articles/virtual-machines/windows/publish-web-app-from-visual-studio.md new file mode 100644 index 0000000000000..61bef565b8e21 --- /dev/null +++ b/articles/virtual-machines/windows/publish-web-app-from-visual-studio.md @@ -0,0 +1,134 @@ +--- +title: Publish a Web App to an Azure VM from Visual Studio| Microsoft Docs +description: Publish an ASP.NET Web Application to an Azure Virtual Machine from Visual Studio +services: virtual-machines-windows +documentationcenter: '' +author: +- kraigb +- justcla +manager: ghogen +editor: '' +tags: azure-service-management + +ms.assetid: 70267837-3629-41e0-bb58-2167ac4932b3 +ms.service: virtual-machines-windows +ms.workload: infrastructure-services +ms.tgt_pltfrm: vm-windows +ms.devlang: dotnet +ms.topic: article +ms.date: 11/03/2017 +ms.author: +- kraigb +- justcla + +--- +# Publish an ASP.NET Web App to an Azure VM from Visual Studio + +This document describes how to publish an ASP.NET web application to an Azure virtual machine (VM) using the **Microsoft Azure Virtual Machines** publishing feature in Visual Studio 2017. + +## Prerequisites +In order to use Visual Studio to publish an ASP.NET project to an Azure VM, the VM must be correctly set up. + +- Machine must be configured to run an ASP.NET web application and have WebDeploy installed. + +- The VM must have a DNS name configured. For more information, see [Create a fully qualified domain name in the Azure portal for a Windows VM](portal-create-fqdn.md). + +## Publish your ASP.NET web app to the Azure VM using Visual Studio +The following section describes how to publish an existing ASP.NET web application to an Azure virtual machine. + +1. Open your web app solution in Visual Studio 2017. +2. Right-click the project in Solution Explorer and choose **Publish...** +3. Use the arrow on the right of the page to scroll through the publishing options until you find **Microsoft Azure Virtual Machines**. + + ![Publish Page - Right arrow] + +4. Select the **Microsoft Azure Virtual Machines** icon and select **Publish**. + + ![Publish Page - Microsoft Azure Virtual Machine icon] + +5. Choose the appropriate account (with Azure subscription connected to your virtual machine). + - If you're signed in to Visual Studio, the account list is populated with all your authenticated accounts. + - If you are not signed in, or if the account you need is not listed, choose "Add an account..." and follow the prompts to log in. + ![Azure Account Selector] + +6. Select the appropriate VM from the list of Existing Virtual Machines. + + > [!Note] + > Populating this list can take some time. + + ![Azure VM Selector] + +7. Click OK to begin publishing. + +8. When prompted for credentials, supply the username and password of a user account on the target VM that is configured with publishing rights (typically the admin username and password used when creating the VM). + + ![WebDeploy Login] + +9. Accept the security certificate. + + ![Certificate Error] + +10. Watch the Output window to check the progress of the publish operation. + + ![Output Window] + +11. If publishing is successful, a browser launches to open the URL of the newly published site. 
+ +**Success!** + +You have now successfully published your web app to an Azure virtual machine. + +## Publish Page Options + +After completing the publish wizard, the Publish page is opened in the document well with the new publishing profile selected. + +### Re-publish + +To publish updates to your web application, select the **Publish** button on the Publish page. +- If prompted, enter username and password. +- Publishing begins immediately. + +![Publish Page - Publish button] + +### Modify publish profile settings + +To view and modify the publish profile settings, select **Settings...**. + +![Publish Page - Settings button] + +Your settings should look something like this: + +![Publish Settings - Connection page] + +#### Save User name and Password +- To avoid providing authentication information every time you publish, you can populate the **User name** and **Password** fields and select the **Save password** box. +- Use the **Validate Connection** button to confirm that you have entered the right information. + +#### Deploy to clean web server + +- If you want to ensure that the web server has a clean copy of the web application after each upload (and that no other files are left hanging around from a previous deployment), you can check the **Remove additional files at destination** checkbox in the **Settings** tab. + +- Warning: Publishing with this setting deletes all files that exist on the web server (wwwroot directory). Be sure you know the state of the machine before publishing with this option enabled. + +![Publish Settings - Settings page] + +## Next steps + +### Set up CI/CD for automated deployment to Azure VM + +To set up a continuous delivery pipeline with Visual Studio Team Service, see [Deploy to a Windows Virtual Machine](https://docs.microsoft.com/en-us/vsts/build-release/apps/cd/deploy-webdeploy-iis-deploygroups). 
+ +[VM Overview - DNS Name]: ../../../includes/media/publish-web-app-from-visual-studio/VMOverviewDNSName.png +[IP Address Config - DNS Name]: ../../../includes/media/publish-web-app-from-visual-studio/IPAddressConfigDNSName.png +[VM Overview - DNS Configured]: ../../../includes/media/publish-web-app-from-visual-studio/VMOverviewDNSConfigured.png +[Publish Page - Right arrow]: ../../../includes/media/publish-web-app-from-visual-studio/PublishPageRightArrow.png +[Publish Page - Microsoft Azure Virtual Machine icon]: ../../../includes/media/publish-web-app-from-visual-studio/PublishPageMicrosoftAzureVirtualMachineIcon.png +[Azure Account Selector]: ../../../includes/media/publish-web-app-from-visual-studio/ChooseVM-SelectAccount.png +[Azure VM Selector]: ../../../includes/media/publish-web-app-from-visual-studio/ChooseVM-SelectVM.png +[WebDeploy Login]: ../../../includes/media/publish-web-app-from-visual-studio/WebDeployLogin.png +[Certificate Error]: ../../../includes/media/publish-web-app-from-visual-studio/CertificateError.png +[Output Window]: ../../../includes/media/publish-web-app-from-visual-studio/OutputWindow.png +[Publish Page - Publish button]: ../../../includes/media/publish-web-app-from-visual-studio/PublishPagePublishButton.png +[Publish Page - Settings button]: ../../../includes/media/publish-web-app-from-visual-studio/PublishPageSettingsButton.png +[Publish Settings - Connection page]: ../../../includes/media/publish-web-app-from-visual-studio/PublishSettingsConnectionPage.png +[Publish Settings - Settings page]: ../../../includes/media/publish-web-app-from-visual-studio/PublishSettingsSettingsPage.png diff --git a/articles/virtual-machines/windows/scheduled-events.md b/articles/virtual-machines/windows/scheduled-events.md index 282caa00b39ce..c029133585267 100644 --- a/articles/virtual-machines/windows/scheduled-events.md +++ b/articles/virtual-machines/windows/scheduled-events.md @@ -57,28 +57,43 @@ Scheduled events are delivered to: As a result, you should check the `Resources` field in the event to identify which VMs are going to be impacted. -### Discovering the endpoint +## Discovering the endpoint +For VNET enabled VMs, the full endpoint for the latest version of Scheduled Events is: + + > `http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01` + In the case where a Virtual Machine is created within a Virtual Network (VNet), the metadata service is available from a static non-routable IP, `169.254.169.254`. -If the Virtual Machine is not created within a Virtual Network, the default cases for cloud services and classic VMs, additional logic is required to discover the endpoint to use. +If the Virtual Machine is not created within a Virtual Network, the default cases for cloud services and classic VMs, additional logic is required to discover the IP address to use. Refer to this sample to learn how to [discover the host endpoint](https://github.com/azure-samples/virtual-machines-python-scheduled-events-discover-endpoint-for-non-vnet-vm). ### Versioning -The Instance Metadata Service is versioned. Versions are mandatory and the current version is `2017-03-01`. +The Scheduled Events Service is versioned. Versions are mandatory and the current version is `2017-08-01`. + +| Version | Release Notes | +| - | - | +| 2017-08-01 |
  • Removed prepended underscore from resource names for IaaS VMs
  • Metadata Header requirement enforced for all requests | +| 2017-03-01 |
  • Public Preview Version > [!NOTE] > Previous preview releases of scheduled events supported {latest} as the api-version. This format is no longer supported and will be deprecated in the future. ### Using headers -When you query the Metadata Service, you must provide the header `Metadata: true` to ensure the request was not unintentionally redirected. +When you query the Metadata Service, you must provide the header `Metadata:true` to ensure the request was not unintentionally redirected. The `Metadata:true` header is required for all scheduled events requests. Failure to include the header in the request will result in a Bad Request response from the Metadata Service. ### Enabling Scheduled Events The first time you make a request for scheduled events, Azure implicitly enables the feature on your Virtual Machine. As a result, you should expect a delayed response in your first call of up to two minutes. +> [!NOTE] +> Scheduled Events is automatically disabled for your service if your service doesn't call the end point for 1 day. Once Scheduled Events is disabled for your service, there will not be events created for user initiated maintenance. + ### User initiated maintenance User initiated virtual machine maintenance via the Azure portal, API, CLI, or PowerShell results in a scheduled event. This allows you to test the maintenance preparation logic in your application and allows your application to prepare for user initiated maintenance. Restarting a virtual machine schedules an event with type `Reboot`. Redeploying a virtual machine schedules an event with type `Redeploy`. +> [!NOTE] +> Currently a maximum of 100 user initiated maintenance operations can be simultaneously scheduled. + > [!NOTE] > Currently user initiated maintenance resulting in Scheduled Events is not configurable. Configurability is planned for a future release. @@ -87,8 +102,9 @@ Restarting a virtual machine schedules an event with type `Reboot`. Redeploying ### Query for events You can query for Scheduled Events simply by making the following call: +#### Powershell ``` -curl -H Metadata:true http://169.254.169.254/metadata/scheduledevents?api-version=2017-03-01 +curl http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01 -H @{"Metadata"="true"} ``` A response contains an array of scheduled events. An empty array means that there are currently no events scheduled. @@ -132,8 +148,20 @@ Each event is scheduled a minimum amount of time in the future based on event ty Once you have learned of an upcoming event and completed your logic for graceful shutdown, you can approve the outstanding event by making a `POST` call to the metadata service with the `EventId`. This indicates to Azure that it can shorten the minimum notification time (when possible). +The following is the json expected in the `POST` request body. The request should contain a list of `StartRequests`. 
Each `StartRequest` contains the `EventId` for the event you want to expedite: ``` -curl -H Metadata:true -X POST -d '{"DocumentIncarnation":"5", "StartRequests": [{"EventId": "f020ba2e-3bc0-4c40-a10b-86575a9eabd5"}]}' http://169.254.169.254/metadata/scheduledevents?api-version=2017-03-01 +{ + "StartRequests" : [ + { + "EventId": {EventId} + } + ] +} +``` + +#### Powershell +``` +curl -H @{"Metadata"="true"} -Method POST -Body '{"DocumentIncarnation":"5", "StartRequests": [{"EventId": "f020ba2e-3bc0-4c40-a10b-86575a9eabd5"}]}' -Uri http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01 ``` > [!NOTE] @@ -146,7 +174,7 @@ The following sample queries the metadata service for scheduled events and appro ```PowerShell # How to get scheduled events -function GetScheduledEvents($uri) +function Get-ScheduledEvents($uri) { $scheduledEvents = Invoke-RestMethod -Headers @{"Metadata"="true"} -URI $uri -Method get $json = ConvertTo-Json $scheduledEvents @@ -155,7 +183,7 @@ function GetScheduledEvents($uri) } # How to approve a scheduled event -function ApproveScheduledEvent($eventId, $docIncarnation, $uri) +function Approve-ScheduledEvent($eventId, $docIncarnation, $uri) { # Create the Scheduled Events Approval Document $startRequests = [array]@{"EventId" = $eventId} @@ -170,7 +198,7 @@ function ApproveScheduledEvent($eventId, $docIncarnation, $uri) Invoke-RestMethod -Uri $uri -Headers @{"Metadata"="true"} -Method POST -Body $approvalString } -function HandleScheduledEvents($scheduledEvents) +function Handle-ScheduledEvents($scheduledEvents) { # Add logic for handling events here } @@ -182,10 +210,10 @@ $localHostIP = "169.254.169.254" $scheduledEventURI = 'http://{0}/metadata/scheduledevents?api-version=2017-03-01' -f $localHostIP # Get events -$scheduledEvents = GetScheduledEvents $scheduledEventURI +$scheduledEvents = Get-ScheduledEvents $scheduledEventURI # Handle events however is best for your service -HandleScheduledEvents $scheduledEvents +Handle-ScheduledEvents $scheduledEvents # Approve events when ready (optional) foreach($event in $scheduledEvents.Events) @@ -194,190 +222,13 @@ foreach($event in $scheduledEvents.Events) $entry = Read-Host "`nApprove event? Y/N" if($entry -eq "Y" -or $entry -eq "y") { - ApproveScheduledEvent $event.EventId $scheduledEvents.DocumentIncarnation $scheduledEventURI + Approve-ScheduledEvent $event.EventId $scheduledEvents.DocumentIncarnation $scheduledEventURI } } ``` - -## C\# sample - -The following sample is of a simple client that communicates with the metadata service. 
- -```csharp -public class ScheduledEventsClient -{ - private readonly string scheduledEventsEndpoint; - private readonly string defaultIpAddress = "169.254.169.254"; - - // Set up the scheduled events URI for a VNET-enabled VM - public ScheduledEventsClient() - { - scheduledEventsEndpoint = string.Format("http://{0}/metadata/scheduledevents?api-version=2017-03-01", defaultIpAddress); - } - - // Get events - public string GetScheduledEvents() - { - Uri cloudControlUri = new Uri(scheduledEventsEndpoint); - using (var webClient = new WebClient()) - { - webClient.Headers.Add("Metadata", "true"); - return webClient.DownloadString(cloudControlUri); - } - } - - // Approve events - public void ApproveScheduledEvents(string jsonPost) - { - using (var webClient = new WebClient()) - { - webClient.Headers.Add("Content-Type", "application/json"); - webClient.UploadString(scheduledEventsEndpoint, jsonPost); - } - } -} -``` - -Scheduled Events can be represented using the following data structures: - -```csharp -public class ScheduledEventsDocument -{ - public string DocumentIncarnation; - public List Events { get; set; } -} - -public class CloudControlEvent -{ - public string EventId { get; set; } - public string EventStatus { get; set; } - public string EventType { get; set; } - public string ResourceType { get; set; } - public List Resources { get; set; } - public DateTime? NotBefore { get; set; } -} - -public class ScheduledEventsApproval -{ - public string DocumentIncarnation; - public List StartRequests = new List(); -} - -public class StartRequest -{ - [JsonProperty("EventId")] - private string eventId; - - public StartRequest(string eventId) - { - this.eventId = eventId; - } -} -``` - -The following sample queries the metadata service for scheduled events and approves each outstanding event. - -```csharp -public class Program -{ - static ScheduledEventsClient client; - - static void Main(string[] args) - { - client = new ScheduledEventsClient(); - - while (true) - { - string json = client.GetDocument(); - ScheduledEventsDocument scheduledEventsDocument = JsonConvert.DeserializeObject(json); - - HandleEvents(scheduledEventsDocument.Events); - - // Wait for user response - Console.WriteLine("Press Enter to approve executing events\n"); - Console.ReadLine(); - - // Approve events - ScheduledEventsApproval scheduledEventsApprovalDocument = new ScheduledEventsApproval() - { - DocumentIncarnation = scheduledEventsDocument.DocumentIncarnation - }; - - foreach (CloudControlEvent event in scheduledEventsDocument.Events) - { - scheduledEventsApprovalDocument.StartRequests.Add(new StartRequest(event.EventId)); - } - - if (scheduledEventsApprovalDocument.StartRequests.Count > 0) - { - // Serialize using Newtonsoft.Json - string approveEventsJsonDocument = - JsonConvert.SerializeObject(scheduledEventsApprovalDocument); - - Console.WriteLine($"Approving events with json: {approveEventsJsonDocument}\n"); - client.ApproveScheduledEvents(approveEventsJsonDocument); - } - - Console.WriteLine("Complete. Press enter to repeat\n\n"); - Console.ReadLine(); - Console.Clear(); - } - } - - private static void HandleEvents(List events) - { - // Add logic for handling events here - } -} -``` - -## Python sample - -The following sample queries the metadata service for scheduled events and approves each outstanding event. 
- -```python -#!/usr/bin/python - -import json -import urllib2 -import socket -import sys - -metadata_url = "http://169.254.169.254/metadata/scheduledevents?api-version=2017-03-01" -headers = "{Metadata:true}" -this_host = socket.gethostname() - -def get_scheduled_events(): - req = urllib2.Request(metadata_url) - req.add_header('Metadata', 'true') - resp = urllib2.urlopen(req) - data = json.loads(resp.read()) - return data - -def handle_scheduled_events(data): - for evt in data['Events']: - eventid = evt['EventId'] - status = evt['EventStatus'] - resources = evt['Resources'] - eventtype = evt['EventType'] - resourcetype = evt['ResourceType'] - notbefore = evt['NotBefore'].replace(" ","_") - if this_host in resources: - print "+ Scheduled Event. This host is scheduled for " + eventype + " not before " + notbefore - # Add logic for handling events here - -def main(): - data = get_scheduled_events() - handle_scheduled_events(data) - -if __name__ == '__main__': - main() - sys.exit(0) -``` - ## Next steps +- Review the Scheduled Events code samples in the [Azure Instance Metadata Scheduled Events Github Repository](https://github.com/Azure-Samples/virtual-machines-scheduled-events-discover-endpoint-for-non-vnet-vm) - Read more about the APIs available in the [Instance Metadata service](instance-metadata-service.md). - Learn about [planned maintenance for Windows virtual machines in Azure](planned-maintenance.md). - diff --git a/articles/virtual-machines/windows/sizes-storage.md b/articles/virtual-machines/windows/sizes-storage.md index 7fe7dd7c9f4a8..9a8b46fd15e4b 100644 --- a/articles/virtual-machines/windows/sizes-storage.md +++ b/articles/virtual-machines/windows/sizes-storage.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: vm-windows ms.workload: infrastructure-services -ms.date: 11/08/2017 +ms.date: 11/28/2017 ms.author: jonbeck --- diff --git a/articles/virtual-machines/windows/toc.yml b/articles/virtual-machines/windows/toc.yml index e45abd3cf495f..046106b03d533 100644 --- a/articles/virtual-machines/windows/toc.yml +++ b/articles/virtual-machines/windows/toc.yml @@ -224,6 +224,8 @@ items: - name: Chef href: chef-automation.md + - name: Publish Web App from Visual Studio + href: publish-web-app-from-visual-studio.md - name: Run applications items: - name: SQL Server diff --git a/articles/virtual-machines/workloads/oracle/oracle-considerations.md b/articles/virtual-machines/workloads/oracle/oracle-considerations.md index c5dd565b364ef..237f0e26d2ae7 100644 --- a/articles/virtual-machines/workloads/oracle/oracle-considerations.md +++ b/articles/virtual-machines/workloads/oracle/oracle-considerations.md @@ -11,14 +11,14 @@ ms.assetid: 5d71886b-463a-43ae-b61f-35c6fc9bae25 ms.service: virtual-machines-windows ms.devlang: na ms.topic: article -ms.tgt_pltfrm: vm-windows +ms.tgt_pltfrm: vm-linux ms.workload: infrastructure-services -ms.date: 06/15/2017 +ms.date: 11/28/2017 ms.author: rclaus --- # Oracle solutions and their deployment on Microsoft Azure -This article covers information required to succesfully deploy various Oracle solutions on Microsoft Azure. These solutions are based on Virtual Machine images published by Oracle in the Azure Marketplace. To get a list of currently available images, run the following command: +This article covers information required to successfully deploy various Oracle solutions on Microsoft Azure. These solutions are based on Virtual Machine images published by Oracle in the Azure Marketplace. 
To get a list of currently available images, run the following command: ```azurecli-interactive az vm image list --publisher oracle -o table --all ``` @@ -38,35 +38,35 @@ Oracle-Linux Oracle 7.3 Oracle:Oracle-Linux Oracle-WebLogic-Server Oracle Oracle-WebLogic-Server Oracle:Oracle-WebLogic-Server:Oracle-WebLogic-Server:12.1.2 12.1.2 ``` -These images are considered "Bring Your Own License" and as such you will only be charged for compute, storage and networking costs incurred by running a VM. It is assumed you are properly licensed to use Oracle software and that you have a current support agreement in place with Oracle. Oracle has guaranteed license mobility from on-premises to Azure. See the published [Oracle and Microsoft](http://www.oracle.com/technetwork/topics/cloud/faq-1963009.html) note for details on license mobility. +These images are considered "Bring Your Own License" and as such you will only be charged for compute, storage, and networking costs incurred by running a VM. It is assumed you are properly licensed to use Oracle software and that you have a current support agreement in place with Oracle. Oracle has guaranteed license mobility from on-premises to Azure. See the published [Oracle and Microsoft](http://www.oracle.com/technetwork/topics/cloud/faq-1963009.html) note for details on license mobility. -Individuals can also choose to base their solutions on custom images they create from scratch in Azure or upload a custom images from their on premises environments. +Individuals can also choose to base their solutions on a custom image they create from scratch in Azure or upload a custom image from their on premises environment. ## Support for JD Edwards -According to Oracle Support note [Doc ID 2178595.1](https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=573435677515785&id=2178595.1&_afrWindowMode=0&_adf.ctrl-state=o852dw7d_4) , JD Edwards EnterpriseOne verions 9.2 and above are supported on **any public cloud offering** that meets their specific `Minimum Technical Requirements` (MTR). You will need to create custom images that meet their MTR specifications for OS and software application compatability. +According to Oracle Support note [Doc ID 2178595.1](https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=573435677515785&id=2178595.1&_afrWindowMode=0&_adf.ctrl-state=o852dw7d_4) , JD Edwards EnterpriseOne versions 9.2 and above are supported on **any public cloud offering** that meets their specific `Minimum Technical Requirements` (MTR). You need to create custom images that meet their MTR specifications for OS and software application compatibility. ## Oracle Database virtual machine images Oracle supports running Oracle DB 12.1 Standard and Enterprise editions in Azure on virtual machine images based on Oracle Linux. For the best performance for production workloads of Oracle DB on Azure, be sure to properly size the VM image and use Managed Disks that are backed by Premium Storage. For instructions on how to quickly get an Oracle DB up and running in Azure using the Oracle published VM image, [try the Oracle DB Quickstart walkthrough](oracle-database-quick-create.md). ### Attached disk configuration options -Attached disks rely on the Azure Blob storage service. Each standard disk is capable of a theoretical maximum of approximately 500 input/output operations per second (IOPS). Our premium disk offering is preferred for high performance database workloads and can achieve up to 5000 IOps per disk. 
While you can use a single disk if that meets your performance needs - you can improve the effective IOPS performance if you use multiple attached disks, spread database data across them, and then use Oracle Automatic Storage Management (ASM). See [Oracle Automatic Storage overview](http://www.oracle.com/technetwork/database/index-100339.html) for more Oracle ASM specific information. For an example of how to install and configure Oracle ASM on a Linux Azure VM - you can try the [Installing and Configuring Oracle Automated Storage Management](configure-oracle-asm.md) tutorial. +Attached disks rely on the Azure Blob storage service. Each standard disk is capable of a theoretical maximum of approximately 500 input/output operations per second (IOPS). Our premium disk offering is preferred for high-performance database workloads and can achieve up to 5000 IOps per disk. While you can use a single disk if that meets your performance needs - you can improve the effective IOPS performance if you use multiple attached disks, spread database data across them, and then use Oracle Automatic Storage Management (ASM). See [Oracle Automatic Storage overview](http://www.oracle.com/technetwork/database/index-100339.html) for more Oracle ASM specific information. For an example of how to install and configure Oracle ASM on a Linux Azure VM - you can try the [Installing and Configuring Oracle Automated Storage Management](configure-oracle-asm.md) tutorial. -### Oracle Realtime Application Cluster (RAC) -Oracle RAC is designed to mitigate the failure of a single node in an on-premises multi-node cluster configuration. It relies on two on-premises technologies which are not native to hyper-scale public cloud environments: network multi-cast and shared disk. There are third party solutions created by companies [such as FlashGrid](https://www.flashgrid.io/oracle-rac-in-azure/) that emulate these technologies if you need to deploy Oracle RAC in Azure. +## Oracle Real Application Cluster (Oracle RAC) +Oracle RAC is designed to mitigate the failure of a single node in an on-premises multi-node cluster configuration. It relies on two on-premises technologies which are not native to hyper-scale public cloud environments: network multi-cast and shared disk. If your database solution requires Oracle RAC in Azure, you need 3rd party software to enable these technologies. A **Microsoft Azure Certified** offering called [FlashGrid Node for Oracle RAC](https://azuremarketplace.microsoft.com/marketplace/apps/flashgrid-inc.flashgrid-racnode?tab=Overview) is available in the Azure Marketplace, published by FlashGrid Inc. For more information on this solution and how it works in Azure, please see the [FlashGrid solution page](https://www.flashgrid.io/oracle-rac-in-azure/). -### High availability and disaster recovery considerations +## High availability and disaster recovery considerations When using Oracle Databases in Azure, you are responsible for implementing a high availability and disaster recovery solution to avoid any downtime. -High availability and disaster recovery for Oracle Database Enterprise Edition (without RAC) on Azure can be achieved using [Data Guard, Active Data Guard](http://www.oracle.com/technetwork/articles/oem/dataguardoverview-083155.html), or [Oracle Golden Gate](http://www.oracle.com/technetwork/middleware/goldengate), with two databases in two separate virtual machines. 
Both virtual machines should be in the same [virtual network](https://azure.microsoft.com/documentation/services/virtual-network/) to ensure they can access each other over the private persistent IP address. Additionally, we recommend placing the virtual machines in the same availability set to allow Azure to place them into separate fault domains and upgrade domains. Should you want to have geo-redundancy - you can have these two databases replicate between two different regions and connect the two instances with a VPN Gateway. +High availability and disaster recovery for Oracle Database Enterprise Edition (without relying on Oracle RAC) can be achieved on Azure using [Data Guard, Active Data Guard](http://www.oracle.com/technetwork/articles/oem/dataguardoverview-083155.html), or [Oracle Golden Gate](http://www.oracle.com/technetwork/middleware/goldengate), with two databases on two separate virtual machines. Both virtual machines should be in the same [virtual network](https://azure.microsoft.com/documentation/services/virtual-network/) to ensure they can access each other over the private persistent IP address. Additionally, we recommend placing the virtual machines in the same availability set to allow Azure to place them into separate fault domains and upgrade domains. Should you want to have geo-redundancy - you can have these two databases replicate between two different regions and connect the two instances with a VPN Gateway. -We have a tutorial "[Implement Oracle DataGuard on Azure](configure-oracle-dataguard.md)" which walks you through the basic setup procedure to trial this on Azure. +We have a tutorial "[Implement Oracle DataGuard on Azure](configure-oracle-dataguard.md)", which walks you through the basic setup procedure to trial this on Azure. With Oracle Data Guard, high availability can be achieved with a primary database in one virtual machine, a secondary (standby) database in another virtual machine, and one-way replication set up between them. The result is read access to the copy of the database. With Oracle GoldenGate, you can configure bi-directional replication between the two databases. To learn how to set up a high-availability solution for your databases using these tools, see [Active Data Guard](http://www.oracle.com/technetwork/database/features/availability/data-guard-documentation-152848.html) and [GoldenGate](http://docs.oracle.com/goldengate/1212/gg-winux/index.html) documentation at the Oracle website. If you need read-write access to the copy of the database, you can use [Oracle Active Data Guard](http://www.oracle.com/uk/products/database/options/active-data-guard/overview/index.html). -We have a tutorial "[Implement Oracle GoldenGate on Azure](configure-oracle-golden-gate.md)" which walks you through the basic seup procedure to trial this on Azure. +We have a tutorial "[Implement Oracle GoldenGate on Azure](configure-oracle-golden-gate.md)", which walks you through the basic setup procedure to trial this on Azure. -Despite having an HA and DR solution architected in Azure, you will want to ensure you have a backup strategy in place to restore your database. We have a tutorial [Backup and recover an Oracle Database](oracle-backup-recovery.md) which walks you through the basic procedure for establishing a consistant backup. +Despite having an HA and DR solution architected in Azure, you want to ensure you have a backup strategy in place to restore your database. 
We have a tutorial [Backup and recover an Oracle Database](oracle-backup-recovery.md) which walks you through the basic procedure for establishing a consistent backup. ## Oracle WebLogic Server virtual machine images * **Clustering is supported on Enterprise Edition only.** You are licensed to use WebLogic clustering only when using the Enterprise Edition of WebLogic Server. Do not use clustering with WebLogic Server Standard Edition. @@ -77,7 +77,7 @@ Despite having an HA and DR solution architected in Azure, you will want to ensu Bootstrap to: example.cloudapp.net/138.91.142.178:7006' over: 't3' got an error or timed out] - This is because for any remote T3 access, WebLogic Server expects the load balancer port and the WebLogic managed server port to be the same. In the above case, the client is accessing port 7006 (the load balancer port) and the managed server is listening on 7008 (the private port). This restriction is applicable only for T3 access, not HTTP. + This is because for any remote T3 access, WebLogic Server expects the load balancer port and the WebLogic managed server port to be the same. In the preceding case, the client is accessing port 7006 (the load balancer port) and the managed server is listening on 7008 (the private port). This restriction is applicable only for T3 access, not HTTP. To avoid this issue, use one of the following workarounds: @@ -88,9 +88,9 @@ Despite having an HA and DR solution architected in Azure, you will want to ensu For related information, see KB article **860340.1** at . -* **Dynamic clustering and load balancing limitations.** Suppose you want to use a dynamic cluster in WebLogic Server and expose it through a single, public load-balanced endpoint in Azure. This can be done as long as you use a fixed port number for each of the managed servers (not dynamically assigned from a range) and do not start more managed servers than there are machines the administrator is tracking (that is, no more than one managed server per virtual machine). If your configuration results in more WebLogic servers being started than there are virtual machines (that is, where multiple WebLogic Server instances share the same virtual machine), then it is not possible for more than one of those instances of WebLogic Server servers to bind to a given port number – the others on that virtual machine will fail. +* **Dynamic clustering and load balancing limitations.** Suppose you want to use a dynamic cluster in WebLogic Server and expose it through a single, public load-balanced endpoint in Azure. This can be done as long as you use a fixed port number for each of the managed servers (not dynamically assigned from a range) and do not start more managed servers than there are machines the administrator is tracking (that is, no more than one managed server per virtual machine). If your configuration results in more WebLogic servers being started than there are virtual machines (that is, where multiple WebLogic Server instances share the same virtual machine), then it is not possible for more than one of those instances of WebLogic servers to bind to a given port number – the others on that virtual machine fail. - On the other hand, if you configure the admin server to automatically assign unique port numbers to its managed servers, then load balancing is not possible because Azure does not support mapping from a single public port to multiple private ports, as would be required for this configuration. 
+ If you configure the admin server to automatically assign unique port numbers to its managed servers, then load balancing is not possible because Azure does not support mapping from a single public port to multiple private ports, as would be required for this configuration. * **Multiple instances of Weblogic Server on a virtual machine.** Depending on your deployment’s requirements, you might consider the option of running multiple instances of WebLogic Server on the same virtual machine, if the virtual machine is large enough. For example, on a medium size virtual machine, which contains two cores, you could choose to run two instances of WebLogic Server. Note however that we still recommend that you avoid introducing single points of failure into your architecture, which would be the case if you used just one virtual machine that is running multiple instances of WebLogic Server. Using at least two virtual machines could be a better approach, and each of those virtual machines could then run multiple instances of WebLogic Server. Each of these instances of WebLogic Server could still be part of the same cluster. Note, however, it is currently not possible to use Azure to load-balance endpoints that are exposed by such WebLogic Server deployments within the same virtual machine, because Azure load balancer requires the load-balanced servers to be distributed among unique virtual machines. ## Oracle JDK virtual machine images @@ -100,5 +100,5 @@ For related information, see KB article **860340.1** at striped with LVM or MDADM | /hana/shared | /root volume | /usr/sap | hana/backup | +| --- | --- | --- | --- | --- | --- | -- | +| E16v3 | 128GB | 2 x P20 | 1 x S20 | 1 x S6 | 1 x S6 | 1 x S10 | +| E32v3 | 256GB | 2 x P20 | 1 x S20 | 1 x S6 | 1 x S6 | 1 x S20 | +| E64v3 | 443GB | 2 x P20 | 1 x S20 | 1 x S6 | 1 x S6 | 1 x S30 | +| GS5 | 448 GB | 2 x P20 | 1 x S20 | 1 x S6 | 1 x S6 | 1 x S30 | +| M64s | 1TB | 2 x P30 | 1 x S30 | 1 x S6 | 1 x S6 |2 x S30 | +| M64ms | 1.7TB | 3 x P30 | 1 x S30 | 1 x S6 | 1 x S6 | 3 x S30 | +| M128s | 2TB | 3 x P30 | 1 x S30 | 1 x S6 | 1 x S6 | 3 x S30 | +| M128ms | 3.8TB | 5 x P30 | 1 x S30 | 1 x S6 | 1 x S6 | 5 x S30 | + + +### Azure networking +Assuming that you have a VPN or ExpressRoute site-to-site connectivity into Azure, you at minimum would have one [Azure VNet](https://docs.microsoft.com/azure/virtual-network/virtual-networks-overview) that is connected through a Virtual Gateway to the VPN or ExpressRoute circuit. The Virtual Gateway lives in a subnet in the Azure Vnet. In order to install HANA, you would create another two subnets within the VNet. One subnet that hosts the VM(s) that run the SAP HANA instance(s) and another subnet that runs eventual Jumpbox or Management VM(s) that can host SAP HANA Studio or other management software. +When you install the VMs that should run HANA, the VMs should have: + +- Two virtual NICs installed of which one connects to the management subnet and one NIC is used to connect from either on premise or other networks to the SAP HANA instance in the Azure VM. +- Static private IP addresses deployed for both vNICs + +An overview of the different possibilities of IP address assignment can be found [here](https://docs.microsoft.com/azure/virtual-network/virtual-network-ip-addresses-overview-arm). 
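As a rough illustration of that layout, the following PowerShell sketch creates the two HANA-related subnets and a pair of NICs with static private IP addresses for a HANA VM. It is only a minimal example using the AzureRM cmdlets; the resource group, names, and address ranges are placeholder assumptions and not values prescribed by this guide.

```powershell
# Minimal sketch (AzureRM module); resource group, names, and address ranges are
# placeholder assumptions - adjust them to your own network design.
$rg       = "hana-demo-rg"
$location = "westeurope"

# One subnet for the SAP HANA VM(s), one for the jumpbox/management VM(s)
$hanaSubnet = New-AzureRmVirtualNetworkSubnetConfig -Name "hana-subnet" -AddressPrefix "10.1.1.0/24"
$mgmtSubnet = New-AzureRmVirtualNetworkSubnetConfig -Name "mgmt-subnet" -AddressPrefix "10.1.2.0/24"

$vnet = New-AzureRmVirtualNetwork -Name "hana-vnet" -ResourceGroupName $rg -Location $location `
        -AddressPrefix "10.1.0.0/16" -Subnet $hanaSubnet,$mgmtSubnet

# Two NICs for the HANA VM, each with a static private IP address
$hanaNic = New-AzureRmNetworkInterface -Name "hana-vm-nic" -ResourceGroupName $rg -Location $location `
           -SubnetId $vnet.Subnets[0].Id -PrivateIpAddress "10.1.1.10"
$mgmtNic = New-AzureRmNetworkInterface -Name "hana-vm-mgmt-nic" -ResourceGroupName $rg -Location $location `
           -SubnetId $vnet.Subnets[1].Id -PrivateIpAddress "10.1.2.10"
```

Specifying -PrivateIpAddress sets the allocation method for each NIC to static, which matches the recommendation above.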
+
+Traffic routed either directly to the SAP HANA instance or to the jumpbox is controlled by [Azure Network Security Groups](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-nsg) that are associated with the HANA subnet and the Management subnet.
+
+Overall, the rough deployment schema looks like:
+
+![Rough deployment schema for SAP HANA](media/hana-vm-operations/hana-simple-networking.PNG)
+
+
+If you deploy SAP HANA in Azure without site-to-site connectivity (VPN or ExpressRoute into Azure), you access the SAP HANA instance through a public IP address that is assigned to the Azure VM that runs as your jumpbox. In the simple case, you also rely on the Azure built-in DNS services to resolve hostnames. Especially when using public-facing IP addresses, you want to use Azure Network Security Groups to limit the open ports or IP address ranges that are allowed to connect into the Azure subnets running assets with public-facing IP addresses. The schema of such a deployment could look like:
+
+![Rough deployment schema for SAP HANA without site-to-site connection](media/hana-vm-operations/hana-simple-networking2.PNG)
+
+
+
+## Operations
+### Backup and Restore operations on Azure VMs
+The possibilities for SAP HANA backup and restore are documented in these articles:
+
+- [SAP HANA Backup overview](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-backup-guide)
+- [SAP HANA file level backup](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-backup-file-level)
+- [SAP HANA Storage snapshot benchmark](https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-backup-storage-snapshots)
+
+
+
+### Start and restart of VMs containing SAP HANA
+One of the strengths of the Azure public cloud is that you are only charged for the compute minutes you use. If you shut down a VM with SAP HANA running in it, only the costs for storage are billed during that time. When you start the VM with SAP HANA again, it starts up with the same IP addresses (if you deployed it with static IP addresses).
+
+
+### SAPRouter enabling SAP remote support
+If you have a site-to-site connection between your on-premises location(s) and Azure and you already run SAP components, it is highly likely that you already run SAProuter. In this case, there is nothing you need to do for the SAP HANA instances you deploy in Azure, except to maintain the private and static IP address of the VM that hosts HANA in the SAProuter configuration and to adapt the NSG of the subnet hosting the HANA VM so that traffic through TCP port 3299 is allowed.
+
+If you deploy SAP HANA and connect to Azure through the Internet, and you don't have an SAProuter installed in the VNet that runs a VM with SAP HANA, you should install SAProuter in a separate VM in the Management subnet, as shown here:
+
+
+![Rough deployment schema for SAP HANA without site-to-site connection and SAPRouter](media/hana-vm-operations/hana-simple-networking3.PNG)
+
+You should install SAProuter in a separate VM and not in your jumpbox VM. The separate VM needs a static IP address. To connect your SAProuter to the SAProuter that SAP hosts (the counterpart of the SAProuter instance you install in the VM), you need to contact SAP to get an IP address from SAP that you use to configure your SAProuter instance. The only port necessary is TCP port 3299.
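To illustrate the port requirement, the following sketch adds an inbound rule for TCP 3299 to an existing network security group by using the AzureRM cmdlets. The NSG name, resource group, and priority are placeholder assumptions; in particular, replace the source prefix with the address that SAP provides for the support connection.

```powershell
# Minimal sketch (AzureRM module); NSG name, resource group, priority, and the
# source address are placeholder assumptions.
$nsg = Get-AzureRmNetworkSecurityGroup -Name "hana-subnet-nsg" -ResourceGroupName "hana-demo-rg"

# The SAProuter support connection only needs TCP port 3299.
# Replace $sapSourcePrefix with the address SAP provides for the connection.
$sapSourcePrefix = "<IP address provided by SAP>"

Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "allow-saprouter-3299" `
    -Description "SAP remote support via SAProuter" -Access Allow -Protocol Tcp -Direction Inbound `
    -Priority 300 -SourceAddressPrefix $sapSourcePrefix -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 3299

# Persist the rule change on the NSG
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg
```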
+For more information on how to setup and maintain Remote Support connections through SAPRouter, check this [SAP source](https://support.sap.com/en/tools/connectivity-tools/remote-support.html). + +### High-Availability with SAP HANA on Azure native VMs +Running SUSE Linux 12 SP1 and more recent you can establish a Pacemaker cluster with STONITH devices to set up an SAP HANA configuration that uses synchronous replication with HANA System Replication and automatic failover. The procedure of setup is described in the article [High Availability of SAP HANA on Azure Virtual Machines](https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-high-availability). + + + + + + + + + + + + diff --git a/articles/virtual-machines/workloads/sap/media/hana-vm-operations/hana-simple-networking.PNG b/articles/virtual-machines/workloads/sap/media/hana-vm-operations/hana-simple-networking.PNG new file mode 100644 index 0000000000000..db9c5db5c2c37 Binary files /dev/null and b/articles/virtual-machines/workloads/sap/media/hana-vm-operations/hana-simple-networking.PNG differ diff --git a/articles/virtual-machines/workloads/sap/media/hana-vm-operations/hana-simple-networking2.PNG b/articles/virtual-machines/workloads/sap/media/hana-vm-operations/hana-simple-networking2.PNG new file mode 100644 index 0000000000000..74ba4d1515051 Binary files /dev/null and b/articles/virtual-machines/workloads/sap/media/hana-vm-operations/hana-simple-networking2.PNG differ diff --git a/articles/virtual-machines/workloads/sap/media/hana-vm-operations/hana-simple-networking3.PNG b/articles/virtual-machines/workloads/sap/media/hana-vm-operations/hana-simple-networking3.PNG new file mode 100644 index 0000000000000..efdec22daf844 Binary files /dev/null and b/articles/virtual-machines/workloads/sap/media/hana-vm-operations/hana-simple-networking3.PNG differ diff --git a/articles/virtual-machines/workloads/sap/toc.md b/articles/virtual-machines/workloads/sap/toc.md index bc1c880907b86..2a8727eaa6220 100644 --- a/articles/virtual-machines/workloads/sap/toc.md +++ b/articles/virtual-machines/workloads/sap/toc.md @@ -11,8 +11,9 @@ ### [HA Setup with STONITH](ha-setup-with-stonith.md) ### [OS Backup for Type II SKUs](os-backup-type-ii-skus.md) # SAP HANA on Azure Virtual Machines -## [Single instance SAP HANA](hana-get-started.md) +## [Single instance SAP HANA installation](hana-get-started.md) ## [S/4 HANA or BW/4 HANA SAP CAL deployment guide](cal-s4h.md) +## [SAP HANA on Azure operations guide](hana-vm-operations.md) ## [SAP HANA High Availability in Azure VMs](sap-hana-high-availability.md) ## [SAP HANA backup overview](sap-hana-backup-guide.md) ## [SAP HANA file level backup](sap-hana-backup-file-level.md) diff --git a/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md b/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md index 0bc15d9805517..4c7f73ea5e7df 100644 --- a/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md +++ b/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: get-started-article ms.tgt_pltfrm: na ms.workload: infrastructure-services -ms.date: 11/07/2017 +ms.date: 11/28/2017 ms.author: yushwang;cherylmc --- @@ -56,7 +56,7 @@ To help configure your VPN device, refer to the links that correspond to appropr | Cisco |ISR |PolicyBased: IOS 15.0
    RouteBased*: IOS 15.1 |[Configuration samples](https://github.com/Azure/Azure-vpn-config-samples/tree/master/Cisco/Current/ISR) |[Configuration samples**](https://github.com/Azure/Azure-vpn-config-samples/tree/master/Cisco/Current/ISR) | | Citrix |NetScaler MPX, SDX, VPX |10.1 and above |[Configuration guide](https://docs.citrix.com/en-us/netscaler/11-1/system/cloudbridge-connector-introduction/cloudbridge-connector-azure.html) |Not compatible | | F5 |BIG-IP series |12.0 |[Configuration guide](https://devcentral.f5.com/articles/connecting-to-windows-azure-with-the-big-ip) |[Configuration guide](https://devcentral.f5.com/articles/big-ip-to-azure-dynamic-ipsec-tunneling) | -| Fortinet |FortiGate |FortiOS 5.4.2 | |[Configuration guide](http://cookbook.fortinet.com/ipsec-vpn-microsoft-azure-54) | +| Fortinet |FortiGate |FortiOS 5.6 | |[Configuration guide](http://cookbook.fortinet.com/ipsec-vpn-microsoft-azure-56/) | | Internet Initiative Japan (IIJ) |SEIL Series |SEIL/X 4.60
    SEIL/B1 4.60
    SEIL/x86 3.20 |[Configuration guide](http://www.iij.ad.jp/biz/seil/ConfigAzureSEILVPN.pdf) |Not compatible | | Juniper |SRX |PolicyBased: JunOS 10.2
    Routebased: JunOS 11.4 |[Configuration samples](https://github.com/Azure/Azure-vpn-config-samples/tree/master/Juniper/Current/SRX) |[Configuration samples](https://github.com/Azure/Azure-vpn-config-samples/tree/master/Juniper/Current/SRX) | | Juniper |J-Series |PolicyBased: JunOS 10.4r9
    RouteBased: JunOS 11.4 |[Configuration samples](https://github.com/Azure/Azure-vpn-config-samples/tree/master/Juniper/Current/JSeries) |[Configuration samples](https://github.com/Azure/Azure-vpn-config-samples/tree/master/Juniper/Current/JSeries) | diff --git a/articles/vpn-gateway/vpn-gateway-connect-different-deployment-models-portal.md b/articles/vpn-gateway/vpn-gateway-connect-different-deployment-models-portal.md index aad8c08decb96..7e9fc69f2593b 100644 --- a/articles/vpn-gateway/vpn-gateway-connect-different-deployment-models-portal.md +++ b/articles/vpn-gateway/vpn-gateway-connect-different-deployment-models-portal.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: na ms.workload: infrastructure-services -ms.date: 10/23/2017 +ms.date: 11/27/2017 ms.author: cherylmc --- @@ -46,7 +46,9 @@ You can use these values to create a test environment, or refer to them to bette VNet name = ClassicVNet
    Address space = 10.0.0.0/24
    -Subnet-1 = 10.0.0.0/27
    +Subnet name = Subnet-1
    +Subnet address range = 10.0.0.0/27
    +Subscription = the subscription you want to use
    Resource Group = ClassicRG
    Location = West US
    GatewaySubnet = 10.0.0.32/28
    @@ -56,18 +58,22 @@ Local site = RMVNetLocal
    VNet name = RMVNet
    Address space = 192.168.0.0/16
    -Subnet-1 = 192.168.1.0/24
    -GatewaySubnet = 192.168.0.0/26
    Resource Group = RG1
    Location = East US
    +Subnet name = Subnet-1
    +Address range = 192.168.1.0/24
    +GatewaySubnet = 192.168.0.0/26
    Virtual network gateway name = RMGateway
    Gateway type = VPN
    VPN type = Route-based
    -Gateway Public IP address name = rmgwpip
    +SKU = VpnGw1
    +Location = East US
    +Virtual network = RMVNet
    (associate the VPN gateway to this VNet) +First IP configuration = rmgwpip
    (gateway public IP address) Local network gateway = ClassicVNetLocal
    Connection name = RMtoClassic -### Connection overview +### Connection overview For this configuration, you create a VPN gateway connection over an IPsec/IKE VPN tunnel between the virtual networks. Make sure that none of your VNet ranges overlap with each other, or with any of the local networks that they connect to. @@ -80,83 +86,105 @@ The following table shows an example of how the example VNets and local sites ar ## Section 1 - Configure the classic VNet settings -In this section, you create the local network (local site) and the virtual network gateway for your classic VNet. If you don't have a classic VNet and are running these steps as an exercise, you can create a VNet by using [this article](../virtual-network/virtual-networks-create-vnet-classic-pportal.md) and the [Example](#values) settings values from above. - -When using the portal to create a classic virtual network, you must navigate to the virtual network page by using the following steps, otherwise the option to create a classic virtual network does not appear: +In this section, you create the classic VNet, the local network (local site), and the virtual network gateway. Screenshots are provided as examples. Be sure to replace the values with your own, or use the [Example](#values) values. -1. Click the '+' to open the 'New' page. -2. In the 'Search the marketplace' field, type 'Virtual Network'. If you instead, select Networking -> Virtual Network, you will not get the option to create a classic VNet. -3. Locate 'Virtual Network' from the returned list and click it to open the Virtual Network page. -4. On the virtual network page, select 'Classic' to create a classic VNet. +### 1. Create a classic VNet -If you already have a VNet with a VPN gateway, verify that the gateway is Dynamic. If it's Static, you must first delete the VPN gateway, then proceed. +If you don't have a classic VNet and are running these steps as an exercise, you can create a VNet by using [this article](../virtual-network/virtual-networks-create-vnet-classic-pportal.md) and the [Example](#values) settings values from above. -Screenshots are provided as examples. Be sure to replace the values with your own, or use the [Example](#values) values. +If you already have a VNet with a VPN gateway, verify that the gateway is Dynamic. If it's Static, you must first delete the VPN gateway before you proceed to [Configure the local site](#local). -### 1. Configure the local site +1. Open the [Azure portal](https://ms.portal.azure.com) and sign in with your Azure account. +2. Click **+ Create a resource** to open the 'New' page. +3. In the 'Search the marketplace' field, type 'Virtual Network'. If you instead, select Networking -> Virtual Network, you will not get the option to create a classic VNet. +4. Locate 'Virtual Network' from the returned list and click it to open the Virtual Network page. +5. On the virtual network page, select 'Classic' to create a classic VNet. If you take the default here, you will wind up with a Resource Manager VNet instead. -Open the [Azure portal](https://ms.portal.azure.com) and sign in with your Azure account. +### 2. Configure the local site 1. Navigate to **All resources** and locate the **ClassicVNet** in the list. -2. On the **Overview** page, in the **VPN connections** section, click the **Gateway** graphic to create a gateway. - - ![Configure a VPN gateway](./media/vpn-gateway-connect-different-deployment-models-portal/gatewaygraphic.png "Configure a VPN gateway") +2. 
On the **Overview** page, in the **VPN connections** section, click **Gateway** to create a gateway. + ![Configure a VPN gateway](./media/vpn-gateway-connect-different-deployment-models-portal/gatewaygraphic.png "Configure a VPN gateway") 3. On the **New VPN Connection** page, for **Connection type**, select **Site-to-site**. 4. For **Local site**, click **Configure required settings**. This opens the **Local site** page. 5. On the **Local site** page, create a name to refer to the Resource Manager VNet. For example, 'RMVNetLocal'. 6. If the VPN gateway for the Resource Manager VNet already has a Public IP address, use the value for the **VPN gateway IP address** field. If you are doing these steps as an exercise, or don't yet have a virtual network gateway for your Resource Manager VNet, you can make up a placeholder IP address. Make sure that the placeholder IP address uses a valid format. Later, you replace the placeholder IP address with the Public IP address of the Resource Manager virtual network gateway. -7. For **Client Address Space**, use the values for the virtual network IP address spaces for the Resource Manager VNet. This setting is used to specify the address spaces to route to the Resource Manager virtual network. +7. For **Client Address Space**, use the [values](#connectoverview) for the virtual network IP address spaces for the Resource Manager VNet. This setting is used to specify the address spaces to route to the Resource Manager virtual network. In the example, we use 192.168.0.0/16, the address range for the RMVNet. 8. Click **OK** to save the values and return to the **New VPN Connection** page. -### 2. Create the virtual network gateway +### 3. Create the virtual network gateway -1. On the **New VPN Connection** page, select the **Create gateway immediately** checkbox and click **Optional gateway configuration** to open the **Gateway configuration** page. +1. On the **New VPN Connection** page, select the **Create gateway immediately** checkbox. +2. Click **Optional gateway configuration** to open the **Gateway configuration** page. - ![Open gateway configuration page](./media/vpn-gateway-connect-different-deployment-models-portal/optionalgatewayconfiguration.png "Open gateway configuration page") -2. Click **Subnet - Configure required settings** to open the **Add subnet** page. The **Name** is already configured with the required value **GatewaySubnet**. -3. The **Address range** refers to the range for the gateway subnet. Although you can create a gateway subnet with a /29 address range (3 addresses), we recommend creating a gateway subnet that contains more IP addresses. This will accommodate future configurations that may require more available IP addresses. If possible, use /27 or /28. If you are using these steps as an exercise, you can refer to the [Example](#values) values. Click **OK** to create the gateway subnet. -4. On the **Gateway configuration** page, **Size** refers to the gateway SKU. Select the gateway SKU for your VPN gateway. -5. Verify the **Routing Type** is **Dynamic**, then click **OK** to return to the **New VPN Connection** page. -6. On the **New VPN Connection** page, click **OK** to begin creating your VPN gateway. Creating a VPN gateway can take up to 45 minutes to complete. + ![Open gateway configuration page](./media/vpn-gateway-connect-different-deployment-models-portal/optionalgatewayconfiguration.png "Open gateway configuration page") +3. Click **Subnet - Configure required settings** to open the **Add subnet** page. 
The **Name** is already configured with the required value: **GatewaySubnet**. +4. The **Address range** refers to the range for the gateway subnet. Although you can create a gateway subnet with a /29 address range (3 addresses), we recommend creating a gateway subnet that contains more IP addresses. This will accommodate future configurations that may require more available IP addresses. If possible, use /27 or /28. If you are using these steps as an exercise, you can refer to the [Example values](#values). For this example, we use '10.0.0.32/28'. Click **OK** to create the gateway subnet. +5. On the **Gateway configuration** page, **Size** refers to the gateway SKU. Select the gateway SKU for your VPN gateway. +6. Verify the **Routing Type** is **Dynamic**, then click **OK** to return to the **New VPN Connection** page. +7. On the **New VPN Connection** page, click **OK** to begin creating your VPN gateway. Creating a VPN gateway can take up to 45 minutes to complete. -### 3. Copy the virtual network gateway Public IP address +### 4. Copy the virtual network gateway Public IP address After the virtual network gateway has been created, you can view the gateway IP address. 1. Navigate to your classic VNet, and click **Overview**. -2. Click **VPN connections** to open the VPN connections page. On the VPN connections page, you can view the Public IP address. This is the Public IP address assigned to your virtual network gateway. -3. Write down or copy the IP address. You use it in later steps when you work with your Resource Manager local network gateway configuration settings. You can also view the status of your gateway connections. Notice the local network site you created is listed as 'Connecting'. The status will change after you have created your connections. -4. Close the page after copying the gateway IP address. +2. Click **VPN connections** to open the VPN connections page. On the VPN connections page, you can view the Public IP address. This is the Public IP address assigned to your virtual network gateway. Make a note of the IP address. You use it in later steps when you work with your Resource Manager local network gateway configuration settings. +3. You can view the status of your gateway connections. Notice the local network site you created is listed as 'Connecting'. The status will change after you have created your connections. You can close this page when you are finished viewing the status. ## Section 2 - Configure the Resource Manager VNet settings -In this section, you create the virtual network gateway and the local network gateway for your Resource Manager VNet. If you don't have a Resource Manager VNet and are running these steps as an exercise, you can create a VNet by using [this article](../virtual-network/virtual-networks-create-vnet-arm-pportal.md) and the [Example](#values) settings values from above. +In this section, you create the virtual network gateway and the local network gateway for your Resource Manager VNet. Screenshots are provided as examples. Be sure to replace the values with your own, or use the [Example](#values) values. + +### 1. Create a virtual network + +**Example values:** -Screenshots are provided as examples. Be sure to replace the values with your own, or use the [Example](#values) values. +* VNet name = RMVNet
    +* Address space = 192.168.0.0/16
    +* Resource Group = RG1
    +* Location = East US
    +* Subnet name = Subnet-1
    +* Address range = 192.168.1.0/24
    -### 1. Create a gateway subnet -Before creating a virtual network gateway, you first need to create the gateway subnet. Create a gateway subnet with CIDR count of /28 or larger. (/27, /26, etc.) +If you don't have a Resource Manager VNet and are running these steps as an exercise, you can create a VNet by using [this article](../virtual-network/virtual-networks-create-vnet-arm-pportal.md) and the Example values. + +### 2. Create a gateway subnet + +**Example value:** GatewaySubnet = 192.168.0.0/26 + +Before creating a virtual network gateway, you first need to create the gateway subnet. Create a gateway subnet with CIDR count of /28 or larger (/27, /26, etc.). If you are creating this as part of an exercise, you can use the Example values. [!INCLUDE [vpn-gateway-no-nsg-include](../../includes/vpn-gateway-no-nsg-include.md)] [!INCLUDE [vpn-gateway-add-gwsubnet-rm-portal](../../includes/vpn-gateway-add-gwsubnet-rm-portal-include.md)] -### 2. Create a virtual network gateway +### 3. Create a virtual network gateway -[!INCLUDE [vpn-gateway-add-gw-rm-portal](../../includes/vpn-gateway-add-gw-rm-portal-include.md)] +**Example values:** -### 3. Create a local network gateway +* Virtual network gateway name = RMGateway
    +* Gateway type = VPN
    +* VPN type = Route-based
    +* SKU = VpnGw1
    +* Location = East US
    +* Virtual network = RMVNet
    +* First IP configuration = rmgwpip
    -The local network gateway specifies the address range and the Public IP address associated with your classic VNet and its virtual network gateway. +[!INCLUDE [vpn-gateway-add-gw-rm-portal](../../includes/vpn-gateway-add-gw-rm-portal-include.md)] + +### 4. Create a local network gateway -If you are doing these steps as an exercise, refer to these settings: +**Example values:** Local network gateway = ClassicVNetLocal | Virtual Network | Address Space | Region | Connects to local network site |Gateway Public IP address| |:--- |:--- |:--- |:--- |:--- | | ClassicVNet |(10.0.0.0/24) |West US | RMVNetLocal (192.168.0.0/16) |The Public IP address that is assigned to the ClassicVNet gateway| | RMVNet | (192.168.0.0/16) |East US |ClassicVNetLocal (10.0.0.0/24) |The Public IP address that is assigned to the RMVNet gateway.| +The local network gateway specifies the address range and the Public IP address associated with your classic VNet and its virtual network gateway. If you are doing these steps as an exercise, refer to the Example values. + [!INCLUDE [vpn-gateway-add-lng-rm-portal](../../includes/vpn-gateway-add-lng-rm-portal-include.md)] ## Section 3 - Modify the classic VNet local site settings diff --git a/articles/vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md b/articles/vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md index 483ba05e68681..48fb3e4768285 100644 --- a/articles/vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md +++ b/articles/vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md @@ -32,8 +32,10 @@ This article shows you how to create a VNet with a Point-to-Site connection in t A Point-to-Site (P2S) VPN gateway lets you create a secure connection to your virtual network from an individual client computer. Point-to-Site VPN connections are useful when you want to connect to your VNet from a remote location, such when you are telecommuting from home or a conference. A P2S VPN is also a useful solution to use instead of a Site-to-Site VPN when you have only a few clients that need to connect to a VNet. A P2S VPN connection is established by starting it from the client computer. -The classic deployment model supports Windows VPN clients only and uses the Secure Socket Tunneling Protocol (SSTP), an SSL-based VPN protocol. In order to support non-Windows VPN clients, your VNet must be created using the Resource Manager deployment model. The Resource Manager deployment model supports IKEv2 VPN, in addition to SSTP. For more information, see [About P2S connections](point-to-site-about.md). - +> [!IMPORTANT] +> The classic deployment model supports Windows VPN clients only and uses the Secure Socket Tunneling Protocol (SSTP), an SSL-based VPN protocol. In order to support non-Windows VPN clients, your VNet must be created using the Resource Manager deployment model. The Resource Manager deployment model supports IKEv2 VPN in addition to SSTP. For more information, see [About P2S connections](point-to-site-about.md). 
+> +> ![Point-to-Site-diagram](./media/vpn-gateway-howto-point-to-site-classic-azure-portal/point-to-site-connection-diagram.png) diff --git a/articles/vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md b/articles/vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md index 3e3e3108c9047..bc5ac99c0d0bc 100644 --- a/articles/vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md +++ b/articles/vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: hero-article ms.tgt_pltfrm: na ms.workload: infrastructure-services -ms.date: 09/25/2017 +ms.date: 11/29/2017 ms.author: cherylmc --- @@ -42,7 +42,7 @@ This article helps you configure a P2S configuration with authentication using t Point-to-Site connections do not require a VPN device or a public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol), or IKEv2. -* SSTP is an SSL-based VPN tunnel that is supported only on Windows client platforms. It can penetrate firewalls, which makes it an ideal option to connect to Azure from anywhere. On the server side, we support SSTP versions 1.0, 1.1, and 1.2. The client decides which version to use. For Windows 8.1 and above, SSTP uses 1.2 by default. +* SSTP is an SSL-based VPN tunnel that is supported only on Windows client platforms. It can penetrate firewalls, which makes it an ideal option to connect to Azure from anywhere. On the server side, SSTP versions 1.0, 1.1, and 1.2 are supported. The client decides which version to use. For Windows 8.1 and above, SSTP uses 1.2 by default. * IKEv2 VPN, a standards-based IPsec VPN solution. IKEv2 VPN can be used to connect from Mac devices (OSX versions 10.11 and above). IKEv2 is currently in Preview. @@ -66,10 +66,10 @@ For more information about Point-to-Site connections, see [About Point-to-Site c ### Example values -You can use the example values to create a test environment, or refer to these values to better understand the examples in this article. We set the variables in section [1](#declare) of the article. You can either use the steps as a walk-through and use the values without changing them, or change them to reflect your environment. +You can use the example values to create a test environment, or refer to these values to better understand the examples in this article. The variables are set in section [1](#declare) of the article. You can either use the steps as a walk-through and use the values without changing them, or change them to reflect your environment. * **Name: VNet1** -* **Address space: 192.168.0.0/16** and **10.254.0.0/16**
    For this example, we use more than one address space to illustrate that this configuration works with multiple address spaces. However, multiple address spaces are not required for this configuration. +* **Address space: 192.168.0.0/16** and **10.254.0.0/16**
    This example uses more than one address space to illustrate that this configuration works with multiple address spaces. However, multiple address spaces are not required for this configuration. * **Subnet name: FrontEnd** * **Subnet address range: 192.168.1.0/24** * **Subnet name: BackEnd** @@ -140,7 +140,7 @@ In this section, you log in and declare the values used for this configuration. ``` 3. Create the virtual network. - In this example, the -DnsServer server parameter is optional. Specifying a value does not create a new DNS server. The DNS server IP address that you specify should be a DNS server that can resolve the names for the resources you are connecting to from your VNet. For this example, we used a private IP address, but it is likely that this is not the IP address of your DNS server. Be sure to use your own values. The value you specify is used by the resources that you deploy to the VNet, not by the P2S connection or the VPN client. + In this example, the -DnsServer server parameter is optional. Specifying a value does not create a new DNS server. The DNS server IP address that you specify should be a DNS server that can resolve the names for the resources you are connecting to from your VNet. This example uses a private IP address, but it is likely that this is not the IP address of your DNS server. Be sure to use your own values. The value you specify is used by the resources that you deploy to the VNet, not by the P2S connection or the VPN client. ```powershell New-AzureRmVirtualNetwork -Name $VNetName -ResourceGroupName $RG -Location $Location -AddressPrefix $VNetPrefix1,$VNetPrefix2 -Subnet $fesub, $besub, $gwsub -DnsServer 10.2.1.3 @@ -164,14 +164,14 @@ In this section, you log in and declare the values used for this configuration. Configure and create the virtual network gateway for your VNet. -* The *-GatewayType* must be **Vpn** and the *-VpnType* must be **RouteBased**. -* The -VpnClientProtocols is used to specify the types of tunnels that you would like to enable. The two tunnel options are **SSTP** and **IKEv2**. You can choose to enable one of them or both. If you want to enable both, then specify both the names separated by a comma. The Strongswan client on Android and Linux and the native IKEv2 VPN client on iOS and OSX will use only IKEv2 tunnel to connect. Windows clients try IKEv2 first and if that doesn’t connect, they fall back to SSTP. -* A VPN gateway can take up to 45 minutes to complete, depending on the [gateway sku](vpn-gateway-about-vpn-gateway-settings.md) you select. In this example, we use IKEv2, which is currently available in Preview. +* The -GatewayType must be **Vpn** and the -VpnType must be **RouteBased**. +* The -VpnClientProtocol is used to specify the types of tunnels that you would like to enable. The two tunnel options are **SSTP** and **IKEv2**. You can choose to enable one of them or both. If you want to enable both, then specify both the names separated by a comma. The Strongswan client on Android and Linux and the native IKEv2 VPN client on iOS and OSX will use only IKEv2 tunnel to connect. Windows clients try IKEv2 first and if that doesn’t connect, they fall back to SSTP. +* A VPN gateway can take up to 45 minutes to complete, depending on the [gateway sku](vpn-gateway-about-vpn-gateway-settings.md) you select. This example uses IKEv2, which is currently available in Preview. 
```powershell New-AzureRmVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG ` -Location $Location -IpConfigurations $ipconf -GatewayType Vpn ` --VpnType RouteBased -EnableBgp $false -GatewaySku VpnGw1 -VpnClientProtocols "IKEv2" +-VpnType RouteBased -EnableBgp $false -GatewaySku VpnGw1 -VpnClientProtocol "IKEv2" ``` ## 4. Add the VPN client address pool @@ -316,7 +316,7 @@ This is the most efficient method to upload a root certificate. #### Method 2 -This method is has more steps than Method 1, but has the same result. It is included in case you need to view the certificate data. +This method has more steps than Method 1, but has the same result. It is included in case you need to view the certificate data. 1. Create and prepare the new root certificate to add to Azure. Export the public key as a Base-64 encoded X.509 (.CER) and open it with a text editor. Copy the values, as shown in the following example: @@ -428,4 +428,4 @@ You can reinstate a client certificate by removing the thumbprint from the list [!INCLUDE [Point-to-Site FAQ](../../includes/vpn-gateway-faq-p2s-azurecert-include.md)] ## Next steps -Once your connection is complete, you can add virtual machines to your virtual networks. For more information, see [Virtual Machines](https://docs.microsoft.com/azure/#pivot=services&panel=Compute). To understand more about networking and virtual machines, see [Azure and Linux VM network overview](../virtual-machines/linux/azure-vm-network-overview.md). \ No newline at end of file +Once your connection is complete, you can add virtual machines to your virtual networks. For more information, see [Virtual Machines](https://docs.microsoft.com/azure/#pivot=services&panel=Compute). To understand more about networking and virtual machines, see [Azure and Linux VM network overview](../virtual-machines/linux/azure-vm-network-overview.md). diff --git a/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-cli.md b/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-cli.md index c0d54ac74b693..e9fe15c6c2e6e 100644 --- a/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-cli.md +++ b/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-cli.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: get-started-article ms.tgt_pltfrm: na ms.workload: infrastructure-services -ms.date: 11/27/2017 +ms.date: 11/29/2017 ms.author: cherylmc --- @@ -36,15 +36,23 @@ The steps in this article apply to the Resource Manager deployment model and use ## About connecting VNets -Connecting a virtual network to another virtual network using the VNet-to-VNet connection type (VNet2VNet) is similar to creating an IPsec connection to an on-premises site location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE and both function the same way when communicating. The difference between the connection types is the way the local network gateway is configured. When you create a VNet-to-VNet connection, you do not see the local network gateway address space. It is automatically created and populated. If you update the address space for one VNet, the other VNet will automatically know to route to the updated address space. +There are multiple ways to connect VNets. The sections below describe different ways to connect virtual networks. -If you are working with a complicated configuration, you may prefer to use the IPsec connection type, rather than VNet-to-VNet. This lets you specify additional address space for the local network gateway in order to route traffic. 
If you connect your VNets using the IPsec connection type, you need to create and configure the local network gateway manually. For more information, see [Site-to-Site configurations](vpn-gateway-howto-site-to-site-resource-manager-cli.md). +### VNet-to-VNet -Additionally, if your VNets are in the same region, you may want to consider connecting them using VNet Peering. VNet peering does not use a VPN gateway and the pricing and functionality is somewhat different. For more information, see [VNet peering](../virtual-network/virtual-network-peering-overview.md). +Configuring a VNet-to-VNet connection is a good way to easily connect VNets. Connecting a virtual network to another virtual network using the VNet-to-VNet connection type is similar to creating a Site-to-Site IPsec connection to an on-premises location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE, and both function the same way when communicating. The difference between the connection types is the way the local network gateway is configured. When you create a VNet-to-VNet connection, you do not see the local network gateway address space. It is automatically created and populated. If you update the address space for one VNet, the other VNet automatically knows to route to the updated address space. Creating a VNet-to-VNet connection is typically faster and easier than creating a Site-to-Site connection between VNets. -### Why create a VNet-to-VNet connection? +### Connecting VNets using Site-to-Site (IPsec) steps -You may want to connect virtual networks for the following reasons: +If you are working with a complicated network configuration, you may prefer to connect your VNets using the [Site-to-Site](vpn-gateway-howto-site-to-site-resource-manager-cli.md) steps, instead of the VNet-to-VNet steps. When you use the Site-to-Site steps, you create and configure the local network gateways manually. The local network gateway for each VNet treats the other VNet as a local site. This lets you specify additional address space for the local network gateway in order to route traffic. If the address space for a VNet changes, you need to manually update the corresponding local network gateway to reflect the change. It does not automatically update. + +### VNet peering + +You may want to consider connecting your VNets using VNet Peering. VNet peering does not use a VPN gateway and has different constraints. Additionally, [VNet peering pricing](https://azure.microsoft.com/pricing/details/virtual-network) is calculated differently than [VNet-to-VNet VPN Gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway). For more information, see [VNet peering](../virtual-network/virtual-network-peering-overview.md). + +## Why create a VNet-to-VNet connection? + +You may want to connect virtual networks using a VNet-to-VNet connection for the following reasons: * **Cross region geo-redundancy and geo-presence** @@ -56,9 +64,9 @@ You may want to connect virtual networks for the following reasons: VNet-to-VNet communication can be combined with multi-site configurations. This lets you establish network topologies that combine cross-premises connectivity with inter-virtual network connectivity. -### Which set of steps should I use? +## Which VNet-to-VNet steps should I use? -This article helps you connect VNets using the VNet-to-VNet connection type. In this article, you see two different sets of steps. 
One set of steps for [VNets that reside in the same subscription](#samesub) and one for [VNets that reside in different subscriptions](#difsub). +In this article, you see two different sets of VNet-to-VNet connection steps. One set of steps for [VNets that reside in the same subscription](#samesub) and one for [VNets that reside in different subscriptions](#difsub). For this exercise, you can combine configurations, or just choose the one that you want to work with. All of the configurations use the VNet-to-VNet connection type. Network traffic flows between the VNets that are directly connected to each other. In this exercise, traffic from TestVNet4 does not route to TestVNet5. @@ -97,7 +105,6 @@ We use the following values in the examples: * VPNType: RouteBased * Connection(1to4): VNet1toVNet4 * Connection(1to5): VNet1toVNet5 (For VNets in different subscriptions) -* ConnectionType: VNet2VNet **Values for TestVNet4:** @@ -112,8 +119,6 @@ We use the following values in the examples: * Public IP: VNet4GWIP * VPNType: RouteBased * Connection: VNet4toVNet1 -* ConnectionType: VNet2VNet - ### Step 1 - Connect to your subscription diff --git a/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md b/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md index 581b15fce5455..743f34ec2e109 100644 --- a/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md +++ b/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: hero-article ms.tgt_pltfrm: na ms.workload: infrastructure-services -ms.date: 11/27/2017 +ms.date: 11/29/2017 ms.author: cherylmc --- @@ -38,15 +38,23 @@ The steps in this article apply to the Resource Manager deployment model and use ## About connecting VNets -Connecting a virtual network to another virtual network using the VNet-to-VNet connection type is similar to creating an IPsec connection to an on-premises site location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE and both function the same way when communicating. The difference between the connection types is the way the local network gateway is configured. When you create a VNet-to-VNet connection, you do not see the local network gateway address space. It is automatically created and populated. If you update the address space for one VNet, the other VNet will automatically know to route to the updated address space. +There are multiple ways to connect VNets. The sections below describe different ways to connect virtual networks. -If you are working with a complicated configuration, you may prefer to use the IPsec connection type, rather than VNet-to-VNet. This lets you specify additional address space for the local network gateway in order to route traffic. If you connect your VNets using the IPsec connection type, you need to create and configure the local network gateway manually. For more information, see [Site-to-Site configurations](vpn-gateway-howto-site-to-site-resource-manager-portal.md). +### VNet-to-VNet -Additionally, if your VNets are in the same region, you may want to consider connecting them using VNet Peering. VNet peering does not use a VPN gateway and the pricing and functionality is somewhat different. For more information, see [VNet peering](../virtual-network/virtual-network-peering-overview.md). +Configuring a VNet-to-VNet connection is a good way to easily connect VNets. 
Connecting a virtual network to another virtual network using the VNet-to-VNet connection type (VNet2VNet) is similar to creating a Site-to-Site IPsec connection to an on-premises location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE, and both function the same way when communicating. The difference between the connection types is the way the local network gateway is configured. When you create a VNet-to-VNet connection, you do not see the local network gateway address space. It is automatically created and populated. If you update the address space for one VNet, the other VNet automatically knows to route to the updated address space. Creating a VNet-to-VNet connection is typically faster and easier than creating a Site-to-Site connection between VNets. -### Why create a VNet-to-VNet connection? +### Site-to-Site (IPsec) -You may want to connect virtual networks for the following reasons: +If you are working with a complicated network configuration, you may prefer to connect your VNets using the [Site-to-Site](vpn-gateway-howto-site-to-site-resource-manager-portal.md) steps instead. When you use the Site-to-Site IPsec steps, you create and configure the local network gateways manually. The local network gateway for each VNet treats the other VNet as a local site. This lets you specify additional address space for the local network gateway in order to route traffic. If the address space for a VNet changes, you need to update the corresponding local network gateway to reflect that. It does not automatically update. + +### VNet peering + +You may want to consider connecting your VNets using VNet Peering. VNet peering does not use a VPN gateway and has different constraints. Additionally, [VNet peering pricing](https://azure.microsoft.com/pricing/details/virtual-network) is calculated differently than [VNet-to-VNet VPN Gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway). For more information, see [VNet peering](../virtual-network/virtual-network-peering-overview.md). + +## Why create a VNet-to-VNet connection? + +You may want to connect virtual networks using a VNet-to-VNet connection for the following reasons: * **Cross region geo-redundancy and geo-presence** diff --git a/articles/vpn-gateway/vpn-gateway-modify-local-network-gateway-cli.md b/articles/vpn-gateway/vpn-gateway-modify-local-network-gateway-cli.md index 9aa206b93e6a9..5120a13d0b78c 100644 --- a/articles/vpn-gateway/vpn-gateway-modify-local-network-gateway-cli.md +++ b/articles/vpn-gateway/vpn-gateway-modify-local-network-gateway-cli.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: na ms.workload: infrastructure-services -ms.date: 06/19/2017 +ms.date: 11/29/2017 ms.author: cherylmc --- diff --git a/articles/vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md b/articles/vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md index 784bb89bf15f0..1f0628389771f 100644 --- a/articles/vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md +++ b/articles/vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md @@ -36,15 +36,23 @@ The steps in this article apply to the Resource Manager deployment model and use ## About connecting VNets -Connecting a virtual network to another virtual network using the VNet-to-VNet connection type (VNet2VNet) is similar to creating an IPsec connection to an on-premises site location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE and both function the same way when communicating. 
The difference between the connection types is the way the local network gateway is configured. When you create a VNet-to-VNet connection, you do not see the local network gateway address space. It is automatically created and populated. If you update the address space for one VNet, the other VNet will automatically know to route to the updated address space. +There are multiple ways to connect VNets. The sections below describe different ways to connect virtual networks. -If you are working with a complicated configuration, you may prefer to use the IPsec connection type, rather than VNet-to-VNet. This lets you specify additional address space for the local network gateway in order to route traffic. If you connect your VNets using the IPsec connection type, you need to create and configure the local network gateway manually. For more information, see [Site-to-Site configurations](vpn-gateway-create-site-to-site-rm-powershell.md). +### VNet-to-VNet -Additionally, if your VNets are in the same region, you may want to consider connecting them using VNet Peering. VNet peering does not use a VPN gateway and the pricing and functionality is somewhat different. For more information, see [VNet peering](../virtual-network/virtual-network-peering-overview.md). +Configuring a VNet-to-VNet connection is a good way to easily connect VNets. Connecting a virtual network to another virtual network using the VNet-to-VNet connection type (VNet2VNet) is similar to creating a Site-to-Site IPsec connection to an on-premises location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE, and both function the same way when communicating. The difference between the connection types is the way the local network gateway is configured. When you create a VNet-to-VNet connection, you do not see the local network gateway address space. It is automatically created and populated. If you update the address space for one VNet, the other VNet automatically knows to route to the updated address space. Creating a VNet-to-VNet connection is typically faster and easier than creating a Site-to-Site connection between VNets. -### Why create a VNet-to-VNet connection? +### Site-to-Site (IPsec) -You may want to connect virtual networks for the following reasons: +If you are working with a complicated network configuration, you may prefer to connect your VNets using the [Site-to-Site](vpn-gateway-create-site-to-site-rm-powershell.md) steps, instead of the VNet-to-VNet steps. When you use the Site-to-Site steps, you create and configure the local network gateways manually. The local network gateway for each VNet treats the other VNet as a local site. This lets you specify additional address space for the local network gateway in order to route traffic. If the address space for a VNet changes, you need to update the corresponding local network gateway to reflect the change. It does not automatically update. + +### VNet peering + +You may want to consider connecting your VNets using VNet Peering. VNet peering does not use a VPN gateway and has different constraints. Additionally, [VNet peering pricing](https://azure.microsoft.com/pricing/details/virtual-network) is calculated differently than [VNet-to-VNet VPN Gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway). For more information, see [VNet peering](../virtual-network/virtual-network-peering-overview.md). + +## Why create a VNet-to-VNet connection?
+ +You may want to connect virtual networks using a VNet-to-VNet connection for the following reasons: * **Cross region geo-redundancy and geo-presence** @@ -56,7 +64,7 @@ You may want to connect virtual networks for the following reasons: VNet-to-VNet communication can be combined with multi-site configurations. This lets you establish network topologies that combine cross-premises connectivity with inter-virtual network connectivity. -## Which set of steps should I use? +## Which VNet-to-VNet steps should I use? In this article, you see two different sets of steps. One set of steps for [VNets that reside in the same subscription](#samesub) and one for [VNets that reside in different subscriptions](#difsub). The key difference between the sets is that you must use separate PowerShell sessions when configuring the connections for VNets that reside in different subscriptions. diff --git a/includes/app-service-web-create-app-service-plan-linux-no-h.md b/includes/app-service-web-create-app-service-plan-linux-no-h.md index 51d4400b5e543..b1663328b5bc7 100644 --- a/includes/app-service-web-create-app-service-plan-linux-no-h.md +++ b/includes/app-service-web-create-app-service-plan-linux-no-h.md @@ -1,4 +1,4 @@ -In the Cloud Shell, create an App Service plan in the resource group with the [az appservice plan create](/cli/azure/appservice/plan#create) command. +In the Cloud Shell, create an App Service plan in the resource group with the [az appservice plan create](/cli/azure/appservice/plan#az_appservice_plan_create) command. @@ -26,4 +26,4 @@ When the App Service plan has been created, the Azure CLI shows information simi "type": "Microsoft.Web/serverfarms", "workerTierName": null } -``` \ No newline at end of file +``` diff --git a/includes/app-service-web-create-app-service-plan-no-h.md b/includes/app-service-web-create-app-service-plan-no-h.md index d9217c57c7043..0e6edf4ff7ae8 100644 --- a/includes/app-service-web-create-app-service-plan-no-h.md +++ b/includes/app-service-web-create-app-service-plan-no-h.md @@ -1,4 +1,4 @@ -In the Cloud Shell, create an App Service plan with the [az appservice plan create](/cli/azure/appservice/plan#create) command. +In the Cloud Shell, create an App Service plan with the [az appservice plan create](/cli/azure/appservice/plan#az_appservice_plan_create) command. [!INCLUDE [app-service-plan](app-service-plan.md)] @@ -26,4 +26,4 @@ When the App Service plan has been created, the Azure CLI shows information simi "type": "Microsoft.Web/serverfarms", "workerTierName": null } -``` \ No newline at end of file +``` diff --git a/includes/app-service-web-create-resource-group-no-h.md b/includes/app-service-web-create-resource-group-no-h.md index 9d3ff6bf05157..cfc264c52e640 100644 --- a/includes/app-service-web-create-resource-group-no-h.md +++ b/includes/app-service-web-create-resource-group-no-h.md @@ -1,4 +1,4 @@ -In the Cloud Shell, create a resource group with the [az group create](/cli/azure/group#create) command. +In the Cloud Shell, create a resource group with the [az group create](/cli/azure/group#az_group_create) command. [!INCLUDE [resource group intro text](resource-group.md)] @@ -8,4 +8,4 @@ The following example creates a resource group named *myResourceGroup* in the *W az group create --name myResourceGroup --location "West Europe" ``` -You generally create your resource group and the resources in a region near you. \ No newline at end of file +You generally create your resource group and the resources in a region near you. 
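If you are scripting this step with Azure PowerShell rather than the CLI, the following is a minimal sketch of the same resource group creation, assuming the AzureRM module is installed and you have already signed in; the names reuse the example values shown above.

```powershell
# Sketch only: create the same resource group with AzureRM PowerShell.
# Assumes Login-AzureRmAccount has already been run in this session.
New-AzureRmResourceGroup -Name "myResourceGroup" -Location "West Europe"
```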
diff --git a/includes/app-service-web-create-web-app-no-h.md b/includes/app-service-web-create-web-app-no-h.md index 3b950b10ef6ff..5cd4b7a8410fe 100644 --- a/includes/app-service-web-create-web-app-no-h.md +++ b/includes/app-service-web-create-web-app-no-h.md @@ -1,4 +1,4 @@ -In the Cloud Shell, create a [web app](../articles/app-service/app-service-web-overview.md) in the `myAppServicePlan` App Service plan with the [az webapp create](/cli/azure/webapp#create) command. +In the Cloud Shell, create a [web app](../articles/app-service/app-service-web-overview.md) in the `myAppServicePlan` App Service plan with the [az webapp create](/cli/azure/webapp#az_webapp_create) command. In the following example, replace *\<app_name>* with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). diff --git a/includes/configure-deployment-user-no-h.md b/includes/configure-deployment-user-no-h.md index f3de157d58e4c..dda2c75ddb14c 100644 --- a/includes/configure-deployment-user-no-h.md +++ b/includes/configure-deployment-user-no-h.md @@ -1,4 +1,4 @@ -In the Cloud Shell, create deployment credentials with the [az webapp deployment user set](/cli/azure/webapp/deployment/user#set) command. A deployment user is required for FTP and local Git deployment to a web app. The user name and password are account level. _They are different from your Azure subscription credentials._ +In the Cloud Shell, create deployment credentials with the [az webapp deployment user set](/cli/azure/webapp/deployment/user#az_webapp_deployment_user_set) command. A deployment user is required for FTP and local Git deployment to a web app. The user name and password are account level. _They are different from your Azure subscription credentials._ In the following example, replace *\<username>* and *\<password>* (including brackets) with a new user name and password. The user name must be unique. The password must be at least eight characters long, with two of the following three elements: letters, numbers, symbols. @@ -13,4 +13,4 @@ You create this deployment user only once; you can use it for all your Azure dep > [!NOTE] > Record the user name and password. You need them to deploy the web app later. > -> \ No newline at end of file +> diff --git a/includes/data-factory-quickstart-prerequisites-2.md b/includes/data-factory-quickstart-prerequisites-2.md new file mode 100644 index 0000000000000..5113a21492726 --- /dev/null +++ b/includes/data-factory-quickstart-prerequisites-2.md @@ -0,0 +1,31 @@ +### Windows PowerShell + +#### Install PowerShell +Install the latest PowerShell if you don't have it on your machine. + +1. In your web browser, navigate to the [Azure SDK Downloads and SDKs](https://azure.microsoft.com/downloads/) page. +2. Click **Windows install** in the **Command-line tools** -> **PowerShell** section. +3. To install PowerShell, run the **MSI** file. + +For detailed instructions, see [How to install and configure PowerShell](/powershell/azure/install-azurerm-ps). + +#### Log in to PowerShell + +1. Launch **PowerShell** on your machine. Keep PowerShell open until the end of this quickstart. If you close and reopen, you need to run these commands again. + + ![Launch PowerShell](media/data-factory-quickstart-prerequisites-2/search-powershell.png) +1. Run the following command, and enter the same Azure user name and password that you use to sign in to the Azure portal: + + ```powershell + Login-AzureRmAccount + ``` +2.
If you have multiple Azure subscriptions, run the following command to view all the subscriptions for this account: + + ```powershell + Get-AzureRmSubscription + ``` +3. Run the following command to select the subscription that you want to work with. Replace **SubscriptionId** with the ID of your Azure subscription: + + ```powershell + Select-AzureRmSubscription -SubscriptionId "<SubscriptionId>" + ``` diff --git a/includes/data-factory-quickstart-prerequisites.md b/includes/data-factory-quickstart-prerequisites.md new file mode 100644 index 0000000000000..019915f430f01 --- /dev/null +++ b/includes/data-factory-quickstart-prerequisites.md @@ -0,0 +1,61 @@ +## Prerequisites + +### Azure subscription +If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. + +### Azure roles +To create Data Factory instances, the user account you use to log in to Azure must be a member of the **contributor** or **owner** role, or an **administrator** of the Azure subscription. In the Azure portal, click your **user name** at the top-right corner, and select **Permissions** to view the permissions you have in the subscription. If you have access to multiple subscriptions, select the appropriate subscription. For sample instructions on adding a user to a role, see the [Add roles](../articles/billing/billing-add-change-azure-subscription-administrator.md) article. + +### Azure Storage Account +You use a general-purpose Azure Storage Account (specifically Blob Storage) as both **source** and **destination** data stores in this quickstart. If you don't have a general-purpose Azure storage account, see [Create a storage account](../articles/storage/common/storage-create-storage-account.md#create-a-storage-account) for steps to create one. + +#### Get storage account name and account key +You use the name and key of your Azure storage account in this quickstart. The following procedure provides steps to get the name and key of your storage account. + +1. Launch a web browser and navigate to the [Azure portal](https://portal.azure.com). Log in using your Azure user name and password. +2. Click **More services >** in the left menu, filter with the **Storage** keyword, and then select **Storage accounts**. + + ![Search for storage account](media/data-factory-quickstart-prerequisites/search-storage-account.png) +3. In the list of storage accounts, filter for your storage account (if needed), and then select **your storage account**. +4. In the **Storage account** page, select **Access keys** on the menu. + + ![Get storage account name and key](media/data-factory-quickstart-prerequisites/storage-account-name-key.png) +5. Copy the values of the **Storage account name** and **key1** fields to the clipboard. Paste them into Notepad or any other editor and save them. You use them later in this quickstart. + +#### Create input folder and files +In this section, you create a blob container named **adftutorial** in your Azure blob storage. Then, you create a folder named **input** in the container, and then upload a sample file to the input folder. + +1. In the **Storage account** page, switch to the **Overview** page, and then click **Blobs**. + + ![Select Blobs option](media/data-factory-quickstart-prerequisites/select-blobs.png) +2. In the **Blob service** page, click **+ Container** on the toolbar. + + ![Add container button](media/data-factory-quickstart-prerequisites/add-container-button.png) +3. In the **New container** dialog, enter **adftutorial** for the name, and click **OK**.
+ + ![Enter container name](media/data-factory-quickstart-prerequisites/new-container-dialog.png) +4. Click **adftutorial** in the list of containers. + + ![Select the container](media/data-factory-quickstart-prerequisites/seelct-adftutorial-container.png) +5. In the **Container** page, click **Upload** on the toolbar. + + ![Upload button](media/data-factory-quickstart-prerequisites/upload-toolbar-button.png) +6. In the **Upload blob** page, click **Advanced**. + + ![Click Advanced link](media/data-factory-quickstart-prerequisites/upload-blob-advanced.png) +7. Launch **Notepad** and create a file named **emp.txt** with the following content. Save it in the **c:\ADFv2QuickStartPSH** folder (create the folder if it does not already exist): + + ``` + John, Doe + Jane, Doe + ``` +8. In the Azure portal, in the **Upload blob** page, browse, and select the **emp.txt** file for the **Files** field. +9. Enter **input** as the value for the **Upload to folder** field. + + ![Upload blob settings](media/data-factory-quickstart-prerequisites/upload-blob-settings.png) +10. Confirm that the folder is **input** and file is **emp.txt**, and click **Upload**. +11. You should see the **emp.txt** file and the status of the upload in the list. +12. Close the **Upload blob** page by clicking **X** in the corner. + + ![Close upload blob page](media/data-factory-quickstart-prerequisites/close-upload-blob.png) +13. Keep the **container** page open. You use it to verify the output at the end of this quickstart. \ No newline at end of file diff --git a/includes/data-factory-quickstart-verify-output-cleanup.md b/includes/data-factory-quickstart-verify-output-cleanup.md new file mode 100644 index 0000000000000..dc112fcb333ba --- /dev/null +++ b/includes/data-factory-quickstart-verify-output-cleanup.md @@ -0,0 +1,24 @@ +## Verify the output +The pipeline automatically creates the output folder in the adftutorial blob container. Then, it copies the emp.txt file from the input folder to the output folder. + +1. In the Azure portal, on the **adftutorial** container page, click **Refresh** to see the output folder. + + ![Refresh](media/data-factory-quickstart-verify-output-cleanup/output-refresh.png) +2. Click **output** in the folder list. +3. Confirm that the **emp.txt** file is copied to the output folder. + + ![Refresh](media/data-factory-quickstart-verify-output-cleanup/output-file.png) + +## Clean up resources +You can clean up the resources that you created in the Quickstart in two ways. You can delete the [Azure resource group](../articles/azure-resource-manager/resource-group-overview.md), which includes all the resources in the resource group. If you want to keep the other resources intact, delete only the data factory you created in this tutorial. + +Deleting a resource group deletes all resources including data factories in it.
Run the following command to delete the entire resource group: +```powershell +Remove-AzureRmResourceGroup -ResourceGroupName $resourcegroupname +``` + +If you want to delete just the data factory, not the entire resource group, run the following command: + +```powershell +Remove-AzureRmDataFactoryV2 -Name $dataFactoryName -ResourceGroupName $resourceGroupName +``` \ No newline at end of file diff --git a/includes/functions-bindings-next-steps.md b/includes/functions-bindings-next-steps.md deleted file mode 100644 index 0d236fa43c314..0000000000000 --- a/includes/functions-bindings-next-steps.md +++ /dev/null @@ -1,2 +0,0 @@ -For information about other bindings and triggers for Azure Functions, see [Azure Functions triggers and bindings developer reference](../articles/azure-functions/functions-triggers-bindings.md). - diff --git a/includes/functions-bindings.md b/includes/functions-bindings.md index 7fbc057685589..f5aced992176f 100644 --- a/includes/functions-bindings.md +++ b/includes/functions-bindings.md @@ -1,25 +1,29 @@ -| Type | Service | Trigger* | Input | Output | -| --- | --- | --- | --- | --- | -| [Schedule](../articles/azure-functions/functions-bindings-timer.md) |Azure Functions |✔ | | |   -| [HTTP (REST or webhook)](../articles/azure-functions/functions-bindings-http-webhook.md) |Azure Functions |✔ | |✔\** |   -| [Blob Storage](../articles/azure-functions/functions-bindings-storage-blob.md) |Azure Storage |✔ |✔ |✔ |   -| [Events](../articles/azure-functions/functions-bindings-event-hubs.md) |Azure Event Hubs |✔ | |✔ |   -| [Queues](../articles/azure-functions/functions-bindings-storage-queue.md) |Azure Storage |✔ | |✔ |   -| [Queues and topics](../articles/azure-functions/functions-bindings-service-bus.md) |Azure Service Bus |✔ | |✔ |   -| [Storage tables](../articles/azure-functions/functions-bindings-storage-table.md) |Azure Storage | |✔ |✔ |   -| [SQL tables](../articles/azure-functions/functions-bindings-mobile-apps.md) |Azure Mobile Apps | |✔ |✔ |   -| [NoSQL DB](../articles/azure-functions/functions-bindings-documentdb.md) | Azure Cosmos DB |✔ |✔ |✔ |   -| [Push Notifications](../articles/azure-functions/functions-bindings-notification-hubs.md) |Azure Notification Hubs | | |✔ |   -| [Twilio SMS Text](../articles/azure-functions/functions-bindings-twilio.md) |Twilio | | |✔ | -| [SendGrid email](../articles/azure-functions/functions-bindings-sendgrid.md) | SendGrid | | |✔ | -| [Excel tables](../articles/azure-functions/functions-bindings-microsoft-graph.md) | Microsoft Graph | |✔ |✔ | -| [OneDrive files](../articles/azure-functions/functions-bindings-microsoft-graph.md) | Microsoft Graph | |✔ |✔ | -| [Outlook email](../articles/azure-functions/functions-bindings-microsoft-graph.md) | Microsoft Graph | | |✔ | -| [Microsoft Graph events](../articles/azure-functions/functions-bindings-microsoft-graph.md) | Microsoft Graph |✔ |✔ |✔ | -| [Auth tokens](../articles/azure-functions/functions-bindings-microsoft-graph.md) | Microsoft Graph | |✔ | | +The following table shows the bindings that are supported in the two major versions of the Azure Functions runtime. 
-(\* - All triggers have associated input data) - -(\** - The HTTP output binding requires an HTTP trigger) +| Type | 1.x | 2.x | Trigger | Input | Output | +| ---- | :-: | :-: | :------: | :---: | :----: | +| [Blob Storage](../articles/azure-functions/functions-bindings-storage-blob.md) |✔|✔|✔|✔|✔|   +| [Cosmos DB](../articles/azure-functions/functions-bindings-documentdb.md) |✔|✔1|✔|✔|✔|   +| [Event Hubs](../articles/azure-functions/functions-bindings-event-hubs.md) |✔|✔|✔| |✔|   +| [External File](../articles/azure-functions/functions-bindings-external-file.md)2 |✔|| |✔|✔|   +| [External Table](../articles/azure-functions/functions-bindings-external-table.md)2 |✔|| |✔|✔|   +| [HTTP](../articles/azure-functions/functions-bindings-http-webhook.md) |✔|✔|✔| |✔| +| [Microsoft Graph
    Excel tables](../articles/azure-functions/functions-bindings-microsoft-graph.md) ||✔1| |✔|✔| +| [Microsoft Graph
    OneDrive files](../articles/azure-functions/functions-bindings-microsoft-graph.md) ||✔1| |✔|✔| +| [Microsoft Graph
    Outlook email](../articles/azure-functions/functions-bindings-microsoft-graph.md) ||✔1| | |✔| +| [Microsoft Graph
    Events](../articles/azure-functions/functions-bindings-microsoft-graph.md) ||✔1|✔|✔|✔| +| [Microsoft Graph
    Auth tokens](../articles/azure-functions/functions-bindings-microsoft-graph.md) ||✔1| |✔| | +| [Mobile Apps](../articles/azure-functions/functions-bindings-mobile-apps.md) |✔|✔1| |✔|✔|   +| [Notification Hubs](../articles/azure-functions/functions-bindings-notification-hubs.md) |✔|| | |✔| +| [Queue storage](../articles/azure-functions/functions-bindings-storage-queue.md) |✔|✔|✔| |✔|   +| [SendGrid](../articles/azure-functions/functions-bindings-sendgrid.md) |✔|✔1| | |✔| +| [Service Bus](../articles/azure-functions/functions-bindings-service-bus.md) |✔|✔1|✔| |✔|   +| [Table storage](../articles/azure-functions/functions-bindings-storage-table.md) |✔|✔| |✔|✔|   +| [Timer](../articles/azure-functions/functions-bindings-timer.md) |✔|✔|✔| | | +| [Twilio](../articles/azure-functions/functions-bindings-twilio.md) |✔|✔1| | |✔| +| [Webhooks](../articles/azure-functions/functions-bindings-http-webhook.md) |✔||✔| |✔| +   +1 Must be registered as a binding extension in 2.x. See [Known issues in 2.x](https://github.com/Azure/azure-webjobs-sdk-script/wiki/Azure-Functions-runtime-2.0-known-issues). +2 Experimental — not supported and might be abandoned in the future. diff --git a/includes/functions-host-json-http.md b/includes/functions-host-json-http.md new file mode 100644 index 0000000000000..11b5e3ddb9f9e --- /dev/null +++ b/includes/functions-host-json-http.md @@ -0,0 +1,17 @@ +```json +{ + "http": { + "routePrefix": "api", + "maxOutstandingRequests": 20, + "maxConcurrentRequests": 10, + "dynamicThrottlesEnabled": false + } +} +``` + +|Property |Default | Description | +|---------|---------|---------| +|routePrefix|api|The route prefix that applies to all routes. Use an empty string to remove the default prefix. | +|maxOutstandingRequests|-1|The maximum number of outstanding requests that will be held at any given time (-1 means unbounded). The limit includes requests that are queued but have not started executing, as well as any in-progress executions. Any incoming requests over this limit are rejected with a 429 "Too Busy" response. Callers can use that response to employ time-based retry strategies. This setting controls only queuing that occurs within the job host execution path. Other queues, such as the ASP.NET request queue, are unaffected by this setting. | +|maxConcurrentRequests|-1|The maximum number of HTTP functions that will be executed in parallel (-1 means unbounded). For example, you could set a limit if your HTTP functions use too many system resources when concurrency is high. Or if your functions make outbound requests to a third-party service, those calls might need to be rate-limited.| +|dynamicThrottlesEnabled|false|Causes the request processing pipeline to periodically check system performance counters. Counters include connections, threads, processes, memory, and cpu. 
If any of the counters are over a built-in threshold (80%), requests are rejected with a 429 "Too Busy" response until the counter(s) return to normal levels.| diff --git a/includes/functions-selector-bindings.md b/includes/functions-selector-bindings.md deleted file mode 100644 index f1e15edcce69e..0000000000000 --- a/includes/functions-selector-bindings.md +++ /dev/null @@ -1,16 +0,0 @@ - -> [!div class="op_single_selector"] -> * [Cosmos DB](../articles/azure-functions/functions-bindings-documentdb.md) -> * [Event Hubs](../articles/azure-functions/functions-bindings-event-hubs.md) -> * [HTTP/webhook](../articles/azure-functions/functions-bindings-http-webhook.md) -> * [Microsoft Graph](../articles/azure-functions/functions-bindings-microsoft-graph.md) -> * [Mobile Apps](../articles/azure-functions/functions-bindings-mobile-apps.md) -> * [Notification Hubs](../articles/azure-functions/functions-bindings-notification-hubs.md) -> * [Service Bus](../articles/azure-functions/functions-bindings-service-bus.md) -> * [Storage Queue](../articles/azure-functions/functions-bindings-storage-queue.md) -> * [Storage Blob](../articles/azure-functions/functions-bindings-storage-blob.md) -> * [Storage Table](../articles/azure-functions/functions-bindings-storage-table.md) -> * [Timer](../articles/azure-functions/functions-bindings-timer.md) -> * [Twilio](../articles/azure-functions/functions-bindings-twilio.md) -> -> diff --git a/includes/media/data-factory-quickstart-prerequisites-2/search-powershell.png b/includes/media/data-factory-quickstart-prerequisites-2/search-powershell.png new file mode 100644 index 0000000000000..dc16967673eb0 Binary files /dev/null and b/includes/media/data-factory-quickstart-prerequisites-2/search-powershell.png differ diff --git a/includes/media/data-factory-quickstart-prerequisites/add-container-button.png b/includes/media/data-factory-quickstart-prerequisites/add-container-button.png new file mode 100644 index 0000000000000..0d6de6338d37a Binary files /dev/null and b/includes/media/data-factory-quickstart-prerequisites/add-container-button.png differ diff --git a/includes/media/data-factory-quickstart-prerequisites/close-upload-blob.png b/includes/media/data-factory-quickstart-prerequisites/close-upload-blob.png new file mode 100644 index 0000000000000..3fac23c0817bd Binary files /dev/null and b/includes/media/data-factory-quickstart-prerequisites/close-upload-blob.png differ diff --git a/includes/media/data-factory-quickstart-prerequisites/new-container-dialog.png b/includes/media/data-factory-quickstart-prerequisites/new-container-dialog.png new file mode 100644 index 0000000000000..b279f5f4ea543 Binary files /dev/null and b/includes/media/data-factory-quickstart-prerequisites/new-container-dialog.png differ diff --git a/includes/media/data-factory-quickstart-prerequisites/search-storage-account.png b/includes/media/data-factory-quickstart-prerequisites/search-storage-account.png new file mode 100644 index 0000000000000..4fd612c9b16b6 Binary files /dev/null and b/includes/media/data-factory-quickstart-prerequisites/search-storage-account.png differ diff --git a/includes/media/data-factory-quickstart-prerequisites/seelct-adftutorial-container.png b/includes/media/data-factory-quickstart-prerequisites/seelct-adftutorial-container.png new file mode 100644 index 0000000000000..3aaf4240660c9 Binary files /dev/null and b/includes/media/data-factory-quickstart-prerequisites/seelct-adftutorial-container.png differ diff --git 
a/includes/media/data-factory-quickstart-prerequisites/select-blobs.png b/includes/media/data-factory-quickstart-prerequisites/select-blobs.png new file mode 100644 index 0000000000000..93d45b8d85779 Binary files /dev/null and b/includes/media/data-factory-quickstart-prerequisites/select-blobs.png differ diff --git a/includes/media/data-factory-quickstart-prerequisites/storage-account-name-key.png b/includes/media/data-factory-quickstart-prerequisites/storage-account-name-key.png new file mode 100644 index 0000000000000..4b7ea4f7d0a26 Binary files /dev/null and b/includes/media/data-factory-quickstart-prerequisites/storage-account-name-key.png differ diff --git a/includes/media/data-factory-quickstart-prerequisites/upload-blob-advanced.png b/includes/media/data-factory-quickstart-prerequisites/upload-blob-advanced.png new file mode 100644 index 0000000000000..5c77c3bd82961 Binary files /dev/null and b/includes/media/data-factory-quickstart-prerequisites/upload-blob-advanced.png differ diff --git a/includes/media/data-factory-quickstart-prerequisites/upload-blob-settings.png b/includes/media/data-factory-quickstart-prerequisites/upload-blob-settings.png new file mode 100644 index 0000000000000..d38608e015d64 Binary files /dev/null and b/includes/media/data-factory-quickstart-prerequisites/upload-blob-settings.png differ diff --git a/includes/media/data-factory-quickstart-prerequisites/upload-toolbar-button.png b/includes/media/data-factory-quickstart-prerequisites/upload-toolbar-button.png new file mode 100644 index 0000000000000..ccfd334e22489 Binary files /dev/null and b/includes/media/data-factory-quickstart-prerequisites/upload-toolbar-button.png differ diff --git a/includes/media/data-factory-quickstart-verify-output-cleanup/output-file.png b/includes/media/data-factory-quickstart-verify-output-cleanup/output-file.png new file mode 100644 index 0000000000000..4c166bf8e87da Binary files /dev/null and b/includes/media/data-factory-quickstart-verify-output-cleanup/output-file.png differ diff --git a/includes/media/data-factory-quickstart-verify-output-cleanup/output-refresh.png b/includes/media/data-factory-quickstart-verify-output-cleanup/output-refresh.png new file mode 100644 index 0000000000000..b5d78cebd6846 Binary files /dev/null and b/includes/media/data-factory-quickstart-verify-output-cleanup/output-refresh.png differ diff --git a/includes/media/publish-web-app-from-visual-studio/CertificateError.png b/includes/media/publish-web-app-from-visual-studio/CertificateError.png new file mode 100644 index 0000000000000..dc2b5eabb1e2b Binary files /dev/null and b/includes/media/publish-web-app-from-visual-studio/CertificateError.png differ diff --git a/includes/media/publish-web-app-from-visual-studio/ChooseVM-SelectAccount.png b/includes/media/publish-web-app-from-visual-studio/ChooseVM-SelectAccount.png new file mode 100644 index 0000000000000..94b848f18c9f2 Binary files /dev/null and b/includes/media/publish-web-app-from-visual-studio/ChooseVM-SelectAccount.png differ diff --git a/includes/media/publish-web-app-from-visual-studio/ChooseVM-SelectVM.png b/includes/media/publish-web-app-from-visual-studio/ChooseVM-SelectVM.png new file mode 100644 index 0000000000000..3dc51aa671feb Binary files /dev/null and b/includes/media/publish-web-app-from-visual-studio/ChooseVM-SelectVM.png differ diff --git a/includes/media/publish-web-app-from-visual-studio/OutputWindow.png b/includes/media/publish-web-app-from-visual-studio/OutputWindow.png new file mode 100644 index 
0000000000000..3112cff3a44cf Binary files /dev/null and b/includes/media/publish-web-app-from-visual-studio/OutputWindow.png differ diff --git a/includes/media/publish-web-app-from-visual-studio/PublishPageMicrosoftAzureVirtualMachineIcon.png b/includes/media/publish-web-app-from-visual-studio/PublishPageMicrosoftAzureVirtualMachineIcon.png new file mode 100644 index 0000000000000..e455283f842f3 Binary files /dev/null and b/includes/media/publish-web-app-from-visual-studio/PublishPageMicrosoftAzureVirtualMachineIcon.png differ diff --git a/includes/media/publish-web-app-from-visual-studio/PublishPagePublishButton.png b/includes/media/publish-web-app-from-visual-studio/PublishPagePublishButton.png new file mode 100644 index 0000000000000..de8af0344f4b9 Binary files /dev/null and b/includes/media/publish-web-app-from-visual-studio/PublishPagePublishButton.png differ diff --git a/includes/media/publish-web-app-from-visual-studio/PublishPageRightArrow.png b/includes/media/publish-web-app-from-visual-studio/PublishPageRightArrow.png new file mode 100644 index 0000000000000..5b8d9622cf8e7 Binary files /dev/null and b/includes/media/publish-web-app-from-visual-studio/PublishPageRightArrow.png differ diff --git a/includes/media/publish-web-app-from-visual-studio/PublishPageSettingsButton.png b/includes/media/publish-web-app-from-visual-studio/PublishPageSettingsButton.png new file mode 100644 index 0000000000000..a910e0dcdb756 Binary files /dev/null and b/includes/media/publish-web-app-from-visual-studio/PublishPageSettingsButton.png differ diff --git a/includes/media/publish-web-app-from-visual-studio/PublishSettingsConnectionPage.png b/includes/media/publish-web-app-from-visual-studio/PublishSettingsConnectionPage.png new file mode 100644 index 0000000000000..68bb42e38f589 Binary files /dev/null and b/includes/media/publish-web-app-from-visual-studio/PublishSettingsConnectionPage.png differ diff --git a/includes/media/publish-web-app-from-visual-studio/PublishSettingsSettingsPage.png b/includes/media/publish-web-app-from-visual-studio/PublishSettingsSettingsPage.png new file mode 100644 index 0000000000000..5fcae7e350d48 Binary files /dev/null and b/includes/media/publish-web-app-from-visual-studio/PublishSettingsSettingsPage.png differ diff --git a/includes/media/publish-web-app-from-visual-studio/WebDeployLogin.png b/includes/media/publish-web-app-from-visual-studio/WebDeployLogin.png new file mode 100644 index 0000000000000..38ed733f01236 Binary files /dev/null and b/includes/media/publish-web-app-from-visual-studio/WebDeployLogin.png differ diff --git a/includes/media/sql-database-connect-query-prerequisites-server-connection-info-includes/server-name.png b/includes/media/sql-database-connect-query-prerequisites-server-connection-info-includes/server-name.png new file mode 100644 index 0000000000000..9250ce80a9e4b Binary files /dev/null and b/includes/media/sql-database-connect-query-prerequisites-server-connection-info-includes/server-name.png differ diff --git a/includes/site-recovery-install-mob-svc-win-cmd.md b/includes/site-recovery-install-mob-svc-win-cmd.md index b0e9dc2c1f669..b0ee7aa9c6540 100644 --- a/includes/site-recovery-install-mob-svc-win-cmd.md +++ b/includes/site-recovery-install-mob-svc-win-cmd.md @@ -29,7 +29,7 @@ UnifiedAgent.exe /Role /InstallLocation /Platform “ |-|-|-|-| |/Role|Mandatory|Specifies whether Mobility Service (MS) should be installed or MasterTarget(MT) should be installed|MS
    MT| |/InstallLocation|Optional|Location where Mobility Service is installed|Any folder on the computer| -|/Platform|Mandatory|Specifies the platform on which the Mobility Service is getting installed

    - **VMware** : use this value if you are installing mobility service on a VM running on *VMware vSphere ESXi Hosts*, *Hyper-V Hosts* and *Phsyical Servers*
    - **Azure** : use this value if you are installing agent on a Azure IaaS VM| VMware
    Azure| +|/Platform|Mandatory|Specifies the platform on which the Mobility Service is getting installed

    - **VMware** : use this value if you are installing the Mobility Service on a VM running on *VMware vSphere ESXi Hosts*, *Hyper-V Hosts*, and *Physical Servers*
    - **Azure** : use this value if you are installing the agent on an Azure IaaS VM| VMware
    Azure| |/Silent|Optional|Specifies to run the installer in silent mode| NA| >[!TIP] @@ -39,7 +39,7 @@ UnifiedAgent.exe /Role /InstallLocation /Platform “ ``` Usage : -UnifiedAgentConfigurator.exe” /CSEndPoint /PassphraseFilePath +UnifiedAgentConfigurator.exe /CSEndPoint /PassphraseFilePath ``` | Parameter|Type|Description|Possible values| diff --git a/includes/sql-database-connect-query-prerequisites-create-db-includes.md b/includes/sql-database-connect-query-prerequisites-create-db-includes.md new file mode 100644 index 0000000000000..cfe5e77156d66 --- /dev/null +++ b/includes/sql-database-connect-query-prerequisites-create-db-includes.md @@ -0,0 +1,8 @@ + + + +- An Azure SQL database. You can use one of these techniques to create a database: + + - [Create DB - Portal](../articles/sql-database/sql-database-get-started-portal.md) + - [Create DB - CLI](../articles/sql-database/sql-database-get-started-cli.md) + - [Create DB - PowerShell](../articles/sql-database/sql-database-get-started-powershell.md) diff --git a/includes/sql-database-connect-query-prerequisites-server-connection-info-includes.md b/includes/sql-database-connect-query-prerequisites-server-connection-info-includes.md new file mode 100644 index 0000000000000..fb8cf095914dc --- /dev/null +++ b/includes/sql-database-connect-query-prerequisites-server-connection-info-includes.md @@ -0,0 +1,16 @@ + + + +Get the connection information needed to connect to the Azure SQL database. You will need the fully qualified server name, database name, and login information in the next procedures. + +1. Log in to the [Azure portal](https://portal.azure.com/). +2. Select **SQL Databases** from the left-hand menu, and click your database on the **SQL databases** page. +3. On the **Overview** page for your database, review the fully qualified server name as shown in the following image. You can hover over the server name to bring up the **Click to copy** option. + + ![server-name](./media/sql-database-connect-query-prerequisites-server-connection-info-includes/server-name.png) + +4. If you forget your server login information, navigate to the SQL Database server page to view the server admin name. If necessary, reset the password. 
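If you prefer to look up the same connection values from a command line rather than the portal, the following is a rough AzureRM PowerShell sketch; the resource group and server names are hypothetical placeholders.

```powershell
# Sketch only: look up the fully qualified server name and the admin login.
# "myResourceGroup" and "myserver" are placeholder names - substitute your own.
$server = Get-AzureRmSqlServer -ResourceGroupName "myResourceGroup" -ServerName "myserver"
$server.FullyQualifiedDomainName   # for example, myserver.database.windows.net
$server.SqlAdministratorLogin      # the server admin login name
```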
+ diff --git a/includes/virtual-machines-common-acu.md b/includes/virtual-machines-common-acu.md index c8aa2945cedf2..ae76f7964439e 100644 --- a/includes/virtual-machines-common-acu.md +++ b/includes/virtual-machines-common-acu.md @@ -1,4 +1,4 @@ - + @@ -27,7 +27,7 @@ We have created the concept of the Azure Compute Unit (ACU) to provide a way of | [Ds_v3](../articles/virtual-machines/virtual-machines-windows-sizes-general.md) |160-190* ** | | [E_v3](../articles/virtual-machines/virtual-machines-windows-sizes-memory.md) |160-190* ** | | [Es_v3](../articles/virtual-machines/virtual-machines-windows-sizes-memory.md) |160-190* ** | -| [F2s_v2-F72s_v2](../articles/virtual-machines/windows/sizes-compute.md) |195-210* | +| [F2s_v2-F72s_v2](../articles/virtual-machines/windows/sizes-compute.md) |195-210* ** | | [F1-F16](../articles/virtual-machines/windows/sizes-compute.md) |210-250* | | [F1s-F16s](../articles/virtual-machines/windows/sizes-compute.md) |210-250* | | [G1-G5](../articles/virtual-machines/virtual-machines-windows-sizes-memory.md) |180 - 240* | diff --git a/includes/virtual-machines-common-sizes-compute.md b/includes/virtual-machines-common-sizes-compute.md index a7901fa13bfde..30daac5b4898f 100644 --- a/includes/virtual-machines-common-sizes-compute.md +++ b/includes/virtual-machines-common-sizes-compute.md @@ -16,7 +16,7 @@ The Fs-series provides all the advantages of the F-series, in addition to Premiu ACU: 195 - 210 -| Size | vCPU's | Memory: GiB | Local SSD: GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max NICs / Expected network bandwidth (Mbps) | +| Size | vCPU's | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max NICs / Expected network bandwidth (Mbps) | |------------------|--------|-------------|----------------|----------------|-----------------------------------------------------------------------|------------------------------------------------| | Standard_F2s_v2 | 2 | 4 | 16 | 4 | 4000 (32) | Moderate | | Standard_F4s_v2 | 4 | 8 | 32 | 8 | 8000 (64) | Moderate | diff --git a/includes/virtual-machines-common-sizes-storage.md b/includes/virtual-machines-common-sizes-storage.md index 30658b04d108b..3f27f56daff39 100644 --- a/includes/virtual-machines-common-sizes-storage.md +++ b/includes/virtual-machines-common-sizes-storage.md @@ -10,8 +10,8 @@ ACU: 180-240 |---------------|-----------|-------------|--------------------------|----------------|-------------------------------------------------------------|-------------------------------------------|------------------------------| | Standard_L4s | 4 | 32 | 678 | 8 | 20,000 / 200 | 10,000 / 250 | 2 / 4,000 | | Standard_L8s | 8 | 64 | 1,388 | 16 | 40,000 / 400 | 20,000 / 500 | 4 / 8,000 | -| Standard_L16s | 16 | 128 | 2,807 | 32 | 90.000 / 800 | 10,000 / 1,000 | 8 / 6,000 - 16,000 † | -| Standard_L32s* | 32 | 256 | 5,630 | 64 | 160,000 / 1,600 | 90,000 / 2,000 | 8 / 20,000 | +| Standard_L16s | 16 | 128 | 2,807 | 32 | 80,000 / 800 | 40,000 / 1,000 | 8 / 6,000 - 16,000 † | +| Standard_L32s* | 32 | 256 | 5,630 | 64 | 160,000 / 1,600 | 80,000 / 2,000 | 8 / 20,000 | The maximum disk throughput possible with Ls-series VMs may be limited by the number, size, and striping of any attached disks. For details, see [Premium Storage: High-performance storage for Azure virtual machine workloads](../articles/virtual-machines/windows/premium-storage.md). 
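To confirm the per-size limits (cores, memory, and maximum data disks) in a given region before you choose a size, one option is the AzureRM PowerShell sketch below; the location and the name filter are only examples.

```powershell
# Sketch only: list Ls-series sizes available in a region with their core,
# memory, and data disk limits. Substitute your own location and filter.
Get-AzureRmVMSize -Location "West Europe" |
    Where-Object { $_.Name -like "Standard_L*" } |
    Select-Object Name, NumberOfCores, MemoryInMB, MaxDataDiskCount
```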
diff --git a/includes/virtual-machines-using-managed-disks-template-deployments.md b/includes/virtual-machines-using-managed-disks-template-deployments.md index 424f7e589d6b9..36d7567cae196 100644 --- a/includes/virtual-machines-using-managed-disks-template-deployments.md +++ b/includes/virtual-machines-using-managed-disks-template-deployments.md @@ -215,5 +215,5 @@ To find full information on the REST API specifications, please review the [crea * [Full list of managed disk templates](https://github.com/Azure/azure-quickstart-templates/blob/master/managed-disk-support-list.md) * Visit the [Azure Managed Disks Overview](../articles/virtual-machines/windows/managed-disks-overview.md) document to learn more about managed disks. * Review the template reference documentation for virtual machine resources by visiting the [Microsoft.Compute/virtualMachines template reference](/azure/templates/microsoft.compute/virtualmachines) document. -* Review the template reference documentation for disk resources by visiting the [Microsoft.Compute/disks template reference](/templates/microsoft.compute/disks) document. +* Review the template reference documentation for disk resources by visiting the [Microsoft.Compute/disks template reference](/azure/templates/microsoft.compute/disks) document. diff --git a/includes/vpn-gateway-gwsku-include.md b/includes/vpn-gateway-gwsku-include.md index d64c4e1b012af..59e36656da229 100644 --- a/includes/vpn-gateway-gwsku-include.md +++ b/includes/vpn-gateway-gwsku-include.md @@ -22,7 +22,7 @@ The new gateway SKUs streamline the feature sets offered on the gateways: | **SKU**| **Features**| | --- | --- | -|**Basic** | **Route-based VPN**: 10 tunnels with P2S; no RADIUS authentication; no IKEv2
    **Policy-based VPN**: (IKEv1): 1 tunnel; no P2S| +|**Basic** | **Route-based VPN**: 10 tunnels with P2S; no RADIUS authentication for P2S; no IKEv2 for P2S
    **Policy-based VPN**: (IKEv1): 1 tunnel; no P2S| | **VpnGw1, VpnGw2, and VpnGw3** | **Route-based VPN**: up to 30 tunnels (*), P2S, BGP, active-active, custom IPsec/IKE policy, ExpressRoute/VPN co-existence | | | | diff --git a/includes/vpn-gateway-modify-ip-prefix-cli-include.md b/includes/vpn-gateway-modify-ip-prefix-cli-include.md index c3d9a0b984bc5..d1f818429e67e 100644 --- a/includes/vpn-gateway-modify-ip-prefix-cli-include.md +++ b/includes/vpn-gateway-modify-ip-prefix-cli-include.md @@ -5,7 +5,7 @@ If you don't have a gateway connection and you want to add or remove IP address Each time you make a change, the entire list of prefixes must be specified, not just the prefixes that you want to change. Specify only the prefixes that you want to keep. In this case, 10.0.0.0/24 and 20.0.0.0/24 ```azurecli -az network local-gateway create --gateway-ip-address 23.99.221.164 --name Site2 --connection-name TestRG1 --local-address-prefixes 10.0.0.0/24 20.0.0.0/24 +az network local-gateway create --gateway-ip-address 23.99.221.164 --name Site2 -g TestRG1 --local-address-prefixes 10.0.0.0/24 20.0.0.0/24 ``` ### To modify local network gateway IP address prefixes - existing gateway connection @@ -15,5 +15,5 @@ If you have a gateway connection and want to add or remove IP address prefixes, Each time you make a change, the entire list of prefixes must be specified, not just the prefixes that you want to change. In this example, 10.0.0.0/24 and 20.0.0.0/24 are already present. We add the prefixes 30.0.0.0/24 and 40.0.0.0/24 and specify all 4 of the prefixes when updating. ```azurecli -az network local-gateway update --local-address-prefixes 10.0.0.0/24 20.0.0.0/24 30.0.0.0/24 40.0.0.0/24 --name VNet1toSite2 --connection-name TestRG1 -``` \ No newline at end of file +az network local-gateway update --local-address-prefixes 10.0.0.0/24 20.0.0.0/24 30.0.0.0/24 40.0.0.0/24 --name VNet1toSite2 -g TestRG1 +```
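The same rule applies if you manage the local network gateway with AzureRM PowerShell instead of the Azure CLI: pass the complete prefix list on every update. A rough sketch, reusing the example names above:

```powershell
# Sketch only: update a local network gateway with the complete prefix list.
# "Site2" and "TestRG1" reuse the example names above - substitute your own.
$gw = Get-AzureRmLocalNetworkGateway -Name "Site2" -ResourceGroupName "TestRG1"
Set-AzureRmLocalNetworkGateway -LocalNetworkGateway $gw `
    -AddressPrefix @("10.0.0.0/24","20.0.0.0/24","30.0.0.0/24","40.0.0.0/24")
```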