Channel: Randy Riness @ SPSCC aggregator

MSDN Blogs: Developer controlled app updates


Keeping apps up to date is a somewhat tedious task; it requires extra work from developers to check versions, guide the user to the Store and hope they install the update. The result is a lot of user confusion, with old apps continuing to exist in the wild. We’ve heard this feedback from developers and are excited to announce that starting with build 14393 and beyond, Windows 10 allows developers to make stronger guarantees around app updates!

Doing this requires a few simple APIs, creates a consistent and predictable user experience, and lets developers focus on what they do best while allowing Windows to do the heavy lifting. Here’s what the UI looks like to a user.

 

Non-Blocking Update

 

More details? Sure thing!

Implementing in app

There are two fundamental ways that app updates can be managed. In both cases the net result is the same: the update is applied. In one case, you let the system do all the work; in the other, you take deeper control over the user experience.

Simple updates

First and foremost is the very simple API call that tells the system to check for updates, download them and then request permission from the user to install them. You’ll start by using the StoreContext class to get StorePackageUpdate objects, download and install them.

using System;
using System.Collections.Generic;
using Windows.Foundation;
using Windows.Services.Store;

private async void GetEasyUpdates()
{
    StoreContext updateManager = StoreContext.GetDefault();
    IReadOnlyList<StorePackageUpdate> updates = await updateManager.GetAppAndOptionalStorePackageUpdatesAsync();

    if (updates.Count > 0)
    {
        IAsyncOperationWithProgress<StorePackageUpdateResult, StorePackageUpdateStatus> downloadOperation =
            updateManager.RequestDownloadAndInstallStorePackageUpdatesAsync(updates);
        StorePackageUpdateResult result = await downloadOperation.AsTask();
    }
}

Finer-controlled updates

For developers who are looking to build a completely customized experience, we have a solution for you as well! Additional APIs give you finer control over the update process. The platform enables you to do the following:

  1. Get progress events on an individual package download or on the whole update
  2. Apply updates at the user’s and app’s convenience rather than one or the other

Developers can download updates in the background (while the app is in use) and then ask the user to install them. If the user declines, you can simply disable the capabilities affected by the update.

Download Updates

private async void DownloadUpdatesAsync()
{
    StoreContext updateManager = StoreContext.GetDefault();
    IReadOnlyList<StorePackageUpdate> updates = await updateManager.GetAppAndOptionalStorePackageUpdatesAsync();

    if (updates.Count > 0)
    {
        IAsyncOperationWithProgress<StorePackageUpdateResult, StorePackageUpdateStatus> downloadOperation =
            updateManager.RequestDownloadStorePackageUpdatesAsync(updates);

        downloadOperation.Progress = async (asyncInfo, progress) =>
        {
            // Show progress UI
        };

        StorePackageUpdateResult result = await downloadOperation.AsTask();
        if (result.OverallState == StorePackageUpdateState.Completed)
        {
            // Update was downloaded, add logic to request install
        }
    }
}

Install Updates

private async void InstallUpdatesAsync()
{
    StoreContext updateManager = StoreContext.GetDefault();
    IReadOnlyList<StorePackageUpdate> updates = await updateManager.GetAppAndOptionalStorePackageUpdatesAsync();    

    // Save app state here

    IAsyncOperationWithProgress<StorePackageUpdateResult, StorePackageUpdateStatus> installOperation =
        updateManager.RequestDownloadAndInstallStorePackageUpdatesAsync(updates);

    StorePackageUpdateResult result = await installOperation.AsTask();

    // Under normal circumstances, app will terminate here

    // Handle error cases here using StorePackageUpdateResult from above
}

Making updates mandatory

In some cases, it might actually be desirable to have an update that must be installed on a user’s device, making it truly mandatory (e.g. a critical fix to an app that can’t wait). In these cases, there are additional measures that you can take to make the update mandatory.

  1. Implement the mandatory update logic in your app code (this logic needs to ship in a release before the mandatory update itself)
  2. During submission to the Dev Center, ensure the “Make this update mandatory” box is selected

Implementing app code

To take full advantage of mandatory updates, you’ll need to make some slight modifications to the code above: use the StorePackageUpdate object to determine whether the update is mandatory.

private async Task<bool> CheckForMandatoryUpdates()
{
    StoreContext updateManager = StoreContext.GetDefault();
    IReadOnlyList<StorePackageUpdate> updates = await updateManager.GetAppAndOptionalStorePackageUpdatesAsync();

    if (updates.Count > 0)
    {
        foreach (StorePackageUpdate u in updates)
        {
            if (u.Mandatory)
                return true;
        }
    }
    return false;
}

Then you’ll need to create a custom in-app dialog to inform the user that there is a mandatory update and that they must install it to continue full use of the app. If the user declines the update, the app could either degrade functionality (e.g. prevent online access) or terminate completely (e.g. online-only games).

Dev Center

To ensure the StorePackageUpdate reports the update as mandatory, you will need to mark the update as mandatory on the Packages page in Dev Center.


Remarks:

  1. In rare occurrences where a device comes back online after a mandatory update has been superseded by a non-mandatory update, the newer update will still show up on the device as mandatory, because the device missed the earlier update that was mandatory.
  2. Developer-controlled updates and mandatory updates are currently limited to the Windows Store.

 

While we don’t always like to force updates onto our users or devices, it’s sometimes a necessary part of the development lifecycle that we can’t avoid. Windows 10 makes this process simple, intuitive and manageable for both developers and users!

 

Any questions? Leave them in the comments!

 

Thanks!

Jason Salameh, Senior Program Manager – Windows Developer Platform


MS Access Blog: Why are you still waiting to upgrade your email to the cloud?


You’ve heard about the cloud, you know the benefits it offers, yet you’re still using an on-premises solution for your company’s email. Maybe the thought of the transition is too daunting, or you’re not convinced it’s necessary at this time; there are many reasons that you might choose to stay with on-premises servers.

According to the 2015 IDG Enterprise Cloud Computing Survey, 72 percent of organizations already have at least one application in the cloud and 56 percent are currently identifying which IT operations to move. Here are some common misconceptions about on-premises email and the reality of what migrating your business email to the cloud can do for your organization.

Email attacks don’t cost our company that much—While they may seem like minor day-to-day annoyances, the cost of malware attacks adds up over time, according to CSO Online. Luckily, cloud-based solutions make a difference. Since email threats are constantly evolving, it’s important to have the most up-to-date security protection, which cloud email can provide.


Source: “Phishing is a $3.7-million annual cost for average large company,” 2015, CSO Online

Maybe you don’t face daily threats or don’t see much action in the data-breach arena. But the facts are, when looking at attack incident numbers, cloud-hosted servers showed fewer incidents, according to Alert Logic’s Cloud Security Report.


Source: “Cloud Security Report,” 2015, Alert Logic

Migration costs too much money and downtime—It’s easy to assume that migrating your business email to a cloud server will cause a lot of downtime and upfront infrastructure costs, but the opposite is true.

Since you don’t have to purchase and maintain expensive hardware, cloud email lowers your company’s capital expenditures. Instead of maintaining and upgrading on-premises servers, your IT team can concentrate on improving their own products and services.

Upp Technology found that 50 percent of companies using cloud technology report having reduced their IT spending by 25 percent. This frees up funds for other projects and gives IT more time to contribute to your bottom line. There’s virtually no lost time during migration, as rapid application delivery ensures business processes stay up and running while you transition.

Downtime is a part of everyday business—Reliable uptime is an important cost consideration. Technical delays and downtime from on-premises servers add up, and they’re completely avoidable. When on-premises servers go down, it costs more than productivity. Cogeco Peer 1 found that downtime could cost more than $1 million per hour for one in six enterprises.

The bottom line

Moving your company’s email to the cloud saves money, protects data and frees up time to focus on other ways to make your organization more productive and profitable.

Learn how to make your transition to the cloud as seamless as possible in the e-book, “Elevate Your Email: Why now is the right time to take your email to the cloud.”

The post Why are you still waiting to upgrade your email to the cloud? appeared first on Office Blogs.

MSDN Blogs: Feature Pack 1 для SharePoint Server 2016



I have previously written about the Feature Pack for SharePoint Server 2016, part of Microsoft’s strategy for delivering new functionality to the on-premises version.

  • MinRole enhancements: MinRole in SharePoint Server 2016 now includes two new server roles and extended support for small farms. (TechNet)
  • Custom tiles in the SharePoint app launcher: SharePoint administrators can now add custom items to the app launcher. (TechNet)
  • Hybrid taxonomy: a new solution for creating and maintaining a shared taxonomy across a SharePoint Server 2016 farm and SharePoint Online. (TechNet)
  • Administrative actions logging: records SharePoint administration actions in a log, helping SharePoint administrators troubleshoot farm changes. (TechNet)
  • OneDrive API for SharePoint and Office 365: the OneDrive API supports access to files stored in SharePoint Server 2016 and Office 365. (http://dev.onedrive.com/)
  • Unified auditing for SharePoint Server 2016 and Office 365 site collections: this new SharePoint Server 2016 feature lets administrators view user activity logs in the Office 365 admin center. (TechNet)
  • Updated OneDrive for Business experience: the OneDrive for Business user interface has been updated to reflect new Office 365 features. (TechNet)

Links

Announcing Feature Pack 1 for SharePoint Server 2016—cloud-born and future-proof

Feature Pack 1 for SharePoint Server 2016 now available

New features in November 2016 PU for SharePoint Server 2016 (Feature Pack 1)

MSDN Blogs: Release Update 2016-11-11


Release Notes

  • Run now button added to designer
  • Request Trigger Query parameters now show up in the designer
  • Connection references now show up in code-view
  • Output tokens can now be copy/pasted or highlighted correctly

Bug Fixes

  • Historical run view would be blank when updating an app with multiple triggers
  • HTTP + Swagger would change the URL to HTTPS even when HTTPS was an unsupported scheme
  • Broken connections would not show up in historical run view
  • Fixed info bubble rendering
  • Save/Discard would light up even with no changes

MSDN Blogs: Creating Azure Resources with ARM Templates Step by Step


The files used in this article can be found in GitHub here: https://github.com/ssemyan/BasicAzureLinuxVmArmTemplate

There are many ways to script the creation of virtual machines, services, and other resources in Azure. Available tools include PowerShell, cross-platform command line tools, and SDKs for Java, .NET and other languages. These resources can be found here.

Azure Resource Manager (ARM) templates are a way of describing resources in JSON. These templates can be used by PowerShell or the command line tools to build out deployments including networking, services, VMs, etc.
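Before diving in, it helps to know the overall shape of a template. Stripped down, an ARM template is a JSON document with a few well-known top-level sections:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}
```

The parameters, variables, and resources sections are the ones we will fill in below.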

In this article, I’ll explain in detail how to use PowerShell and ARM templates to build out a Linux VM and the associated components such as networking and security.

It is easy to script out an existing deployment in the Azure Portal. To do this, simply click on the Automation script link in the resource group’s properties blade.

Automation Script

Doing this will generate a set of files you can use to recreate the contents of the resource group.

Automation Script

These files are a good start but if you want to create re-usable deployments you will want to edit them a bit. The files referenced in this article can be found on GitHub here: https://github.com/ssemyan/BasicAzureLinuxVmArmTemplate

For deploying using PowerShell and JSON templates, there are three files: deploy.ps1, parameters.json, and template.json. We will go over each of these in detail.

The first file is the PowerShell deployment script, deploy.ps1. Looking at this file, you will see this section first:

param(
 [Parameter(Mandatory=$True)]
 [string]
 $subscriptionId,

 [Parameter(Mandatory=$True)]
 [string]
 $resourceGroupName,

 [string]
 $templateFilePath = "template.json",

 [string]
 $parametersFilePath = "parameters.json"
)

This section sets the command-line parameters for the script. $subscriptionId is the ID of the subscription to use. Sometimes a login will have access to multiple subscriptions, so it is important to specify which subscription to use. $resourceGroupName is the name of the resource group to create. This name is also used to name some of the shared components like the storage account and virtual network (vnet). Resource groups can be thought of as folders that make it easy to organize and manage groups of related resources. $templateFilePath and $parametersFilePath indicate which files to use for the deployment. We will cover these files in depth a bit later. For now, just know that by default we use template.json and parameters.json, but you can override this by passing in different values.
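For reference, a parameters file that satisfies this script might look like the following sketch; all values are illustrative, matching the parameters defined later in template.json:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "location": { "value": "centralus" },
    "virtualMachineName": { "value": "chefsvr" },
    "virtualMachineSize": { "value": "Standard_DS1_v2" },
    "adminUsername": { "value": "azureuser" },
    "adminPublicKey": { "value": "ssh-rsa AAAA... user@host" }
  }
}
```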

To run the deployment script you therefore must enter a subscription ID, a resource group name and, optionally, the name of the parameter and template files to use if you want to use files other than the default.

When calling the script from within PowerShell, simply enter the script filename and the parameters:

PS C:\source\Basic_Azure_VM_ARM_Template> .\deploy.ps1 -subscriptionId f061aa2a-50c2-46a0-818b-6a5829ff5a70 -resourceGroupName mygroup

From a command prompt, you will need to invoke PowerShell like so:

C:\source\Basic_Azure_VM_ARM_Template>powershell -f deploy.ps1 -subscriptionId f061aa2a-50c2-46a0-818b-6a5829ff5a70 -resourceGroupName mygroup

The first line of the deployment script states that the script should stop on any errors. The script then logs into Azure using the supplied subscription ID:

$ErrorActionPreference = "Stop"

# sign in and select subscription
Write-Host "Logging in...";
Login-AzureRmAccount -SubscriptionID $subscriptionId;

In the next section, we load the parameters file into a variable so we can use it within the script. This allows us to use some of the parameters within the script such as the location for the resources.

# load the parameters so we can use them in the script
$params = ConvertFrom-Json -InputObject (Gc $parametersFilePath -Raw)

Now we are ready to start creating resources in Azure. First, we create the resource group unless it already exists.

$resourceGroup = Get-AzureRmResourceGroup -Name $resourceGroupName -ErrorAction SilentlyContinue
if(!$resourceGroup)
{
    Write-Host "Creating resource group '$resourceGroupName' in location $params.parameters.location.value";
    New-AzureRmResourceGroup -Name $resourceGroupName -Location $params.parameters.location.value -Verbose 
}
else{
    Write-Host "Using existing resource group '$resourceGroupName'";
}

Next, before we create the resources in our resource group, we do a test deployment to ensure the parameter and template files are correct and that the resources can be created.

# Test
Write-Host "Testing deployment...";
$testResult = Test-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile $templateFilePath -TemplateParameterFile $parametersFilePath -ErrorAction Stop;
if ($testResult.Count -gt 0)
{
	write-host ($testResult | ConvertTo-Json -Depth 5 | Out-String);
	write-output "Errors in template - Aborting";
	exit;
}

Note that if we find errors, we print them to the screen in JSON format with the depth set to 5 levels deep so that we can see any nested errors in plain text. If there are no errors we are ready to do the actual deployment.

# Start the deployment
Write-Host "Starting deployment...";
New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile $templateFilePath -TemplateParameterFile $parametersFilePath -Verbose;

The -Verbose flag lets you watch the resources as they are created:

C:\source\Basic_Azure_VM_ARM_Template>powershell -f deploy.ps1 -subscriptionId cd82f777-1ae3-48fb-8887-05e9f88e3a33 -resourceGroupName mygroup
Logging in...

Environment           : AzureCloud
Account               : user@live.com
TenantId              : 5e8c4d9e-7f04-49c5-96d5-9b7b6c0a908c
SubscriptionId        : 3e898ccf-0dc8-4268-a935-2e260646258b
SubscriptionName      : Visual Studio Enterprise
CurrentStorageAccount :

Creating resource group 'mygroup' in location centralus
VERBOSE: Performing the operation "Replacing resource group ..." on target "".
VERBOSE: 11:30:54 AM - Created resource group 'mygroup' in location 'centralus'

ResourceGroupName : mygroup
Location          : centralus
ProvisioningState : Succeeded
Tags              :
TagsTable         :
ResourceId        : /subscriptions/3e898ccf-0dc8-4268-a935-2e260646258b/resourceGroups/mygroup

Testing deployment...
Starting deployment...
VERBOSE: Performing the operation "Creating Deployment" on target "mygroup".
VERBOSE: 11:30:57 AM - Template is valid.
VERBOSE: 11:30:59 AM - Create template deployment 'template'
VERBOSE: 11:30:59 AM - Checking deployment status in 5 seconds
VERBOSE: 11:31:04 AM - Resource Microsoft.Storage/storageAccounts 'mygroupstorage6t6' provisioning status is running
VERBOSE: 11:31:04 AM - Resource Microsoft.Network/publicIpAddresses 'mygroup-chefsvr-ip' provisioning status is running
VERBOSE: 11:31:04 AM - Resource Microsoft.Network/networkSecurityGroups 'mygroup-nsq' provisioning status is running
VERBOSE: 11:31:04 AM - Resource Microsoft.Network/virtualNetworks 'mygroup-vnet' provisioning status is running
VERBOSE: 11:31:04 AM - Checking deployment status in 10 seconds
VERBOSE: 11:31:14 AM - Resource Microsoft.Network/networkSecurityGroups 'mygroup-nsq' provisioning status is succeeded
VERBOSE: 11:31:14 AM - Resource Microsoft.Network/virtualNetworks 'mygroup-vnet' provisioning status is succeeded
VERBOSE: 11:31:15 AM - Checking deployment status in 15 seconds
VERBOSE: 11:31:30 AM - Resource Microsoft.Compute/virtualMachines 'chefsvr' provisioning status is running
VERBOSE: 11:31:30 AM - Resource Microsoft.Storage/storageAccounts 'mygroupstorage6t6' provisioning status is succeeded
VERBOSE: 11:31:30 AM - Resource Microsoft.Network/networkInterfaces 'chefsvrnic' provisioning status is succeeded
VERBOSE: 11:31:30 AM - Resource Microsoft.Storage/storageAccounts 'mygroupstorage6t6' provisioning status is succeeded
VERBOSE: 11:31:30 AM - Resource Microsoft.Network/publicIpAddresses 'mygroup-chefsvr-ip' provisioning status is
succeeded
VERBOSE: 11:31:30 AM - Checking deployment status in 20 seconds
VERBOSE: 11:31:50 AM - Checking deployment status in 25 seconds
VERBOSE: 11:32:16 AM - Checking deployment status in 30 seconds
VERBOSE: 11:32:46 AM - Checking deployment status in 35 seconds
VERBOSE: 11:33:22 AM - Checking deployment status in 40 seconds
VERBOSE: 11:34:02 AM - Checking deployment status in 45 seconds
VERBOSE: 11:34:47 AM - Checking deployment status in 50 seconds
VERBOSE: 11:35:38 AM - Checking deployment status in 55 seconds
VERBOSE: 11:36:33 AM - Checking deployment status in 60 seconds
VERBOSE: 11:37:33 AM - Resource Microsoft.Compute/virtualMachines 'chefsvr' provisioning status is succeeded

Now, let’s turn our attention to the parameter and template files. These files are in JSON format and are used to store two types of settings. The template file describes the components and how they relate to each other. Generally, this file is not expected to change from deployment to deployment. The parameters file holds the settings that are specific to each deployment. This might include VM names, usernames, passwords or SSH keys, etc.

Let’s take a close look at each of these files. Looking first at the template file, at the beginning you will see the list of parameters:

"parameters": {
    "location": {
      "type": "string",
      "metadata": {
        "description": "Which region to use."
      }
    },
    "virtualMachineName": {
      "type": "string",
      "metadata": {
        "description": "Name for the Virtual Machine."
      }
    },
    "virtualMachineSize": {
      "type": "string",
      "metadata": {
        "description": "Size of the Virtual Machine."
      }
    },
    "adminUsername": {
      "type": "string",
      "metadata": {
        "description": "User name for the Virtual Machine."
      }
    },
    "adminPublicKey": {
      "type": "string",
      "metadata": {
        "description": "Public SSH Key for the Virtual Machine."
      }
    }
  },

Each parameter has a name, a type, and (optionally) a description. Because the parameters change per deployment, you may have multiple parameter files, one for each type of VM you want to create. Note that I use SSH keys instead of passwords; for more information on SSH key creation, see my article Generating SSH keys for Azure Linux VMs.

Note: putting passwords and SSH keys in plain text in parameter files is NOT a best practice. A better practice is to declare them as SecureString parameters and pull the values from Key Vault at deploy time instead. I chose to keep things simple for this article, but more information on this topic can be found in the article Create an SSL enabled Web server farm with VM Scale Sets.

Next in the template file are variables. These are settings that are either calculated or are not changed for different deployments. You will notice the use of various functions such as concat, substring, uniquestring, etc. There are many functions which can be used in ARM templates and they are documented here. The variable section for our file looks like this:

"variables": {
    "vnetId": "[resourceId(resourceGroup().name,'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
    "subnetRef": "[concat(variables('vnetId'), '/subnets/', variables('subnetName'))]",
    "storageAccountName": "[concat(resourceGroup().name, 'storage', substring(uniquestring(resourceGroup().name),0,3))]",
    "virtualNetworkName": "[concat(resourceGroup().name, '-vnet')]",
    "networkInterfaceName": "[concat(parameters('virtualMachineName'), 'nic')]",
    "networkSecurityGroupName": "[concat(resourceGroup().name, '-nsq')]",
    "publicIpAddressName": "[concat(resourceGroup().name, '-', parameters('virtualMachineName'), '-ip')]",
    "storageAccountType": "Premium_LRS",
    "addressPrefix": "10.0.0.0/24",
    "subnetName": "default",
    "subnetPrefix": "10.0.0.0/24",
    "publicIpAddressType": "Dynamic"
  },

Some of these variables are just calculated values used during creation of the network and related resources; this includes vnetId and subnetRef. Another set of variables is used to create the names of shared resources within the resource group. The value of virtualNetworkName, for example, is just the resource group name with “-vnet” appended; thus, in resource group “mygroup” the vnet name will be “mygroup-vnet”. Vnet names only have to be unique within a resource group, and the networkSecurityGroupName is created similarly. The VM networkInterfaceName is created by joining the VM name with “nic”, and the publicIpAddressName is created by joining the resource group name with the VM name and “-ip” to make what should be a unique name.

Storage account names must be globally unique. To create a unique storageAccountName, we take the name of the resource group and join it with “storage” and the first 3 characters of a unique string derived from the resource group name (the unique string is a hash of the text).
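The storage account naming scheme can be illustrated with a short sketch. Note this is not ARM’s actual uniquestring() algorithm (its hash is not documented); the sha256-based suffix here is an assumption used purely to show the idea of a deterministic, name-derived suffix:

```python
import hashlib

def storage_account_name(resource_group: str) -> str:
    # Deterministic 3-character suffix derived from the resource group name.
    # Stand-in for ARM's uniquestring(); the real hash algorithm differs.
    suffix = hashlib.sha256(resource_group.encode("utf-8")).hexdigest()[:3]
    # Mirrors: concat(resourceGroup().name, 'storage', substring(uniquestring(...), 0, 3))
    return resource_group + "storage" + suffix

print(storage_account_name("mygroup"))
```

Because the suffix is derived from the group name rather than generated randomly, re-running a deployment computes the same account name instead of creating a new account each time.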

The rest of the variables are set here because they do not need to change from deployment to deployment. This includes the address prefixes for the net and subnet, the storage account type, the subnet name, and whether the public IP address is dynamic or static.

Now that we have parameters and variables they can be referenced in the remainder of the template by doing this:

"[parameters('paramName')]"

or

"[variables('variableName')]"

Notice how they are enclosed in brackets. This tells the template processor to treat these as functions rather than literals. One item of note is that if you are using Visual Studio 2015 you will get syntax completion in the template and parameter json files.

Now we are at the meat of the template file – the creation of resources. Each resource looks similar to this (for the Vnet):

{
      "name": "[variables('virtualNetworkName')]",
      "type": "Microsoft.Network/virtualNetworks",
      "apiVersion": "2016-09-01",
      "location": "[parameters('location')]",
      "properties": {
        "addressSpace": {
          "addressPrefixes": [
            "[variables('addressPrefix')]"
          ]
        },
        "subnets": [
          {
            "name": "[variables('subnetName')]",
            "properties": {
              "addressPrefix": "[variables('subnetPrefix')]"
            }
          }
        ]
      }
    },

Each resource has a name and a type. In the case above we are using the virtualNetworkName we created in the variables section for the resource of type: Microsoft.Network/virtualNetworks. The location comes from our parameters, and the rest of the required information comes from the variables we created. Every resource has an associated apiVersion that tells which version of the API the resource is available in. I tend to leave these as generated from the Portal.

It is important that some resources are created before others. To enforce this, use the dependsOn section in the resource. For example, before we can create the VM, we need the network interface and the storage account. Using dependsOn tells the deployment not to create the VM until both the network interface and the storage account have been created:

"dependsOn": [
      "[concat('Microsoft.Network/networkInterfaces/', variables('networkInterfaceName'))]",
      "[concat('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]"
    ],

The New-AzureRmResourceGroupDeployment cmdlet will not try to create resources that already exist. Thus, you can run the script again without worrying that it will create copies of existing resources. This means you can run the script twice with two different parameter files describing two different VMs, and the associated storage account and vnet will be created only once.

I hope this article has been useful in demystifying Azure ARM Templates. Automated and scripted deployments are critical for creating cloud infrastructure quickly and in a repeatable fashion. ARM templates are a useful way of describing your infrastructure as code.

A great resource with many examples of ARM templates is the Azure Quickstart Templates project on GitHub: https://github.com/Azure/azure-quickstart-templates. This is the place to go to see how to create various resource scenarios with ARM templates.

If you are interested in creating multiples of the same resource in a loop, see Harry Chen’s article ARM template enforcing script execution order and timing in a loop.

Again, the files used in this article can be found in GitHub here: https://github.com/ssemyan/BasicAzureLinuxVmArmTemplate

MSDN Blogs: Experiencing Issues with changing Billing plans in Azure Portal – 11/11 – Resolved

Final Update: Friday, 11 November 2016 22:25 UTC

This is a retroactive notification about an issue with the Billing Service feature in Application Insights. Some customers would have experienced issues changing their billing plans for Application Insights during a brief 20-minute period starting at 11/11 22:00 UTC. We’ve confirmed that all systems are back to normal, with no ongoing customer impact, as of 11/11 22:20 UTC.
  • Root Cause: The failure was due to a faulty change in one of our back end systems.
  • Incident Timeline: 20 minutes – 11/11, 22:00 UTC through 11/11, 22:20 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.


-Arun

MSDN Blogs: Application Insights – Advisory 11/12

Between 11/09 and 11/14 7:00 PM UTC, some customers will experience failures when creating Application Insights components in the Central US region while we work on updates to one of our back-end services. The issue should be resolved by 11/14 ~9:00 PM UTC.

Workaround: Customers should not create Application Insights components in the Central US region until the issue is resolved.

We apologize for any inconvenience.

-Sapna

MSDN Blogs: Webinars: Web and [Mobile] API back-ends on Azure

   
    
   
    
 Western Europe Webinar series:

Web and [Mobile] API back-ends on Azure

 

 

In English

    
 The series “API and App back-ends on Azure” consists of three webcasts that introduce you to Azure App and API Services. These are platform services that let developers focus on code while the platform takes care of hosting, deploying, scaling and securing it for you. While we will show a few slides, we spend most of the time in demo mode. Whether you are on Windows or Mac, developing in .NET, Node.js, PHP or Java, this series gives you a good overview of your options while staying platform agnostic.
 November 9, 2016 11:00 – 12:00

Session 1: Modern APIs and Apps Platform on Azure

 

(Recording available)

 
 APIs are the engine of innovation today. Taking an API-first approach, we’ll dive into the benefits of leveraging Azure App Service for hosting our (mobile) API, which will back our apps. During the session we’ll go from creating a simple API to having it hosted on Azure. You’ll learn how much this platform offers out of the box to help you accelerate your (mobile) development and hosting.
 November 23, 2016 11:00 – 12:00

Session 2: DevOps with App Service

(Register here)

 
 Continuing from our first episode, we’ll dive further into the features that Azure App Service provides for easing development and operations. We look at continuous integration, rolling upgrades, A/B testing and performance monitoring. Options for deployment and release management are also addressed.
 December 7, 2016 11:00 – 12:00

Session 3: API Management

 

(Register here)

 
 Having addressed App Service for the hosting of our Web, Mobile Apps and APIs, manageability and integration of these APIs become a key element for supporting faster innovation. In this session we’ll look at options for exposing, securing, managing and sharing our APIs with Mobile developers, partners and internal users. 

MSDN Blogs: Updates to the Azure Gallery Image… – kind of…


The Microsoft Dynamics NAV 2017 Image on Azure has been live for a few weeks and I have received a lot of feedback that it is very hard to setup the new things in NAV, like:

  • Outlook Add-in
  • Excel Add-in
  • Embedded PowerBI
  • Microsoft Flow
  • PowerApps
  • Azure services
  • etc.

Indeed, you are right. It is very hard, especially when you are used to just running a script and having everything done for you. But if you think back, it is not that different from when you had to create provider-hosted SharePoint apps in Visual Studio and set up self-signed certificates that didn’t work with anything but Windows, etc.

Neither I nor Microsoft ever promised that the scripts would include every piece of new functionality exposed in NAV. I am trying to, but it is hard to ship something a few days after a product release and have everything ready by then. It was my intention to fix these things in the upcoming Cumulative Updates, but based on the feedback and on how much time people have spent trying to set things up, I have made an exception.

No new Image on Azure!

There is NOT going to be a new image on Azure before CU1 (which comes in December); I cannot do that. The image follows the releases of NAV, and that is how it needs to be. So if you are creating your demo environment manually using the classic portal or the new portal, you will have to wait until December to get help.

But…

If you use http://aka.ms/navdemodeploy to deploy your demo environment, the script will automatically do the “extra” work needed to make some of these things function: it will overwrite some of the DEMO scripts with a newer version during deployment and then run the scripts as normal.

So what has been fixed

Here’s a list of the things that the new navdemodeploy script fixes:

PublicWebBaseUrl

In customsettings.config, there is a setting called PublicWebBaseUrl, which is the Url pointing to the Web Client. In NAV 2016 (and 2017RM) this points to the Web Client which is using NAV User/Password authentication. It is now being changed to point to the Web Client using Azure Active Directory (Office 365) authentication when you run the O365 Integration Script.

This fix will make the Outlook Add-in work and is a foundation for some of the other things as well.

App Registration for PowerBI

When you want to setup embedded PowerBI, you will be met by this dialog.

azureadsetup

In the text, you will find a link to the Documentation, which looks like this

azureadapphelp

and if you follow that, you will almost be there.

You will however need to give some services access to your App also (step 8½).

  • In the API Access section there is a Required Permissions, click that
    • Add delegated permissions to view all reports to the Power BI Service
    • Add delegated permissions to Sign In and read user profile to Windows Azure Active Directory

That should do it.

Note that you need to do this logged into https://portal.azure.com as your Office 365 administrator, which probably isn’t the owner of the subscription in which your NAV VM is running.

In the new O365 Integration script, the app registration happens automatically and you should not have to go through this wizard.

Excel add-in, PowerApps, Flow, Azure Services

I am working on getting the setup for these things included in the script as well and I will update this blog post as soon as I have done so, just wanted to unblock people who have run into the above issues.

Sync-NavTenant

A few people have reported that after running the O365 integration, NAV parts wouldn’t show. After starting a Management Shell and running Sync-NavTenant -ServerInstance NAV -Tenant default -Force, the problem disappeared. This problem is also fixed in the new scripts.

One Important Note

When deploying the new image, the new portal will say Succeeded BEFORE it is actually done running all the demo scripts. The DEMO scripts are launched after the machine is up and running, and they now include a number of reboots.

So please, give it ~10 minutes after the portal tells you that it is done.

Some more cool stuff that you get for free

Performance counters from NAV and from the machine are now available in the new portal:

perfcounters

Meaning that you can monitor your VM right here from the portal.

Use trusted certificate

Three new parameters have been added to the template, which allow you to use a trusted certificate instead of having the DEMO scripts create a self-signed certificate: Certificate Pfx Url, Certificate Pfx Password and Public Machine Name:

parameters

Note that if you are using a trusted certificate, you have to create a CNAME record in DNS yourself to match the Public Machine Name.

Leave options empty if you don’t use them.

 

Enjoy

Freddy Kristiansen
Technical Evangelist

MSDN Blogs: Azure Migration from ASM to ARM


Now that ARM (aka Azure v2.0) is quite well established, people are starting to ask how they migrate their resources from the old ASM world to the new portal.

Up to now that has been done mainly using custom PowerShell scripting. Other tools may have existed, but like PowerShell they just called the public API to move resources. Some support has also been added to the new portal to help with this.

There is now a new tool called migAz which is designed to help with the migration process.

You can download the migAz tool from http://aka.ms/migaz

To help you learn, there are also two new videos available which walk you through the process.

Migrate Azure IaaS Solutions from ASM to ARM Using migAz, Part 1

 

Migrate Azure IaaS Solutions from ASM to ARM Using migAz, Part 2

MSDN Blogs: SPN configurations for Kerberos Authentication – A quick reference


Many people consider configuring Kerberos authentication and making it work a daunting task. One of the reasons for this is the number of different configuration elements involved in the process.

One such important configuration is setting the appropriate SPN.

There are three important elements that need to be considered while setting the Kerberos SPN for our application. I have listed them below:

1. How will users browse the application?

Users can browse the application with the machine name or with a custom domain name. In most cases, browsing with the machine name does not need an SPN registered, but there are a few exceptions.

2. The application pool identity

3. How the server is going to decrypt the Kerberos token forwarded by the client.

There are two important properties under

             system.webServer/security/authentication/windowsAuthentication

  • useAppPoolCredentials: When useAppPoolCredentials is set to true, the server will decrypt the Kerberos traffic using the application pool identity.
  • useKernelMode: When useKernelMode is set to true, the server will decrypt the Kerberos traffic using the machine account.
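For reference, a minimal sketch of that configuration section (the attribute values here are illustrative, not taken from the original post) might look like this in applicationHost.config or web.config:

```xml
<system.webServer>
  <security>
    <authentication>
      <!-- useKernelMode="true": the server decrypts Kerberos tickets with the machine account.
           useAppPoolCredentials="true": the server decrypts with the application pool identity instead. -->
      <windowsAuthentication enabled="true"
                             useKernelMode="true"
                             useAppPoolCredentials="false" />
    </authentication>
  </security>
</system.webServer>
```

When the application pool runs as a custom domain account behind a custom host name, setting useAppPoolCredentials to true is typically what makes the SPN registered on that account take effect.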

Even with a good understanding of the Kerberos workflow and the above-mentioned elements, people sometimes get confused about which SPN to set.

I have included the table below, which can serve as a quick reference for the SPNs needed for different combinations of host name and application pool identity.

| URL | Use Kernel Mode | Use App Pool Credentials | Application pool identity | SPN requirement |
|---|---|---|---|---|
| Browsed with machine name | True | False | Machine account | No additional SPNs needed; the HOST SPN is sufficient |
| Browsed with machine name | True | False | Custom domain account | No additional SPNs needed; the HOST SPN is sufficient |
| Browsed with machine name | False | True | Machine account | No additional SPNs needed; the HOST SPN is sufficient |
| Browsed with machine name | False | True | Custom domain account | setspn -a HTTP/<machine name> <custom account name> |
| Browsed with machine name | True | True | Machine account | No additional SPNs needed; the HOST SPN is sufficient |
| Browsed with machine name | True | True | Custom domain account | setspn -a HTTP/<machine name> <custom account name> |
| Browsed with machine name | False | False | Machine account | No additional SPNs needed; the HOST SPN is sufficient |
| Browsed with machine name | False | False | Custom domain account | setspn -a HTTP/<machine name> <custom account name> |
| Browsed with custom host name | True | False | Machine account | setspn -a HTTP/<custom host name> <machine name> |
| Browsed with custom host name | True | False | Custom domain account | setspn -a HTTP/<custom host name> <machine name> |
| Browsed with custom host name | False | True | Machine account | setspn -a HTTP/<custom host name> <machine name> |
| Browsed with custom host name | False | True | Custom domain account | setspn -a HTTP/<custom host name> <custom account name> |
| Browsed with custom host name | True | True | Machine account | setspn -a HTTP/<custom host name> <machine name> |
| Browsed with custom host name | True | True | Custom domain account | setspn -a HTTP/<custom host name> <custom account name> |
| Browsed with custom host name | False | False | Machine account | setspn -a HTTP/<custom host name> <machine name> |
| Browsed with custom host name | False | False | Custom domain account | setspn -a HTTP/<custom host name> <custom account name> |
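As a concrete, hypothetical example of the last combination listed: if the site is browsed as app.contoso.com and the application pool runs as the custom domain account CONTOSO\svcapp (both names invented for illustration), the commands would be:

```powershell
# Register the HTTP SPN on the custom account (run with domain admin rights)
setspn -a HTTP/app.contoso.com CONTOSO\svcapp

# List the SPNs registered for the account to verify the registration
setspn -l CONTOSO\svcapp
```

Remember that an SPN must be registered on only one account; duplicate SPNs will break Kerberos authentication.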

 

Hope this helps.

MSDN Blogs: An investigation into why standard UI wasn’t raising a UIA event needed by Narrator


This post describes the approach taken when investigating why the Narrator screen reader was not reacting as expected to a change in state of some UI. While in this case the UI was a Win32 checkbox, some of the steps described could be useful when investigating Narrator behavior regardless of the control type being used.

 

Introduction

A few weeks ago I was told that the Narrator screen reader wasn’t reacting as expected when a checkbox in a particular UI was toggled. The expected behavior is that when Narrator’s cursor is located at a checkbox, and the toggle state of the checkbox changes, Narrator should make an announcement related to the toggle state change. Instead in the case of this particular UI, Narrator said nothing.

The following sections describe the investigations that took place in order to solve this mystery.

 

The Investigation

1. Reproducing the problem

One of the first things I like to do in a situation like this is reproduce the problem on my own device. Once I can do that, I can relax in the knowledge that I can spend a while pointing SDK tools to the UI, and not feel pressured that I may lose access to the device which reproduces the problem.

By reproducing the problem myself, I can also be sure that there are no required steps or software/hardware configuration that I hadn’t been made aware of. For example, when someone says they “clicked” a button, it might be critical exactly which input mechanism was used. Did they use a mouse, touch, keyboard, or invoke it programmatically through Narrator touch input or speech input?

Sure enough, I ran the feature on my own device and pointed Narrator to the UI checkbox of interest, and Narrator said nothing as I changed the toggle state of the checkbox. So it repro’d fine for me.

 

2. Considering how much of a problem this is to the customer

Having repro’d the problem on my device, I then took some time to consider how severe this problem is to the customer. After all, we all have a long list of things that we need to work on, and we need to feel confident that we’re working on the things that have most impact to our customers.

As a test, having changed the toggle state of the checkbox, I pressed Tab and Shift+Tab to move Narrator away from, and back to the checkbox. When Narrator returned to the checkbox, the current toggle state of the checkbox was announced as expected. Similarly when the Narrator cursor was on the checkbox, I could press the CapsLock+D keyboard shortcut to have Narrator announce details of the UI, and again the current toggle state of the checkbox was announced.

This means that there is a way for the customer who’s using Narrator to learn of the toggle state of the checkbox, despite an announcement not being made at the moment when the toggle state changes. Maybe that’s ok?

By default, an issue like that is not ok.

A sighted customer who changes the toggle state of the checkbox is informed immediately through visual feedback that the state has changed. That customer doesn’t have to take additional action such as tabbing or clicking away from the checkbox and then back to it, before being informed that their action at the checkbox was successful. So why would it be ok for a customer who’s using Narrator to have to take such action?

Perhaps if after further investigation, we discover that for some unexpected reason it would actually be a huge pile of work to fix the Narrator experience at this particular checkbox, and so further discussions around the impact of the issue might be justified. But until we have more information, we assume that we will be fixing this.

 

3. Is this checkbox really the issue?

Ok, so I know at this point that I can repro the problem with Narrator at the checkbox on my device. But what if the problem isn’t the checkbox, but rather Narrator? Or perhaps something between the two? For example, the UI framework associated with the checkbox, or maybe Windows UI Automation (UIA).

Actually, on a side note, in general for an issue like this, UIA is rarely a part of the problem. UIA channels the data associated with the checkbox UI from the “provider”, (in this case whatever exe is hosting the checkbox,) to the “client”, (in this case Narrator,) and typically works exactly as expected. So the other three areas, the control, the UI framework, and Narrator, are always where I’m initially most interested.

So in order to get a feel for whether the issue really seems to lie with the checkbox in question, I want to point Narrator to another “similar” checkbox. Maybe I’ll find that all such checkboxes exhibit the unexpected behavior.

The question of what a “similar” checkbox is here is really important. Often a big proportion of the accessibility of a control is provided by the UI framework being used to present the control. This UI framework might be XAML, Edge, Win32, or something else. Involving multiple UI frameworks in my investigation at this point would complicate things unnecessarily. Instead, let’s just limit things to one UI framework for the moment.

I had a pretty good idea what UI framework was being used for the checkbox of interest, but I never like to make assumptions which could result in me losing time if I assumed wrong. So I pointed the Inspect SDK tool at the checkbox, and took a look at the FrameworkId and ProviderDescription properties. The FrameworkId was reported to be “Win32”, and the very complicated ProviderDescription string contained the text “MSAA Proxy”. From this, I could tell that the checkbox was built with Win32.

A few comments on the ProviderDescription property associated with UI built with other UI frameworks are in the section “A tip on learning where UIA data is coming from” of Steps for customizing the accessible name of a standard control, for five UI frameworks.

Having established that the checkbox was built with Win32, I then looked for another Win32 checkbox on my device. I picked one from the classic Mouse Properties control panel, and verified using the Inspect SDK tool that its FrameworkId and ProviderDescription were similar to that of the original checkbox.

I then pointed Narrator at the control panel’s checkbox and toggled the state of the checkbox. When I did that, Narrator announced the toggle state change just fine. While that doesn’t prove that the issue I’m investigating lies with the original checkbox rather than the UI framework or Narrator, it does make me feel that it’s appropriate for me to continue focusing for the time being on the original checkbox itself.

 

InspectFrameworkHighlighted

Figure 1: The Inspect SDK tool reporting the FrameworkId property of a checkbox in the classic Mouse Properties control panel.

 

Bonus: Check out some of the helpful properties in the classic Mouse Properties control panel. Settings such as “Display pointer trails” and “Show location of pointer when I press CTRL key” can really help draw your attention to where the mouse cursor is.

 

4. Is the checkbox letting Narrator know that something needs to be announced?

When something changes in UI, Narrator needs to be informed of the change. Otherwise Narrator can’t react by making a helpful announcement to the customer. Narrator’s not going to constantly poll for changes in the UI, rather it will react to UIA events being raised by the UI. One of the most fundamental events is the FocusChanged event. This is raised whenever keyboard focus moves from one element to another, and so Narrator can react to that event to let the customer know what element has gained keyboard focus.

So the next question is, when the toggle state of the checkbox of interest changes, is a UIA event raised to let Narrator know about the change?

In order to learn the answer to that, we can run the AccEvent SDK tool. This tool is a UIA client app, (like Narrator is,) and can report what UIA events are being raised by UI. In this case, we’re interested in an event that lets us know that the ToggleState property, (from the UIA Toggle pattern,) has changed.

Note: UIA Control pattern property ids often involve a concatenation of the pattern name followed by the property name, so that can result in some duplication in the ids’ names. For example, the id of the ExpandCollapseState property from the ExpandCollapse pattern is “UIA_ExpandCollapseExpandCollapseStatePropertyId”. Technically that makes sense, but it can seem pretty weird when you first encounter it.

The id of the property we’re interested in with the original checkbox is the UIA_ToggleToggleStatePropertyId, because it’s the ToggleState property from the Toggle pattern. So I ran the AccEvent SDK tool, and set it up to report all ToggleToggleState property changes. I also set it up to report FocusChanged events, given how much impact that event has on the Narrator experience, and I could verify that FocusChanged events were being raised as expected when I tab to and away from the checkbox.

 

AccEventSetup

Figure 2: The AccEvent SDK tool’s setup window, showing that FocusChanged events and ToggleToggleState property changed events will be reported.

 

With the AccEvent SDK tool now reporting the events I’m interested in, I return to the checkbox, tab to it, and press the spacebar a few times to see what events are raised.

Crucially, as I toggle the state of the checkbox with the keyboard, the visuals on the screen change to reflect the new state of the checkbox, but no UIA ToggleToggleState property changed events are raised. This explains why Narrator doesn’t react to the change in toggle state at this checkbox.

 

5. Compare the AccEvent SDK tool output with a working checkbox

Ok, based on what I just did, I now believe that the checkbox of interest isn’t raising the UIA ToggleToggleState property changed event. But to be really thorough, I want to check what happens if I repeat the AccEvent test with a checkbox where Narrator works as expected. After all, maybe I botched the AccEvent setup, and it wasn’t really set up to report the event I’m interested in.

So I pointed the AccEvent SDK tool to the checkbox in the classic Mouse Properties control panel. When I then changed the toggle state of that checkbox, AccEvent did report the ToggleToggleState property changed events. This explains why Narrator does react to the change in toggle state at this checkbox.

 

AccEventShowToggle

Figure 3: The AccEvent SDK tool reporting ToggleToggleState property changed events being raised by a checkbox.

 

6. Comparing the original checkbox with a working checkbox

Once we reach a point where we find one control works as expected with Narrator, and another control doesn’t, it’s always interesting to try to identify some difference between the controls which might relate to the different Narrator experiences. We already know the two checkboxes that we’ve examined so far are both Win32 controls, so what else could be different?

So I next pointed the Inspect SDK tool to the two checkboxes, to see if anything in the long list of UIA properties seemed different. I found no interesting differences. Many important properties, such as those listed below, were identical across the two controls.

 

ControlType: UIA_CheckBoxControlTypeId

IsEnabled: true

IsOffscreen: false

IsKeyboardFocusable: true

HasKeyboardFocus: true

FrameworkId: “Win32”

ClassName: “Button”

IsTogglePatternAvailable: true

 

This was a little surprising to me, as often there’s some difference found in the list of properties when exploring why Narrator behaves differently at two controls.

Given that the Win32 checkbox has its own hwnd, I then decided to run the Spy++ tool available through Visual Studio’s Tools menu. Who knows, maybe there’d be some difference in the window styles which might be related. Overall the checkboxes’ styles were mostly the same. I did notice one had BS_CHECKBOX while the other had BS_AUTOCHECKBOX, and wondered if that might be related to the difference in the UIA events raised by the checkboxes. But that seemed really unlikely, otherwise Narrator would be hitting this problem more frequently than it apparently is.

 

7. Experimenting with my own checkbox

At this point, I felt I needed to experiment with my own brand new checkbox, to see if I could break it in the same way that the checkbox of interest is apparently broken. It seemed unlikely that simply comparing further the two checkboxes that I’ve been working with so far, would help me figure out the problem. But by creating my own checkbox, I could modify the checkbox button styles in the resource file however I wanted, and see if I could eventually replicate the problem with the Narrator experience.

And to be clear, this isn’t a Narrator problem as such. What I’m really trying to do is get to a point where my own checkbox stops raising the UIA ToggleToggleState property change event when I toggle it.

Note: The step of creating the simplest standalone app which repro’s a problem is something that I expect to have to do periodically. If I’ve spent months updating an app, and then find something I leverage from another team doesn’t work as expected in my app, it’s not helpful if I go to the other team and say “Your stuff doesn’t work”. It’s far more helpful if I create a tiny standalone app, and repro the interaction issue with the other team’s feature with that tiny app. The other team can then more efficiently investigate the issue without wondering how the months of updates I put in my original app might be related to the issue.

Whenever I need to create the simplest app I can, of any UI framework, I go to Visual Studio, and do New->Project, and pick the framework I’m interested in. In this case I picked “Visual C++”, “Win32 Project”. I then modified the Help dialog box in my app, such that it showed a checkbox.

 

A dialog box showing a test checkbox.

Figure 4: A dialog box showing a test checkbox.

 

Now that I have my own experimental checkbox, I need to verify that the UIA event of interest is raised as expected when I toggle the state of the checkbox. Once I’ve done that verification, I can then start working on incremental changes to see if I can break the checkbox. To my great surprise when I pointed the AccEvent SDK tool at my new checkbox, I found that no ToggleToggleState property changed event was raised when I toggled the checkbox. So my brand new shiny checkbox was broken from the start, and any customer using Narrator wouldn’t be notified when they change the state of the checkbox in the app.

This was most unexpected.

After this discovery, I was no longer focusing on the original problem checkbox. Instead I’d focus on my own checkbox, (on which I’d done no customization,) which wasn’t raising the expected UIA event, and on the checkbox in the classic Mouse Properties control panel, which did raise the event.

My first step was to make sure the button styles, (such as BS_AUTOCHECKBOX,) were identical between the two checkboxes. Yet nothing I could do with the button styles got my checkbox to raise the event I needed.

My next thought was perhaps the flags passed into the InitCommonControlsEx() function might be different between the apps hosting the two checkboxes. After all, a Win32 checkbox is a common control. It turned out that my new app wasn’t making a call to InitCommonControlsEx(), so I added the call, and tested it with a variety of flags. Still I couldn’t get the required UIA event to be raised.

 

8. The solution

There was only one more thing I could think of trying now. When using Win32 common controls, it’s possible to specify which version of the common controls you want to use. Whenever I specify that I want a particular version used, I always say I want version “6”. I’ve never looked into the full set of differences between versions of the Win32 common controls, but no doubt Common Control Versions contains some very interesting information.

I took the standard steps for adding a manifest to my Win32 app, and updated it with the XML shown below, (copied from Enabling Visual Styles).

 

<dependency>
    <dependentAssembly>
        <assemblyIdentity
            type="win32"
            name="Microsoft.Windows.Common-Controls"
            version="6.0.0.0"
            processorArchitecture="*"
            publicKeyToken="6595b64144ccf1df"
            language="*"
        />
    </dependentAssembly>
</dependency>

 

 

When I then ran the updated app, and set up the AccEvent SDK tool again to report events being raised, I found the ToggleToggleState property changed event was now raised as required. And having verified that the event was being raised, I pointed Narrator to the checkbox, and found Narrator now did announce the change in toggle state as I interacted with the checkbox.

This meant that I could get back to the team who owned the UI containing the original problematic checkbox, verify that they weren’t using version 6 of the Win32 common controls, and let them know that if they update their UI to use version 6, it’s very likely that Narrator will announce changes to the checkbox toggle state as required. And it so happens, they did just that and all was well.

 

9. Why the solution works

This investigation had been an interesting journey for me, and I was curious as to why the version of the common controls made a difference to which UIA events were raised by the UI.

The Win32 common controls do not natively support the Windows UI Automation API. (More recently introduced UI frameworks such as XAML and Edge do natively support the API.) So the question is, how can a UIA client app like Narrator interact with the Win32 common controls? In general, this is made possible by UIA itself. UIA will recognize that a Win32 checkbox doesn’t natively support the UIA API, and will interact with the UI through other means, including the use of WinEvents. WinEvents are an old technology available only on the Windows desktop platform (so not on Windows Phone, Xbox or HoloLens). Modern features can leverage UIA’s rich set of events supported on many platforms, rather than using WinEvents, but in some cases when UIA’s APIs for raising events aren’t being called by UI, UIA can react to WinEvent-related functions instead, and in response will go on to call UIA clients’ event handlers.

However, this isn’t practical in the case of a Win32 checkbox being toggled. When a Win32 checkbox is toggled, the checkbox raises an event to say something changed, but isn’t specific about what changed. This means it isn’t practical for UIA to make a one-to-one mapping to a UIA event like the ToggleToggleState property changed event. Because of this, long ago the checkbox common control was updated to explicitly raise a UIA ToggleToggleState property changed event. That update is included in version 6 of the common controls, and so any checkbox built with an earlier version of the Win32 common controls will not raise the event that the customer using Narrator needs when interacting with the checkbox.

 

10. How to raise the required event yourself

Say for some reason it wasn’t practical for the UI with the original problematic checkbox to be changed to use version 6 of the common controls. Our goal of delivering a high quality Narrator experience with this UI still stands, so what else can we do to fix the problem?

The core problem here is that the expected UIA event isn’t being raised. So if the UI framework that we’re using won’t do it for us, can we do it ourselves? With this in mind, I returned to my experimental new app, and tried to raise the UIA event myself whenever the state of the checkbox changed. Before trying this, I needed to make sure that the app was in its original broken state, and I use AccEvent to verify that the required event wasn’t being raised as I toggled the state of the checkbox.

 

The first change I made was to include the main UIA header file in the app’s existing stdafx.h.

 

#include <objbase.h>
#include <UIAutomation.h>

 

 

I then added the following to the existing WM_COMMAND handler for the About dialog box in the app.

 

case WM_COMMAND:

    if ((HIWORD(wParam) == BN_CLICKED) &&
        (LOWORD(wParam) == IDC_MYCHECKBOX))
    {
        NotifyWinEvent(
            UIA_ToggleToggleStatePropertyId,
            (HWND)lParam,
            OBJID_CLIENT,
            CHILDID_SELF);

        return 0;
    }

 

I have to say, it really is pretty handy that I can pass the required UIA property id into the NotifyWinEvent() function, even though that function was introduced long before UIA existed. Note that I’m not including any details in the call about the new toggle state of the checkbox. Instead I’m just raising an event to say the toggle state has changed, and UIA will react by raising the required UIA event and will include the new state of the checkbox in the call that ultimately gets made to the client’s property changed event handler.

Note for XAML devs: A standard XAML Checkbox control will automatically raise ToggleToggleState events as your customers interact with your checkbox. If for some reason you’ve built custom UI which the user can toggle, but which doesn’t raise the event by default, you’d get an AutomationPeer for the control, (perhaps using FrameworkElementAutomationPeer.FromElement), and then call RaisePropertyChangedEvent(). In general it’s preferable to base any custom UI which can be toggled, on a XAML control which automatically raises all related UIA events for you.
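To make that XAML note concrete, here is a hedged C# sketch (UWP namespaces assumed; myCustomToggle and the state values are placeholders, not from the original post):

```csharp
using Windows.UI.Xaml.Automation;
using Windows.UI.Xaml.Automation.Peers;

// Raise the UIA ToggleState property changed event for a custom toggleable
// element. "myCustomToggle" is a hypothetical FrameworkElement in your UI.
AutomationPeer peer = FrameworkElementAutomationPeer.FromElement(myCustomToggle);
if (peer != null)
{
    // Pass the old and new values of the state transition that just occurred.
    peer.RaisePropertyChangedEvent(
        TogglePatternIdentifiers.ToggleStateProperty,
        ToggleState.Off,
        ToggleState.On);
}
```

As the note says, prefer basing custom toggleable UI on a standard XAML control that raises these events for you.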

 

Summary

If Narrator doesn’t announce a change in your UI as expected, consider the steps listed below during your investigation.

– Make sure you can reproduce the problem yourself.

– Compare the Narrator experience at other similar UI.

– Examine the UIA events raised by the UI where the problem occurs.

– Examine the UIA events raised by similar UI where the problem does not occur.

– Use the Inspect SDK tool to compare the UIA properties of UI where the problem does occur, with UI where it does not.

– Build a minimal UI of the same UI framework, and try to modify it to either fail or not fail in a similar way to the UIs you’ve examined so far.

 

In some cases by running through the above steps, an implementation detail might be revealed which when included in your own UI, will lead to the desired Narrator behavior.

And if you’re presenting Win32 checkboxes, use version 6 of the common controls if that’s practical. Otherwise you’ll want to raise the ToggleToggleState property changed event yourself in order to deliver a high quality experience to your customers who use Narrator.

Guy

MSDN Blogs: Quick tip on Service Fabric Remoting service development


Azure Service Fabric needs no introduction. It is our next-generation PaaS offering, also called PaaS v2. It has been used internally for many years, tested, and released as an SDK for consumption. Some well-known offerings such as Azure SQL Database, Azure DocumentDB and Skype run on Service Fabric. We already see the developer community using it in production, and we are hearing a lot of goodness.

It is free: anyone can download the SDK, then develop and run from their laptop or their own data center, or publish to Azure. It works on Windows and on Linux as well. It has many rich features compared to the previous PaaS offering (Cloud Services), so we are seeing a lot of traction from big companies considering it for critical applications.

Recently I had a chance to work on Service Fabric service remoting and hit some hiccups following our documentation, “Service remoting with Reliable Services”.

So I decided to break the high-level documentation down with screenshots for easier consumption. This sample is based on this example: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reliable-services-communication-remoting/

Service-side project settings: set the platform target to x64 if you want to use the reliable collections or reliable actors APIs. Failing to set this throws a binding exception such as the one below.

System.BadImageFormatException was unhandled
  FileName=Microsoft.ServiceFabric.Services, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
  FusionLog=Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\v4.0.30319\clr.dll
Running under executable D:\Cases_Code\remotingclienttest\bin\Debug\remotingclienttest.vshost.exe
— A detailed error log follows.

 

platform

 

service

 

For the client side (the calling method), I did not see the setup described in detail here: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reliable-services-communication-remoting/. I found that these 3 DLLs have to be referenced in the client-side project in order to consume the service. I simply copied them from the service-side sample’s packages folder to the calling project’s folder.

image

image

image

client
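To put the pieces together, the remoting pattern from the linked article looks roughly like this; the interface, method and application names below are hypothetical placeholders, not taken from the sample:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Shared interface assembly, referenced by both the service and the client.
public interface IHelloService : IService
{
    Task<string> SayHelloAsync(string name);
}

// Client side: ServiceProxy resolves the service address and
// forwards the call over the service's remoting listener.
public static class Caller
{
    public static async Task<string> CallAsync()
    {
        IHelloService proxy = ServiceProxy.Create<IHelloService>(
            new Uri("fabric:/MyApplication/HelloService"));
        return await proxy.SayHelloAsync("world");
    }
}
```

On the service side, the matching implementation returns a ServiceRemotingListener from CreateServiceReplicaListeners, as the linked article shows.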

MSDN Blogs: Top MSDN and TechNet Forum Contributors! November Week 2


Welcome back for another summary of movers and shakers over the week across the MSDN and TechNet forums!

This week, in honour of the Global MVP Summit that has just concluded, I have included a new section, for most helpful MVPs of the week!

 

The top five answerers in the TechNet forums

Let’s take a closer look at those five forum heroes…

Dave Patrick
Affiliations
Member Since
May 18, 2009
Biography
[Microsoft MVP] Windows

jrv
Affiliations
Website
Member Since
Apr 9, 2005

Jason Sandys
I work at
Affiliations
Website
Member Since
Jan 10, 2009
Contact
Biography

Carl Fan
Affiliations
Member Since
Mar 10, 2016

Hilary Cotter
Affiliations
Member Since
Mar 5, 2008
Contact

 

The top five answerers in the MSDN forums

Now, let’s also take a closer look at these…

Hilary Cotter
As above!

Dave Patrick
As above!

Lisa Chen1226
Affiliations
Member Since
Jul 31, 2014

croute1
I work at
A Midwest US company
Website
Member Since
May 14, 2013
Biography
Communication Specialist-
Emmy Award Winner

Rachit Sikroria
I work at
Tata Consultancy Services (TCS)
Affiliations
Website
Member Since
Dec 20, 2013
Contact
Biography
Rachit Sikroria is a Microsoft MVP. He is working as BizTalk Server Consultant in Gurgaon, India. He has nearly 6 years of experience with EAI technologies. He is specialized in Microsoft BizTalk Server 2006 – 2016, Business processes (EAI / B2B & BPM), Service Oriented Systems, Microsoft .NET architecture & software development. He has worked in many exciting industries such as Commercial Aviation, Banking, Finance & Energy. He enjoys supporting the BizTalk Community through a continued participation on the MSDN/TechNet Forums.

This week, TechNet beats MSDN by 306 answers!

 

The most recent and active moderators in the TechNet forums

Here’s some more about these most helpful moderators…

Ty Glander
I work at
Insight Global
Member Since
Dec 20, 2005
Contact

Darren Gosbell
Affiliations
Website
Member Since
Sep 19, 2008
Contact

Rich Lewis
Member Since
Nov 20, 2015

Ed Crowley
I work at
Convergent Computing
Affiliations
Website
Member Since
Aug 28, 2008
Biography
BS EECS University of California, Berkeley ’80 –MBA Haas School of Business, U.C. Berkeley ’89 —
Over 30 years in computing and IT —
MVP Exchange since 1997

Alvwan
Affiliations
Member Since
Aug 8, 2014

Eva Seydl
Affiliations
Member Since
Mar 17, 2015

Pierre Audonnet [MSFT]
I work at
Microsoft
Affiliations
Website
Member Since
Jun 17, 2008

Mike Jacquet
I work at
Microsoft Corp.
Affiliations
Member Since
May 14, 2010

Rachit Sikroria
As above!

 

The most recent moderators in the MSDN forums

And as before, some more info on these mega-moderators…

Cenkd
Member Since
Mar 9, 2012

Darren Gosbell
As above!

Brian Catlin [MVP]
I work at
Azius
Affiliations
Website
Member Since
Dec 31, 2008
Biography

Windows internals, device drivers, security, and forensics consulting and training (www.azius.com)

IoTGirl
I work at
HCL America
Affiliations
Website
Member Since
Dec 30, 2008
Biography
I have over 15 years experience in Windows Based devices

Obaid Farooqi
Affiliations
Member Since
Sep 12, 2008

Bruce Johnston – MSFT
I work at
Microsoft
Website
Member Since
Jul 25, 2014

Rachit Sikroria
As above!

Aaron Stebner
I work at
Microsoft
Affiliations
Website
Member Since
Apr 22, 2005

Kareninstructor
Affiliations
Member Since
Oct 1, 2008
Contact
Biography
Develop with VS2013/VS2015, C#. Moderator under VB.NET, C# and Windows Forms Data Controls and Databinding forums and moderator at vbforums entire site. Currently our company is developing in C#, Entity Framework, SQL-Server and Web-API. Stack Overflow profile under Karen Payne. Site moderator for VB Forums.

 

The top five MVP answerers in the TechNet forums

Here’s some more about these most valuable answerers…

Dave Patrick
Affiliations
Member Since
May 18, 2009
Biography
[Microsoft MVP] Windows

Jason Sandys
I work at
Affiliations
Website
Member Since
Jan 10, 2009
Contact
Biography

Hilary Cotter
Affiliations
Member Since
Mar 5, 2008
Contact

Marcin Policht
Affiliations
Member Since
Apr 6, 2009

Rachit Sikroria
As above!

 

The top five MVP answerers in the MSDN forums

And finally, some more info on these most valuable answerers…

Hilary Cotter
Affiliations
Member Since
Mar 5, 2008
Contact

Dave Patrick
Affiliations
Member Since
May 18, 2009
Biography
[Microsoft MVP] Windows

Rachit Sikroria
I work at
Tata Consultancy Services (TCS)
Affiliations
Website
Member Since
Dec 20, 2013
Contact
Biography
Rachit Sikroria is a Microsoft MVP. He is working as BizTalk Server Consultant in Gurgaon, India. He has nearly 6 years of experience with EAI technologies. He is specialized in Microsoft BizTalk Server 2006 – 2016, Business processes (EAI / B2B & BPM), Service Oriented Systems, Microsoft .NET architecture & software development. He has worked in many exciting industries such as Commercial Aviation, Banking, Finance & Energy. He enjoys supporting the BizTalk Community through a continued participation on the MSDN/TechNet Forums.

Magnus (MM8)
Affiliations
Website
Member Since
Mar 10, 2009

Kareninstructor
Affiliations
Member Since
Oct 1, 2008
Contact
 
Biography
Develop with VS2013/VS2015, C#. Moderator under VB.NET, C# and Windows Forms Data Controls and Databinding forums and moderator at vbforums entire site. Currently our company is developing in C#, Entity Framework, SQL-Server and Web-API. Stack Overflow profile under Karen Payne. Site moderator for VB Forums.

 


 

Congratulations to all our weekly winners, and thanks to YOU for reading!

MSDN Blogs: Containers in Enterprise, Part 2 : DevOps


Now that we’ve looked at container basics and how they help solve some of the bottlenecks in the enterprise, let’s change gears and move into the next section.

During local development, you do many things manually (docker build and run commands). In production scenarios, though, you’d want to automate most of those things. This is where DevOps for containers helps. Below is what a typical container DevOps pipeline looks like.

containerdevops

It starts with local development. You can use any of the common IDEs, coupled with the Docker tools for your local development environment (Docker for Windows in our case). Locally, a dockerfile is all that is required.
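As an illustration, for an ASP.NET Core application of this era a minimal dockerfile might look roughly like the following; the base image tag, folder and assembly names are assumptions, so adjust them for your own app:

```dockerfile
# Base image with the .NET Core runtime (tag is illustrative).
FROM microsoft/dotnet:1.0.0-core

# Copy the published output into the image (path is illustrative).
COPY ./publish /app
WORKDIR /app

# The app listens on port 80 inside the container.
EXPOSE 80
ENTRYPOINT ["dotnet", "webapp.dll"]
```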

From this point onward, source control and CI systems take charge. A developer can check in code to any of their favorite source control repositories. All major CI/CD tools (TeamCity, Jenkins, VSTS, etc.) support connecting to these repositories. CI/CD tools can build/compile an application against an application development platform such as .NET, Java or Node.js, and the Docker tools for CI/CD work on top of the compiled application. We’ll see how to use them soon.

Once the Docker tools for CI/CD have played their part, the flow moves to Docker Hub or a Docker registry. Both act as container image stores. However, Docker Hub is an online registry managed by Docker; if you are averse to the idea of Docker being in charge of your enterprise container images, you can set up your own on-premises registry just for your enterprise.

With the container image you’ve got, you can then target any deployment target. Docker Hub is a good choice for any public cloud platform, while a private Docker registry is a good choice for any deployment inside your enterprise or private data center.

Let’s see this in action!

I am going to use VSTS in this example. However, the same results can be obtained with your favorite CI/CD system, such as TeamCity or Jenkins.

First things first…

I have set up the pre-requisites as below.

Docker Host in Azure: In the earlier example, I was running all docker commands locally. With CI/CD, however, I need a separate host to run docker commands. Think of it as a build server in the cloud. I provisioned the Docker host using the docker-machine utility, which helps create and configure machines with Docker tools. Combined with the Azure driver for docker-machine, it gives me a Docker host in Azure. With the Azure driver, you can only create Linux hosts at this stage, with Windows support on the way. I’ve also installed a VSTS build agent on this host so that VSTS can communicate with it. You configure this as a service endpoint in VSTS as shown below.

dockerhost

The various certificate entries you see above can be copied from the folder which gets created at C:\Users\<your-user-id>\.docker\machine\machines\<machine-name-as-in-docker-machine-command> after running the docker-machine command.

certs

You just open the ca, cert and key PEM files in a text editor (such as Notepad++), copy the contents, and paste them into the boxes in VSTS.

Docker Hub Account: Docker Hub is an image store. Think of it as an app store for the enterprise. With a personal account, you can store up to 5 images. Obviously, for an enterprise you should have an enterprise account, which costs some money :-). The configuration inside VSTS is shown below.

dockerhub

Docker Integration for VSTS: This is the Docker extension for VSTS. You can install it from the VSTS Marketplace.

dockerext

With these pre-requisites in place, you can set up the build definition as below.

In the 1st step, just run the dotnet restore command. This command brings down all the NuGet packages that the application uses.

build1

In the 2nd build step, we run the dotnet publish command. This command compiles the application and puts the binaries at the location specified with the -o switch.

build2

In the 3rd step, we run the docker build command. This command is enabled by the Docker integration extension we installed as part of the pre-requisites. This build step uses the Docker host in Azure we provisioned as part of the pre-requisites.

build3

Finally, in the 4th step, we upload the image that was built previously to Docker Hub.

build4
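Outside VSTS, the four build steps above boil down to roughly the following commands; the configuration, output folder and image name are placeholders, not values from the sample:

```shell
dotnet restore
dotnet publish -c Release -o ./publish
docker build -t <your-docker-id>/webapp:latest .
docker push <your-docker-id>/webapp:latest
```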

Now, trigger a build or manually queue one from VSTS. It should run all the steps to successful completion. At the end, you should see the container image appear in your Docker Hub account as shown below.

dockerhubrepo

You can get more details (e.g. the tag to be used for the pull command later on) by clicking it.

dockerhubtag

Now that we have an image in Docker Hub, we can use it for deployment to any machine that runs Linux/Unix.

So I have spun up a Linux machine in Azure. Along with port 22, which is open by default for Linux machines, I’ve opened port 80 as well for HTTP traffic.

I SSH into this machine and run the following docker command to pull the container image from Docker Hub.

dockerpull

I run a docker pull command, passing parameters such as the name of the container image and the tag that I want to pull. With this information, the pull command brings down that image from Docker Hub and stores it on my Linux machine.

Next, obviously, I create and run a container from this newly downloaded image.

dockerrunlinremote
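In plain commands, the pull-and-run sequence on the Linux VM amounts to something like the following; the image name and tag are placeholders:

```shell
docker pull <your-docker-id>/webapp:latest
# Run detached, mapping host port 80 to container port 80.
docker run -d -p 80:80 <your-docker-id>/webapp:latest
```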

I can now browse to the Linux VM endpoint (DNS name) and see the application running on the Linux server.

remoteapp

Now that we have a DevOps workflow set up, let’s discuss the benefits this approach brings.

devopsbenefits

The 1st set of benefits is agility. We know from following DevOps practices that application development and deployment are accelerated. Containers further accelerate this model by cutting down VM provisioning and start-up times. Because I am now able to bring down a container image from Docker Hub, I can potentially deploy to any compatible target anywhere. This model enables build, ship, run anywhere, which is Docker’s motto.

The 2nd set of benefits is consistency. Since application dependencies are packaged within the container image, the image provisions its own dependencies as it moves from environment to environment. There are no more cases of missing frameworks, as in the past. All major software vendors (including Microsoft) have uploaded images of their software to Docker Hub, and you can use this rich ecosystem to easily create the application you need. Also, a lot of containers can run on a single host machine; this increases container density and in turn the resource utilization of the host.

The 3rd set of benefits is utility. You must have observed that containers start very fast (a matter of 2-3 seconds!) compared to a typical virtual machine. This makes them good candidates to replace short-lived machines. A typical example is integration testing. An enterprise typically runs these tests during the night. Before you can run them, you need to bring up a bunch of machines, provision them with pre-requisites, wait for the tests to finish, and tear the machines down after execution. All of this takes time and costs money. Containers can replace these VMs and cut down both the time and the cost. Another major, and perhaps the most demanded, benefit is enabling micro-service-architecture-based application development. Instead of building software as a large monolith, you can cut it down into many smaller parts, or micro-services. You can then deploy these parts as containers and version, update, and service them independently. We’ll take a look at how to do this in part 3.

 

 


MSDN Blogs: Containers in Enterprise, Part 3 : Orchestration


In the previous post, we looked at container DevOps. Let’s talk about container orchestration in this post.

There, we also briefly touched upon micro-services. Besides enabling micro-service architecture, orchestrators help in a lot of other ways. Let’s start by asking ourselves: why do we need orchestrators in the first place?

whyorchestrator

Well! Running a container locally is easy, and DevOps helps to the extent that we can have a container image ready to be deployed anywhere. In spite of that, though, running hundreds of containers across a cluster of machines is difficult. Let’s spend some time discussing why.

1st, an application spans multiple containers. A typical 3-tier application will have a web tier, a service tier and a database tier, and all of them combined form the application, so the composition of these containers has to be taken care of. Also, depending upon the type of container, you need to open different ports: for a web tier you’ll have to open port 80, while for a database tier you may need to open port 1433. So there is a great deal of automation involved. Once you get these containers up and running, you also need to worry about scaling them up or down and about general load-balancing. Lastly, there is a general management aspect as well: where should you run your containers, how should they be launched, and so on.

So what are orchestrators? They are a series of tools and processes that automate the container life-cycle across a pool of resources. These resources could be virtual machines or physical servers. Orchestrators let you solve the issues we discussed above using a declarative syntax. This syntax is expressed in a docker-compose.yml file.

There are 2 aspects to orchestrators: 1st, the orchestrator itself, and 2nd, the infrastructure it needs to operate upon. This infrastructure consists of storage, networking and compute.

In Azure, these infrastructure services are provided by Azure Container Service. You can spin up a new ACS instance by clicking New –> Containers –> Azure Container Service in the Azure portal, as shown below.

acs

Once you go through the wizard, you’ll have an ACS cluster ready in about 10 minutes. Note that in the 2nd wizard window, you get a choice between the 3 main orchestrators –

  1. Mesos DC/OS
  2. Docker Swarm
  3. Google Kubernetes

I selected Docker Swarm for this post. My completed ACS cluster looks like below.

localacs

As mentioned above, it’s a bunch of storage, network and compute resources.

I’ve also set up the SSH connection to this cluster via Putty as mentioned here and here.

Let’s switch back to the orchestrator. As discussed previously, you can use the declarative syntax of the docker-compose.yml file to orchestrate containers. A docker-compose.yml file is similar to a dockerfile: it is a text file containing instructions.

Let’s take a look at docker-compose file line by line.

dockercompose

The 1st line names the service to be created on the cluster. Instead of deploying a container to each node individually, a service is created.

The 2nd line indicates the container image to be used for this service. This is the same image we uploaded to Docker Hub using the DevOps pipeline in the previous post.

The 3rd line declares that ports are to be opened on the host and the container.

The 4th line gives the actual host and container ports.

This file can contain additional services as well, e.g. a typical 3-tier application may have a docker-compose file as below.

dockercompose2
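For reference, such a 3-tier compose file (in the v1 syntax of the time) might look roughly like this; the service and image names are placeholders, with the web and database ports matching the tiers discussed earlier:

```yaml
web:
  image: <your-docker-id>/webtier:latest
  ports:
    - "80:80"
service:
  image: <your-docker-id>/servicetier:latest
  ports:
    - "8080:8080"
db:
  image: <your-docker-id>/dbtier:latest
  ports:
    - "1433:1433"
```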

Once you finalize your docker-compose file, you run docker-compose commands from a console connected to the ACS cluster.

I have found that setting DOCKER_HOST to listen on port 22375 works, as opposed to port 2375 mentioned here.

dockerhostset
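That environment variable setting is along these lines; the cluster management FQDN is a placeholder:

```shell
# On a Windows command prompt use "set" instead of "export".
export DOCKER_HOST=tcp://<your-swarm-master-fqdn>:22375
```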

Once you run the above command, all the docker commands you execute locally will actually run on the remote ACS cluster.

The first thing to do is navigate to the folder containing the docker-compose file. In my case, it’s C:\Users\<my-user-id>. Run the following command –

dockercompoeup

This command executes the instructions in the docker-compose file, creating 1 or more services as specified there.

Once the service is created, you can browse to the ACS endpoint and open the application. The ACS endpoint is the agent FQDN, which you can copy from the deployment history of the ACS cluster itself, as shown below.

dephist

The actual application running on ACS cluster is shown below.

acsapp

This is good! We’ve got an application running on the ACS cluster. Note, though, that only one instance is running. We want 1 more instance running behind a load-balancer. How do we do that? Well! It’s as simple as running the following command.

dockerscale
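With the v1 compose tooling, that scale command has the shape below; the service name and instance count are placeholders:

```shell
docker-compose scale web=2
```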

I instruct it to scale, passing the name of the service and how many instances I need. The application then gets deployed to another instance. See below: the highlighted machine name is new, different from the cf94754229a0 shown in the picture above.

acsapp2

So we have a load-balanced application, deployed to multiple nodes, created from a container image delivered by a DevOps pipeline!

Let’s talk more about the orchestrator choices we’ve got.

orchchoice

As discussed above, in Azure you can choose between Mesos DC/OS, Docker Swarm and Google Kubernetes as your orchestrator. Given that orchestrators are a relatively new technology, there is no prescriptive guidance as such for choosing one over the other.

It all comes down to what you are already using, your comfort level, and the product expertise within your enterprise.

In general, Docker Swarm has the deepest integration with the Docker ecosystem. Mesos DC/OS has a great data management story and is used by many big data project teams. Kubernetes builds on Google’s experience of running containers in production for many years.

With this, we come to the end of this blog series on containers in the enterprise.

 

 

 

 

MSDN Blogs: SQL Server 2016 Log Shipping Jobs Fail Silently


I recently found the solution to a very strange SQL Server 2016 log shipping issue: the LSBackup_<db name> job fails without any meaningful error message. Nothing shows up in the SQL or Windows logs. When you check the job history, you will see that the job failed with the message below:

Date  <date time>
        Log  Job History (LSBackup_<db name>)

        Step ID  1
        Server  <server name>
        Job Name  LSBackup_<db name>
        Step Name  Log shipping backup log job step.
        Duration  00:00:00
        Sql Severity 0
        Sql Message ID 0
        Operator Emailed 
        Operator Net sent 
        Operator Paged 
        Retries Attempted 0
        Message
        Executed as user: <domain\user>. The step failed.

It turns out the fix for this issue is either to install the .NET Framework 3.5 feature on the Windows server running SQL Server 2016, or to install SQL Server 2016 CU1.

Please refer to https://support.microsoft.com/en-us/kb/3173666 for detailed information on how to get the CU. The link is for a different but related issue (the LSCopy job failing).

This fix was tested on Windows Server 2012 R2 and Windows Server 2016.

MSDN Blogs: My Experience Completing the Microsoft Professional Program Certificate in Data Science


My Experience Completing the Microsoft Professional Program Certificate in Data Science

Earlier this year, Microsoft announced an interesting new educational track designed to help people grow skills in the area of data science. Originally titled “Microsoft Professional Degree”, it was later renamed to the current “Microsoft Professional Program in Data Science” to lower the emphasis on the certification and instead highlight the value of the skills learned over the course of the program.

As Microsoft was developing the material and solidifying the content, they asked a set of Microsoft employees to complete the 10 courses and provide detailed feedback in an expedited fashion as part of a pilot program to help improve the program. I was lucky to be one of the participants of the pilot program and completed the program in August 2016. I will follow up this post over the next few weeks with my feedback and details from each of the 10 courses I selected as part of the program.

The courses:

The program consists of 10 required courses, with 2 or more options for four of them (example: for the 5th course, you can take “Intro to R” or “Intro to Python”). Each course is hosted on www.edx.org, an outstanding online learning platform that supports numerous teaching styles, as you will discover over the course of the program. The courses are taught by a variety of industry professionals, MVPs, Microsoft employees, and college professors. The full list of courses can be found here:
https://www.edx.org/microsoft-professional-program-certficate-data-science

The cost:

Each course costs between $25 and $99 for a verified certificate, with the total cost of the 10 courses I completed coming to $467. There is an option to audit the courses for free; you will not obtain the certificates and will not complete the official program, but you can still complete 100% of the videos, labs, and quizzes.

The time commitment:

While this will vary greatly depending on your background, interest, and motivation, it is possible to get consumed by the courses and complete them very rapidly. I completed the first 9 courses in roughly 7 weeks, followed by the final project, which was open for 6 weeks. In the first week of the final project (course #10), I had a score that I felt was sufficient to safely complete the program, but I continued to work on the project for the entire 6 weeks because I found it so interesting.

 

unitoverview

Overall, the program was an excellent experience and I highly recommend it to anyone in the data field. I was able to cover content that I hadn’t studied in years while exploring new areas that I’d been wanting to invest in, but hadn’t made a priority until now. I now have a much broader understanding of the topics covered in the course and have already started to leverage them in my customer environments.

Let me know if you have any questions or feedback on the courses and program.

Thanks,
Sam Lester (MSFT)

MSDN Blogs: Video: Every Girl Can Code with Microsoft Small Basic


Leveraging Microsoft Small Basic to teach future generations of women. Here is a video that shows how Microsoft Small Basic helps them achieve more. So please leverage Microsoft Small Basic to teach future generations of women and young girls. They can learn via visual programming, create programs and games, and improve their knowledge.

Just to remind you, Microsoft Small Basic puts the “fun” back into computer programming. A few of the interesting points I want to mention:

– With a friendly development environment that is very easy to master, it eases both kids and adults into the world of programming.
– Small Basic combines a friendly environment with a very simple language and a rich and engaging set of libraries to make your programs and games pop.
– In a matter of few lines of code, you will be well on your way to creating your very own game.
– With a new and revolutionary IntelliSense®, Small Basic makes writing code a breeze.
– Now you can share your programs with your friends; let them import your published programs and run them on their computer.
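For a taste of how little code it takes, here is a classic first Small Basic program (a generic sketch, not taken from the video):

```basic
TextWindow.WriteLine("What is your name?")
name = TextWindow.Read()
TextWindow.WriteLine("Hello " + name + ", welcome to Small Basic!")
```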

 

For more information, refer the following:

Site: http://www.smallbasic.com/
Download: http://www.microsoft.com/en-US/download/details.aspx?id=46392
Blog: http://blogs.msdn.com/smallbasic/ 
Curriculum: http://social.technet.microsoft.com/wiki/contents/articles/16299.small-basic-curriculum.aspx
Documentation: http://www.smallbasic.com/doc.aspx
e-Books: http://social.technet.microsoft.com/wiki/contents/articles/16386.small-basic-e-books.aspx
Program Gallery: http://blogs.msdn.com/b/smallbasic/archive/2013/02/17/small-basic-program-gallery-listed-by-category.aspx
Extensions: http://blogs.msdn.com/b/smallbasic/archive/2012/10/12/small-basic-extensions-gallery.aspx

Hope this helps.

MSDN Blogs: Are the inmates still running the asylum?


The future belongs to those who can build desirable products and services. Consumers vote with their feet. If it is not working for them, they will go somewhere else. If you are not building something that works for them, someone else will. Relying alone on being big and having a long history is the same as surrendering.

I recently came across two examples full of good intentions but falling miserably short. Both are in the banking industry.

Business as usual

By now all banks have realized that mobile access to your personal bank account information is expected. One of the largest banks in Denmark is providing a phone app (1) enabling their customers to do basic banking services, like checking their balances directly on the phone. Great intentions! YouTube has an app – it is called “YouTube”. Facebook has an app – it is called “Facebook”. Linked-In has an app – it is called “Linked-In”. These are great names, you don’t really care about the platform; but about what you want to do. The bank in question decided to call their app: “Mobile Bank”. For the development team inside the bank building this app it is a great name. Their mission is different from everyone else’s in the bank – they deliver a mobile banking experience. For the users, it makes absolutely no sense. When using a phone, they are already mobile – why should the app’s name include “mobile”? The app is just another way of interacting with the bank – and it should carry the bank’s name. The bank failed to look at their offering from the user’s perspective. It is the digital equivalent of:

Another big bank in Denmark is trying to attract millennials to the bank’s investment platform (2). The millennials are a minority that 20 years from now is no longer a minority. They demand high-quality digital experiences tailored to them – being intelligent about the opportunities and accepting no nonsense. This bank’s value proposition is that they have 100+ years of experience in investments – in making people richer. That is a phenomenal skill – it shouldn’t be hard to sell. The offering to the millennials is that for 1% of their investment capital these experts can manage their investment – in a modern style with very few clicks and options. Old-style banking in a new wrapping. What is the product the millennials actually want? A return on their investment. What is the product the bank is offering? A simplified platform for investing. If the bank truly believed their experts could make people richer, it should sell that. Suppose the bank’s offering had been: “Let us manage your investments. We share your goal: make your money grow. Our experts will make the optimal investments on your behalf – when we succeed, we split the profit: 90% to you, 10% to us. No charge if we fail to make your money grow.” That would align the bank’s behavior with customer expectations.

Are you the user’s advocate?

The two examples are quite similar in nature. No one was the user’s advocate. Who in your company speaks the voice of the user? Are your solutions designed for the user? Or perhaps even better with the user?

In Alan Cooper’s epic book “The Inmates Are Running the Asylum” he shares the insight that you win users over one-by-one. This insight implies that collaborating closely with a small number of potential users to define and refine the solution is a great approach. Once the initial users are not just satisfied but cannot wait for the product or service to be available – then (and only then) are you on the right track.

Peak Design is a company that has internalized what it takes to win users over. Their dedication to their consumers is an inspiration to me and a yard-stick of what it takes to succeed on purpose. Here is a short video telling the story of one of their products.

1: https://www.microsoft.com/en-us/store/p/mobile-bank/9wzdncrcwp6n
2: http://june.dk

THIS BLOG IS PROVIDED AS-IS; AND CONFERS NO RIGHTS.
