Channel: Randy Riness @ SPSCC aggregator

MS Access Blog: Connect your team using Office 2016 and Windows 10


Lead highly interactive remote meetings using voice, video and screen sharing.


An online immersion session is not your typical online event. Each 90-minute interactive session starts with an online roundtable discussing your business challenges and then launches you into a live environment in the cloud. A skilled facilitator will guide you through simulated business scenarios that are customized to your interests.

We will send you a link to connect your own device to a remote desktop loaded with our latest and greatest technology, so you can experience first-hand how Microsoft tools can solve your biggest challenges in a collaborative, fun environment.

Online immersion sessions help you discover how to:

  • Keep information secure while being productive—Make it easier to work securely and maintain compliance without inhibiting your workflow.
  • Capture, review and share notes from anywhere—Boost your team’s productivity by sharing documents and collaborating in real time.
  • Use social tools to find experts and answers—Break down barriers between departments to share knowledge quickly.
  • Quickly visualize and analyze complex data—Zero in on the data and insights you need without having to involve a BI expert.
  • Co-author and share content quickly—Access and edit documents even while others are editing and reviewing them—all at the same time.

Expect to leave the session with enough time-saving skills to more than offset your time investment within a few short days.

Each session is only open to 20 participants. Reserve your seat now and learn how you can be more productive anywhere, anytime with Office 365.

Sessions are held at 10 a.m. PT and 12 p.m. PT every Wednesday. Register now!



MSDN Blogs: Programming Windows Information Protection


With Windows 10 (version 1607 or later), you can use Windows Information Protection (WIP, formerly "Enterprise Data Protection"), and developers can also use these features in their app code.

By using the Windows Information Protection (WIP) API, your app can distinguish whether the resources it accesses (files, network endpoints, etc.) are "managed" or "unmanaged" and apply WIP-aware advanced capabilities. (Windows Information Protection still works even if the app does not use the WIP API, but the app cannot handle the granular controls without it.)
Edge, Notepad, Mobile Office, and others are WIP-aware applications. (See the official document "List of enlightened Microsoft apps for use with Windows Information Protection (WIP)" for the complete list.) These apps will help you understand the WIP API.

Applying Policy

Before you start building apps, you must apply a Windows Information Protection policy to your development host.
To do this, you can use an MDM service (Microsoft Intune, SCCM, or a third-party product) or the WIP Setup Developer Assistant.

If you use Microsoft Intune, first open the Intune portal and click [Add Policy] on the [Policy] tab.

Note: Before configuring the policy, you must enroll your device (PC) in Microsoft Intune. (Here we assume that this has already been done.)
If you are using EMS or Azure AD Premium, you can configure Azure AD so that all users' Windows 10 devices are managed by Microsoft Intune.

Note: Currently you must use English as the language setting when you configure a WIP policy in the Microsoft Intune portal.
If you are using a non-English language (including Japanese), change the default language setting to English while you use the Intune portal.

In the pop-up window, select [Windows] – [Windows Information Protection] and click the [Create Policy] button.
After clicking the button, the policy settings window is displayed.

In the policy settings window, first set the policy name and description.

Next, add the app you are developing by clicking the [Add] button.

In this post, we create a UWP app using Visual Studio. Retrieve the app's publisher name and product name from the app manifest (Package.appxmanifest) in your UWP project, and paste these values into the following dialog.

Note: To get the publisher name and product name of an app in the Windows Store, first find the app ID (e.g. 9wzdncrfhvjl) in the store page's URL and go to https://bspmts.mp.microsoft.com/v1/public/catalog/Retail/Products/{app id}/applockerdata.
To get the publisher name of a desktop app, use the Get-AppLockerFileInformation PowerShell command, as shown below.
Please see "Create a Windows Information Protection (WIP) policy using Microsoft Intune" for details.
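
For example, a minimal PowerShell sketch (the executable path is just a placeholder for your own desktop app):

# Placeholder path: point this at your own desktop app's executable.
Get-AppLockerFileInformation -Path "C:\Program Files\Contoso\ContosoApp.exe" |
    Select-Object -ExpandProperty Publisher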

Note: You can also use an asterisk (*) for the publisher name and product name. (This is not recommended for production, but it may be acceptable in your development environment.)

Next, specify the protection mode (Block actions / Display warning / Audit log / Off). In this post, we select [Block] mode, which blocks all actions.

Next, specify the identity name used for tagging corporate data. This is usually your primary internet domain. (If your tenant also has a custom domain, you can specify multiple values separated by the pipe character.)

Next, add the conditions for the corporate network boundary. Using these conditions, WIP determines whether the network being connected to is "managed" or "unmanaged".

In this setting, "Enterprise Network Domain Names" and "Enterprise IPv4 Ranges" are mandatory.
By default, the enterprise IP range is auto-detected, so you can set an arbitrary value for "Enterprise IPv4 Ranges".

Lastly, specify the DRA (Data Recovery Agent) certificate, which is used to recover encrypted data.
When enterprise data is stored on your device, it is protected using encryption. If you ever need to recover that data, this certificate is used.

If you want to create this certificate for development purposes, you can generate one with the following command.

cipher /r:{your arbitrary cert name}

After you create the Windows Information Protection (WIP) policy, click the [Manage Deployment] link.

Add your device group or user group to the right-side pane, and click [OK] to deploy this policy.

After you have deployed your policy, sign out of Windows and sign in again. (The policy then takes effect.)

The WIP Setup Developer Assistant makes it very easy to set up the policy, because you don't need an extra service or complicated prerequisites (device enrollment, etc.).
However, this tool is intended for development purposes only, so we don't recommend using it in your work (or production) environment. (Apply it to a development host, such as a VM.)

First, install the WIP Setup Developer Assistant from the Windows Store and run the tool as an administrator.

The setup configuration window is displayed, and the settings are the same as those described above for Microsoft Intune.
First, install your app using Visual Studio or sideloading, then select the installed app by clicking the [Select Packages] button.
When the settings are complete, just click the [Apply Changes] button and WIP is set up on this host. (It's very simple!)

Programming

Now you can write your code and debug your app.
In this post we use C#, but you can also create a WIP-aware app using C++ (see here).

The official document "Build an enlightened app that consumes both enterprise data and personal data" is a good resource for developers, and it covers several scenarios for WIP programming.
Let's look at a few examples!

First, create a UWP app project using Visual Studio.

To use the WIP API, add references to "Windows Desktop Extensions for the UWP" and "Windows Mobile Extensions for the UWP".

To add the WIP capabilities to this UWP app, open Package.appxmanifest in the code editor (right-click and select [View Code]) and add the rescap namespace and the enterpriseDataPolicy capability, as shown below.

<?xml version="1.0" encoding="utf-8"?>
<Package
  xmlns="http://schemas.microsoft.com/appx/manifest/foundation/windows10"
  xmlns:mp="http://schemas.microsoft.com/appx/2014/phone/manifest"
  xmlns:uap="http://schemas.microsoft.com/appx/manifest/uap/windows10"
  xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities"
  IgnorableNamespaces="uap mp rescap">
  . . .
  <Capabilities>
    <Capability Name="internetClient" />
    <Capability Name="privateNetworkClientServer" />
    <rescap:Capability Name="enterpriseDataPolicy"/>
  </Capabilities>
</Package>

The following code simply checks whether the app is managed.
If the previous policy setup succeeded, it shows "Managed" in a message dialog.

. . .
using Windows.Foundation.Metadata;
using Windows.UI.Popups;
using Windows.Security.EnterpriseData;
. . .

private async void button_Click(object sender, RoutedEventArgs e)
{
  if (!ApiInformation.IsApiContractPresent("Windows.Security.EnterpriseData.EnterpriseDataContract", 3))
  {
    await (new MessageDialog("API is not supported")).ShowAsync();
    return;
  }
  if (ProtectionPolicyManager.IsIdentityManaged("emdemo01.onmicrosoft.com"))
  {
    await (new MessageDialog("Managed")).ShowAsync();
    return;
  }
  else
  {
    await (new MessageDialog("Sorry, unmanaged")).ShowAsync();
    return;
  }
}

First, your app sets up the enterprise identity. (You must change "emdemo01.onmicrosoft.com" to your own corporate identity value.)

private async void button1_Click(object sender, RoutedEventArgs e)
{
  // Setup Identity
  var protectionPolicyManager = ProtectionPolicyManager.GetForCurrentView();
  protectionPolicyManager.Identity = "emdemo01.onmicrosoft.com";
}

If this succeeds, a briefcase icon is shown in your app.

The following code copies text to the clipboard. The clipboard text is protected because the previous code set up the enterprise identity.

private async void button2_Click(object sender, RoutedEventArgs e)
{
  // Clipboard Copy
  var dataPackage = new Windows.ApplicationModel.DataTransfer.DataPackage();
  dataPackage.SetText(textBox.Text);
  Windows.ApplicationModel.DataTransfer.Clipboard.SetContent(dataPackage);
  await (new MessageDialog("Copied the data to clipboard")).ShowAsync();
}

If you paste the copied clipboard data into a personal app (WordPad, etc.), a warning message appears and the operation is blocked.

If you build another enterprise app (with the same identity) and request clipboard access as follows (see the dataPackageView.RequestAccessAsync() method), that app can paste the protected clipboard data.

private async void button_Click(object sender, RoutedEventArgs e)
{
  DataPackageView dataPackageView = Clipboard.GetContent();
  if (dataPackageView.Contains(StandardDataFormats.Text))
  {
    ProtectionPolicyEvaluationResult result = await dataPackageView.RequestAccessAsync();
    if (result == ProtectionPolicyEvaluationResult.Allowed)
    {
      string contentsOfClipboard = await dataPackageView.GetTextAsync();
      textBox.Text = contentsOfClipboard;
    }
  }
}

The following example stores enterprise data in a local file.

. . .
using Windows.Storage;
using Windows.Security.Cryptography;
. . .

private async void button_Click(object sender, RoutedEventArgs e)
{
  // Save and Protect your file
  StorageFolder folder =
    ApplicationData.Current.LocalFolder;
  StorageFile file =
    await folder.CreateFileAsync("sample.txt",
      CreationCollisionOption.ReplaceExisting);
  FileProtectionInfo fileProtectionInfo =
    await FileProtectionManager.ProtectAsync(
      file,"emdemo01.onmicrosoft.com");
  var buffer = CryptographicBuffer.ConvertStringToBinary("test data", BinaryStringEncoding.Utf8);
  await FileIO.WriteBufferAsync(file, buffer);
  await (new MessageDialog($"Saved protected data in {file.Path}")).ShowAsync();
}

When you try to open this file with an unmanaged app, the same error as before is displayed.

Of course, your app can read the file and check the protection status as follows.

private async void button_Click(object sender, RoutedEventArgs e)
{
  // Read the file
  StorageFolder folder =
    ApplicationData.Current.LocalFolder;
  StorageFile file =
    await folder.GetFileAsync("sample.txt");
  string data = await FileIO.ReadTextAsync(file);
  await (new MessageDialog($"Read protected data: {data}")).ShowAsync();

  // Check the file protection status
  FileProtectionInfo fileProtectionInfo =
    await FileProtectionManager.GetProtectionInfoAsync(file);
  if (fileProtectionInfo.Identity == "emdemo01.onmicrosoft.com" &&
        fileProtectionInfo.Status == FileProtectionStatus.Protected)
  {
    await (new MessageDialog("Protected")).ShowAsync();
  }
  else
  {
    await (new MessageDialog("Not Protected")).ShowAsync();
  }
}

Your app can also determine whether a network resource is managed or not, and access the enterprise network resources that are managed by the policy.
For more details, see the official document "Build an enlightened app that consumes both enterprise data and personal data".

Deployment and Debugging

To debug your app (set breakpoints, etc.), you can use the usual [Start Debugging] command in Visual Studio. (Be sure to set the policy as described above.)
That's all!

If you're using a remote host (a VM, etc.) configured with the WIP Setup Developer Assistant, you can use sideloading to install the app.

  1. To enable sideloading, log in to the remote host, open [Settings] – [Update & Security], select the [For developers] tab, and make sure sideloading is enabled.
  2. Create the app package using Visual Studio.
  3. The package is generated in the "AppPackages" folder; copy all of the package files to the remote host.
  4. Run Add-AppDevPackage.ps1 (a PowerShell script included with the package files) to install the app.
  5. To debug the remotely installed app, install the Visual Studio Remote Tools, launch the installed app, select [Debug] – [Attach to Process] in Visual Studio, and select the app process on the remote host.

Note: Before you install the app, its certificate must be imported on the machine. The Add-AppDevPackage.ps1 script above installs this certificate during the deployment process.

MSDN Blogs: Microsoft IT’s Enterprise Integration Platform (EPS) team goes live on BizTalk Server 2016


Microsoft IT's Enterprise Integration Platform (EPS) team became the first to go live on BizTalk Server 2016. The platform managed by EPS is one of the most critical in the Microsoft ecosystem: it processes over 30 million B2B transactions per month, worth in excess of USD 120 billion annually, with more than 2,000 partners using multiple message formats including X12, EDIFACT, XML, and SWIFT, spanning business domains such as Supply Chain, Finance, Human Resources, and Volume Licensing. The initial functional and performance test results of BizTalk 2016 were so encouraging that the team implemented a "critical" trade screening business process that screens organizations and consumers before Microsoft sells its products and services.

 

This project is part of an overarching initiative to optimize infrastructure costs by adopting Azure's IaaS and PaaS offerings and completely eliminating the on-premises footprint. This is where BizTalk 2016 delivers key business value: it is compatible with Azure IaaS while providing a supported high-availability solution. Yes, that's right: unlike its predecessor, BizTalk Server 2016 leverages SQL Server 2016's out-of-the-box "Always On Availability Group" functionality to provide a high-availability solution on Azure IaaS. This release also provides BizTalk connectors to support hybrid integration scenarios.

 

While the PaaS story for B2B integration matures with the rapid emergence of Logic Apps, Microsoft remained fully focused on this new release of BizTalk Server, as it allows existing customers to move to a Microsoft-supported BizTalk-on-IaaS solution without having to change existing BizTalk applications. To ensure a defect-free release of BizTalk 2016, Microsoft IT and the BizTalk product group collaborated to identify the top enterprise integration scenarios that needed to be tested before general availability (GA), with Microsoft IT acting as the first customer to adopt BizTalk 2016 on IaaS. To achieve this objective, rigorous testing and validation was performed along the following lines:

  • BizTalk functional testing on IaaS
    • All artefacts including different adaptors and message types
    • End to end flows which include B2B and hybrid A2A scenarios using the new Logic Apps adaptor.
    • Individual Application data and sanity validation tests
  • High Availability tests
    • Change the backend SQL HA/DR architecture from SQL WFSC cluster to SQL Always On Availability Group and potential implications/best practices to adhere to.
    • Extreme HA tests including MSDTC edge case scenarios.
  • Performance & Load testing
    • Performance tests (sustenance, peak load behavior etc.)
    • Reliability tests (peak load auto-recovery)
    • Scalability tests (ability to handle up to 5X the normal traffic load patterns)

The migration from BizTalk Server 2013 R2 was done on a "lift and shift" (as-is) basis; existing BizTalk applications from 2013 R2 were used without modification.

The overall experience with BizTalk Server 2016 on Microsoft Azure IaaS VMs has been exciting, especially with the new set of features including IaaS high-availability support, the Logic Apps adapter, and an improved BizTalk Admin console, which have made administration easier than before. Additionally, thanks to the migration to IaaS, the team has been able to consolidate and size hardware exactly to actual usage, already showing up to 20% annual cost savings in the initial analysis.

MSDN Blogs: AppFabric: Cumulative Update 7 For Microsoft AppFabric 1.1 for Windows Server – Instruction


Please visit the KB article below to get CU 7:

https://support.microsoft.com/en-us/kb/3092423

 

To apply this fix, follow these steps:

  1. Upgrade the servers to the .NET Framework 4.5.2 or later.
  2. Install the cumulative update package.
  3. Make sure to add the following setting to the DistributedCacheService.exe.config file:
<appSettings><add key="backgroundGC" value="true"/></appSettings>
  4. Restart the AppFabric Caching service for the update to take effect.

Note: By default, the DistributedCacheService.exe.config file is located in the following directory: %ProgramFiles%\AppFabric 1.1 for Windows Server

 
Important for client applications:
It is important that your application or development environment use the same assemblies as the cache servers. During any upgrade of the distributed cache system, make sure that all cache clients using that system have the same versions of the assemblies. Check this by comparing the product version of the cache client's Microsoft.ApplicationServer.Caching.Client.dll file with the product version of the cache server's Microsoft.ApplicationServer.Caching.Server.dll file located in the installation folder (a quick PowerShell version-check sketch follows the list below). The client assemblies are:
  • Microsoft.ApplicationServer.Caching.Core.dll
  • Microsoft.ApplicationServer.Caching.Client.dll
  • Microsoft.WindowsFabric.Common.dll
  • Microsoft.WindowsFabric.Data.Common.dll
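
As a quick way to do that comparison, a minimal PowerShell sketch like the following can be used; the client path is a placeholder for wherever your application's copy of the assemblies lives, and the server path is the default install folder mentioned above.

# Placeholder path: point this at the client assembly shipped with your application.
$clientDll = 'C:\MyCacheClientApp\bin\Microsoft.ApplicationServer.Caching.Client.dll'
# Default AppFabric server install location noted above.
$serverDll = Join-Path $env:ProgramFiles 'AppFabric 1.1 for Windows Server\Microsoft.ApplicationServer.Caching.Server.dll'

$clientVersion = (Get-Item $clientDll).VersionInfo.ProductVersion
$serverVersion = (Get-Item $serverDll).VersionInfo.ProductVersion

if ($clientVersion -eq $serverVersion) {
    Write-Output "Client and server assembly versions match: $clientVersion"
} else {
    Write-Warning "Version mismatch: client $clientVersion, server $serverVersion"
}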

 

For more information, please visit:

https://msdn.microsoft.com/en-us/library/ff637695%28v=azure.10%29.aspx

MSDN Blogs: Experiencing Data Access Issue in Azure Portal for Request Data Type – 11/07 – Investigating

Initial Update: Monday, 07 November 2016 20:39 UTC

We are aware of issues within Application Insights and are actively investigating. Some customers may not be able to view or download web test results in the portal. The following data types are affected: Availability.
  • Work Around: None
  • Next Update: Before 11/08 01:00 UTC

We are working hard to resolve this issue and apologize for any inconvenience.


-Sapna

MSDN Blogs: Kinect for Windows news gets new address, keeps same great community


For the past several years, this blog has brought you news about the latest Kinect for Windows technology and its innovative applications in medicine, education, manufacturing, retailing, performance art, and more. We’ve been awed by the creative ways in which the developer community has harnessed Kinect for Windows to build solutions that allow users to interact naturally with computing technology.

Earlier this year, we reported that developers would soon be able to read Kinect’s RGB, IR, and depth data with the Windows 10 Anniversary Update and a Kinect driver update. (Kinect’s microphone data was already available to the Windows 10 UWP audio and speech recognition APIs). And, as explained in this Windows Developer blog post, we are pleased to announce the availability of these functionalities—along with the much-requested ability to access Kinect skeletal tracking data. You can find code samples at GitHub as part of the Windows universal samples, and you can download the latest Kinect driver from your computer’s Device Manager.

With the Windows 10 APIs and the new driver, developers can incorporate Kinect functionality into Universal Windows Platform (UWP) applications. Moreover, these same APIs will handle rich data from other sensors, provided you have the appropriate driver.

Now that these capabilities have been implemented in Windows 10, it only seems fitting to fold Kinect for Windows news into the Windows Developer blog, at https://blogs.windows.com/buildingapps/. Beginning today, that’s where you can keep abreast of the current developments in human-computer interactions—and how you can use Kinect for Windows functionality to create solutions that will run across the entire UWP ecosystem.

As part of this effort to consolidate your sources for developer information, we are also merging Kinect for Windows social media with the Windows Developer Facebook and Twitter accounts. The Kinect for Windows Facebook and Twitter accounts will remain open until Nov 15th, 2016, but now would be a good time to start following the Windows Dev accounts for the latest info about developing UWP apps.

And of course, now is also the ideal time to hop on over to our new home on the Windows Developer blog and add it to your favorites. And as always, we’re eager to hear from you, so please join the conversation at the Kinect for Windows v2 SDK forum.

Happy developing!

The Kinect for Windows Team


MSDN Blogs: Test execution failing when a data-driven test method refers to parameter values from a test case WorkItem


Recently we have been getting cases where users see the error below when a data-driven test method refers to parameter values from a test case WorkItem.

Test execution failure error:

The unit test adapter failed to connect to the data source or to read the data. For more information on troubleshooting this error, see “Troubleshooting Data-Driven Unit Tests” (http://go.microsoft.com/fwlink/?LinkId=62412) in the MSDN Library.
Error details: Could not load file or assembly ‘Microsoft.TeamFoundation.TestManagement.Controller, PublicKeyToken=b03f5f7f11d50a3a’ or one of its dependencies. The system cannot find the file specified.

To resolve the error above, follow these steps:

Add a runsettings file to the test project.

Add the text below to the runsettings file:

<RunSettings>
   <MSTest>
        <ForcedLegacyMode>true</ForcedLegacyMode>
   </MSTest>
</RunSettings>

Select the runsettings file and then execute the test case.

If you are running the test on a test agent machine as part of a TFS build, you need Visual Studio 2015 on the test agent machine.
Also, configure the runsettings file with the ForcedLegacyMode tag in the test execution step.

Content: Sinjith Haridasan Reeja
Review: Deepak Mittal

MSDN Blogs: 11/07 – Errata added for [MS-RDPEPC]: Remote Desktop Protocol: Print Virtual Channel Extension


MSDN Blogs: 11/07 – Errata added for [MS-WFDPE]: Wi-Fi Display Protocol Extension

MSDN Blogs: Werder fans stay on the ball – anytime, anywhere, on any device


Football is pure emotion. But fans of a football club don't follow their team only on match days and live in the stadium; they want to experience their stars' every move and the decisions of the club management and coaching staff up close, whether they are on vacation, can only follow the club from afar, or live in the club's home town. And when it comes to choosing the medium, fans naturally don't want to be restricted either.

Digitalization therefore plays a major role, and not only at first-division club SV Werder Bremen. In 1996, Werder was the first Bundesliga club with its own website. In 2007, it was the first to establish its own TV editorial team. And in recent years, "mobile, mobile, mobile" has become SV Werder's battle cry. To reach its fans' mobile devices, the Bundesliga club had to make information and content available quickly, in high quality, and everywhere. The biggest lever for success here was video.

With Microsoft's Azure cloud, Werder Bremen and Microsoft partner team neusta mastered this challenge. The club now offers all Werder events, from press conferences to football matches, live and in HD quality. This responsive approach allows Werder fans to stay in touch with their club from anywhere, well beyond the 90 minutes of a match. Thanks to the cloud, they are even closer to the Werder world and more emotionally connected to it. Fans have embraced exactly this user experience enthusiastically, and it is reflected positively in all usage figures.

More about Werder TV

More about team neusta

MSDN Blogs: Join us for the webinar "Developing and testing applications with Azure" on November 23 at 11:00



Developing and testing applications with Azure

We invite you to the webinar "Developing and testing applications with Azure" on November 23 at 11:00.

Using cloud platform resources and services for development and testing is becoming increasingly popular among software development companies. Microsoft tools give you not only on-demand infrastructure, but also excellent capabilities for simplifying, improving, and even optimizing your development and testing processes.

In the webinar we will explain and demonstrate how to use Azure services to provision resources for these purposes.

The webinar will be presented by Александр Белоцерковский (Alexander Belotserkovskiy), Microsoft technical evangelist in Russia.
Every webinar participant will receive a $100 Azure Pass promo code.

Registration

SPSCC Posts & Announcements: Veterans Day events at SPSCC


Campus Closure Nov. 11

Both SPSCC campuses will be closed on Friday, Nov. 11, 2016 in observance of Veterans Day.

Veterans Center hosts celebration for local and national veterans Nov. 10

South Puget Sound Community College’s Veterans Center is hosting a special event to honor and celebrate community veterans.  This year’s Veterans Day Celebration includes speakers, a banner signing, breakfast foods, and more.  The event is scheduled for Thursday, Nov. 10 at 8:00 a.m. and will last about two hours in building 21 at SPSCC’s Olympia Campus.

MSDN Blogs: Build a Smart Light with Azure IoT Hub


In this post, we will build a voice-controlled smart light with Azure IoT Hub. All the hardware we need is shown in the picture below: a Raspberry Pi, 3 LEDs, 3 220Ω resistors, a breadboard, some DuPont (jumper) wires, and an Amazon Echo Dot.

All the hardware we need

Don't worry if you are missing some or even all of these items; we can still make things work with a simulated device, and I will explain how to do that at the end of this post.

Before we begin, let's make clear how information flows from your voice to the light being turned on or off. First, the Amazon Echo Dot records your voice and sends it to the Amazon cloud, which transforms your voice into a command. The Amazon cloud then sends the command to the Azure IoT Hub server side, and the Azure IoT Hub server passes it to the Azure IoT Hub client side. In this post the client is a Raspberry Pi. Finally, the Raspberry Pi turns the LED on or off via its GPIO pins.

First of all, you need to set up Azure IoT Hub. Azure IoT Hub provides a free plan, so you don't need to pay for it now. Follow the steps in https://github.com/Azure/azure-iot-sdks/blob/master/doc/setup_iothub.md. Note that if you want to use the free plan, you need to change the pricing and scale tier from S1 – Standard to Free. The free plan has a limit of 8,000 messages per day, but that's enough for our experiment.

After setting up Azure IoT Hub, we need to create a device in our hub. You can follow the steps in https://github.com/Azure/azure-iot-sdks/blob/e1c8c6df558823f21bd94875d940cdb864b490b8/doc/manage_iot_hub.md to create your device. Remember the name of the device you created; we'll need it in later steps.

Now let's build the server side that talks to Azure IoT Hub. Choose a host that supports Node; I chose an Azure web app. An Azure web app is not required for Azure IoT Hub, so if you already have web hosting with Node support, just use it.

I use the Express generator to create a Node website quickly; you can also use whatever tools you like. You can install the generator by simply running:

npm install -g generator-express

If it doesn't work, try running it as admin:

sudo npm install -g generator-express

Go to the local development root path for your IoT Hub server and run:

yo express

Choose the options below:

? Would you like to create a new directory for your project? Yes
? Enter directory name {appname}
? Select a version to install: MVC
? Select a view engine to use: Jade
? Select a css preprocessor to use (Sass Requires Ruby): None
? Select a database to use: None
? Select a build tool to use: Grunt

After that, you will see a folder named {appname} in your current path. Enter it and add these entries to the dependencies field of package.json:

"azure-iothub": "^1.0.18",
"azure-event-hubs": "^0.0.4",
"uuid": "^2.0.3"

Then run npm install to apply the changes. Run npm start and open http://127.0.0.1:3000; you should see a web page that says Generator-Express MVC.

Generator Express default homepage

You have set up the local development environment, great job! Now let's write the core code of the server side.

Go to the controllers folder inside the app folder and you will see a file named home.js. This is the website's router. Let's add a path named /api/smarthome with this code:

router.get('/api/smarthome', function(req, res, next) {
    res.header('Content-Type', 'application/json');
    res.header('Access-Control-Allow-Origin', '*');
    res.render('json', {
        json: {message: 'foo'},
        layout: false
    });
});

Then go back to the app folder, open the views folder, create a file named json.jade, and write this line of code in it:

!=JSON.stringify(json)

Now restart your website and visit http://127.0.0.1:3000/api/smarthome. (You can press Ctrl+C to stop the server and run npm start again.)

Smart Home API Router

You should see JSON on the page. Congratulations! You have created an API route in your website!

Next, we need to make it a real API. Go to the models folder under the app folder and create a file named iot-hub.js. In this file, we use the Azure IoT Hub module.

'use strict';

var IoTHubClient = require('azure-iothub').Client;
var Message = require('azure-iot-common').Message;

var targetDevice = '[Target Device Name]';
var connectionString = '[Connection String]';

var iotHubClient = IoTHubClient.fromConnectionString(connectionString);

function sendC2DMessage(msg, targetDevice) {
    targetDevice = targetDevice || 'Chrome';
    iotHubClient.open(function (err) {
        if (err) {
            console.error('Could not connect: ' + err.message);
        } else {
            console.log('Client connected');

            // Create a message and send it to the IoT Hub
            var data = JSON.stringify({ message : msg });
            var message = new Message(data);
            console.log('Sending message: ' + message.getData());
            iotHubClient.send(targetDevice, message);
        }
    });
}

module.exports = sendC2DMessage;

Replace [Target Device Name] with the name of the device you created in your hub (we did that near the top of this post, remember?), and replace [Connection String] with the IoT Hub (service) connection string, which looks something like:

HostName=xxx.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=...

Now we have the IoT Hub model for our web app and we can use it. Create a new model named smarthome-turnonoff.js and write a function to handle the turn on/off command:

function TurnOnOff(applianceId, requestType) {
    var sendC2DMessage = require('./iot-hub.js')
    new sendC2DMessage({
        applianceId: applianceId,
        request: requestType
    }, '[Target Device Name]');
    var result = {
        applianceId: applianceId,
        request: requestType
    };
    return result;
}

module.exports = TurnOnOff;

Also, remember to change [Target Device Name] to your own.

In home.js under the controllers folder, add the following route (note that the code below registers it as /api/smarthome2):

router.get('/api/smarthome2', function(req, res, next) {
    res.header('Content-Type', 'application/json');
    res.header('Access-Control-Allow-Origin', '*');
    switch(req.query.request) {
        case 'TurnOnRequest' :
            res.render('json', {
                json: new TurnOnOff(req.query.applianceId, 'TurnOn'),
                layout: false
            });
            break;
        case 'TurnOffRequest' :
            res.render('json', {
                json: new TurnOnOff(req.query.applianceId, 'TurnOff'),
                layout: false
            });
            break;
    }
});

And add var TurnOnOff = require('../models/smarthome-turnonoff.js'); at the top of the file.

Restart your website and visit http://127.0.0.1:3000/api/smarthome2?request=TurnOnRequest&applianceId=34f8d140-0704-4d2a-b449-bf2c458afa0a. You should see JSON showing the id and request you sent. Then go to the Azure portal and open the IoT Hub dashboard; the Overview tab shows that a message has been sent.

IoT Hub Dashboard

Now let's set up the device side, the Raspberry Pi.

First, we need to set up the Azure IoT Hub development environment. Clone the Azure IoT SDK repo from https://github.com/Azure/azure-iot-sdks. Because the repo contains submodules, remember to clone with the --recursive option, like this:

git clone --recursive https://github.com/Azure/azure-iot-sdks.git

Then follow the steps in https://github.com/Azure/azure-iot-sdks/blob/e1c8c6df558823f21bd94875d940cdb864b490b8/doc/get_started/python-devbox-setup.md.

After the development environment is fully set up, create a file named iothub.py on your Raspberry Pi. Remember to copy iothub_client.so to the same path as the file you just created. In the file, write this code:

import iothub_client
from iothub_client import *
from time import sleep
import json

timeout = 241000
minimum_polling_time = 9
receive_context = 0
connection_string = '[Connection String]'

def receive_message(message, counter):
  buffer = message.get_bytearray()
  size = len(buffer)
  message = json.loads(buffer[:size].decode('utf-8')).get('message')
  print("Received Data: <%s>" % message)
  return IoTHubMessageDispositionResult.ACCEPTED

def iothub_init():
  iotHubClient = IoTHubClient(connection_string, IoTHubTransportProvider.HTTP)
  iotHubClient.set_option("timeout", timeout)
  iotHubClient.set_option("MinimumPollingTime", minimum_polling_time)
  iotHubClient.set_message_callback(receive_message, receive_context)
  while True:
    sleep(10)

if __name__ == '__main__':
  iotHubClient = iothub_init()

Again, replace [Connection String] with your own. This time the connection string is for the device, so it should look like:

HostName=xxx.azure-devices.net;DeviceId=xxx;SharedAccessKey=...

Notice that it contains a DeviceId parameter. You can find the device connection string under the Devices tab in the Azure portal.

Device Connection String

Let's run it and visit our website again. You can see the request has been received by the Raspberry Pi!

Terminal

Next we need to make the Raspberry Pi control the LEDs. Here's how I connected the LEDs, resistors, and DuPont wires on the breadboard.

LED board

The blue wire is GND (ground), and the other three wires, white, gray, and purple, control the lights. We need to connect them to the GPIO pins on the Raspberry Pi. I have drawn a picture to show how to connect the wires.

RPi GPIO

Now we need a Python module called RPi.GPIO, which you can download from https://pypi.python.org/pypi/RPi.GPIO. After downloading RPi.GPIO, extract it and run setup.py.

Let's create a script to test LED control via GPIO. Create a file called led.py and write this code into it:

import RPi.GPIO as GPIO
from time import sleep

GPIO.setmode(GPIO.BCM)
GPIO.cleanup(17)
GPIO.cleanup(27)
GPIO.cleanup(22)
GPIO.setup(17, GPIO.OUT)
GPIO.setup(27, GPIO.OUT)
GPIO.setup(22, GPIO.OUT)
GPIO.output(17, False)
GPIO.output(27, False)
GPIO.output(22, False)
while True:
  GPIO.output(17, True)
  sleep(1)
  GPIO.output(17, False)
  GPIO.output(27, True)
  sleep(1)
  GPIO.output(27, False)
  GPIO.output(22, True)
  sleep(1)
  GPIO.output(22, False)

Now you should see the LEDs turn on and off one by one.

Great! You can control the hardware with code! Cool, huh?

Next let's combine the two scripts we wrote, iothub.py and led.py. Give the new one a cool name, for example smartlight.py, and write this code into it:

import iothub_client
from iothub_client import *
from time import sleep
import json
import RPi.GPIO as GPIO

green_led_id = 'db87ffe4-5d5d-4af7-bb70-da8a43beac90'
red_led_id = '1266ab90-b23d-4e0f-83d0-ec162284952f'
yellow_led_id = '7b38a9f2-c9f4-42cf-bb63-59147eb685b4'

led_gpio = {green_led_id: 22, red_led_id: 27, yellow_led_id: 17}

GPIO.setmode(GPIO.BCM)
GPIO.setup(led_gpio[green_led_id], GPIO.OUT)
GPIO.setup(led_gpio[red_led_id], GPIO.OUT)
GPIO.setup(led_gpio[yellow_led_id], GPIO.OUT)
GPIO.output(led_gpio[green_led_id], False)
GPIO.output(led_gpio[red_led_id], False)
GPIO.output(led_gpio[yellow_led_id], False)

timeout = 241000
minimum_polling_time = 9
receive_context = 0
connection_string = '[Connection String]'

def receive_message(message, counter):
  buffer = message.get_bytearray()
  size = len(buffer)
  message = json.loads(buffer[:size].decode('utf-8')).get('message')
  print("ID: %s\nRequest: %s" % (message['applianceId'], message['request']))
  if message['request'] == 'TurnOn':
    GPIO.output(led_gpio[message['applianceId']], True)
  elif message['request'] == 'TurnOff':
    GPIO.output(led_gpio[message['applianceId']], False)
  return IoTHubMessageDispositionResult.ACCEPTED

def iothub_init():
  iotHubClient = IoTHubClient(connection_string, IoTHubTransportProvider.AMQP)
#  iotHubClient.set_option("timeout", timeout)
#  iotHubClient.set_option("MinimumPollingTime", minimum_polling_time)
  iotHubClient.set_message_callback(receive_message, receive_context)
  while True:
    sleep(10)

if __name__ == '__main__':
  iotHubClient = iothub_init()

Replace '[Connection String]' with your own. (Am I repeating myself too much? If so, sorry about that.)

OK, now the magic happens! Run the script on the Raspberry Pi, restart your website, and visit http://127.0.0.1:3000/api/smarthome2?request=TurnOnRequest&applianceId=1266ab90-b23d-4e0f-83d0-ec162284952f.

See that? The red LED is on! Amazing! Now you can control the LED with an HTTP request!

Of course, this is not yet a complete smart light; we should make the control interface friendlier. Next we will add the Amazon Echo Dot!

You can learn how to create an Alexa Smart Home Skill at https://developer.amazon.com/public/community/post/Tx4WG410EHXIYQ/Five-Steps-Before-Developing-a-Smart-Home-Skill.

To add the LEDs to your Alexa console, we need the Alexa Smart Home Skill discovery request, and to control the LEDs we need the control request.

Here’s the smart home skill Lambda function I wrote:

var https = require('https');
var REMOTE_CLOUD_BASE_PATH = '/api/smarthome';
var REMOTE_CLOUD_HOSTNAME = '[Cloud Hostname]';

exports.handler = function(event, context) {

    log('Input', event);

    try{
        switch (event.header.namespace) {
            case 'Alexa.ConnectedHome.Discovery':
                handleDiscovery(event, context);
                break;
            case 'Alexa.ConnectedHome.Control':
                handleControl(event, context);
                break;
            default:
                log('Err', 'No supported namespace: ' + event.header.namespace);
                context.fail('Something went wrong');
                break;
        }
    }
    catch(e) {
        log('error', e);
    }
};

function handleDiscovery(event, context) {
    log('Discovery', event);
    var basePath = '';
    basePath = REMOTE_CLOUD_BASE_PATH + '?request=Discovery';
    var options = {
        hostname: REMOTE_CLOUD_HOSTNAME,
        port: 443,
        path: basePath,
        headers: {
            accept: 'application/json'
        }
    };

    log('Discovery', options);
    var serverError = function (e) {
        log('Error', e.message);
        context.fail(generateControlError(event.header.name,'DEPENDENT_SERVICE_UNAVAILABLE','Unable to connect to server'));
    };

    var callback = function(response) {
        log('Discovery Get', response);
        var str = '';

        response.on('data', function(chunk) {
            str += chunk.toString('utf-8');
            log('Discovery Data', str);
        });

        response.on('end', function() {
            log('Result', str);
            var result = JSON.parse(str);

            context.succeed(result);
            log('Result', result);
        });

        response.on('error', serverError);
    };

    https.get(options, callback).on('error', serverError).end();

    log('Discovery Got', 'Got');
}

function handleControl(event, context) {

    if (event.header.namespace === 'Alexa.ConnectedHome.Control') {

        /**
         * Retrieve the appliance id and accessToken from the incoming message.
         */
        var applianceId = event.payload.appliance.applianceId;
        var accessToken = event.payload.accessToken.trim();
        log('applianceId', applianceId);

        var basePath = '';
        basePath = REMOTE_CLOUD_BASE_PATH + '?applianceId=' + applianceId +'&request=' + event.header.name;

        var options = {
            hostname: REMOTE_CLOUD_HOSTNAME,
            port: 443,
            path: basePath,
            headers: {
                accept: '*/*'
            }
        };

        var serverError = function (e) {
            log('Error', e.message);
            context.fail(generateControlError(event.header.name,'DEPENDENT_SERVICE_UNAVAILABLE','Unable to connect to server'));
        };

        var callback = function(response) {
            var str = '';

            response.on('data', function(chunk) {
                str += chunk.toString('utf-8');
            });

            response.on('end', function() {
                log('done with result');
                var headers = {
                    namespace: 'Alexa.ConnectedHome.Control',
                    name: event.header.name.replace('Request', 'Confirmation'),
                    payloadVersion: '1'
                };
                var payloads = {
                    success: true
                };
                var result = {
                    header: headers,
                    payload: payloads
                };
                log('Done with result', result);
                context.succeed(result);
            });

            response.on('error', serverError);
        };

        https.get(options, callback)
            .on('error', serverError).end();
    }
}

function log(title, msg) {
    console.log('*************** ' + title + ' *************');
    console.log(msg);
    console.log('*************** ' + title + ' End*************');
}

function generateControlError(name, code, description) {
    var headers = {
        namespace: 'Alexa.ConnectedHome.Control',
        name: name,
        payloadVersion: '1'
    };

    var payload = {
        exception: {
            code: code,
            description: description
        }
    };

    var result = {
        header: headers,
        payload: payload
    };

    return result;
}

Replace [Cloud Hostname] with your own. That hostname is the one for the Node website you developed locally. Do not use 127.0.0.1; it doesn't work for Lambda, so you need to publish the site to the Internet. I use an SSL connection between Lambda and the Azure web app; if you use Azure, you can just use the .azurewebsites.net domain, which supports SSL. If you use your own domain, you need an SSL certificate. You can get a free SSL certificate for your domain from https://ssl.md.

The Echo Dot cannot work with our smart light yet, because it cannot understand the callback message our website returns, and we have done nothing about the discovery command.

OK, let's make some changes to our website. Go to app/models and create 3 new files named smarthome-discovery.js, smarthome-adddevice.js, and smarthome-removedevice.js.

In smarthome-discovery.js, write this code:

function Discovery() {
    var headers = {
        namespace: 'Alexa.ConnectedHome.Discovery',
        name: 'DiscoverAppliancesResponse',
        payloadVersion: '1'
    };

    var payloads = {
        discoveredAppliances: appliances
    };

    var result = {
        header: headers,
        payload: payloads
    };

    return result;
}

module.exports = Discovery;

In smarthome-adddevice.js, write this code:

function AddDevice(manufacturerName, modelName, version, friendlyName, friendlyDescription, actions) {
    if (!friendlyName) {
        return {success: false};
    }
    friendlyName = 'Azure ' + friendlyName;
    var uuid = require('uuid');
    var applianceId = uuid.v4();
    manufacturerName = manufacturerName || friendlyName.replace(/\s+/g, '');
    modelName = modelName || manufacturerName;
    version = version || '1.0';
    friendlyDescription = friendlyDescription || 'No Description';
    actions = actions || ['turnOn', 'turnOff'];
    var appliance = {
        applianceId: applianceId,
        manufacturerName: manufacturerName,
        modelName: modelName,
        version: version,
        friendlyName: friendlyName,
        friendlyDescription: friendlyDescription,
        isReachable: true,
        actions: actions,
        status: 'TurnOff'
    }

    appliances.push(appliance);

    return {success: true};
}

module.exports = AddDevice;

In smarthome-removedevice.js, write this code:

function RemoveDevice(applianceId) {
    // Walk the global appliances list and remove the matching device.
    var removed = false;
    appliances.forEach(function(appliance, index) {
        if (!removed && appliance.applianceId === applianceId) {
            appliances.splice(index, 1);
            removed = true;
        }
    });
    return {success: removed};
}

module.exports = RemoveDevice;

Also, edit home.js in the app/controllers folder and require these 3 models at the top of the script:

var Discovery = require('../models/smarthome-discovery.js'),
    AddDevice = require('../models/smarthome-adddevice.js'),
    RemoveDevice = require('../models/smarthome-removedevice.js');

Modify smarthome-turnonoff.js so that the Echo Dot understands we have accepted the request it sent:

function TurnOnOff(applianceId, requestType) {
    var headers = {
        namespace: 'Alexa.ConnectedHome.Control',
        name: requestType + 'Confirmation',
        payloadVersion: '1'
    };

    var payloads = {
        success: true
    };
    
    var result = {
        header: headers,
        payload: payloads
    };

    var sendC2DMessage = require('./iot-hub.js')
    new sendC2DMessage({
        applianceId: applianceId,
        request: requestType
    }, 'Alexa');

    appliances.forEach(function(appliance) {
        if (appliance.applianceId === applianceId) {
            appliance.status = requestType;
        }
    });
    
    return result;
}

module.exports = TurnOnOff;

And modify the /api/smarthome route to:

router.get('/api/smarthome', function(req, res, next) {
    res.header('Content-Type', 'application/json');
    res.header('Access-Control-Allow-Origin', '*');
    switch(req.query.request) {
        case 'Discovery' :
            res.render('json', {
                json: new Discovery(),
                layout: false
            });
            break;
        case 'AddDevice' :
            res.render('json', {
                json: new AddDevice(
                        req.query.manufacturerName,
                        req.query.modelName,
                        req.query.version,
                        req.query.friendlyName,
                        req.query.friendlyDescription,
                        req.query.actions
                    ),
                layout: false
            });
            break;
        case 'RemoveDevice' :
            res.render('json', {
                json: new RemoveDevice(req.query.applianceId),
                layout: false
            });
            break;
        case 'TurnOnRequest' :
            res.render('json', {
                json: new TurnOnOff(req.query.applianceId, 'TurnOn'),
                layout: false
            });
            break;
        case 'TurnOffRequest' :
            res.render('json', {
                json: new TurnOnOff(req.query.applianceId, 'TurnOff'),
                layout: false
            });
            break;
    }
});

We use a global variable called appliances to store the devices we add (you could use a database instead). So we need to declare appliances in app.js in the root path of the website:

appliances = [];

Do not use var to declare it, as we need it to be global.

Everything is ready! Let's test it!

First, visit your website to add a device via https:///api/smarthome?request=AddDevice&friendlyName=Test+Device, and you'll see {"success":true}. Go to the Alexa dashboard, open the Smart Home tab, and click the Discover devices link, or just say "Discover" to your Echo Dot. After about 20 seconds, you will find the device you just added.

Test Device

Now add 3 new devices using the step above, and visit https:///api/smarthome?request=Discovery to find their appliance ids.

Appliance Id

After finding those ids, change the Python script on your Raspberry Pi, replacing db87ffe4-5d5d-4af7-bb70-da8a43beac90, 1266ab90-b23d-4e0f-83d0-ec162284952f, and 7b38a9f2-c9f4-42cf-bb63-59147eb685b4 with your own ids.

Now, have the Echo Dot forget the Test Device, run the Python script on the Raspberry Pi, discover devices again, and ask Alexa to turn on Azure Red Light.

Alexa LED

If you do not have a Raspberry Pi, you can download a Chrome extension I wrote called Azure Smart Light Simulator from https://github.com/Sneezry/Azure-Smart-Light-Simulator.

Light Simulator

Have fun building your own smart home devices with Azure IoT!

MSDN Blogs: Announcing Azure Analysis Services and Power BI on SSRS


Microsoft Japan Data Platform Tech Sales Team – Ito (伊藤)

At PASS Summit 2016, held at the end of October, there were two major announcements concerning Microsoft BI.

They are the "Azure Analysis Services preview", an analytical database offered as PaaS on Azure, and the "technical preview of Power BI reports in SQL Server Reporting Services". This post introduces these two announcements.

Incidentally, PASS is an independent non-profit organization that supports data professionals around the world who use the Microsoft data platform, and PASS Summit is its annual event, attended by more than 4,000 people for skills development, networking, and career growth. Major SQL Server announcements are sometimes made there rather than at Microsoft-hosted events.

Azure Analysis Services preview

On October 25 (October 26 in Japan), the long-awaited public preview of Azure Analysis Services was announced! It provides the equivalent of SQL Server Analysis Services (SSAS) tabular mode on Azure as PaaS.

Analysis Services is a BI foundation that hides the underlying data sources so that users can work with data in business terms, lets administrators set appropriate access controls, and enables ad hoc analysis with good performance. Now that this capability (called the "BI Semantic Model" in the documentation) is offered as PaaS, the bar for adopting enterprise BI is lowered, and it becomes easier to grow from self-service BI with Power BI.

* For more about SQL Server Analysis Services (SSAS), see here.

Key points of Azure Analysis Services

  • It is PaaS, so environment setup, scale-up / scale-down, and management are easy
    (scale-up / scale-down is planned to become available in the future)
  • Data sources can be on-premises or in the cloud
  • Supports major data visualization tools, including Excel and Power BI

It is essentially equivalent to SQL Server 2016 Analysis Services tabular mode (compatibility level 1200) and uses the same tools as SSAS: SQL Server Data Tools for model development and deployment, SQL Server Management Studio for model management, and SQL Server Profiler for instance monitoring and trace capture and replay. In other words, it is an extension of existing SSAS technology, so the investments you have already made carry over.

You can try Azure Analysis Services right away from the Azure portal. At the moment the only available regions (data centers) are South Central US and West Europe, but it is expected to roll out to more regions over time.


Technical preview of Power BI reports in SQL Server Reporting Services

This was also announced on the same day. It is a capability that lets you view reports created in Power BI Desktop in SQL Server Reporting Services, and it should fulfill the often-heard request, "Power BI is great, but I want to use it on-premises." You create a report in Power BI Desktop and save it to the Report Server; you can then access the report from the web portal (renamed from "Report Manager" in 2016) and either view it directly in the browser or open it in Power BI Desktop. In other words, Power BI Desktop takes on a role similar to Report Builder.

A "technical preview" is an even earlier stage than a preview: the specifications are not final and there are many limitations. For now, a dedicated virtual machine for the technical preview is provided on Azure. It contains the full set of required products and features; in other words, it can only be used with the combination installed in this virtual machine, namely Power BI Desktop and SQL Server Reporting Services with a direct connection to SQL Server Analysis Services (tabular or multidimensional) as the data source. Also, this virtual machine has an expiration date of April 2017.

The virtual machine is named "SQL Server Reporting Services Technical Preview" and can be found by searching for "Power BI" in the Azure portal. You can try this right away as well.


As you can see, a major strength of Microsoft BI is that servers, client tools, and the cloud are all developed so that you can freely choose among many different combinations. Stay tuned for further evolution and development!

MSDN Blogs: Experiencing Alerting failure for Availability Data Type – 11/08 – Resolved

Final Update: Tuesday, 08 November 2016 22:51 UTC

This is a retrospective notification. After revisiting the customer impact metrics from this morning, we identified that some customers experienced alerting failures. We've confirmed that all systems are back to normal with no customer impact as of 11/08, 21:25 UTC. Our logs show that customers were impacted from 11/08 16:30 UTC to 11/08 19:30 UTC and from 11/08 20:00 UTC to 11/08 21:25 UTC, and that during the 5 hours it took to resolve the issue, 30% of customers experienced alerting failures.
  • Incident Timeline: 3 hours – 11/08 16:30 UTC through 11/08 19:30 UTC
  • 1 hour and 25 minutes – 11/08 20:00 UTC through 11/08 21:25 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Sapna



MSDN Blogs: Running Azure Automation runbooks from IFTTT tasks


I recently had an idea (OK, let's call it one I 'collaboratively came up with' while talking to one of the grads in our office) to look at ways I could trigger runbooks in my Azure Automation account from my phone. The specific issue we were looking to solve was "how can I easily shut down all the VMs in my subscription from my phone if I forget to turn them off?". I already have a runbook scheduled to shut down the VMs every night at 11pm as a 'just in case', but surely there was a better way, if I remembered earlier, than to rely on that schedule. After a little thought, the idea of creating a "do button" in IFTTT (short for "if this then that", a great little automation site that I recommend you check out if you haven't already) seemed like the perfect solution. So here's a run-through of the solution from start to end.

Creating the runbook

For my example, I wanted the runbook to find all the virtual machines in my subscription and shut them down to avoid excess billing. The script I use for this is fairly straightforward:

$connectionName = "AzureRunAsConnection"
try
{
    # Get the Run As connection stored in the Automation account.
    $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName

    # Sign in to Azure Resource Manager as that service principal.
    Add-AzureRmAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint | Out-Null
}
catch
{
    if (!$servicePrincipalConnection)
    {
        $ErrorMessage = "Connection $connectionName not found."
        throw $ErrorMessage
    }
    else
    {
        Write-Error -Message $_.Exception
        throw $_.Exception
    }
}

# Find every VM in the subscription, check its power state and stop any that are running.
Write-Output -InputObject "Looking for VMs that are running"
Get-AzureRmVm | ForEach-Object -Process {
    # Query the instance view of the VM to read its current power state.
    $status = Get-AzureRmVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Status
    if($status.Statuses | Where-Object -FilterScript { $_.Code -match "PowerState/running" })
    {
        Write-Output "Stopping VM [$($_.Name)] from Resource Group [$($_.ResourceGroupName)]"
        $_ | Stop-AzureRmVM -Force
    }
}

This script lists all the VMs in the subscription, checks their power state and, for any that are running, pipes them to Stop-AzureRmVM to tell Azure to de-provision the virtual machines.

Creating a webhook

The next piece of the puzzle is to create a webhook on the runbook. This allows the runbook to be triggered by calling a specific URL. To create a webhook for our runbook, go into the runbook in the Azure portal, select “Webhooks” and then select “Add Webhook” at the top. You’ll be presented with a screen like this:

New webhook screen

Here you set whether or not the webhook is enabled and when it expires (from a security standpoint it’s a good idea not to make this a never-ending period of time; instead, roll the webhooks over and use new URLs at regular intervals to keep them secret and secure), and you can see the URL that will be created at the bottom. Also note the security message at the top of the window: the URL is never shown again anywhere in the portal and can’t be retrieved later (again, for security purposes), so it’s important to copy and paste the URL somewhere safe at this point, before you click the OK button to save the new webhook.
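Before wiring anything up in IFTTT, it can be reassuring to confirm the webhook works by calling it directly from PowerShell. The snippet below is only a rough sketch; the URL is a placeholder for the full webhook URL (including the token) that you copied when creating the webhook.

# Placeholder - substitute the full webhook URL (including the token) you copied above.
$webhookUrl = "https://s2events.azure-automation.net/webhooks?token=YOUR_TOKEN_HERE"

# Azure Automation webhooks are triggered with an HTTP POST; the response contains the
# id of the runbook job that was queued, which you can look up later under "Jobs".
$response = Invoke-RestMethod -Uri $webhookUrl -Method Post
$response.JobIds

If the call succeeds, you should see a new job appear against the runbook in the portal a few moments later.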

Build a new applet in IFTTT

Now that things are set up on the Azure side, we need to look at how to create an applet in IFTTT. To start, sign in, go to the “My Applets” screen and select “New Applet”.

New applet

Next we click the “this” link and search for the “do button” trigger. This tells IFTTT that our applet will be triggered whenever we press the specific “do” button in the mobile app on an iOS or Android device we’ve signed in to. It’s worth noting that there are dozens of other ways to trigger things in IFTTT; the do button is simply the one that suits the specific type of function I need this time around, but you should also explore some of their other recipes for applets to get a feel for the other triggers available.

do button

Select “button press” as the trigger, then select the “that” link and search for a service called “Maker”. Maker is designed to allow applets to be triggered by web requests, or to send out web requests in response to a trigger (which is exactly what we want this time around).

Maker

Once we are at the Maker configuration screen, select “make a web request” and then we can craft the type of request we want to send. We start by pasting in the URL we created earlier for our webhook and setting the method to “POST”. In my example runbook for this scenario there is no need to craft a body, as I don’t have any parameters to pass; however, if you adapt this scenario and need to pass parameters to a runbook, go and have a read of the Azure Automation webhooks documentation to get a feel for how the body should be structured and how to use the variables in your runbooks.

Maker example
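For reference, here is a rough sketch of what passing a parameter would look like on both sides; the ResourceGroupName name and its value are purely illustrative. The key idea is that whatever JSON you put in the Maker “Body” field arrives in the runbook as the RequestBody property of the $WebhookData object.

# What you might paste into the Maker "Body" field (illustrative parameter name and value):
#   { "ResourceGroupName": "my-dev-rg" }

# At the top of the runbook, declare a $WebhookData parameter and read the body back out:
param (
    [object] $WebhookData
)

if ($WebhookData)
{
    # RequestBody arrives as a JSON string; convert it to an object to read the values.
    $body = ConvertFrom-Json -InputObject $WebhookData.RequestBody
    Write-Output "Only stopping VMs in resource group [$($body.ResourceGroupName)]"
}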

Click create action and after this you’ll be all set!

Triggering the applet

I have an Android device, so these steps will vary a little for someone on iOS, but essentially once you install the Do Button app on iOS, or the IFTTT app on Android, you can trigger the do button. In my case I added a widget to the home screen of my Android phone so I can now simply press the button, and this will trigger my action!

Android home screen

You can tell it has been triggered in a couple of places. Firstly, if you head to the IFTTT activity screen you should see where the action was triggered, and if you allow the IFTTT app GPS access it will show you where it was triggered from; in this case you can see I ran it from the Microsoft office in Canberra.

IFTTT activity

You can also go back to the runbook in the Azure portal and see that the run completed by going into the “Jobs” section of the runbook.

Runbook jobs
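If you have the Azure PowerShell modules installed, you can also check on recent jobs from the command line instead of the portal. This is just a sketch; the resource group, Automation account and runbook names below are placeholders for your own.

# Placeholders - substitute your own resource group, Automation account and runbook names.
Get-AzureRmAutomationJob `
    -ResourceGroupName "my-automation-rg" `
    -AutomationAccountName "my-automation-account" `
    -RunbookName "Stop-AllVms" |
    Sort-Object -Property StartTime -Descending |
    Select-Object -First 5 -Property Status, StartTime, EndTime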

And that’s it! Now when I remember that I haven’t shut any of my VMs down, it’s a simple matter of unlocking my phone and pressing a single button: super straightforward to do, and it leverages my existing runbook. You can take this basic example, look at all the other ways you might want to trigger runbooks in Azure Automation using webhooks, and extend it to better support your use of the cloud. Enjoy!

MSDN Blogs: Performance issues with Visual Studio Team Services – 11/9 – Investigating


Initial Update: Wednesday, 9 November 2016 01:09 UTC

We are actively investigating issues with Visual Studio Team Services in the East Australia scale unit. Some customers may experience slow response times and errors while accessing the service.

  • Next Update: Before 02:00 UTC

Sincerely,
Sri Harsha

MSDN Blogs: Dear Meetup Organisers


One of the best parts of my job as a technical evangelist at Microsoft is to get out into the local community and get involved in tech meetups, conferences and other local events.

I try to get to as many events as I can as a member, but I’m also happy to come and do talks, demos, presentations etc too.

I have a range of talks which you can see on the speaking page on my website. I cover quite a broad range of technologies but my current favourites are around Machine Learning, Cognitive Services, Bots and Web Development.

If you are looking for something different, I’m part of a broader team that covers pretty much any technology with a loose connection to Microsoft, so if I don’t have anything that suits your group, I’m sure one of my colleagues does.

I’m based in Worcester, so groups in Worcester, Birmingham and the West Midlands generally are perfect, but I’m also happy to travel to most places in the UK, especially for larger groups. If I’m too far away, I have colleagues scattered all over the UK; you can see them all on our Microsoft UK Technical Speakers site.

So, if you are running a meetup, ideally with a typical attendance of over 35 people, and think you’d like someone from Microsoft to come and speak or get involved in your event, please contact me or one of my colleagues and I’m sure we can help.

MSDN Blogs: Cloud computing guide for researchers – get started in one hour


Microsoft Azure can help with almost any research computing task, thanks to its huge range of capabilities. It can, however, be hard to know where to start. This short guide will show you the ropes in about an hour, after which you’ll be all set to explore further. Once you have an account, we recommend you first learn a little about the Azure portal and cloud storage, as this will help familiarise you with how Azure works. That should set you up to explore some of its more advanced features, taking advantage of the online tutorials and the large amount of documentation available.

  • Launch your first Azure virtual machine (VM) – a remote workstation in the cloud. There are a number of pre-built images (VMs with software pre-installed); a good one to start with is the Linux Data Science VM (there is also a Windows version here). Alternatively, you can create a basic VM (e.g. Ubuntu Linux) and install whatever software you like by following these instructions. We suggest you start with a small VM (A1) while you are getting to grips with the basics; you can easily rebuild or resize to a bigger VM (many cores and large memory) once you are familiar with the environment (see the resize sketch after this list).

  • Try the Jupyter Notebooks-as-a-Service on Azure. The notebooks are free, executable and shareable over the web, and you can organize your notebooks and datasets in one centralized location. Libraries are saved automatically and can be viewed from any device, anywhere, which makes notebooks a great way to do research in a reproducible way. You can start from scratch or upload your existing notebooks to Azure at https://notebooks.azure.com.


  • Explore Azure Machine Learning, a complete, end-to-end, easy-to-use, web-based system for experimenting with your own machine learning algorithms. It makes it easy to test and deploy machine learning models, including with your own Python and R code using standard libraries like scikit-learn. There are many walkthroughs and tutorials to get you started. See https://studio.azureml.net/
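As a small illustration of how scriptable all of this is, here is a rough sketch of resizing an existing VM once the small A1 is no longer enough. It assumes the AzureRM PowerShell module is installed; the resource group and VM names are placeholders for your own.

Login-AzureRmAccount

# Placeholders - substitute the resource group and VM name you used when creating the machine.
$vm = Get-AzureRmVM -ResourceGroupName "research-rg" -Name "research-vm"
$vm.HardwareProfile.VmSize = "Standard_D4_v2"   # a larger size with more cores and memory
Update-AzureRmVM -ResourceGroupName "research-rg" -VM $vm

Resizing typically restarts the VM, so save any in-progress work first.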


There are more general Azure getting-started videos at https://azure.microsoft.com/en-us/get-started/ and a full set of Azure for Research self-paced walkthroughs at http://aka.ms/a4rgithub

We will be publishing more posts in this blog series on more advanced topics for researchers taking advantage of Azure, so stay tuned to our cloud computing guide for researchers.

Need access to Microsoft Azure?

There are several ways you can get access to Microsoft Azure for your research. Your university may already make Azure available to you, so your first port of call should be your research computing department. There are also other ways for you to start experimenting with the cloud:

MSDN Blogs: Issues with TechNet Forums site 11/09 – Mitigating


We are working to resolve an issue with the Social Forums website at this time.

We apologize for the inconvenience and thank you for your patience.

– MSDN Service Delivery Team
