Channel: Randy Riness @ SPSCC aggregator

MSDN Blogs: Key rollover announcement – Aug 1st 2016


Key rollover conducted on Aug 1st 2016

On Aug 1st 2016, we rolled over the keys used to sign JWT tokens issued by Azure AD B2C. This is a part of our ongoing efforts to deliver a secure identity service. Please read on, as this note has important calls to action to ensure that this change causes no disruption to your apps or your end users.

What is key rollover?

Azure AD B2C uses a token signing key (in adherence to OpenID Connect and other standard protocols) to sign data (in our case, the end user’s identity token). Key rollover is the process of transitioning from one key to another. Regular rotation of keys helps improve the security posture of both the service and your apps.

The mechanics

Azure AD B2C advertises the public portion of the key at this endpoint: https://login.microsoftonline.com/&lt;YourTenantName&gt;.onmicrosoft.com/discovery/keys?p=&lt;YourPolicyId&gt;. Your app (more specifically, the OpenID Connect library that your app uses) verifies the authenticity of any incoming user token using this information. During key rollover, we advertise two keys: the key currently in use and the key that we are going to use in the future. The old key is removed after a period of time (usually a week); in this case, on Aug 8th 2016. We plan to roll over keys on a regular cadence.
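The `kid` (key ID) in each token's header is what lets a library ride through a rollover: it simply looks up the matching key in the advertised set, which temporarily contains both keys. A minimal sketch of that lookup (the JWKS shape below is illustrative; real entries also carry the RSA parameters):

```javascript
// Sketch (not library code): pick the signing key whose "kid" matches the
// token header. During rollover the discovery document lists two keys, so
// matching by kid keeps validation working for tokens signed with either.
function pickSigningKey(jwks, tokenKid) {
  // jwks is the parsed JSON from the /discovery/keys endpoint
  const key = jwks.keys.find((k) => k.kid === tokenKid);
  if (!key) {
    // Unknown kid: the cached document may be stale; re-fetch the discovery
    // document and retry before rejecting the token.
    throw new Error('No matching key; refresh the discovery document');
  }
  return key;
}

// Example discovery document during a rollover window: old and new key.
const jwks = {
  keys: [
    { kid: 'old-key-2016-07', use: 'sig', kty: 'RSA' },
    { kid: 'new-key-2016-08', use: 'sig', kty: 'RSA' },
  ],
};
```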

What do you need to do?

Most popular OpenID Connect libraries, including Microsoft's .NET library (Microsoft.Owin.Security.OpenIdConnect), already have built-in mechanisms for handling key rollover seamlessly. Check your library's documentation to confirm that it handles "issuer key rollover"; if it doesn't, you'll have to switch to a library for your platform that does.

Help

If you have any questions, please use the comments section or email us at aaddev@microsoft.com.


MSDN Blogs: Compile and build specific Hadoop source code branch using Azure VM


Sometimes you may want to test a Hadoop feature that is available in a specific branch that is not available as a binary release. For example, in my case, I want to try accessing Azure Data Lake Store (ADLS) via its WebHDFS endpoint. Access to ADLS requires OAuth2, support for which was added in Hadoop 2.8 (HDFS-8155) but is not available in the current Hadoop 2.7.x releases.

Hadoop source code is available in the mirrored GitHub repo https://github.com/apache/hadoop. Version 2.8-specific code is available in the branch appropriately named "branch-2.8".


Deploy Azure VM with Ubuntu 14.04-LTS

As described in the Building instructions for Hadoop, "the easiest way to get an environment with all the appropriate tools is by means of the provided Docker config" (for Linux or Mac). Since my primary laptop is running Windows 10, I will deploy an Ubuntu 14.04 LTS virtual machine in my Azure subscription, use it to build the Hadoop 2.8 binary tar.gz file, download the resulting file, and delete the VM once I am done.

I am using the Standard_DS2 VM size, created from the Canonical Ubuntu 14.04 LTS Azure gallery image https://portal.azure.com/#create/Canonical.UbuntuServer1404LTS-ARM


Install Docker on Ubuntu 14.04

After the VM is deployed, I SSH into it using its public IP and quickly install Docker following the instructions for Ubuntu 14.04 from https://docs.docker.com/engine/installation/linux/ubuntulinux/

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee --append /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get purge lxc-docker
apt-cache policy docker-engine
sudo apt-get install linux-image-extra-$(uname -r)
sudo apt-get install docker-engine
sudo service docker start
sudo docker run hello-world

By default, I am not able to run "docker run hello-world" with my user account (azureuser) without sudo. When I try, I get this message: "docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?" This happens because, by default, the Docker daemon's Unix socket is owned by root, so other users can access it only with sudo.

To enable azureuser to run docker without sudo, I follow Docker's instructions to create a group called "docker", add my user to it, log out, log back in, and try docker run again.

sudo groupadd docker
sudo usermod -aG docker `whoami`
logout

After logging back in, I now can run “docker run hello-world” without problems.

Clone Hadoop 2.8 Branch

Since I want to compile specifically the branch called “branch-2.8”, I use Git to clone only that specific branch to my home directory (/home/azureuser/hadoop-2.8) using this command:

git clone -b branch-2.8 --single-branch https://github.com/apache/hadoop.git hadoop-2.8


Start Docker Container with Hadoop Build Environment

Following instructions from https://github.com/apache/hadoop/blob/trunk/BUILDING.txt, I start the Hadoop build environment using the provided script:

cd hadoop-2.8/
./start-build-env.sh


This process will take some time (~5-10 min) since it installs all of the required build environment tools (JDK, Maven, etc.) in the container.

Building Hadoop within the Docker Container

After the creation process is finished, I see my Hadoop Dev docker container running.


I try to start the Maven binary distribution build without native code, without running the tests, and without documentation.

mvn package -Pdist -DskipTests -Dtar

Resolving Permissions Error

However, I get a permissions error regarding the /home/azureuser/.m2 directory (used by Maven).


To fix this problem, I exit the docker container, and set the ownership of the /home/azureuser/.m2 directory to azureuser:azureuser.

sudo chown azureuser:azureuser ~/.m2


Restarting Container and Starting Maven Build

After the permission problem is resolved, I restart the docker container:

cd hadoop-2.8/
./start-build-env.sh

Once within the container, I again try to start the Maven build and package:

mvn package -Pdist -DskipTests -Dtar


This process will take some time to complete. For me, on the Standard_DS2 Azure VM, it took about 9 minutes.


Download Binary Distribution File

After the build process is complete, the resultant files are found in the hadoop-dist/target directory.


I download the hadoop-dist-2.8.0-SNAPSHOT.tar.gz (200MB) file to my local machine from the Ubuntu Azure VM (e.g. using WinSCP, MobaXterm SFTP, etc.).

I also store this file as a block blob in an Azure Storage container so that I can quickly download it later without rebuilding (https://avdatarepo1.blob.core.windows.net:443/hadoop/hadoop-2.8.0-SNAPSHOT.tar.gz).

Once I have the binary distribution file ready, I can go ahead and delete my Azure VM.

Conclusion

Using an Azure VM running Ubuntu 14.04 LTS and Docker is a quick and convenient way to set up a temporary Hadoop build environment. Although in this case I built the "branch-2.8" branch specifically, the same process can be used to build other Hadoop branches (or trunk) from source.

I’m looking forward to your feedback and questions via Twitter https://twitter.com/ArsenVlad

MSDN Blogs: A ton of new great content on MSDN for VSTS and TFS


MSDN Blogs: Issue with Android Emulator running under Hyper-V (VM’s)


Recently I prepared a VM in our lab to reproduce an issue: Android Studio (Java code) accessing Azure Storage blobs was throwing an exception for some odd reason. I quickly created a lab machine and installed all the software: Android Studio, a ton of updates, and 90+ packages with various emulators. It took quite some time to install everything (tiring) and to fix the Java path (which is a pain and behaves differently between editors).

After setting everything up, I happily ran the code for the first time, but got an error that brought everything to a halt. After a bit of research, I learned that you cannot run phone emulators inside a Hyper-V VM: the emulator itself runs under Hyper-V, and running Hyper-V inside Hyper-V (nested virtualization) is not supported.

emulator

Solution: use your own laptop or a physical machine and connect to the phone directly with a USB cable (sad, I know). This actually gives a better experience than the emulator, and it is faster for debugging, disconnecting, redeploying, and tracing end to end.

Let me know if there is a way to mitigate this, along with your dev setup experience. Any tips and tricks are also welcome.

Happy learning!

MSDN Blogs: DirectXMath 3.09


DirectXMath version 3.09 is included in the Windows 10 Anniversary Update SDK (14393) that ships with VS 2015 Update 3 when you install Windows Tools 1.4.1 and select the 10.0.14393 Target Platform Version (see this blog post).

The new version includes the following:

  • Support for additional optimizations when built with /arch:AVX or /arch:AVX2
  • Added use of constexpr for type constructors, XMConvertToRadians, and XMConvertToDegrees
  • Marked __vector4i, XMXDEC4, XMDECN4, XMDEC4, and associated Load & Store functions as deprecated. These are vestiges of Xbox 360 support and will be removed in a future release.
  • Renamed parameter in XMMatrixPerspectiveFov* to reduce user confusion when relying on IntelliSense
  • XMU565 and XMUNIBBLE4 constructors take uint8_t instead of int8_t

Arch Switch

The DirectXMath library assumes that SSE and SSE2 are always supported on x86 and x64. Later instruction sets are not universally supported on PCs, and to avoid the penalty of additional guarded code paths or runtime selection, the library avoids using them. As I've covered in the past, you can still use specific instruction sets in your own guarded code paths, picking a level high enough that the cost of the runtime selection is amortized.

For fixed-platforms like the Xbox One, however, the library can rely on additional instructions being present. These optimizations are now also available in the Windows version of DirectXMath when built using the /arch:AVX or /arch:AVX2 switches. Keep in mind that the resulting EXE or DLL only works correctly on systems with AVX and/or AVX2 support, so you should ensure that these binaries are only used on such systems (XMVerifyCPUSupport will perform the additional tests as appropriate when built with these switches).

GitHub

In addition to shipping with the Windows SDK and the Xbox One XDK, DirectXMath is now also available on GitHub under the MIT license. This repo also includes all the extension libraries such as Spherical Harmonics math, XDSP, etc.

The GitHub’s master branch is already host to a new revision in progress for a future DirectXMath 3.10 release.

Related: Known Issues: DirectXMath 3.03, DirectXMath 3.06, DirectXMath 3.07, DirectXMath 3.08

MSDN Blogs: [Seminar] The database of the future is SQL Server 2016! (September 9, 2016, at Microsoft Japan headquarters in Shinagawa)

We are pleased to introduce a joint seminar with a partner that has deep SQL Server expertise.

SQL Server 2016, on sale since June 1, 2016, is Microsoft's most powerful data platform to date, featuring real-time operational analytics, rich visualizations on mobile devices, built-in advanced analytics, new advanced security technology, and new hybrid cloud scenarios.

The seminar covers "SI Object Browser" from System Integrator Co., Ltd., now compatible with SQL Server 2016, along with SQL Server licensing and purchasing options, and closes with an individual Q&A session.

We will also share SQL Server 2016's new features and know-how on design, development, and migration.

We look forward to your participation.

Details and registration: http://www.sint.co.jp/products/siob/sn/2016/0909.html

 

Seminar inquiries

System Integrator Co., Ltd., Tokyo Sales Office
Seminar Office (contact: Oh), email: oob@sint.co.jp
TEL: 03-5768-7979, FAX: 03-5768-7884
(Phone: weekdays 9:30-17:30; FAX and web: 24 hours)

 

Date and overview

Date and time: Friday, September 9, 2016, 13:30-17:00 (reception opens at 13:00)
Co-hosts: System Integrator Co., Ltd.; Microsoft Japan Co., Ltd.; Uchida Spectrum, Inc.
Capacity: 100
Venue:
Microsoft Japan Seminar Room (31F)
Shinagawa Grand Central Tower, 2-16-3 Konan, Minato-ku, Tokyo 108-0075
(3-minute walk from the Konan Exit of JR Shinagawa Station via the Skyway; 6-minute walk from Keikyu Shinagawa Station)
https://www.microsoft.com/ja-jp/mscorp/branch/sgt.aspx
Fee: Free (advance registration required)

Agenda

13:30-13:35: Opening remarks
System Integrator Co., Ltd.
Toshihide Suzuki, Director and General Manager, Object Browser Division

13:35-14:35 (60 min): "What's new in SQL Server 2016"
The new SQL Server delivers high-performance in-memory OLTP technology, real-time operational analytics, rich visualizations on mobile devices, and new advanced security technology, enabling mission-critical applications and big data solutions. Common tools let you deploy and manage databases both on-premises and in the cloud, so you can take advantage of the cloud easily with your existing skills. This session presents the full picture of the evolved SQL Server 2016.
Microsoft Japan Co., Ltd.
Daisuke Inoue, Technical Evangelist, Developer Evangelism

14:45-15:45 (60 min): "Get the most out of SQL Server with SI Object Browser / SI Object Browser ER!"
The database development tool SI Object Browser already supports SQL Server 2016, enabling data maintenance and stored program development on SQL Server 2016. The database design tool SI Object Browser ER can also migrate databases from other vendors to SQL Server via its database integration feature. These features will be shown with live demonstrations.
System Integrator Co., Ltd.
Jun Ushirosako, Manager, Object Browser Division

15:55-16:25 (30 min): "SQL Server 2016 and SCE (Server and Cloud Enrollment) agreements"
An overview of SQL Server licensing and purchasing options, centered on SCE agreements. In addition, for increasingly complex software asset management, we will introduce our cloud-based software asset management service, built on a lifecycle process refined over many years.
Uchida Spectrum, Inc.
Account Management Sales Group

16:30-17:00 (30 min): "Individual Q&A" (held after the main program)
Staff from Microsoft Japan, Uchida Spectrum, and System Integrator will be available for one-on-one questions.

* The program is subject to change.
* If oversubscribed, attendees will be selected by lottery.

MSDN Blogs: What's new in Microsoft Edge in the Windows 10 Anniversary Update


Exactly one year after the release of Windows 10, and just as its free upgrade period ended, the Windows 10 Anniversary Update was released today (August 3, 2016, Japan time).

The Anniversary Update implements several of the visions announced when Windows 10 was unveiled, and also adds entirely new capabilities that were unimaginable at the time, such as the Windows Subsystem for Linux (WSL).

Edge, the new web browser introduced with Windows 10, likewise gains the extension model that was part of the original plan, plus new features based on feedback and suggestions submitted through Windows Feedback and Developer Feedback (formerly the Edge Suggestion Box).

This article covers the Anniversary Update's new Edge features, focusing on the desktop UI that users interact with directly and on the newly added Edge group policy settings.

 

Microsoft Edge desktop UI enhancements

Since the release of Windows 10, Edge has been improved steadily, but much of that work, such as performance gains and a growing number of supported APIs, has been aimed at developers.

Because those features have no UI, many users outside the developer community may not have felt the progress firsthand.

In the Anniversary Update, however, Edge gains several new features in the desktop UI that everyone can see and use.

Broadly, they fall into four areas:

  1. Navigation
  2. Favorites
  3. Downloads
  4. Context menu

The sections below describe what has been added in each of these four areas.


 

Navigation

In web browsing, "navigation" refers to the operations that move you from page to page, and to that movement itself.

The following navigation-related features have been added.

Right-click navigation history

Right-clicking the Back or Forward button on the left of Edge's navigation bar now shows the history so far and your current position within it; clicking an entry takes you directly to that page.

Previously, the Back/Forward buttons could only step through the session history one page at a time; with this feature you can jump straight to any point in the history.

context_history

Paste and go / Paste and search

Previously, to open a URL you had copied to the clipboard, you had to paste it into the navigation bar and then press the [Enter] key.

Likewise, to search on a copied keyword, you had to paste the keyword into the navigation bar and press [Enter].

In the Anniversary Update, right-clicking the navigation bar shows a context menu with a [Paste and go] or [Paste and search] item; selecting it navigates to the copied URL, or searches on the copied keyword, without pressing [Enter]. Whether to "go" or "search" is determined automatically from the clipboard contents.

paste_and_go
(when the clipboard contains a URL)


 

paste_and_find
(when the clipboard does not contain a URL)

Swipe navigation

Edge now supports navigation by swiping.

When using Edge on a Windows Phone or a touch-enabled PC, you can move through previously visited URLs by swiping the page left or right.

Swiping left goes back one page; swiping right goes forward one page.

 

Favorites

In Edge, "favorites" let users register URLs of their choice in the browser.

The following favorites-related features have been added.

Pinned tabs

Pinning has been supported since Internet Explorer 8, where a pinning-enabled website's shortcut icon and navigation menu could be registered on the Windows taskbar.

In the Anniversary Update, pinning returns to Edge, although where the shortcut lives and what it offers are different.

Also, no special configuration is required on the website's side.

To use pinning in Edge, right-click the tab of the page you want to pin and choose [Pin tab] from the context menu.

Pinned tab

The page is then pinned at the left end of the tab strip and loads automatically whenever Edge starts.


Tree view and sorting in the Favorites menu

Previously, Edge's Favorites menu listed only the links at a single level; in the Anniversary Update, favorites are shown as a tree.

favorit_tree

Favorites can also now be sorted by name.

name_sort

Showing where favorites were imported from

Edge can import favorites from other web browsers. Imported favorites are now placed in a folder named after the source browser, so you can tell which browser they came from.

Inport

New favorites bar menus

A setting has been added to show only the icons of shortcuts on the favorites bar.

favorite_setting


Also, the favorites bar previously had no context menu; this update adds [Create new folder] and [Show icon only] items.

FavoritesBar

 

Downloads

Edge's download feature gains the following options covering where files are saved and what happens when saving.

Choosing the default download folder

Previously, Edge's default download folder was %user%Downloads and could not be changed; with this update it can be changed from the [Advanced settings] menu.


Choosing where to save a download

Previously, Edge did not let you choose where to save a file you were downloading. With this update, the dialog shown at download time gains a [Save as] button, letting you choose the destination and the file name.

dl_dialog

Warning when closing Edge during a download

Closing Edge while a file is still downloading now shows a warning message.

 

Context menu additions and changes

Features have also been added to the context menu shown when you right-click within a web page.

Cortana integration

Cortana integration has been implemented in Edge.

Select text or an image, then choose [Ask Cortana] from the right-click menu; Cortana works out what information is relevant to the selection, searches, and lists the results.

Cortana


 

For example, when asked about an image as shown below, Cortana looks up and lists related information and similar images.


 

When the developer menu items appear

Previously, the right-click menu on a web page always included the developer-oriented [Inspect element] and [View source] items.

With this update, these items are hidden by default and appear only after the F12 developer tools have been launched.

 

That covers the new Edge features added in the Windows 10 Anniversary Update.

Edge's new features are implemented based on the long-standing roadmap and on feedback from users.

Even so, a feature you were hoping for may not have made it in. In that case, you may be able to fill the gap by installing an Edge extension that adds the feature.

 

Extending Edge with extensions

The Windows 10 Anniversary Update adds extension support to Edge.

Extensions are externally developed programs that plug into Edge and, quite literally, extend its functionality.

Edge extensions are available from the Windows Store, where extensions offering ad blocking, mouse gestures, translation, and more are already on offer.

Getting an extension and installing it in Edge

To get and install an extension:

  1. Click the [...] icon on Edge's toolbar, then click the [Extensions] menu.
  2. The [Extensions] panel appears; click the [Get extensions from the Store] link.
  3. The Windows Store opens to the "Extensions for Microsoft Edge" page; click the icon of the extension you want.
  4. On the extension's description page, click the [Free] button (or, for paid extensions, the button showing the price).

Following these steps downloads and installs the extension.

Configuring and uninstalling extensions

Uninstalling an installed extension, or setting its options, is done from Edge's [Extensions] menu.

The steps are as follows:

  1. Click the [...] icon on Edge's toolbar, then click the [Extensions] menu.
  2. The [Extensions] panel lists the installed extensions; click the one you want.
  3. The extension's details screen appears; from there you can:
    • Enable or disable the extension: use the toggle button
    • Set the extension's options: click the [Options] button
    • Uninstall the extension: click the [Uninstall] button

 

That is how extensions are installed in, and uninstalled from, Edge.

How to build your own Edge extension was covered in the previous post on this blog; if you are interested, please take a look.

 

Edge group policy settings added in the Windows 10 Anniversary Update

The Anniversary Update also expands what can be configured for Edge through group policy.

Group policy is a mechanism that lets IT administrators apply a variety of settings in bulk to the users they manage.

Edge has long allowed IT administrators to use group policy to manage the following for Windows domain users:

Setting: Description

Turn off autofill: whether form fields can be filled in automatically while using Edge
Turn off developer tools: whether use of the F12 developer tools is allowed
Allow employees to send Do Not Track headers: whether employees may send Do Not Track headers to websites that request tracking information
Turn off InPrivate browsing: whether InPrivate browsing may be used
Turn off password manager: whether the password manager may be used to save passwords locally
Turn off pop-up blocker: whether the pop-up blocker is available
Turn off address bar search suggestions: whether search suggestions are shown in the address bar
Turn off the SmartScreen Filter: whether the SmartScreen Filter is enabled
Open a new tab with an empty tab (allow web content on the New Tab page): configures what kind of page is shown when a new tab is opened
Configure cookies: configures how cookies are handled
Configure the Enterprise Mode site list: configures whether Enterprise Mode and the Enterprise Mode site list are used
Configure favorites: configures the default favorites shown to users
Prevent using LocalHost IP address for WebRTC: specifies whether the user's LocalHost IP address is exposed during calls that use the WebRTC protocol
Configure the enterprise home page: configures the corporate home page for domain-joined devices
Don't allow SmartScreen Filter warning overrides: specifies whether employees can override SmartScreen Filter warnings about potentially harmful websites
Don't allow SmartScreen Filter warning overrides for unverified files: specifies whether users can override SmartScreen Filter warnings about downloading unverified files
Send all intranet sites to Internet Explorer 11: specifies whether intranet sites are shown in Internet Explorer 11

 

This update makes the following newly controllable:

Setting: Description

Prevent access to the about:flags page in Microsoft Edge: specifies whether users can access the about:flags page
Allow extensions: specifies whether users can load extensions
Show a message when opening sites in Internet Explorer: specifies whether Microsoft Edge shows employees an additional page indicating that a site was opened in Internet Explorer 11

Using these new group policy settings, IT administrators can manage whether the new Edge features added in this update may be used.

 

About Microsoft Edge's change history

A more detailed, per-build change history for Windows 10 is available on the following page.

 

For the APIs supported by Microsoft Edge, see the following.

 

The new features of the F12 developer tools (there aren't many) and the new Internet Explorer 11 backward-compatibility features (also not many) will be covered in a later post.

 

Summary

Until now, there may have been plenty of moments while browsing when Edge felt less capable than Internet Explorer 11. The Windows 10 Anniversary Update goes a long way toward resolving those complaints.

For features that are still missing, adding extensions should cover the gap to a fair degree.

Whether you tried Edge and drifted to another browser, or have never used it at all, please take this opportunity to give Edge a try.

 


MS Access Blog: Announcing the general availability of the Microsoft Excel API to expand the power of Office 365


Today, we are pleased to announce the general availability of the Microsoft Excel REST API for Office 365—now developers can use Excel to power custom apps. Excel is an indispensable productivity tool; users across all industries and roles embrace it. It is used for everything from simple task tracking and data management, to complex calculations and professional reporting. With our new interface, Excel for Office 365 can extend the value of your data, calculations, reporting and dashboards.

The Excel REST API release is a continuation of our journey to make Office an open platform for developers. The Office developer framework uses modern web development standards, so developers can build smarter apps that operate as part of Office on mobile, web and desktop platforms. The new Excel interface is exposed through the Microsoft Graph, providing access to intelligence and insights from the Microsoft cloud.

Here are a few scenarios showing how developers can use the new Excel REST API:

Excel as a calculation service

Users love the ease with which they can perform deep and complex calculations within Excel. Developers can now access Excel's powerful calculation engine and get instant results. For example, a mortgage calculator can take advantage of the PMT function from Excel, using a simple API call that includes the principal, rate and number of payments. Excel does all the heavy lifting and returns the monthly payment instantly. With more than 300 Excel worksheet functions available, you have full access to the breadth of formulas supported by Excel today. Complex business models don't need to be rebuilt repeatedly; developers can leverage Excel to perform those calculations instantly and retrieve the results with simple API calls.
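For reference, the math behind that mortgage example is the standard annuity formula, which Excel's PMT function implements. A quick local sketch (the function name and argument order here are illustrative, not the REST API's):

```javascript
// Standard annuity payment formula, the same math behind Excel's PMT:
//   payment = pv * rate / (1 - (1 + rate)^-nper)
// rate is the per-period rate, nper the number of payments, pv the principal.
// (Name and sign convention are illustrative, not the Excel API itself.)
function pmt(rate, nper, pv) {
  if (rate === 0) return pv / nper; // no-interest edge case
  return (pv * rate) / (1 - Math.pow(1 + rate, -nper));
}

// A $200,000 mortgage at 6% annual interest (0.5% monthly) over 30 years:
const monthly = pmt(0.06 / 12, 360, 200000);
// monthly is roughly $1,199 per month
```

Calling the hosted PMT function through the API returns the same result without shipping this formula in your app.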

Excel as a reporting service

Excel is a reporting hero, from simple data tables to professional dashboards. Today, we are giving developers full access to all of Excel’s reporting features—making Excel an online reporting service within Office 365. Imagine any of the reporting scenarios users rely on today pulled into a custom app to create professional charts or analyze large sets of data intelligently, seamlessly blending Excel into those customized experiences.

Excel as a data service

Excel is also a great tool for storing and tracking data. If your information is stored in a workbook, that data is available to any app integrating with Office 365, so custom solutions can read its contents and use Excel as their data store.

We look forward to working with developers and partners to discover and build new places and scenarios where Excel will continue to enable people to be more productive. Companies are already taking advantage of our new Excel REST API, including:

Zapier lets users easily automate tedious tasks. It recently announced a new Excel integration, powered by the Excel REST API, with near-infinite use cases, like simplifying a data collection process. Users can now build Zaps where data is automatically added to Excel from other services, like emails and surveys, making Excel the data repository for all your connected services.

Sage is working on integrating Sage 50 accounting software with Office 365, leveraging Excel via the new Excel REST API to access and combine business data in Sage 50 with the productivity benefits of Office. With powerful interactive Microsoft Excel reporting and business performance dashboards, Sage has simplified how users interact with their data, enabling small and medium businesses to make sense of it quickly and make faster, better decisions.

Want to build custom apps using Excel?

Excellent! Visit dev.office.com/excel/rest, where you’ll find documentation and code samples to help you get started. It only takes a few lines of code to set up a basic integration with our Excel to-do list. Once you jump in, tell us what you think. Give us your feedback on the API and documentation through GitHub and Stack Overflow, or make new feature suggestions on UserVoice.
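As a rough sketch of what such an integration looks like, the snippet below builds the Microsoft Graph URL for reading a workbook range and shows the authenticated GET around it. The item ID, sheet name, and access token are placeholders, and you should verify the exact endpoint shapes against the documentation at dev.office.com/excel/rest:

```javascript
// Builds a Microsoft Graph URL for reading a range from a workbook stored in
// the signed-in user's OneDrive. itemId, sheet, and address are caller-supplied;
// the path shape follows the Graph workbook API (verify against current docs).
function workbookRangeUrl(itemId, sheet, address) {
  return 'https://graph.microsoft.com/v1.0/me/drive/items/' + itemId +
    "/workbook/worksheets('" + sheet + "')/range(address='" + address + "')";
}

// The request itself is a plain authenticated GET, e.g. with fetch:
//   fetch(workbookRangeUrl('ITEM_ID', 'Sheet1', 'A1:B2'),
//         { headers: { Authorization: 'Bearer ' + accessToken } })
//     .then((r) => r.json())
//     .then((range) => console.log(range.values));
```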

—The Office Extensibility team

The post Announcing the general availability of the Microsoft Excel API to expand the power of Office 365 appeared first on Office Blogs.


MSDN Blogs: And your new Imagine Cup Champion IS … Team ENTy of Romania!


team enty 850

Team ENTy of Romania celebrates on stage in the Quincy Jones Performing Arts Center at Garfield High School after winning the 2016 Imagine Cup Championship.

On July 29th, a new Imagine Cup Champion was crowned: Team ENTy of Romania, creators of a device and solution that monitors balance and posture and has wide-ranging potential for use in the global healthcare industry.

That morning, backpack-bedecked and wide-eyed students streamed into Garfield High School’s Quincy Jones Performing Arts Center near downtown Seattle, ready to find out who would take home the much-coveted Imagine Cup Championship trophy. Excited banter was steady against a backdrop of dance music. A long, white couch for the judges, stools for the finalists and a workstation laid out with two notebooks and a tablet set the stage.

Blake Lewis

After human beatbox artist and American Idol star Blake Lewis (pictured above) amped up the energy in the packed, 600-seat auditorium, the three first-place teams from the Games, Innovation, and World Citizenship competitions took their places on stage. There, they were met by our three judges: John Boyega, lead actor from Star Wars: The Force Awakens; Dr. Jennifer Tang, one half of the duo that won the 2014 Imagine Cup Championship; and Kasey Champion, an accomplished software engineer and Computer Science Curriculum Developer at Microsoft.

Each team had three minutes to present their project, followed by in-depth feedback on how to take their technology and their plans to the next level. Dr. Tang said, above all, the teams should make sure to absorb and enjoy the life-changing experience that is Imagine Cup.

Once the confetti settled, the audience relocated to Garfield High’s gymnasium for a Robo-Hack.

Robots, robots, robots

Hackathon participants came from local Seattle high schools and Boys and Girls Clubs, and they quickly settled into their workstations, which were equipped with a robot kit and laptops running Windows 10. About 200 students gathered in teams of four to turn a fleet of blue and white robots into soccer-playing champions.

With the guidance of Microsoft Student Partners (MSPs), teams spent the last two hours of the morning learning how to configure their robot’s Raspberry Pi 3. Using the Universal Windows Platform App, students could control their robots using an Xbox One controller.

Robohack-Girl-Assembly-850

A young developer works meticulously to get her team’s robot together for the Robo-Hack held in the Garfield High School gymnasium after the Imagine Cup Championship.

The students’ ingenuity showed no bounds. Some teams equipped their robots with bits of the packaging it had come boxed in, creating a way to scoop up the ball during play. Other robots were designed not only to gain control of the ball, but to keep the ball inaccessible by the opponent. Another team built an arm onto their robot, which could effectively swipe the ball from another robot’s grasp.

Once their robotic soccer player was built, the teams went head-to-head in a single-elimination tournament.

Students huddled shoulder-to-shoulder around each playing field with an MSP acting as referee. As in real soccer, whichever team scored the most goals won. Fifty teams squared off, and the gym was full of shouts of celebration and encouragement. Cheers erupted every time a goal was scored, and matches continued until 50 teams dwindled to twelve, then six, until three teams remained in the game.

Robohack-Girl-Excited-850

A Microsoft Student Partner guides a young team of developers in readying and programming their robot for the Robo-Hack.

Meet the Robo-Hack winning teams

A middle school team from the Boys and Girls Club landed in third place, leaving at least one team member hungry for more. He already had ideas for improvements and wished he could take the robot home.

The energy in the gym was intense and excited as the final two teams—the University of Washington STEMsub and the Garfield High “Dawgbotics” Robotics team—faced off for the title of Robo-Hack Champion.

The teams tied — twice! — and the Robo-Hack championship came down to which team could score the most goals in one minute. Victory went to Garfield High’s robotics team!

The Dawgbotics congratulated each other on their first-place win with fierce hugs and high-fives. Each team member was awarded an Xbox One, an HP Spectre and a Robo-Hack trophy.

As soon as the competition ended, the students piled their trophies into the arms of their teacher, Earl Bergquist, with a request to put them on display.

And, if you missed this year’s Imagine Cup action, don’t worry – you can catch the Imagine Cup Awards Ceremony and the Imagine Cup Championship on demand over on Channel 9!

___________

Microsoft Imagine, anywhere, anytime:

  • Follow us on Twitter, Facebook, Instagram and YouTube.
  • Subscribe to our blog to meet students just like you who are changing the world with their exciting new tech. Plus, stay on top of all the new products and offerings for students.
  • Get inspiration delivered to your inbox via our monthly Microsoft Imagine Newsletter, featuring the latest tech tips, competition news and all kinds of online tutorials.
  • Bookmark Microsoft Imagine for all the student developer news, downloads to free student products, online tutorials and contests you could want.

MSDN Blogs: Adding Custom API routes with Node SDK


Azure Mobile Apps uses a new SDK, and the way you add custom routes to your API has changed. The Easy API interface lets you quickly stub out a function for an API like http://mymobileservice.azurewebsites.net/api/Health, but documentation on how to add another route, such as http://mymobileservice.azurewebsites.net/api/Health/Database, is a little sparse. This walkthrough shows you how to add that route.

Setup

This walkthrough assumes you are using the Quickstart code the portal generates for you, and that you already have an Easy API called Health created through the Easy API interface in your Azure Mobile App.

Walkthrough

The current code looks like this for my Easy API Health:

module.exports = {
    "get": function (req, res, next) {
        res.status(200).type('text').send("All Healthy");
    }
};


When I call /api/Health, it returns "All Healthy".

I want to add a route so I can drill into the health of my other services with a call to /api/Health/Databases for example.

Using the App Service Editor, navigate to the api folder under the wwwroot folder of your code (you can get there by clicking on any Easy API in your portal and choosing Edit Script, or via https://yourmobileappname.scm.azurewebsites.net/dev/wwwroot/api).

 

Add the code that we want to execute when we hit the route /api/Health/Database.

Then right-click on the .gitignore file (if a folder were highlighted, you would create a sub-folder instead) and choose the folder icon to create a new directory:


Let’s call it custom_api_routes:


Tab out and right click on that new folder and create a new file:


Name it something intuitive like DatabaseHealth.js (important to add the .js extension here) and you will be in the editor.

 

Here is the sample code with some comments to describe what it does and why.  Add it to this new file:

// we need access to express to get the Router Object so require the express module
var express = require('express');

// define an anonymous function to avoid name collisions and assign it to the exports of this module (this file)
module.exports = function () {
    // get the router Object    
    var router = express.Router();
    // add a route for the http GET verb and add your code to it.    
    router.get('/', (req, res, next) => {
        res.status(200).type('text').send("DB Healthy");
    });
    // return the router object with your code defined for the HTTP GET verb (add POST etc... if you want).
    return router;
};

Update the app code to use this code for our route

In the wwwroot folder, find app.js. Toward the top of the file, after the azureMobileApps require, add a ‘,’ and a require for the DatabaseHealth code we added in the previous step:

var express = require('express'),
    azureMobileApps = require('azure-mobile-apps'),
    // Obtain our custom API - it exports an anonymous function so we assign it to a variable to use later
    databaseHealthApi = require('./custom_api_routes/DatabaseHealth');

At the very bottom of app.js, add the custom route we want:

// Register the router we configure in our custom API module
// This must be done after registering the mobile app
mobileApp.use('/api/Health/Database', databaseHealthApi());

Hit the Run button on the left side of the App Services Editor and try your changes (note the editor will show you any problems with the build etc…)

For example:  http://<yourmobileappurihere>/api/health/database should return: DB Healthy
and  http://<yourmobileappurihere>/api/health should return: All Healthy

Summary

It is pretty easy to add custom routes using the App Services Editor, because Azure Mobile Apps lets us use Express middleware directly.  Related sample: https://github.com/Azure/azure-mobile-apps-node/tree/master/samples/custom-api-routing

Drop me a note if you found this helpful!

MSDN Blogs: Changes in SQL Server 2016 Checkpoint Behavior


Reviewed by: Denzil Ribeiro, Mike Weiner, Arvind Shyamsundar, Sanjay Mishra, Murshed Zaman, Peter Byrne, Purvi Shah

SQL Server 2016 introduces changes to the default behavior of checkpoint. In a recent customer engagement, we found the behavior change resulted in higher disk (write) queues on SQL Server 2016 vs. the same workload on SQL Server 2012. In this blog we’ll describe the changes, the options available to control them, and the impact they might have on workloads upgrading to SQL Server 2016. In this specific case, changing the database to use the new default checkpoint behavior proved to be very beneficial.

Checkpoints in SQL Server are the process by which the database engine writes modified data pages to data files. Starting with SQL Server 2012, more options have been provided to better control how checkpoint behaves, specifically indirect checkpoint. The default checkpoint behavior in SQL Server prior to 2016 is to run automatic checkpoints when the log records reach the number of records the database engine estimates it can process within the “recovery interval” (server configuration option). When an automatic checkpoint occurs, the database engine flushes the modified data pages to disk in a burst fashion. Indirect checkpoint provides the ability to set a target recovery time for a database (in seconds). When enabled, indirect checkpoint results in constant background writes of modified data pages instead of periodic flushes of modified pages. The use of indirect checkpoint can “smooth” out the writes and lessen the impact that short periodic bursts of flushes have on other I/O operations.

In addition to configuring indirect checkpoint, SQL Server also exposes a startup parameter (-k), followed by a decimal value, which configures the checkpoint speed in MB per second. This is also documented in the checkpoint link above. Keep in mind this is an instance-level setting and will impact all databases which are not configured to use indirect checkpoint.

For further internals around checkpoint reference: “How It Works: Bob Dorr’s SQL Server I/O Presentation”. For the purposes of this blog we’ll focus on what has changed and what this means for workloads that are upgrading to SQL Server 2016.

Key Changes to Checkpoint Behavior in SQL 2016

The following are the primary changes which will impact behavior of checkpoint in SQL Server 2016.

  1. Indirect checkpoint is the default behavior for new databases created in SQL Server 2016. Databases which were upgraded in place or restored from a previous version of SQL Server will use the previous automatic checkpoint behavior unless explicitly altered to use indirect checkpoint.
  2. When performing a checkpoint SQL Server considers the response time of the I/O’s and adjusts the amount of outstanding I/O in response to response times exceeding a certain threshold. In versions prior to SQL Server 2016 this threshold was 20ms. In SQL Server 2016 the threshold is now 50ms. This means that SQL Server 2016 will wait longer before backing off the amount of outstanding I/O it is issuing.
  3. The SQL Server engine will consolidate modified pages into a single physical transfer if the data pages are contiguous at the physical level. In prior versions, the max size for a transfer was 256KB. Starting with SQL Server 2016 the max size of a physical transfer has been increased to 1MB potentially making the physical transfers more efficient. Keep in mind these are based on continuity of the pages and hence workload dependent.

To determine the current checkpoint behavior of a database query the sys.databases catalog view.

SELECT name, target_recovery_time_in_seconds FROM sys.databases WHERE name = 'TestDB'

A non-zero value for target_recovery_time_in_seconds means that indirect checkpoint is enabled. If the setting has a zero value it indicates that automatic checkpoint is enabled.


This setting is controlled through an ALTER DATABASE command.
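For example, using the TestDB database from the query above, a 60-second target enables indirect checkpoint, and a zero target reverts to automatic checkpoints:

```sql
-- Enable indirect checkpoint with a 60-second target recovery time
ALTER DATABASE TestDB SET TARGET_RECOVERY_TIME = 60 SECONDS;

-- Set the target back to 0 to revert to automatic checkpoints
ALTER DATABASE TestDB SET TARGET_RECOVERY_TIME = 0 SECONDS;
```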

Example of Differences in Checkpoint Behavior by Version

Below are some examples of the differences in behavior across versions of SQL Server, and with/without indirect checkpoint enabled. Notice the differences in disk latency (Avg. Disk sec/Write) in each of the examples. Each of the examples below is from an update heavy transactional workload. For each a 30-minute comparable sample has been captured and displayed.


Figure 1 – Checkpoint Pattern on SQL Server 2012

 


Figure 2 – Checkpoint Pattern on SQL 2014

 

Notice that there is little difference in behavior from SQL Server 2012 to SQL Server 2014.


Figure 3 – Checkpoint Pattern on SQL Server 2016 (Using Automatic Checkpoint – Maintains 2012 Behavior on Upgrade)

 

After moving to SQL Server 2016 notice that the latency and amount of I/O being issued (Checkpoint pages/sec) during the checkpoints increases. This is due to the change in how SQL determines when to back off the outstanding I/O being issued.


Figure 4 – Checkpoint Pattern on SQL 2016 (After Changing to Indirect Checkpoint)

 

After changing the configuration of the database to utilize indirect checkpoint, the SQL engine issues a constant stream of I/O to flush the modified buffers. This is represented as Background writer pages/sec on the graph above. This change has the effect of smoothing the checkpoint spikes and results in a more consistent response time on the disk.


Table 1 – Checkpoint and I/O Performance Metrics for Different SQL Versions and Checkpoint Configurations

In the above observe the following:

  • Automatic checkpoint in SQL Server 2012 can issue less outstanding I/O than SQL Server 2016. For this particular hardware configuration, the result is higher disk latency on SQL Server 2016 (and more queued I/Os) than on SQL Server 2012.
  • Indirect checkpoint in SQL Server 2016 has the effect of “smoothing” out the I/O requests for checkpoint operations and significantly reducing disk latency. So while this results in a more constant stream of I/O to the disks, the impact of the checkpoint on the disk, as well as on any other running queries, is lessened.
  • The counters which measure the amount of work being performed by checkpoint are different and depend on the type of checkpoint enabled. The different counters can be used to quickly expose which type of checkpoint and how much work the operations are doing on any given system.
    • Automatic checkpoints are exposed as “Checkpoint pages/sec”
    • Indirect checkpoints are exposed as “Background writer pages/sec”

Summary

There are subtle differences in checkpoint behavior when migrating applications from previous versions of SQL Server to SQL Server 2016, as well as differences in the configuration options available to control it. When migrating, make sure you understand the difference in behavior between databases newly created on SQL Server 2016 and those created on previous versions. Indirect checkpoint is the new default, and you should consider changing the configuration of existing databases to use it. Indirect checkpoint can be a very effective approach to minimizing the impact of the more aggressive automatic checkpoint in SQL Server 2016 on systems with I/O configurations that cannot handle the additional load.

MSDN Blogs: Notes about the UWP on Xbox One program and the Xbox Summer Update


As previously reported by Xbox Wire and Major Nelson, the Xbox One Summer Update has been released.  If you have a console that was enrolled in the UWP on Xbox One program prior to the release of the Xbox One Summer Update, it should already be running the Summer Update console OS build (10.0.14393.1018 rs1_xbox_rel_1608.160725-1822). 

If you have been thinking about enrolling a console in the UWP on Xbox One program, now is a great time to do so because with the release of the Xbox One Summer Update you no longer need to install a pre-release console OS build to do so.  All you need to do is make sure that your retail Xbox One console is running the Xbox One Summer Update, then activate Xbox One Developer Mode and switch from Retail Mode to Developer Mode using the instructions at https://msdn.microsoft.com/windows/uwp/xbox-apps/devkit-activation.

Also, the Windows 10 Anniversary Update SDK has been released, and you should use this SDK and Visual Studio 2015 Update 3 when building, deploying and debugging UWPs on Xbox One.  Here are links to blog posts with additional information about the Windows 10 Anniversary Update SDK:

MSDN Blogs: Azure Data Lake U-SQL August 1st 2016 Updates: ACLs on Databases, Skipping Header Rows, Sampling and more!


MSDN Blogs: Windows 10 Anniversary Update SDK now available – Submit your desktop apps to the Windows Store by using Centennial bridge


Screenshot of the Visual Studio 2015 notifications pane, showing the 'Windows 10 Anniversary Update SDK' and 'Tools for Universal Windows Apps and Windows SDK', released on 2 August 2016. Graphic: Microsoft

Download here: Anniversary Update SDK.

Lots of good stuff here, including improvements to Ink, Cortana APIs, and Hello. The big news is the cross-platform story:

  • Desktop Bridge (Project Centennial): The millions of developers using Win32 and .NET to build desktop apps can now get access to the benefits of the Universal Windows Platform and the Windows Store. Using the modern deployment technology of UWP, desktop apps can cleanly install, uninstall, and update, as well as get full access to UWP APIs including Live Tiles, roaming storage, push notifications, and more.
  • Improved Tools and Bridges for iOS and Android Developers: Visual Studio now includes Xamarin tools built-in, ready to create apps for Windows and iOS and Android. In addition, our open source Windows Bridge for iOS enables iOS developers to bring Objective-C code into Visual Studio and compile it into a UWP app.

More details from Kevin Gallo: Windows 10 Anniversary Update SDK Now Available! Windows Store Open for Submissions.


MSDN Blogs: Some highlights of the August 1st U-SQL refresh: Skipping header rows, Database Level ACLs, improved Extractor framework


We just released some big updates on U-SQL and Azure Data Lake.

Check out the U-SQL release notes and the blog post summarizing the new file and folder-level access control in Azure Data Lake Storage.

In this blog post I want to call out some of the highlights.

First, we released access control at the database level. This now gives you control over who can create databases, who can read from them, and who can create objects in them. Note that the master database in U-SQL by default continues to be open for everyone to use.

Secondly, we fixed the so-called record boundary/extent boundary alignment issue that caused large files to fail during extraction unless they were uploaded in specific ways. Fixing this issue also gives us correct information about the segments (extents) being passed to an extractor, which in turn allows us to finally enable the skipFirstNRows parameter on the built-in extractors, as well as to write parallelizable custom extractors that process the first or last file segment specially. I will provide more detailed blog posts on such custom extractors soon.
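As a quick sketch (the file path and schema here are hypothetical), skipping a single header row with the built-in Csv extractor looks like this:

```
@rows =
    EXTRACT name string,
            value int
    FROM "/input/data.csv"
    USING Extractors.Csv(skipFirstNRows : 1);
```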

Finally, I strongly suggest signing up to test the new sampling capabilities.


MSDN Blogs: Release Management issues with Visual Studio Team Services – 8/4 – Investigating


Initial Update: Thursday, 4 August 2016 00:27 UTC

We are actively investigating issues with the Release Management service. Customers who have VS Team Services accounts created in our South Brazil scale unit may not be able to use Release Management services.

We are working to resolve this issue and apologize for any inconvenience.

Sincerely,
Krishna

MSDN Blogs: Microsoft Dynamics Partner Enablement Update – August 2016


BREAKING NEWS

Get US$2,500 savings on Advanced Support!

Advanced Support for Partners offers cloud support at an accessible price point to help you serve customers better and grow your cloud business faster, with:

  • Less than 1-hour incident response time for your most critical issues for Microsoft Azure, Microsoft Office 365, Microsoft Dynamics CRM Online, and Microsoft Dynamics AX
  • Services Account Managers to act as your support advisors
  • Proactive guidance and training so you are ready for what’s next

From now through August 15, 2016, partners who sign up for an annual subscription will save US$2,500 with our special promotion*. Use the promo code ‘ASFPSPECIAL16’ at sign up. To purchase:

  1. Visit https://aka.ms/asfpwebsite and click ‘Buy now’.
  2. Enter the promo code at the bottom of section 1 of the purchase form in the ‘Special Instructions’ field. Note: Do not select an ASfP subscription in the “Support Preferences” section unless you want to add an optional package.
  3. Enter company contact information in section 2.
  4. Accept the Terms and Conditions in section 3 and click ‘Next’.
  5. Review the details and click ‘Finish’.

The Team will contact you within 48 business hours to finalise and activate your subscription.

Sign up to save on Advanced Support

* The Advanced Support offer is subject to terms and conditions accessible on the portal.

Please reach out to Sameer Saxena (sameer.saxena@microsoft.com) to learn more.

 

REMINDER: Dynamics CRM Exam Requirements Changes from Sept 1, 2016 onwards

Starting September 1, 2016, Microsoft Dynamics CRM partners must hold exams for current product versions of Microsoft Dynamics CRM (2015 or 2016) to be able to order and claim fees.

For more information on these changes, please refer to the “What’s Coming – Changes to Microsoft Dynamics Exam Requirements” page on Partner Source.

 

MICROSOFT DYNAMICS AX

Learn all about Data Management, Entities and Migration!

Newly launched topic Dynamics AX Data Management is a compilation of courses on data migration and integration. Dynamics AX Code Migration also shows you how to migrate code and data from earlier versions of on-prem Dynamics AX to the new AX.  

 Latest AX Contents @ DLP

Be sure you are up to date on the most current readiness content and review the new topic pages and courses on DLP today!

Upcoming AX Trainings

Taking your certification? Need instructor-led training workshops? Refer to AX Calendar Events for the latest updates and sign up for these workshops.

 

MICROSOFT DYNAMICS CRM

Launch of Tech Talk Series

Three Dynamics CRM Technical Support Tech Talk videos are now available:

Latest CRM Contents @ DLP

Get the latest readiness contents from the topic pages and course catalog:

  • Customer Service in Microsoft Dynamics CRM 2016 Update 1 course focuses on how an organization can nurture customer satisfaction through automation of business processes, providing the latest insights on Customer Service & Service Scheduling functionalities.
  • Sales Management in Microsoft Dynamics CRM 2016 Update 1 provides a flexible framework for organizations to track, manage, and analyze parts of their sales cycle as well as its overall success. This course provides information on the Sales functionality of Dynamics CRM 2016, ranging from various customer scenarios, product catalog management to sales transaction processing and analysis.

Upcoming CRM Trainings

Taking your certification? Need instructor-led training workshops? Bookmark CRM Calendar Events for the latest updates and sign up for these workshops.

 

MICROSOFT DYNAMICS NAV

Latest NAV Directions Webinar Video is NOW Available

Find the latest webinar recording on News from WPC in Toronto – Dynamics 365. Recordings of past webinars are also available here.

Latest NAV Contents @ DLP

More of the How Do I series:

Upcoming NAV Trainings

Taking your certification? Need instructor-led training workshops? To view the latest training schedule, please go to NAV Calendar Events and sign up for these workshops.

MSDN Blogs: “Method of failed”


https://youtu.be/pJ60BeA0SDM

Here I am sharing the audition video I submitted to Pluralsight to become an author. Unfortunately I couldn’t make it, but it’s now a matter of a few months after which I can reapply and hopefully become an author for Pluralsight.

Despite this failure, my experience with the Pluralsight acquisition editor and curriculum team was quite inspiring. Initially I thought the only learning I would get from recording the audition video would be creating meaningful slides, doing the voice-over, and explaining the code in under 10 minutes with a nice storyline. However, it started with learning Camtasia Studio. Video editing: something I had last done 10 years back in college! That was followed by hours of recording sessions and retakes. Then came the submission of the first version of the audition. I got very constructive feedback from the acquisition editor. Back then, the toughest part to improve was learning voice inflection techniques, as I was talking in a monotone throughout the video. Fortunately, Bhagyashree came to the rescue and tutored me through some voice inflection sessions. They mostly went like this.
https://youtu.be/Z6oeAdemFZw

And finally, I submitted the 2nd version of the audition. I felt that what I had done was perfect, but then I reached the 2nd round of review from the curriculum team. That review ended my journey with Pluralsight for the time being, but it started another journey: perfecting myself as an author and tutor. The precise, constructive feedback from the curriculum team has many points I can improve upon when it comes to teaching over video, and I had never noticed them before! Wow! In this entire two-month interaction with the Pluralsight team, I witnessed only transparency in communication and high standards for quality and review. Their editors are doing great work. This only excites me more to become a Pluralsight author and be part of this community of great people. ‪#‎Pluralsight‬‪#‎RodePodcaster‬‪#‎Camtasia‬‪#‎1080p‬‪#‎audition‬

MSDN Blogs: How to render SQL Server acyclic blocking graphs using Visual Studio Code, TypeScript, NodeJS and TreantJS – Part 2


Recap

In the previous blog post (you can find it here: http://bit.ly/bcts1) we saw how to set up a NodeJs REST application in Visual Studio Code. We also saw how easy it is to debug the application without leaving the IDE. In this second part, we will see how to wire some data to a gorgeous library: TreantJS. In our example, we want to be able to graphically visualize the SQL Server acyclic blocking graph. Let’s review the server code:

This code is fairly simple. All we do is register a path (/spwho) in express. That URL will trigger the spWhoService.GetEntries() code. That code, upon success, will return a JSON representation of the sp_who stored procedure.

The service, in case you are wondering, is like this:

As you can see, we are returning a strongly-typed array (in fact, a promise to be fulfilled). We use an interface to make unit testing with mock services easier.

Next step

What we need now is to create an html page that will call our web service, do some data manipulation and then render the graph. We want to do it in TypeScript (obviously) and we want to use Visual Studio Code.

Debugger for Chrome

There are many techniques for debugging transpiled TypeScript code. Visual Studio Code, using the Debugger for Chrome plugin, has first-class support for web debugging. You can set breakpoints in the code and, when one is hit, the browser will pause and surrender control to Visual Studio Code. From there, you will be able to inspect variables and the call stack, step over, step into, and so on.

Dual debugging

We want to be able to debug the NodeJs server and the Chrome webapp so we can follow the data from end to end. But how do you debug a NodeJs server and a Chrome webapp at the same time in the same Visual Studio Code solution? Ideally we want to set up two different Visual Studio Code solutions, one for the NodeJs server and another for the webapp. But the webpage will be served by the server (so the code must be there), not by the webapp solution. There are many ways to solve this chicken-and-egg problem. I usually end up using symbolic links, but any vendoring technique should be fine.

In my case I will have the final web folder in the NodeJs solution (to be exported as static files by express). I will not, however, use it from there. Instead, I will create a new solution specific to the webapp and link the web folder from the NodeJs solution. This way I can work on my webapp solution and publish only the transpiled files (which I would want to do anyway). I’ll be able to debug the webapp from my source TypeScript files since I’ll configure the source maps. The downside of this technique is that it is very git-unfriendly. You can mitigate this with git submodules, but that’s beyond the scope of this article (for more info, see here: https://git-scm.com/docs/git-submodule).
This is how the webapp folder appears in my solution. Notice I’ve also linked the typings folder so the code will compile successfully in both solutions.


Once set up (I will skip the details for now), you will be able to hit breakpoints in Chrome even if the source code comes from another solution. Here is how the debugging works.


TypeScript and browsers

One of the main problems when working with browsers instead of NodeJs is the complexity of module loading. Fortunately there are many high-quality module loaders. I use SystemJs (https://github.com/systemjs/systemjs) because I find it clear. Your mileage may vary, pick the one that better suits your needs. Here is a very simple SystemJs loader (extracted from index.html):

We don’t need many advanced SystemJs features because the transpilation process will be handled by tsc (and Visual Studio Code) so all we have to do is to point it to the dependency folders.

Mapping JSON to strongly typed entries

In our NodeJs server solution we publish, as JSON, entities called WhoEntry. These are the representation of a single sp_who row.

Our client webapp, however, receives the JSON, which is not strongly typed. In order to get the WhoEntry class back, all we have to do is map the JSON to our class:
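A minimal sketch of such a mapping (the field names spid and blk are assumptions based on typical sp_who output; the actual WhoEntry class has more fields):

```typescript
// Sketch: map untyped JSON rows back to strongly-typed WhoEntry objects.
// Field names (spid, blk) are assumptions based on sp_who output.
interface WhoEntry {
    spid: number;
    blk: number;
}

function toWhoEntries(rows: any[]): WhoEntry[] {
    return rows.map(r => ({
        spid: Number(r.spid),
        blk: Number(r.blk)
    }));
}
```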

This of course works since we have a non-written contract between server and client. Using the same source file helps to avoid conversion errors.

TreantJS

TreantJS requires you to create a hierarchical structure: any node can have zero or more children, and so on. What we will do is start from the root SPID (the blocking one) and then add its blocked SPIDs as children. Then, recursively, we’ll add their own blocked SPIDs until there are no more blocked SPIDs. But first, how can we identify the root blocking SPID?

Root blocking SPIDs

The logic is simple. The root blocking SPID is the one blocking something but not blocked by something else. In other words, there must be at least one WhoEntry with blk equal to the SPID, and the SPID’s own blk entry must be zero:
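A sketch of that check (WhoEntry is reduced to the two fields the logic needs):

```typescript
// Sketch: a root blocking SPID blocks at least one other SPID (some entry's
// blk equals its spid) but is not itself blocked (its own blk is zero).
interface WhoEntry {
    spid: number;
    blk: number;
}

function getRootBlockingSpids(entries: WhoEntry[]): number[] {
    return entries
        .filter(e => e.blk === 0)                          // not blocked itself
        .filter(e => entries.some(o => o.blk === e.spid))  // but blocking someone
        .map(e => e.spid);
}
```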

Notice in the code we relaxed the assumption of having only one root blocking SPID in a given scenario.

Recursively add blocked SPIDs

The recursive function can be functional (better) or purely side-effecting, like this one:
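A side-effecting sketch of that recursion (the node shape with text.name and children follows TreantJS’s nodeStructure format; the helper name is hypothetical):

```typescript
// Sketch: for a given parent SPID, append every SPID it blocks as a child
// node, then recurse into each child until no more blocked SPIDs remain.
interface WhoEntry {
    spid: number;
    blk: number;
}
interface TreeNode {
    text: { name: string };
    children: TreeNode[];
}

function addBlockedSpids(parent: TreeNode, parentSpid: number, entries: WhoEntry[]): void {
    for (const e of entries.filter(x => x.blk === parentSpid)) {
        const child: TreeNode = { text: { name: `SPID ${e.spid}` }, children: [] };
        parent.children.push(child);
        addBlockedSpids(child, e.spid, entries); // recurse into the blocked SPID
    }
}
```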

TreantJS again

Now that we have the hierarchical structure in place, all we have to do is add some mandatory fields (container, etc…) and create the Treant object. We chose to hide the root node since it does not give any useful information. You can experiment a bit; TreantJS is very powerful and customizable.

Final result

Let’s try our solution. Fire up the NodeJs server (remember to configure the SQL Server connection properties or use the mock service) and open a browser pointing to http://localhost:3000. You should see the blocking graph (or nothing if you don’t have any blocking; play with TRANSACTIONs to create one!).
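To create a blocking chain for testing (the table and column names here are hypothetical), hold a lock open in one session and touch the same row from another:

```sql
-- Session 1: take a lock and keep the transaction open
BEGIN TRANSACTION;
UPDATE dbo.TestTable SET Value = 1 WHERE Id = 1;
-- (do not COMMIT yet)

-- Session 2 (separate connection): this statement blocks behind session 1
UPDATE dbo.TestTable SET Value = 2 WHERE Id = 1;

-- When done, back in session 1:
COMMIT TRANSACTION;
```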

For example, suppose you have this output from sp_who:


Our code will create this JSON:

TreantJS ignores the extra fields, like whoEntry in our case. We added it to make the recursive function easier.

This JSON, given to TreantJS with a very basic CSS, gives us the expected result:


As you can see from the picture above, someone (SPID 54) left a transaction open! All in all, the sloc is minimal and the result is useful. If you export the connection string you will have a handy tool to impress fellow DBAs: all you need is a NodeJs installation. Even your laptop can do that!


Happy coding,

Francesco Cogno

MSDN Blogs: UK Dev Briefing Day


In this post, Senior Application Development Manager Neal Champion shares an overview of the UK Dev Briefing Day. This annual event typically runs in the spring of each year, covering up-to-date presentations on the latest tools and development capabilities for the Microsoft platform.  It is held at the Microsoft UK Campus in Reading, Berkshire.  If you want to be a part of the next event, please contact your ADM or reach out to learn more about Premier Support for Developers.


Building a Mobile App with a Cloud Backend

At Microsoft UK, the Cloud and Developer team of Application Development Managers (ADMs) run an annual event for the customers they support. This event is known as Dev Briefing, and provides a deep dive into a group of related Microsoft development technologies, to an audience of around 200 developers and architects.

For 2016, we focused on how the Microsoft technology stack supports the development and distribution of mobile applications. As we planned this, we decided to take on a new challenge – we would actually build some mobile apps, using the tools we were demonstrating, and distribute them in advance to the audience to use at the event.

Content of the Day

We had a full timetable, and covered the following topics

  • Setting the requirements
    The main requirement was for attendees to be able to submit feedback on the sessions electronically, and also to respond to real-time pulse questions
  • How the team worked
    A quick overview of our use of Visual Studio Team Services and its Kanban support to coordinate a distributed team
  • Responsive Web Applications
    Building a responsive web application using Visual Studio Code and React, Redux and WebPack
  • Hybrid Applications
    Using Cordova to build a hybrid application
  • Native Applications
    Using Xamarin to build cross-platform native applications
  • Using Azure App Service as the Backend
    Using Azure Web API and API Apps to provide persistence and other backend functionality
  • Alternative Azure Backend
    Micro services architecture with Service Fabric and Containers
  • Data Persistence options
    Analysis of the pros and cons of using SQL Azure, Azure Table Storage or Document DB for this application
  • Build and Release
    How Visual Studio Team Services can automate and manage your build and release processes
  • Analyzing the Data
    Using the Power BI suite to provide analysis of the data collected by the application
  • Instrumentation
    How to use Application Insights to instrument and monitor your application
  • Internet of Things Suite

At the end of an intense day, we finished with a little light relief: an Internet of Things demo. This demo used a Raspberry Pi and a video camera, and ran the Face and Emotion APIs to capture data about the emotions of the audience. The demo was technically sound, but also a lot of fun.

As you can see, we packed a lot in. Each of these sessions was 30 to 40 minutes, almost all including hands-on demos of working code. It would take a whole blog post to describe each session properly, so this post can only provide a high-level overview.

The App

We built two versions of the app to support the event:

  • A web application using React
  • A native application using Xamarin. We published this using HockeyApp, another Microsoft acquisition which greatly simplifies publishing cross-platform apps to controlled user groups, without having to deal with the Apple and Google app stores.

In each session we published a number of pulse questions to the app that the audience was using. The responses were collated and analysed by Power BI in real-time, and presented back to the room on a large monitor. Here’s the first question (“How far did you travel to come here today?”) to introduce the concept to the delegates.


The Data

In previous years, delegates had completed paper feedback forms, which then had to be collated and analysed. This year we instantly had access to real data, which Power BI could analyse for us.


As well as the analysis of feedback scores, Power BI also produced analysis of the individual verbatim comments submitted:


Conclusion

The Microsoft UK Cloud and Developer team has run this event for several years now, and it always receives positive feedback from delegates. This year we actually put the code we were demonstrating in their hands, and so raised the bar. The day is an excellent demonstration of both the power and scope of the Microsoft development platform and toolset, and the knowledge of these held by the Premier Support for Developers team.
