MSDN Blogs: Small Basic – Flipping Shapes
Today, I’ll introduce a sample program that flips a picture drawn with the Shapes object. The program ID is MMF211.
This program uses the Shapes.Move(), Shapes.Zoom() and Shapes.Rotate() operations to flip the picture. For this logic to work, the triangle must be symmetric (an isosceles triangle).
MSDN Blogs: Managing Statistics in Azure SQL Data Warehouse
Microsoft Japan Data Platform Tech Sales Team
高木 英朗
In a previous entry, we introduced an overview of statistics and how to create them. This time, we cover how to manage them.
As mentioned before, statistics are a key element in providing the information used to build query execution plans; to get optimal performance, it is also important to keep this statistical information up to date.
When to update statistics
The best time to update statistics is after data has been added or modified, because additions and updates are likely to change the table size and the distribution of values.
If maintaining all statistics takes too long, it is a good idea to limit updates to, for example, date columns to which new values are added every day, or columns used in JOIN, GROUP BY, ORDER BY, DISTINCT, and so on.
How to tell whether statistics are up to date
To judge whether statistics are current, you can check the date and time each statistic was last updated.
To check, run the following query:
SELECT
    sm.[name] AS [schema_name],
    tb.[name] AS [table_name],
    co.[name] AS [stats_column_name],
    st.[name] AS [stats_name],
    STATS_DATE(st.[object_id], st.[stats_id]) AS [stats_last_updated_date]
FROM sys.objects ob
JOIN sys.stats st ON ob.[object_id] = st.[object_id]
JOIN sys.stats_columns sc ON st.[stats_id] = sc.[stats_id] AND st.[object_id] = sc.[object_id]
JOIN sys.columns co ON sc.[column_id] = co.[column_id] AND sc.[object_id] = co.[object_id]
JOIN sys.types ty ON co.[user_type_id] = ty.[user_type_id]
JOIN sys.tables tb ON co.[object_id] = tb.[object_id]
JOIN sys.schemas sm ON tb.[schema_id] = sm.[schema_id]
WHERE st.[user_created] = 1;
If the table size or value distribution has changed since the last update (that is, if data has been added or updated), the statistics need to be refreshed.
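Stale statistics are refreshed with the T-SQL UPDATE STATISTICS command. Below is a minimal sketch driven from PowerShell; the server, database, credentials, table and statistic names are placeholders, and the SqlServer module’s Invoke-Sqlcmd cmdlet is assumed to be available.
# Refresh one user-created statistic on Azure SQL Data Warehouse (all names are placeholders).
Invoke-Sqlcmd -ServerInstance "yourserver.database.windows.net" `
    -Database "YourDW" -Username "loader" -Password "<password>" `
    -Query "UPDATE STATISTICS dbo.FactSales (stat_OrderDateKey);"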
As touched on in a previous article, Azure SQL Data Warehouse does not currently create statistics automatically, so they must be created manually (this is planned to change). Updating statistics whenever the size or distribution of your data changes is an equally important point, so please take advantage of it.
MSDN Blogs: How to create an IIS website that requires client certificate using self-signed certificates
Some IE/IIS issues involve client certificates. It has always taken me hours to deploy a test website that requires a client certificate, so I am writing this blog to record every step: creating a self-signed root CA, a server certificate, and a client certificate, and configuring IIS.
Prerequisites
- Domain: iislab.com
- IIS server: iis-lab-server @ Windows Server 2012 R2 Standard
- Client machine: iis-lab-client @ Windows 7 Enterprise
- Applications: makecert.exe and pvk2pfx.exe. You can find the two EXEs on a system with Visual Studio installed. On a Windows 10 x64 PC with VS2015 installed, you can get them from: C:\Program Files (x86)\Windows Kits\10\bin\x64. For your convenience, I’ve packaged the files and shared them at: http://static.joji.me/b/i/makecert_pvk2pfx.zip.
Step 1: Creating self-signed root CA
- Create the directory c:\cert on the IIS server.
- Copy makecert.exe and pvk2pfx.exe to c:\cert.
- Save the following content as c:\cert\CreateCARoot.cmd.
makecert.exe ^
-n "CN=IIS Lab CARoot" ^
-r ^
-pe ^
-a sha512 ^
-len 4096 ^
-cy authority ^
-sv CARoot.pvk ^
CARoot.cer
pvk2pfx.exe ^
-pvk CARoot.pvk ^
-spc CARoot.cer ^
-pfx CARoot.pfx ^
-po Password1
- Run CMD and execute c:\cert\CreateCARoot.cmd.
- Enter the password Password1 in the following three password prompt dialogs.
- It will create three files: CARoot.cer, CARoot.pfx and CARoot.pvk.
- Run mmc on the IIS server to launch the Console.
- Press Ctrl+M or click File → Add/Remove Snap-in, select Certificates, click Add >.
- Choose Computer account and then click Next.
- Keep default settings, click Finish, then click OK.
- Right click Trusted Root Certification Authorities → Certificates, choose All Tasks → Import.
- Click Next, choose the self-signed root CA: CARoot.cer and then click Next.
- Keep default settings, click Next and then click Finish.
- The self-signed CA root will appear in the list.
- Copy the CARoot.cer to the client machine and import it using the same steps.
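If makecert.exe is not available, a roughly equivalent self-signed root CA can be created with the built-in New-SelfSignedCertificate cmdlet (Windows 10 / Windows Server 2016 or later). This is only a sketch; the rest of this walkthrough assumes the makecert-generated .pvk/.cer files.
# Sketch: create a self-signed root CA in the machine store and export it.
$root = New-SelfSignedCertificate -Subject "CN=IIS Lab CARoot" `
    -KeyUsage CertSign -KeyLength 4096 -HashAlgorithm SHA512 `
    -TextExtension @("2.5.29.19={text}CA=true") `
    -CertStoreLocation "Cert:\LocalMachine\My" -NotAfter (Get-Date).AddYears(10)
$pfxPwd = ConvertTo-SecureString "Password1" -AsPlainText -Force
Export-PfxCertificate -Cert $root -FilePath C:\cert\CARoot.pfx -Password $pfxPwd
Export-Certificate -Cert $root -FilePath C:\cert\CARoot.cer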
Step 2: Creating self-signed server certificate
- Save the following content as c:\cert\CreateServerCertificate.cmd on the IIS server.
makecert.exe ^
-n "CN=iis-lab-server.iislab.com" ^
-iv CARoot.pvk ^
-ic CARoot.cer ^
-pe ^
-a sha512 ^
-len 4096 ^
-b 01/01/2014 ^
-e 01/01/2040 ^
-sky exchange ^
-eku 1.3.6.1.5.5.7.3.1 ^
-sv ServerCert.pvk ^
ServerCert.cer
pvk2pfx.exe ^
-pvk ServerCert.pvk ^
-spc ServerCert.cer ^
-pfx ServerCert.pfx ^
-po Password1
- Run CMD and execute c:\cert\CreateServerCertificate.cmd.
- Enter the password Password1 in all password prompt dialogs.
- It will create three files: ServerCert.cer, ServerCert.pfx and ServerCert.pvk.
- Go to the Certificates console on the IIS server, right click Personal → Certificates, choose All Tasks → Import.
- Change the file-extension filter to *.pfx when selecting the certificate and choose the ServerCert.pfx we just created.
- The self-signed server certificate will appear in the list.
Step 3: Creating self-signed client certificate
- Save the following content as c:\cert\CreateClientCertificate.cmd on the IIS server.
makecert.exe ^
-n "CN=AnyClientInIISLab" ^
-iv CARoot.pvk ^
-ic CARoot.cer ^
-pe ^
-a sha512 ^
-len 4096 ^
-b 01/01/2014 ^
-e 01/01/2040 ^
-sky exchange ^
-eku 1.3.6.1.5.5.7.3.2 ^
-sv ClientCert.pvk ^
ClientCert.cer
pvk2pfx.exe ^
-pvk ClientCert.pvk ^
-spc ClientCert.cer ^
-pfx ClientCert.pfx ^
-po Password1
- Run CMD and execute c:\cert\CreateClientCertificate.cmd.
- Enter the password Password1 in all password prompt dialogs.
- It will create three files: ClientCert.cer, ClientCert.pfx and ClientCert.pvk.
- Copy ClientCert.pfx to the client machine.
- On the client machine, open the Management Console, add a new Certificates snap-in, and choose My user account this time (not Computer account).
- Import ClientCert.pfx into Current User → Personal → Certificates on the client machine; the password is Password1.
- The self-signed client certificate will appear in the list.
Step 4: Creating IIS website that requires client certificate
- Install IIS on the IIS server; make sure the security components IIS Client Certificate Mapping Authentication and Client Certificate Mapping Authentication are installed as well.
- Open IIS Manager (inetmgr.exe). There is a Default Web Site; next we will configure it to require a client certificate.
- Right click Default Web Site and click Edit Bindings….
- We are going to add an HTTPS 443 binding for Default Web Site. Click Add, select https for Type, and choose the self-signed server certificate we created in Step 2 for SSL Certificate.
- Default Web Site is now an HTTPS site; we can verify this by browsing to https://iis-lab-server.iislab.com from the client machine.
- However, the website currently does not require any client certificate; next, let’s configure it to use one.
- Go to Default Web Site → SSL Settings.
- Enable Require SSL, choose Require for Client certificates, and then click Apply to save the settings.
- Now we can use the client certificate to authenticate to the website; next we are going to configure many-to-one certificate mapping.
- Go to Default Web Site → Configuration Editor.
- Select section: system.webServer/security/authentication/iisClientCertificateMappingAuthentication.
- Choose True for enabled, then click the … button in the manyToOneMappings field to add a many-to-one mapping rule.
- Click Add.
- Configure as below.
- description: Any description text
- enabled: True
- name: Any name
- password: The login password of the Windows account you want the certificate to be mapped to
- permissionMode: Allow
- rules: Add rules as needed, for example to have the server check whether the certificate is signed by the correct root CA.
- userName: The Windows account you want the certificate to be mapped to
- Close the window after you complete the configuration. You will see the count of manyToOneMappings increase to 1. Click Apply to save the changes. (A scripted equivalent of the SSL and mapping settings is sketched after this list.)
- Let’s verify from the client machine: open IE and browse to https://iis-lab-server.iislab.com; you will see the client certificate selection prompt. The website can be accessed by choosing the self-signed client certificate we created. If you cancel the prompt (meaning no client certificate is sent) or choose a wrong client certificate, you will get a 403 Access Denied error.
- If you don’t see the client certificate selection prompt, it might be because you have only one client certificate and the IE security setting Don’t prompt for client certificate selection when only one certificate exists is enabled. You should see the prompt after disabling that setting for the security zone the site is mapped to.
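For reference, the SSL and mapping settings above can also be scripted with the WebAdministration PowerShell module on the IIS server. This is a sketch only; the mapping name, user name and password are placeholders to adjust for your lab.
Import-Module WebAdministration
# Require SSL plus a client certificate on Default Web Site.
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location 'Default Web Site' `
    -Filter 'system.webServer/security/access' -Name 'sslFlags' -Value 'Ssl,SslRequireCert'
# Enable many-to-one client certificate mapping and add one mapping rule (placeholder values).
$auth = 'system.webServer/security/authentication/iisClientCertificateMappingAuthentication'
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location 'Default Web Site' `
    -Filter $auth -Name 'enabled' -Value $true
Add-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location 'Default Web Site' `
    -Filter "$auth/manyToOneMappings" -Name '.' `
    -Value @{ name = 'LabMapping'; enabled = $true; permissionMode = 'Allow'; userName = 'iislab\certuser'; password = 'Password1' }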
References
· Creating self signed certificates with makecert.exe for development
· Client Certificate Selection Prompt
Thanks,
Sheng Jiang from DSI Team
MSDN Blogs: Office 365 E5 Nuggets of week 33
Back from my vacation in sunny and beautiful Spain, typing this summary now in Bellevue, close to Seattle.
- Announcing the Power BI brand & campaign management solution template for Twitter | Official Blogpost
- Power BI Embedded GA pricing update | Official Blogpost
- Power BI On-Premises Data Gateway August update | Official Blogpost
- Power BI: Inspire and Get Inspired with the R Showcase | Official Blogpost
- 3 Power BI scenarios to help you understand Azure Active Directory | Official Blogpost
- Power BI Webinars for the week of 8/21: Row Level Security, and Finding and Transforming Data with Power Query | Official Blogpost
- Individualizing instruction with the new Microsoft Forms | Official Blogpost
- Free Webinar: Social collaboration improves an organization’s ability to react quickly to new data and information | Official Blogpost
- Episode 104 with Mike Ammerlaan on the Developer Preview of the SharePoint Framework—Office 365 Developer Podcast
- Released SharePoint Framework SPFx | Summary Blogpost by Eric Overfield
- How two entrepreneurial teens tamed retail data and helped make the world a better place | Official Blogpost
- Office 365 news roundup | Official Blogpost
- Migrate traditional Distribution Groups to Office 365 Groups | Official Blogpost
- Customer Story: Coles Supermarket uses Office 365 | YouTube Part 1, Part 2 HR Scenario, Part 3 IT
- (in German): SkypefB Cheat Sheet | Comparex Blogpost
- (in German): Migrate SharePoint on-premises to the Cloud | Channel9
- (Customer Story in German): Via Skype zum Job – Telefónica Deutschland setzt zunehmend auf Vorstellungsgespräche per Video | Official Blogpost
MSDN Blogs: [Sample Of Aug. 22] How to create master-detail ListBox in Win 10 UWP app
Sample : https://code.msdn.microsoft.com/How-to-create-detail-1de67d41
This sample demonstrates how to create a master-detail ListBox in a Windows 10 UWP app through CollectionViewSource.
You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search for samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.
MSDN Blogs: Web page layout broken due to “Natural Metrics” in IE11
After upgrading to IE11, web page layout may be broken. The most common reason is that the web page runs in a newer document mode in IE11. However, the layout issue might still occur even if the document mode is the same as before. This is because IE11 uses natural metrics for font rendering, while previous IE versions used Windows Graphics Device Interface (GDI) metrics.
Symptom
The following code snippet is typical content of a legacy web page. The developer specified the width of the DIV container to display the text on a single line. The developer also specified that the page be displayed in IE7 document mode using the X-UA-Compatible meta tag to prevent compatibility issues in newer IE browsers.
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=7">
<style>
#top-box { background: #ccc; padding: 5px; width: 330px; }
</style>
</head>
<body>
<div id="top-box">Single line text appears in a fixed width DIV container.</div>
</body>
</html>
The page looks good in IE8~IE10.
However, the text wraps in IE11 even though the document mode is the same.
Root Cause
By default, Internet Explorer 11 uses natural metrics. Natural metrics use inter-pixel spacing that creates more accurately rendered and readable text. However, older IE versions use GDI metrics for font rendering. We can see from the picture below that the text is much smoother with natural metrics.
As the web page was developed in the IE7/8 era, the DIV width of 330px was calculated based on GDI metrics. Obviously, 330px is not enough to display the text on one line using natural metrics.
Compared to GDI metrics, text is wider when rendered with natural metrics. We can use the code below to measure the width of the text “ABC”.
<span id="span">ABC</span>
<br/>
<button id="show-offset" onclick="showoffset()">Show offset</button>
<script>
function showoffset() {
    alert(document.getElementById('span').offsetWidth);
}
</script>
In IE8~IE10 (“GDI metrics”), the width of “ABC” is 32.
In IE11 (“natural metrics”), the width of “ABC” is 33.
Solution
Client side solution
Add the website to the Local intranet zone; IE11 keeps using GDI metrics for sites in the Local intranet zone.
Code solution
Add the following meta tag to a website that requires GDI metrics rendering: <meta http-equiv="X-UA-TextLayoutMetrics" content="gdi" />.
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=7">
<meta http-equiv="X-UA-TextLayoutMetrics" content="gdi" />
<style>
#top-box { background: #ccc; padding: 5px; width: 330px; }
</style>
</head>
<body>
<div id="top-box">Single line text appears in a fixed width DIV container.</div>
</body>
</html>
Development suggestion
Browsers have different font rendering engines and methods. The size you get in one browser might not work in another. Therefore, make the text as flexible as possible to fit its container rather than specifying a fixed width. If you must specify a fixed width, leave some space around the text.
References
· Fix font rendering problems by turning off natural metrics
Thanks,
Sheng Jiang from DSI Team
MSDN Blogs: Road to APC: MOQdigital dives deep into digital and learns from experience
In this week’s ‘Road to APC’ partner profile series, we take a closer look at MOQdigital, named Australian Country Partner of the Year, which is deeply engaged in the digital transformation initiatives of its customers and contributes more broadly to the innovation fabric of Australia.
Each engagement in a vertical sector provides an opportunity to dive a little deeper into digital transformation and then stand on the shoulders of experience.
MOQdigital CEO Nicki Page explains: “We know that if we go deeper into a vertical we have more success.” But she notes that sectoral success can also be replicated in adjacent markets.
The company, for example, leveraged its success with a solution developed for healthcare into dental, then across to construction and engineering. “Yes, there’s a bit of customisation that needs to happen with that, but the beautiful thing is it’s not too much.
“We’ve got the core IP that we can now customise to different scenarios.”
So, after developing a global solution for a leading dental insurance provider, the insights and expertise gained in that engagement were leveraged when working with construction and engineering company Laing O’Rourke.
Working with Microsoft’s IoT team and the client’s internal innovation group, MOQdigital developed and rolled out an Azure-based solution and a ‘smart hardhat’ that collects data on the health of the worker, allowing management to take immediate action to prevent heatstroke or hypothermia.
Laing O’Rourke is now looking at other areas where cloud based IoT solutions could be deployed – managing plant on site, or optimising interactions between ground crews and plant.
MOQdigital’s extensive understanding of cloud solutions also allowed it to work with Brisbane Airport Corporation on a complete overhaul of its storage systems, and a migration of key administrative functions to the cloud to reduce complexity.
It also worked with accounting and consulting business BDO on a transformation of the IT service delivery model with a number of services transitioned to the Azure cloud. The project was intended to help rein in technology costs and build greater resilience into the core computing infrastructure. In just the Brisbane office, software licensing costs were reduced by $100,000 a year.
The success of its clients in transitioning to a more digital model and leveraging the capability of the cloud is also reflected in MOQdigital’s own success.
In the first half of 2016, the company’s cloud business grew by triple digits compared with the same period a year earlier.
That success, says Page, comes from “always listening to the customer, thinking differently and going above and beyond to add value. It’s about driving those outcomes for customers, solving their business problems. Everybody is looking for faster and smarter ways to do business nowadays and we can help them with that.
“Our job at MOQdigital is to help them bridge the gap between the challenges that they have every day and which vendor to choose and which technology to choose. It’s about us being the trusted partner that thinks outside the box to get them the result they need.”
Partners who are interested in attending the Australian Partner Conference can register here.
MSDN Blogs: A VS Code extension tip: Settings Sync
Do you use Visual Studio Code and frequently switch between computers, perhaps even across different platforms, and want your configuration, such as keyboard shortcuts or extensions, to always be the same everywhere? That is exactly what the Visual Studio Code Settings Sync extension is for!
The extension synchronizes your settings, snippets, keyboard shortcuts and extensions via a Gist on GitHub. As the previous sentence implies, this extension requires a GitHub account, which is free.
How to install and use the extension correctly is described on its page; it is really simple and useful.
MSDN Blogs: [Sample Of Aug. 22] How to launch a UWP app from another app
Sample : https://code.msdn.microsoft.com/How-to-launch-an-UWP-app-5abfa878
This sample demonstrates how to launch a UWP app from another app.
You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search for samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.
MSDN Blogs: The “Surface” app is now distributed to business customers via the Windows Store for Business
Hello, this is Iwamatsu, in charge of Surface support for business customers.
Surface 3 and later devices come with the “Surface” app preinstalled; it lets you adjust the Surface Pen’s pressure sensitivity and configure which app the pen button launches.
Previously, so that business customers who use volume-license OS media and create and deploy custom images could also use the app, we provided an offline-install package on the Microsoft Download Center.
The way the package is published has now changed: like other Windows 10 apps, it is now available through the “Windows Store for Business,” the app distribution system for business customers.
The new installation procedure is explained in the Microsoft technical document below, but it is currently available only in English, so this article walks through the key points of downloading the Surface app package from the Windows Store for Business and installing it offline.
Deploy Surface app with Windows Store for Business
Downloading the Surface app package from the Windows Store for Business
Signing in to the Windows Store for Business
To use the Windows Store for Business, you must register beforehand.
For how to use the Windows Store for Business, see the following guide:
Distribute apps to your employees from the Windows Store for Business
Prepare a Windows Store for Business account and sign in from the following page.
Downloading the Surface app package
After signing in, select [Manage] → [Account information] and check [Show offline licensed apps to people shopping in the store].
This makes the offline-install packages for apps available.
Next, select [Shop], type “Surface” in the search box at the top right of the screen, and select the Surface app from the results.
The example below uses the offline package for internal deployment, so select [Offline] and then [Get the app].
This adds the Surface app to your inventory.
Select [Manage] → [Inventory] and select the Surface app. From the app details screen that appears, download the files required to install the package.
Three kinds of files are required for installation; be sure to download all of them:
- The offline package for the app itself
- The app license (there are two kinds, encoded and unencoded; Windows PowerShell uses the latter)
- The required frameworks (each framework comes in x86 and x64 versions; for Surface, use the x64 version)
Offline installation of the Surface app using Windows PowerShell
Offline installation is suitable when an administrator deploys the app to devices in advance, rather than each user installing the app from the Store themselves.
(With offline installation, users do not need an Azure AD account.)
Offline installation can be performed in various ways; here we describe the simplest method, using Windows PowerShell.
(You can also deploy to devices using tools such as Windows Imaging and Configuration Designer, Microsoft Deployment Toolkit, or System Center Configuration Manager.)
The following example assumes the files required for installation are saved in the C:\Temp folder.
Start Windows PowerShell with administrator privileges and run the following cmdlets.
(Installing the app)
Add-AppxProvisionedPackage -Online -PackagePath c:\Temp\<package file name> -LicensePath c:\Temp\<license file name>
(Installing the frameworks; run once for each framework)
Add-AppxProvisionedPackage -Online -SkipLicense -PackagePath c:\Temp\<framework package file name>
With this, the Surface app is provisioned on the computer and will be preinstalled for each user at first logon.
After logging on, users can launch and use the Surface app.
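To confirm that provisioning succeeded, you can list the provisioned packages. This is a sketch; the display-name filter is an assumption, so adjust it to the actual package name.
# List provisioned packages and filter for the Surface app (the name filter is a guess).
Get-AppxProvisionedPackage -Online |
    Where-Object { $_.DisplayName -like '*Surface*' } |
    Select-Object DisplayName, PackageName, Version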
(Reference) Surface app features
The app offers features such as adjusting the pen’s pressure sensitivity, changing the app launched by the pen button, disabling the Windows button on the device (applicable models only), and viewing the Surface user guide.
For details, see the following page.
MSDN Blogs: PowerShell for Linux has shipped, and it’s open source!
I have been doing PowerShell for 10 years. Watch this video from 2007 of me learning about PowerShell from Jeffrey Snover. I worked on PowerShell for years, blogged about it extensively, and ultimately used PowerShell to automate and build many large systems in online retail banking around the world.
And today, Microsoft released PowerShell for Linux, built on .NET Core and fully open source; you can check it out here.
Jeffrey Snover signaled internally in 2014 that PowerShell would eventually become open source; the arrival of .NET Core and ongoing work on multiple Linux distributions set the stage. If you were paying attention, this was not hard to predict. Here is the part of PowerShell that is now open source:
PowerShell everywhere
OK, so where do you get it? http://microsoft.com/powershell is PowerShell’s home page, and everything related can be found from there.
The PowerShell open source project is at https://github.com/PowerShell/PowerShell, and there are alphas for Ubuntu 14.04/16.04, CentOS 7.1 and Mac OS X 10.11.
To be clear, these are alpha builds that still depend on community support.
An officially supported “release” will come later.
What is possible?
As I see it, the goal is to let you manage anything from anywhere. Perhaps you are a Unix person who has a few Windows machines to manage (on-premises or in Azure); you can do that from Linux with PowerShell. Perhaps your company has some bash scripts and some PowerShell scripts; the two can be used together.
If you know PowerShell, you will be able to apply those skills on Linux as well. If you manage a mixed environment, PowerShell is not a replacement for bash but another tool in your toolbox. There are many shells in the *nix world (not just bash and zsh, but also ruby, python and so on), so PowerShell will make an excellent companion.
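As a trivial sketch of what that looks like (assuming the alpha build’s Get-Process support), the same object pipeline works unchanged on Linux:
# Top five processes by CPU, handled as objects rather than text; identical on Linux and Windows.
Get-Process | Sort-Object CPU -Descending | Select-Object -First 5 Name, CPU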
Related links
Be sure to check out the coverage around the web and the many blog posts offering different perspectives!
- PowerShell Team Blog
- PowerShell Webinar
- PowerShell Team official YouTube channel
- GitHub PowerShell Project
- .NET Core Project
Have fun! Is this open source thing starting to catch on at Microsoft?
This post is a translation of Announcing PowerShell on Linux – PowerShell is Open Source!
MSDN Blogs: The Cassandra Challenge
Our team is responsible for running multiple Cassandra clusters reliably and flawlessly at scale, leveraging Azure PaaS.
As a result, I have come to appreciate the technology and its steep learning curve. The following quiz was made to help ease the onboarding pain. Most of the questions are not of the simple “How does concept A work?” kind (that can be binged easily), but “How does A impact B, C and D if E is disabled?”. Also, this is geared more towards operating the clusters and less towards data modelling, which does have good online literature.
I hope you find this useful (the target Cassandra version here is 2.1.13). The prerequisite is a basic knowledge of Cassandra.
Many thanks to Datastax support, the Cassandra users / developers lists, and other online forums for the guidance!
The Quiz
1. I have one data center in my setup, and I have set up a replication factor of 5 for my keyspace. If I query data using LOCAL_QUORUM consistency, how many nodes can go down before I’ll start seeing data loss?
Will this change if I change the replication factor to 6? If yes, to what value?
2. Suppose I have set replication=3 for my setup, and I read and write using LOCAL_QUORUM. Initially all nodes had a value of “10” for a piece of data. When I wrote a new value of “20”, I got an error back suggesting “Could not achieve LOCAL_QUORUM”. Now, if I read the value using LOCAL_QUORUM, would I get an error, “10”, “20”, or something else?
What if hinted handoff is enabled?
3. Suppose I have set replication=3 for my setup, and I read and write using LOCAL_QUORUM. Let us assume that all 3 nodes holding my data have different values (because repairs did not run / data went missing / whatever other reason). Will I get a LOCAL_QUORUM failure on read?
4. We specify contact points when establishing a Cassandra connection. If those nodes go down afterwards, does the connection keep working?
What if the nodes are seed nodes?
5. How can I tell if my keyspace is evenly distributed across the data center? How can I tell which nodes hold a particular piece of data?
6. I have one keyspace set to replicate in two data centers. I changed it to not replicate in one data center. Now, I want to remove a node in this data center. Should I use “nodetool removenode” or “nodetool decommission”? Which one would be faster, and why?
7. Is it necessary to run a “nodetool repair” after doing a “nodetool removenode”? What about after a “nodetool decommission”?
8. Suppose a node in a data center is down. Now, I want to remove another node. Will I be able to remove it successfully if I use a consistency of LOCAL_QUORUM for my read / write queries?
9. I am running a “nodetool rebuild” to build my node from scratch. Unfortunately, it gives up in the middle. If I run it again, will the node try to fetch the same data again from other nodes?
10. I am replacing a node in the cluster using -Dcassandra.replace_address=X in cassandra-env.sh. How can I tell how long it will take to completely join the ring?
11. I am replacing a node in the cluster using -Dcassandra.replace_address=X in cassandra-env.sh. Until the node joins the ring completely, will it serve any read / write requests?
12. I am replacing a node in the cluster using -Dcassandra.replace_address=X in cassandra-env.sh. Once the node joins the ring, do I need to run a “nodetool repair”? If yes, why? If no, why not?
13. I am replacing a node in the cluster using -Dcassandra.replace_address=X in cassandra-env.sh. Is it correct that this entry can only be removed after the node joins the ring (meaning that if the node reboots in the middle and the entry does not exist, it won’t be able to proceed)?
14. “nodetool” commands run on the node where they are started and don’t have any effect on other nodes. What is the exception to this behavior?
15. Which of the following commands are safe to run multiple times without unexpected side effects / performance implications?
A) nodetool repair
B) nodetool cleanup
C) nodetool snapshot
D) nodetool removenode
16. Suppose we add a new node to an existing data center. Once the node has completely bootstrapped, some data from the old nodes must have moved over to this node to balance the ring. Thus, we should see disk space on the old nodes being freed. Is this correct?
17. Does everything in Cassandra run at the same priority?
18. How can I tell how many threads are allocated for handling write requests, read requests and compactions? What about repairs and streaming?
19. What is the best way to reduce the number of SS Tables present on a node?
A) nodetool compact B) increase concurrent_compactors in yaml C) nodetool cleanup D) nodetool repair
20. Explain the difference between /var/log/cassandra/output.log and /var/log/cassandra/system.log.
21. Suppose OpsCenter is down, and you need to find out approximately how much data a Cassandra node is receiving per minute. What can you do?
22. When OpsCenter shows that a node is down (“grey”), does this mean DSE is down, or that DSE is up but CQLSH is down?
23. What is a “schema mismatch”? How do you detect it, and how do you get out of it?
24. Suppose a node goes down and moves to another rack. When it comes back up, we update cassandra-rackdc.properties to tell Cassandra about it. However, Cassandra won’t start up if you change racks in a running cluster. Why?
25. Suppose we have a two-data-center setup with the same number of nodes in each DC, and we are using the random partitioner with vnodes. Is it safe to assume there must exist a node in both DCs with the same tokens (and therefore the same data, if such replication is set)?
26. You’re seeing a high number of SS Tables for a table on a node. What could be the possible reasons for this?
A] Low concurrent_compactors in yaml B] The table received a lot of data only on this node C] Poor compaction properties defined at table creation D] All of the above
How can you confirm / deny [B]?
Can you think of any other reasons?
27. Suppose we are running Spark and Cassandra on a node (like we do on all of our nodes). At any point, how many Java processes would be running on the box? Why?
28. You want to tell CQLSH to execute a query on a given Cassandra node. How would you do that? Will setting CONSISTENCY to ONE on the node from which you started CQLSH do the job?
29. In what situations do the tokens assigned to nodes change?
A] When a new node joins the ring
B] When a node leaves the ring
C] When “nodetool repair” is run
D] When a node goes down for a long period of time
30. Suppose a table uses the Size Tiered Compaction Strategy. What is the typical file layout you will see on disk?
31. When running “nodetool cleanup”, you notice that disk usage goes up at times. Why would this happen?
32. How will you determine whether a node has “wide partitions” for a table? How will you fix such a “wide partition”?
33. Will “Truncate Table” work if any nodes in the cluster are down?
34. Why is re-creating a table (a drop followed by a create) problematic?
35. Which of the following statements are true?
A] Seed nodes act as coordinators for Cassandra clients
B] Ops Center uses seed nodes to talk to the Cluster
C] Once setup, seed nodes cannot be changed
D] All of the above
36. Is it possible to disable CQLSH auth on a node temporarily? If so, how?
37. If a few nodes are down in the cluster and you attempt to add a new node, you may get an error like the one below. Can you explain concretely what this is about?
java.lang.RuntimeException: A node required to move the data consistently is down (/xx.xx.xx.xx). If you wish to move the data from a potentially inconsistent replica, restart the node with -Dcassandra.consistent.rangemovement=false
38. What ports does Cassandra use?
39. Will Cassandra move data based on dynamic properties such as load on nodes or nodes going up / down (meaning, if you run “nodetool repair”, will Cassandra move data from nodes that are now down to other nodes to increase availability)?
40. Can you rename a data center on a cluster? What about renaming a cluster?
41. As you know, SSTableLoader and CQLSH import are the two ways to load data into Cassandra. Besides the fact that CQLSH does not work for large amounts of data, is there something else that makes it a non-viable choice for a dataset?
42. Suppose a requirement came up where you need to export all the data written from now on to somewhere else. There are no timestamp columns on your tables, so you can’t query the data that way and use CQLSH export.
Can you think of any other way to achieve this?
43. Suppose your data pattern is such that you never update an existing row. In such a situation, does compaction add any value at all (since there are no “duplicates”, so to speak)?
44. You’re seeing a large number of Dropped Mutations on a node in Ops Center. Which of the following statements are true about this key metric?
A] The number reported is per node.
B] The number will be persisted even if you restart DSE.
C] The number represents the number of reads that failed
D] All of the above
45. Which of the following may fix a high number of Dropped Mutations?
A] Increase write request timeout setting in yaml
B] nodetool setstreamthroughput 0
C] nodetool repair
D] None of the above
46. You deleted a row in a table. The deleted data magically started re-appearing in your read queries after some time.
What can be the possible causes of this?
A] Some nodes went down and did not come up until after the gc_grace_seconds setting on the table
B] You deleted using consistency LOCAL_QUORUM but read the row later with consistency ONE
C] The data did not get deleted in the first place. Cassandra dropped the mutation, but never returned an error back to the client.
D] Any of the above
47. Secondary indexes are usually frowned upon. Which of the following are the reasons for this?
A] They are “local” to the node. So, Cassandra has to talk to all nodes to figure out the column you’re asking for.
B] They need to be rebuilt every time the column value changes, which is costly
C] Compactions don’t run on secondary index tables as often as on other tables, so they may take up disk space
D] All of the above
48. Suppose you’re using vnodes and the random partitioner. If you now read data from a table in a paged fashion, will you get the data in the same order every time? Why or why not?
49. Which of the following statements are true?
A] If Cassandra gets a read and write request at the same time, the write request gets higher priority.
B] The only way to generate new SS Tables on disk is through memtable flush.
C] Repairs run continuously in the background
D] All of the above
E] None of the above
50. If a node goes down, which of the following will try to bring it up?
A] Datastax Ops Center Agent
B] Linux Kernel
C] Some other nodes through gossip
D] None of the above
51. How can you tell how much CPU compaction is consuming?
The Answers
Answer 1
3 nodes need to go down before you will start seeing issues.
Here is why: https://docs.datastax.com/en/cassandra/2.1/cassandra/dml/dml_config_consistency_c.html
Answer 1.1
Same as 1.
Answer 2
It depends.
Let’s assume the nodes are called N1, N2 and N3. When the write request failed, let’s assume it wrote the data to N1 but could not write to N2 and N3 (thus failing QUORUM). When the read request comes, if it falls on N1, you will get “20” back, and N2 and N3 will get fixed as part of Read Repair. However, if it falls on N2 or N3 and N1 is down, Cassandra has no way of knowing that N2 and N3 have stale data, so you will get “10” back.
Answer 2.1
If Cassandra was able to replicate the data to N2 or N3 before you issued a Read, you will get “20” back.
Answer 3
No, you will get the latest result. QUORUM fails only if a node does not respond back in the allotted time. Cassandra has no way of determining whether the data is “correct”, given its distributed nature.
Answer 4
Yes. The client gets updates from the cluster as nodes go up and down. However, if you restart the client and try establishing a new connection using the nodes that are now down, it will not work.
Answer 4.1
Seed nodes only play a role in gossip. So it doesn’t matter if they go down; everything will continue to work as-is, assuming the cluster is healthy in general.
Answer 5
Run nodetool status <keyspace> and make sure the percentage on every node is equal.
https://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsStatus.html
Answer 5.1
nodetool getendpoints <keyspace> <table> <partition key>
Note that this tells you which nodes should hold the data; it does not tell you whether they actually hold it. The logic of this command simply runs the math that does the token computation.
https://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsGetEndPoints.html
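For example (a sketch; the keyspace, table and key are placeholders), to list the replicas that should own the row with partition key 42:
nodetool getendpoints my_keyspace my_table 42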
Answer 6
Either. Since the node does not own the keyspace, there is nothing to stream out to other nodes when you run “nodetool decommission”. Normally, decommission streams the data owned by the node to other nodes, which takes a lot of time. Therefore, in this case you can do a decommission if the DSE process is up, else a “removenode”. Both will take an equal amount of time.
Answer 7
“nodetool decommission” streams the tokens owned by the node as well as the data to other nodes, meaning the cluster stays consistent. “nodetool removenode” simply moves the tokens so that new data falls on the correct nodes, but it doesn’t move the existing data. Therefore, a repair is necessary.
Answer 8
The answer has nothing to do with your consistency level. It depends on whether the node that is already down needs to own some of the token ranges that the leaving node owns. Cassandra is smart enough to give token ranges to other nodes, so you should not see any problems in a reasonably big cluster.
Answer 9
Yes, unfortunately.
Answer 10
Run “nodetool status” and check the size on other nodes. The node that is being replaced needs to get to that size. By running “nodetool status” a few times, you will get a feel for the rate at which the node is pulling data, helping you estimate the total time.
Answer 11
No.
Answer 12
No. The node simply replaced an existing node, so the token arrangement hasn’t changed. Running a repair is necessary only when the data corresponding to tokens needs to move to appropriate nodes.
Answer 13
No. Once the node figures out that it needs to replace itself, this is written to one of the system tables. Thus, you can remove the entry as soon as you see “JOINING THE RING” in system.log.
Answer 14
“nodetool repair”. Read more here: https://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsRepair.html
Note that any commands that show information about other nodes (e.g. status, ring, gossipinfo, etc.) are really just showing cached information on the node where you’re running nodetool.
Answer 15
“nodetool removenode” is the only command that’s safe to re-run. All the other commands will have varying levels of impact.
“repair”, if run multiple times, will cause too much data movement (if not actual data, at least for Merkle tree computation).
“cleanup” is fine to run multiple times, but it will also cause unnecessary work on the node.
Answer 16
No. Unless a “nodetool cleanup” is run, the data does not get deleted even though the node no longer owns it. This is by design (feel free to call it questionable design if you like).
Answer 17
Everything runs at NORM_PRIORITY except compactions and repairs (they run at LOW_PRIORITY).
Answer 18
Generally you can’t tell; this is hardcoded for the various stages. However, for compactions you can configure it through a yaml setting called “concurrent_compactors”.
Answer 19
B is the safest way to do so. A is strongly advised against, as it will create one giant SS Table that will then create problems for future compactions. C and D will most likely not lead to any success unless the node is indeed imbalanced.
Answer 20
When DSE starts, it spews stuff into output.log for the first minute or so. Then everything goes to system.log (which includes the stuff in output.log). For practical purposes, output.log is important when the process can’t start after doing a “sudo service dse start”, as it will tell you why it could not start.
Answer 21
Look at the size of the files in /mnt/cassandra/commitlog. Here is a good refresher on what the commitlog is: http://stackoverflow.com/questions/34592948/what-is-the-purpose-of-cassandras-commit-log
Answer 22
CQLSH is down. Therefore, it is a great way of telling if the node is functional.
Answer 23
This happens when schema changes don’t propagate to all nodes. Every node generates a hash based on the schema it knows, and if all nodes agree, these hashes must match. When nodes are down and schema changes are made, the changes may not make it to all nodes. Similarly, if you re-create tables, it can lead to unexpected mismatches.
This can be detected using “nodetool describecluster”, and fixed using “nodetool resetlocalschema”, restarting the node, or a combination of both.
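As a quick sketch of that workflow:
nodetool describecluster    # every node should report the same schema version
nodetool resetlocalschema   # run on the node that disagrees to rebuild its local schema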
Answer 24
Cassandra uses rack information to distribute the load across racks. Changing a rack messes with this logic, and they have safeguarded against it by not letting you change it on the fly.
You can tell Cassandra to ignore the rack change using -Dcassandra.ignore_rack. However, you must run a “nodetool repair” afterwards so that Cassandra correctly places replicas.
It is a good mental exercise to think about why a repair is needed here.
Answer 25
No. As unintuitive as it is, it is not implemented that way.
Answer 26
Any / all of the above. You will need to eliminate one thing at a time.
Answer 26.1
You can add a graph in Ops Center that shows “Write Requests” and compare the numbers with other nodes.
Answer 26.2
No TTL on the table; or, if using DTCS, an extremely low max_sstable_age.
Answer 27
1 (Cassandra, started with sudo service dse start) + the number of Spark executors + 1 (if a backup is running).
Answer 28
There is no way to achieve this reliably. While you can experiment with CONSISTENCY ONE, Cassandra may still pick a different coordinator every time.
Answer 29
A and B. Repair makes sure that the nodes expected to hold data for their token ranges actually hold it; it never changes the token assignment.
Answer 30
A couple of really big files, a few medium-sized ones, and three or four really small ones.
Answer 31
As part of cleanup, Cassandra reads the existing SS Tables and writes new ones with the unnecessary data removed. Thus, you may notice a small bump in disk usage until this completely settles.
Answer 32
“nodetool cfstats” will show you the maximum partition bytes. If this number is greater than 100 MB, it would be considered a “wide partition”.
To fix it, you will need to change the schema of your table and add more columns to the partition key. Once the schema has been created, there is nothing else you can do to fix this.
Answer 33
No. It requires CONSISTENCY ALL.
Answer 34
This is really a bug in Cassandra. If a node is down or does not get the schema update when DROP TABLE is run, it will continue to use the old ID for the table after the new table is created. When it receives writes / reads for this table, the ID mismatch causes “couldn’t find cfid” exceptions in the system logs and, eventually, system instability.
Answer 35
The correct answer is “None of the above”. The only function seed nodes provide is that all nodes send frequent gossip updates to them.
Answer 36
Yes. Adjust the auth-related settings in yaml.
Answer 37
When you add a node, it takes over part of the range of an existing node, and thus it needs to stream data from that node to maintain consistency. If the existing node is unavailable, the new node may fetch the data from a different replica, which may not have some of the data from the node whose range it is taking over, and that may break consistency.
For example, imagine a ring with nodes A, B and C, RF=3. The row X=1 maps to node A and is replicated in nodes B and C, so the initial arrangement will be:
A(X=1), B(X=1) and C(X=1)
Node B is down and you write X=2 to A, which replicates the data only to C since B is down (and hinted handoff is disabled). The write succeeds at QUORUM. The new arrangement becomes:
A(X=2), B(X=1), C(X=2) (if any of the nodes is down, any read at QUORUM will fetch the correct value of X=2)
Now imagine you add a new node D between A and B. If D streams data from A, the new replica group will become:
A, D(X=2), B(X=1), C(X=2) (if any of the nodes is down, any read at QUORUM will fetch the correct value of X=2)
But if A is down when you bootstrap D and you have -Dcassandra.consistent.rangemovement=false, D may stream data from B, so the new replica group will be:
A, D(X=1), B(X=1), C(X=2)
Now, if C becomes down, reads at QUORUM will succeed but return the stale value of X=1, so consistency is broken.
See CASSANDRA-2434 for more background.
Answer 38
By default, Cassandra uses 7000 for cluster communication (7001 if SSL is enabled), 9160 for Thrift clients, 9042 for native protocol clients, and 7199 for JMX. The internode communication, Thrift, and native protocol ports are configurable in cassandra.yaml. The JMX port is configurable in cassandra-env.sh (through JVM options). All ports are TCP.
Answer 39
No
Answer 40
You cannot rename a data center. You can, however, rename the cluster.
Answer 41
CQLSH export / import does not support preserving TTLs.
Answer 42
You can take a snapshot of the data using “nodetool snapshot” and then enable “incremental_backups” in yaml. This makes sure that all the new SS Tables are also written to another folder (under backups) that you can then easily copy and load somewhere else. It is a hack, but a good one!
Answer 43
You would still need some process to expire data based on TTL. Since compaction is that process today, you would still need compactions.
Answer 44
A] is true.
Answer 45
A], assuming you don’t have any other cluster issues.
Answer 46
A] is the most likely cause for this. Here is a good reading: https://wiki.apache.org/cassandra/DistributedDeletes
B] will also cause this, but hopefully you don’t do that normally.
Answer 47
A] and B]
Answer 48
Yes. The data is returned in the Murmur3 hash order of the partition key, and then sorted by the clustering key.
Answer 49
The correct answer is E].
Cassandra does not have a notion of priority, so write and read requests are handled at the same priority.
New SS Tables can be generated through repairs or some nodetool commands such as cleanup.
Repairs don’t run by default.
Answer 50
D]
There is no built-in mechanism to restart a node that went down.
Answer 51
Run “sudo iostat” and look at the %nice column.
MSDN Blogs: Testing private/intranet applications using Cloud-based load testing
The Cloud-based Load Testing (CLT) service can be used for performance and scale testing of an application by generating load from Azure. This type of load generation can only hit an internet/publicly accessible application, but we have often seen customers who need to load test an application that is not publicly accessible. The reasons vary; some of them are listed below:
- Testing an internal application only – in many large-scale organizations there are applications/websites that cater to the needs of the whole organization. It becomes crucial to test them at peak load to eradicate any performance/stress-related bugs.
- Testing the application internally before releasing it over the internet – before going public with a big gung-ho launch, organizations love to make sure that there is no performance glitch and, worse, that the site will not crash under high user load.
To support the above scenarios, we are working on a feature that lets users load test their internal/ABF (application behind firewall) applications. Before talking about the solution, let us walk you through the following flow chart to figure out the best-tailored solution for your requirements. You can then read about a particular solution (1-5) and try it out.
- Use the default path; CLT will auto-provision agents. This is the default scenario for load testing using CLT, where the application has a publicly available endpoint. The load testing service automatically provisions load agents (in Azure) to simulate the user load. Refer to this page for more details on CLT usage.
- Use an ARM template to deploy your own load agents. Solution #1 provisions the agents within the CLT service boundaries, and the user doesn’t have control over them. If you wish to have control (access) over the load generation agents for any customization needs, you can deploy them yourself using an ARM template in your Azure subscription. These machines will get registered with CLT and can generate load. More details follow later in this blog.
- Use an ARM template to deploy load agents in a VNet. If the application under test (AuT) is inside an Azure VNet, or if there is an ExpressRoute between the application’s private network and Azure, you can use a pre-defined ARM template to deploy IaaS VMs in Azure in a specific VNet to act as load agents. The machines will be provisioned in your Azure subscription and registered against your VSTS account. The VNet where you create these machines should have a line of sight to the application, so that the load generators are able to reach the app. More details appear in later segments of this blog.
- Use an ARM template to deploy load agents with static IPs. If you don’t have ExpressRoute connectivity and want to test apps hosted on-premises, you can use an ARM template to deploy IaaS VMs in Azure to act as load agents. Choose to create these VMs with static IPs from which you can allow traffic through the firewall, so that the necessary load can be generated. The machines will be provisioned in your Azure subscription and registered against your VSTS account. More details follow later in this blog.
- Use cloud load agents on your infrastructure. We have also come up with a simple PowerShell script that can help you configure physical or virtual machine(s) of your choice as load agents. These machine(s) will be registered against your VSTS account for load generation. You can read more in a separate blog, ‘Use cloud load agents on your infrastructure’.
- Use Test Controller/Test Agents for on-premises testing on your infrastructure. If you want to test apps on-premises but have constraints such as not being able to store results in the cloud (say, for regulatory compliance), you can use the Test Controller / Test Agent based solution for load testing. This requires you to use your own infrastructure for load generation, and results will be stored in SQL Server. https://msdn.microsoft.com/en-us/library/ms243155.aspx
Now we’ll deep-dive into solutions #2, #3 and #4 in this blog.
This section talks in detail about how you can provision load agent(s) using Azure IaaS VMs. You must have an Azure subscription where the IaaS VMs and related resources will be provisioned. This is primarily useful in the following two use cases:
- You want to test a private application that is not accessible through CLT/internet.
- Bring your own subscription (BYOS) – you have your own Azure subscription and want to leverage it for load testing. You can also use Azure free credits, if you have them.
Azure gives an edge here, as users can spread their load testing across different geo-locations, something we have seen many customers interested in these days.
Use ARM template to deploy load agents in a VNet
The following is a simple topology where the load agents sit inside the user’s VNet and hence have a line of sight to the application. We have published an ARM template on GitHub to help users provision the machines easily and quickly.
For this to work, you must have an existing VNet, as depicted in the diagram above. Identifying the VNet also requires its resource group name. If you wish to use an existing subnet, you should have that information as well.
To deploy such a rig, we have published an ARM template. You can click the following link to automatically load the ARM template in the Azure management portal.
Provision load agents in an existing rig
Once you click it, the template loads in the Azure portal and you’ll see the following view. Fill in the parameters and choose the subscription/resource group/location as required.
If you wish to dig deep into the template and modify it to your needs, you can check it out from the following GitHub repo:
Use ARM template to deploy load agents with static IPs
We have published another ARM template for which you don’t need an existing VNet. This can be used for the following two purposes:
- If you don’t have ExpressRoute in Azure but want to do load testing using your own subscription, you can use this ARM template, which deploys a rig with its own VNet. If you need to test a private application, you can deploy the rig with static IPs (provided as an option) and punch the firewall for these IP(s) to make a route for the load agents.
- If you want to have control over the load generation agents (CLT auto-provisioned agents can’t be accessed by the user). You can choose static or dynamic IPs for these VMs.
To leverage this, just click the following link to load the ARM template in the Azure management portal.
Provision load agents in a new VNet
Github repo link for this ARM template is as follows:
https://github.com/Azure/azure-quickstart-templates/tree/master/101-vsts-cloudloadtest-rig
Once you deploy the VMs, it may take 10-15 minutes for the machines to be configured with CLT and ready for a load test. Load test runs done on these agents will not be charged VUMs by the CLT service, but you will incur the cost of the Azure resources consumed under your subscription.
In the ARM template, we have set the machine size to ‘Standard_D4_V2’. This machine size comes with 8 CPU cores and 28 GB of memory. You can change this value by editing the template. Refer to this to learn more about Azure machine sizes and capabilities.
How to queue a run using load agents
We are working on making this a first-class experience in the product. Until then, you can queue a run on these machines by adding a specific context parameter to the Visual Studio load test file.
Context parameter name – UseStaticLoadAgents
Context parameter value – true
In Visual Studio, it looks something like the following:
You can set the number of machines to be used for a load test run through the Agent core count property in Run Settings. In the bring-your-own-load-agents scenario, every core is treated as a single machine. As shown in the image below, 5 machines will be used for the run.
Runs done on your own load agent machines are not charged. You can confirm this by looking at the status messages of the run.
Before you do that, we recommend you go through the following FAQs as well.
How to find the machines registered with CLT
To check the status of the machines configured under a VSTS account, you can download the basic script GetStaticMachines.ps1 from the following link.
Download the PowerShell script to list the registered machines
After downloading it, please make sure you unblock the file.
The following parameters are mandatory:
- TeamServicesAccountName: the name of the VSTS account whose configured machines you want to list. Just pass the account name here, e.g., use ‘xyz’ if your VSTS account URL is https://xyz.visualstudio.com
- PATToken: required for authentication. You first have to get a PAT token for the VSTS account; follow this post to get one. The scope should be ‘Load Test (read and write)’.
Example:
.\GetStaticMachines.ps1 -TeamServicesAccountName xyz -PATToken zzdw6bnzk2q73qsqxukmfonzycscdgmsl2quhqo24so7hrplctcq
Sample Output:
Account Uri – https://xyz.vsclt.visualstudio.com
machineName status lastHeartBeat
———– —— ————-
DDDSINGHAL016 Free 2016-04-26T12:41:00.6548647Z
DDDSINGHAL017 Free 2016-04-26T12:40:59.5005437Z
This gives you an idea of the status of the machines configured for load test runs.
Frequently asked questions –
- How do the load agents communicate with CLT? The load agents communicate with CLT using the HTTPS protocol. Since these machines/VMs are inside the user’s private network (Azure/on-premises), they can reach the application under test (AuT) easily. The results are published back to the CLT service so that analysis can happen in the same manner as for other types of load test runs in CLT.
- How am I being charged for this? At present this feature is in preview, and you will not incur load testing VUM charges for runs where you deploy load agents on your premises or in your Azure subscription. However, you will be charged the applicable Azure VM costs.
- Can I use these machines for some other purpose? These machines can be used for your other tasks as well, but it is recommended not to have anything else running while a load test run is in progress.
- Can I shut down the machines where I have configured the load test agent? Yes, the machines can be shut down when not in use. The load agent service will automatically start receiving commands from CLT once the machine is up. If you are using the Azure ARM template to deploy these agents, you can start/stop the VMs based on your need, including via a PowerShell script; see https://gallery.technet.microsoft.com/scriptcenter/Stop-All-VMs-in-Specified-40c8531e for more. You are also recommended to delete the Azure resource group once you are done with load testing; you should be able to re-create it anytime later if you need to: https://azure.microsoft.com/en-in/documentation/articles/resource-group-portal/
- I have proxy settings on my machines; will this work? We support only the default proxy scenario, i.e., when the proxy settings are controlled through IE and the current user’s credentials are used to connect to the proxy server. In other cases, please reach out to us.
With all this, you should be able to go ahead and try out load testing. Do reach out to us at vsoloadtest@microsoft.com if you have any queries.
Happy Load Testing!
MSDN Blogs: Significant improvements to Windows 10 IoT Core
The Windows 10 Anniversary Update did not skip the edition for “headless” devices, that is, IoT boards such as the Raspberry Pi. Windows 10 IoT Core brings official support for new hardware, the Windows Store, running multiple apps at once, and more.
The boards that now officially support Windows 10 include the Raspberry Pi 3, the new version of the popular small computer, and Intel’s upcoming device, the Joule.
Remote access
You can connect to an IoT device remotely from a PC, phone or tablet, so the device does not need its own monitor. Besides the display output, inputs (e.g. keyboard and mouse) and sensor data (e.g. accelerometer) are also transferred between the connected devices, so you can control the app running on the microcomputer. The communication is handled by a UWP app from the Windows Store, which you simply install on your PC, phone or tablet. The full procedure and instructions for working with IoT Core remotely can be found in the documentation.
Windows Store
Thanks to the Windows Store OEM Preinstall program, OEM partners can preinstall Windows Store apps on IoT devices and also update them automatically through the Store. Headless apps need slight modifications; the procedure is described in the documentation.
Multiple foreground apps
The Anniversary Update supports running several apps in “foreground” mode and switching between them using the PackageManager API. You can also configure which button or input represents the “Home” command, which activates the default app.
Remote Wiring support from the Arduino IDE
If you are fond of the Arduino IDE, you can now use INO files to build apps for Windows 10 IoT Core that remotely control an Arduino (Remote Wiring). Just add the right Board Manager in the Arduino IDE and select Windows 10 IoT Core. The whole procedure is described in the documentation.
If you want to know more, be sure to check out the complete list of what’s new.
Martin