Channel: Randy Riness @ SPSCC aggregator

MSDN Blogs: How to render SQL Server acyclic blocking graphs using Visual Studio Code, TypeScript, NodeJS and TreantJS – Part 1


Introduction

SQL Server uses blocking – among other technologies – to ensure ACID transactions. There are a lot of lock types in SQL Server, ranging from Shared and Exclusive to Page, Range, and even table-level locks. You can find more about these here: [TechNet] – Lock Modes. Whenever SQL Server tries to acquire a lock on a resource that is already owned by another, incompatible lock, blocking occurs. Depending on how long – and how often – these blocks persist, the execution slowdown becomes noticeable. We will use this premise to show how to create a simple SQL Server-backed web site using Visual Studio Code, NodeJS and TypeScript. Since the post turned out to be very long, I've split it into two. You can find the follow-up post here once it is ready.

How to inspect blocking

There are many ways to find out the SPIDs (process IDs) involved in blocking. For the purpose of this article, we will focus on the sp_who stored procedure. This little helper will give back the current list of processes active in SQL Server along with the blocking process, if any. You can find more details here: https://msdn.microsoft.com/en-us/library/ms174313.aspx.

In the following example we have some blocked processes:

[Screenshot: sp_who output showing several blocked sessions]

The column blk is either zero – no blocking – or the SPID of the blocking session. In the example we see SPID 54 blocked by SPID 55. SPID 54 in turn blocks SPID 56 and SPID 62, and SPID 56 blocks SPID 60. These blocked processes form a directed acyclic graph (if the graph were cyclic we would have a deadlock instead).

The blocking graph can be visualized more clearly like this:

[Diagram: the blocking graph, with SPID 55 at the root]

Suppose now you are a DBA tasked with killing the offending SPID. But you cannot kill SPIDs at random: for example, killing SPID 56 is not guaranteed to resolve the problem because SPID 60 might simply end up blocked by SPID 55. So, in general, you want to kill the root SPID – that is, the one blocking one or more SPIDs without being blocked by anyone else. In our example the root SPID is SPID 55. It's easy to find the root in our graph, but it's hard to do with the sp_who output because we would have to re-create the graph in our heads first (and when there are hundreds of blocked processes the task becomes daunting).
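To make the idea concrete, here is a small TypeScript sketch (a hypothetical helper of mine, not code from the post) that takes sp_who-style rows and returns the root blockers, i.e. the SPIDs that block others without being blocked themselves:

interface SpWhoRow {
    spid: number;
    blk: number; // 0 when the session is not blocked
}

function findRootBlockers(rows: SpWhoRow[]): number[] {
    // SPIDs that appear as a blocker of someone else.
    const blockers = new Set(rows.filter(r => r.blk !== 0).map(r => r.blk));
    // SPIDs that are themselves blocked.
    const blocked = new Set(rows.filter(r => r.blk !== 0).map(r => r.spid));
    // Root blockers block others but are not blocked by anyone.
    return Array.from(blockers).filter(spid => !blocked.has(spid));
}

// The example above: 55 blocks 54, 54 blocks 56 and 62, 56 blocks 60.
const rows: SpWhoRow[] = [
    { spid: 54, blk: 55 }, { spid: 55, blk: 0 },
    { spid: 56, blk: 54 }, { spid: 60, blk: 56 }, { spid: 62, blk: 54 },
];
console.log(findRootBlockers(rows)); // [ 55 ]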

So what about a tool to visualize the blocking graph on demand?

Architecture

For our tool we will use several open source technologies: NodeJS with Express for the backend and TreantJS for rendering the graph.

Something like this:

[Diagram: browser with TreantJS calling a NodeJS/Express REST backend, which queries SQL Server]

Setup

npm and TypeScript

Generally speaking, setting up a TypeScript project is cumbersome, especially if, like me, you are spoiled by Visual Studio. You need to have Node and npm installed (if you don't, go here). First create a working folder, move into it, and initialize the project:
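The exact commands were lost from this copy of the post; they would have been something along these lines (the folder name is just an example):

mkdir blocking-graph
cd blocking-graph
npm init -y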

Now install TypeScript and Typings:
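The commands did not survive here either; a typical choice at the time was a global install:

npm install -g typescript typings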

Now initialize the TypeScript compiler options and Typings:
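Reconstructed (again, not preserved in this copy), the initialization commands would be:

tsc --init
typings init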

Once done, your folder should look like this:

[Screenshot: the project folder after initialization]

Transpilation and debugging

Let’s edit tsconfig.json a bit to allow debugging via source maps. We want to add these compilerOptions:

Option       Use
outDir       The directory where the transpiled JavaScript files are stored.
sourceMap    true, since we want to be able to debug our code in the TypeScript source files directly.

The sourceMap field takes care of that. emitDecoratorMetadata and experimentalDecorators are not strictly needed in our project but are useful and required if you plan to use metadata-heavy frameworks (such as Angular 2).

We also tell the TypeScript compiler (tsc) to avoid transpiling the node_modules and the typings module files. Now let’s add a dummy app.ts file with this code:
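The original snippet was not preserved in this copy; any placeholder statement will do, for example:

console.log("Hello from TypeScript");

For reference, a tsconfig.json consistent with the options described above (outDir, sourceMap, the decorator flags, and the excluded folders) could look roughly like this; treat it as a sketch rather than the author's exact file:

{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "outDir": "dist",
    "sourceMap": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true
  },
  "exclude": [
    "node_modules",
    "typings"
  ]
}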

Task runner for building the solution

Now let’s ask Visual Studio Code to build our solution. Since we don’t have the appropriate configuration (again) we will be asked to create one. Luckily Visual Studio Code does that for us:

[Screenshot: Visual Studio Code prompting to create the tasks.json build configuration]

Debugging

The last step is to debug our code. Press F5 in Visual Studio Code and – again – we will be asked to create a configuration file. We pick Node.js as the runtime. Remember that we must adjust the configuration because we created a TypeScript file and we use source maps.

Option       Use
program      The main .ts file to start.
sourceMaps   true, since we want to be able to debug our code in the TypeScript source files directly.
outDir       Where our transpiled files are stored (in our case, the dist folder).
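For reference, a launch.json matching those options might look roughly like this (a sketch; the exact schema depends on the Visual Studio Code version, and the 2016-era builds used outDir for the source-map output folder):

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Launch app.ts",
      "type": "node",
      "request": "launch",
      "program": "${workspaceRoot}/app.ts",
      "sourceMaps": true,
      "outDir": "${workspaceRoot}/dist"
    }
  ]
}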

[Screenshot: the generated launch.json debug configuration]

We should also check that breakpoints work correctly. Just add a breakpoint in our .ts file and start debugging:

[Screenshot: a breakpoint being hit in the TypeScript source]

You will be able to inspect the running process regardless of the transpilation and see the variables as usual.

Install required packages

For our project we will install three open source packages using npm. Let's add them:

Also install the typings in order to exploit TypeScript’s strong type checking:
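The actual commands are missing from this copy. Given the Express backend and the SQL Server access used below, they were presumably something like the following (the package names are my guess, not the author's exact list, and the typings syntax shown is the 1.x one):

npm install express mssql --save
typings install dt~express dt~mssql --global --save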

.gitignore

The --save flag tells npm to store the package information in our package.json file. The same applies to typings and the typings.json file. This is useful because, to get all the dependencies again, all we have to do is run npm install (or typings install). In general you don't want to ship the external packages with your code; the user will install the dependencies after downloading/cloning. To avoid checking in the downloaded packages, we add the node_modules folder to our .gitignore file. My final .gitignore looks like this:
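The listing itself did not survive in this copy, but from the description it contained essentially the following entries:

node_modules/
dist/
typings/
.vscode/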

I’ve also excluded the output folder of the transpilation (dist), the typings folder and the .vscode specific folder.

Create a dummy program

Let's create a dummy program to test that everything is in place. We will create a web server that responds to the URL /sp_who by sending back the result of the stored procedure as JSON:
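The original listing is not in this copy; a minimal sketch, assuming Express and the mssql driver (connection settings are placeholders, and depending on the mssql version the rows come back either directly or under result.recordset), would be:

import * as express from "express";
import * as sql from "mssql";

// Placeholder connection settings: use your own server, database and credentials.
const config = {
    server: "localhost",
    database: "master",
    user: "username",
    password: "password",
};

const app = express();

app.get("/sp_who", (req, res) => {
    // Connect, run sp_who and return its rows as JSON.
    sql.connect(config)
        .then(() => new sql.Request().query("exec sp_who"))
        .then((result: any) => res.json(result.recordset || result))
        .catch((err: Error) => res.status(500).json({ error: err.message }));
});

app.listen(3000, () => console.log("Listening on http://localhost:3000"));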

The result will be browsable at http://localhost:3000/sp_who:

[Screenshot: the JSON returned by /sp_who in the browser]

Hide noisy folders from Visual Studio Code

Our solution is cluttered with noisy folders: node_modules and typings, which are handled automatically by npm and typings, respectively. We do not need to concern ourselves with them, so we want to hide them from the Visual Studio Code folder view. All we have to do is add the "files.exclude" property to the workspace settings:
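In the workspace settings.json that property looks roughly like this:

{
    "files.exclude": {
        "node_modules": true,
        "typings": true
    }
}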

This will leave less clutter in the Visual Studio Code folder tree allowing you to focus on your code.

[Screenshot: the Visual Studio Code folder tree with the noisy folders hidden]

Moving on

Right now we have:

  • A functional, unauthenticated web server serving static pages.
  • A functional, unauthenticated web server serving REST verbs in JSON.
  • A way to interact with SQL Server.
  • The ability to debug step by step through our TypeScript source code, directly in Visual Studio Code.

All we have to do is to render the data graphically. We need to serve some static JavaScript that will interact with our REST API service. We will use AJAX for that. However, this is the topic of the following blog post.

Happy coding

Francesco Cogno


MSDN Blogs: Creating “Codeless” Swagger Connectors for Logic Apps


With the exciting announcement that Azure Logic Apps reached general availability I decided to spend a bit of my weekend re-building a few cloud workflows that help me with my day-to-day. One of my most useful logic apps is one that will monitor our MSDN forum RSS and alert me whenever there is a new post (with similar workflows for StackOverflow and other sources). I could just use one of the many out-of-the-box connectors to trigger via RSS and send me an email or a text message, but I prefer to get these updates through a different service, PushBullet, which I have on all of my devices. I want to send a push notification to all my devices whenever a new post is submitted – and ideally link directly to the post. However, Logic Apps doesn’t (yet at least) have a PushBullet connector, but that was fine. I had a few options to extend Logic Apps:

  1. Create a custom connector like one of the many in our GitHub
  2. Call the API directly via the HTTP action
  3. Author a codeless connector with a Swagger document I can use with the HTTP + Swagger action

For speed and efficiency, I went with option #3. Nothing to code, nothing to manage, and a nice visual designer experience in Logic Apps.

Understand Swagger and the Open API Initiative

Swagger, as the website advertises, was created with the following goal:

The goal of Swagger™ is to define a standard, language-agnostic interface to REST APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection. When properly defined via Swagger, a consumer can understand and interact with the remote service with a minimal amount of implementation logic. Similar to what interfaces have done for lower-level programming, Swagger removes the guesswork in calling the service.

Azure Logic Apps can use swagger to discover APIs and provide a tailored design experience across customer and 3rd-party APIs.

Now in my case I didn’t see a swagger document in existence for PushBullet – but that’s ok – because authoring a swagger document isn’t too complex.

Authoring Swagger

First off, if you are developing your own API, I would say: if at all possible, don't author the swagger by hand. There are dozens of tools for almost every language that will do it for you. I'm partial to Swashbuckle myself. However, in this case I don't control the code of PushBullet, so I was going to need to author it by hand.

Swagger Editor

Swagger gives you a nice Swagger editor you can run and install, but you can even just use their live demo to author basic docs.

Creating PushBullet Swagger

First I found the PushBullet REST API reference so I knew what the API I needed to describe was. I saw that a call to https://api.pushbullet.com/v2/pushes would allow me to get or send pushes. I quickly fired up Postman and made a call to the API. I knew I had the call I needed; now I just needed to describe it.

Executing with Postman

Swagger only had a few sections I had to fill out in order to mimic this call and create a first class Logic Apps experience:

  • Host: The domain of the service. In this case: api.pushbullet.com
  • basePath: Any prefix needed for calls. In this case: /v2
  • Paths: The paths to call. I only needed one path for my use case, /pushes. Under the /pushes path I needed a description for the POST method.

Those 3 pieces were all I really needed to describe the call. Now luckily Swagger also lets you do a lot of other powerful and useful things like adding descriptions and request and response schemas. I went through and defined all of the request and response schemas for the operation, and exported the JSON.
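As an illustration, a trimmed Swagger 2.0 sketch of that call (mine, not the author's full document; the PushBullet request schema is abbreviated and authentication is omitted) could start like this:

swagger: "2.0"
info:
  title: PushBullet
  version: "v2"
host: api.pushbullet.com
basePath: /v2
schemes:
  - https
paths:
  /pushes:
    post:
      summary: Send a push to your devices
      consumes:
        - application/json
      produces:
        - application/json
      parameters:
        - name: body
          in: body
          required: true
          schema:
            type: object
            properties:
              type:
                type: string
              title:
                type: string
              body:
                type: string
      responses:
        "200":
          description: The created push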

Saving the Swagger

I prefer to edit my Swagger in YAML, and then when finished I can simply click File -> Download JSON to get it in a format for other services like Logic Apps to use.

Swagger Editor

For a full copy of the swagger document I generated, you can check out my GitHub here

Using it in a Logic App

Now that the swagger was authored, I needed to access it from my Logic App. Unfortunately you cannot reference a swagger doc hosted in GitHub yet (they don’t like the request headers), but enabling CORS on Azure Blob following these steps here only took a minute and in no time I had a swagger doc in Azure blob I can reference from a Logic App. You are more than welcome to use it as well:

https://jehollanswagger.blob.core.windows.net/swagger/pushbullet.json

This now lets me add a PushBullet action into any of my workflows and have a first class designer experience.

Logic Apps with PushBullet Swagger

MSDN Blogs: Graduating from Excel macros with Power Query?! – Pivot Column and Unpivot Columns –


Microsoft Japan Data Platform Tech Sales Team – Ito

 

Have you heard of Power Query? It was originally provided as an add-in for Excel 2010/2013, but in Excel 2016 it became a standard feature, available as [Get & Transform] on the [Data] menu.

[Screenshot: Get & Transform on the Excel Data menu]

If the Excel edition or version requirements get in your way, Power BI Desktop has the same functionality, reachable from the [Get Data] or [Edit Queries] buttons on the [Home] menu.

[Screenshot: the corresponding buttons in Power BI Desktop]

Using the Query Editor in Power BI Desktop, Excel 2016, or the Power Query add-in, you can define data import and transformation steps in a GUI and then re-run that import with a single click whenever it is needed again. It is so convenient that you start to wonder why you ever built reports by copy-pasting data by hand or wrestling with macros. In this post I introduce [Unpivot Columns] and [Pivot Column], two features I would encourage anyone struggling with data preparation to try, using Power BI Desktop (June 2016 version).

For example, take data like the following (the first row contains the column names; data source: Wikipedia, "List of Japanese prefectures by population").

[Screenshot: the source table, with one column per year]


To display population by year for each prefecture in a Matrix visual, you place "Prefecture" on [Rows] and "2010", "2005", … on [Values], which gives the following result.

[Screenshot: the matrix with prefectures as rows and one value column per year]

To swap rows and columns you might place "Prefecture" on [Columns], but that produces the result below, which is not what we intended (we want "2010" and so on to appear as row headers).

[Screenshot: the unintended layout]

To swap rows and columns with data shaped like this, you need to [Edit Queries] and convert the "2010", "2005", … columns into a "Year" column and a "Population" column. This is where [Unpivot Columns] comes in.

The desired presentation (rows and columns swapped):

[Screenshot: the matrix with years as rows and prefectures as columns]


How to use [Unpivot Columns]

  1. On the [Home] menu, click [Edit Queries].

    [Screenshot: the Edit Queries button]

  2. Select the columns you want to unpivot (here, each year column); hold the Ctrl key while clicking to select multiple columns, then click [Unpivot Columns] on the [Transform] menu.

    [Screenshot: Unpivot Columns on the Transform menu]

    Note: alternatively, you can select the columns to keep as-is (here "Prefecture") and use [Unpivot Other Columns].

  3. The selected columns are converted into two columns, "Attribute" and "Value". Rename each of them to something suitable (Attribute → Year, Value → Population).

    [Screenshot: the unpivoted Attribute/Value columns]

  4. On the [Home] menu, click [Close & Apply] to close the Query Editor.

    [Screenshot: the Close & Apply button]

  5. From [Visualizations], choose "Matrix" and place "Year" on [Rows], "Prefecture" on [Columns], and "Population" on [Values].

    [Screenshot: the finished matrix]

With that, we have the presentation we wanted.

For [Unpivot Columns], this YouTube video shows how to use it in Power Query for Excel 2013.

 

So far we have unpivoted columns, but sometimes you want to go in the opposite direction and spread values out into columns. Using the same data, this time we will create columns whose names are the values of the "Prefecture" column, that is, the prefecture names.

How to use [Pivot Column]

  1. Click [Edit Queries].
  2. Select the column you want to pivot (here the "Prefecture" column) and click [Pivot Column] on the [Transform] menu.

    [Screenshot: Pivot Column on the Transform menu]

  3. A dialog like the one below appears. Under [Values Column], select the column that holds the values (here the "Population" column), then open [Advanced options] and set the aggregation method.

    [Screenshot: the Pivot Column dialog]

  4. You can confirm that the data has been transformed as shown below.

    [Screenshot: the pivoted result, with one column per prefecture]

  5. Finally, don't forget [Close & Apply] on the [Home] menu.

The Power BI Desktop file used here can be downloaded from this link.

Some features, like the [Unpivot Columns] and [Pivot Column] introduced here, don't reveal what they can do from their names alone, but we will keep introducing them on this blog little by little, so please make good use of it! (Incidentally, coding skills let you get even more out of these tools, but that is a topic for another day…)

MSDN Blogs: Nemeth Braille—the first math linear format


The 6-dot Nemeth braille encoding was created by Abraham Nemeth for mathematical and scientific notation and is general enough to encode almost all of the Office math notation. He started working on his encoding in 1946 and it was first published in 1952 by the American Printing House for the Blind. It’s a little like the Unicode math linear format. Like the linear format, spaces play important roles and Nemeth braille is a globalized notation, so localization isn’t needed except for embedded natural language. Also both formats strive to make simple things easy and concise at the cost of additional syntax rules. But because a mere 64 codes (including the space) are used to encode virtually all of math notation plus a variety of other things, the semantics of the codes depend heavily on their contexts. This level of complexity contrasts with the linear format which has the luxury of the exhaustive Unicode math symbol set. Accordingly, encoding math expressions can become devilishly tricky as revealed in the full specification. For a less daunting intro, see this Nemeth Code Cheat Sheet. Nemeth recounts some history in this 1991 interview. The present post describes aspects of Nemeth braille and compares how Nemeth braille, the Unicode math linear format, and TeX express subscripts/superscripts, fractions and integrals.

First note that Nemeth Braille can be displayed in 6-dot ASCII Braille as shown in this table

[Table: Nemeth braille cells with their ASCII Braille equivalents]

The dots are numbered 1..6 starting from the upper left, going down to 3 and continuing with 4..6 in the second column. The letters and numbers look like themselves as do the / and (). The braille cells for 1..9 are the same as those for the letters A..I, but shifted down one row. The cells for the letters K..T are the same as those for A..J but with a lower-left dot (dot 3). Letters are lowercase unless prefixed by a cap prefix code (solo dot 6) or pair of cap prefixes for a span of uppercase letters.

A simple table look up converts Nemeth braille codes to 8-dot Unicode Braille in the U+2800 block. The braille cells for 6-dot braille are the first 64 characters of Unicode braille block. With a little practice you can enter braille codes into Word, OneNote, and WordPad by typing 28xx <alt+x>, where xx is the hex code given by the braille dots. To do this, read dots as binary 1’s and missing dots as 0’s, sideways from right to left, top to bottom. So ⠮ is 1011102 = 2E16 and the corresponding Unicode character is U+282E.
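That table lookup is trivial to code. Here is a small TypeScript sketch (mine, not from the post) that maps a list of dot numbers to the corresponding Unicode braille character, using the fact that dot n corresponds to bit n−1 of the offset from U+2800:

function dotsToBraille(dots: number[]): string {
    // Each dot 1..6 (or 1..8 for 8-dot braille) sets bit (dot - 1).
    const offset = dots.reduce((bits, dot) => bits | (1 << (dot - 1)), 0);
    return String.fromCodePoint(0x2800 + offset);
}

console.log(dotsToBraille([2, 3, 4, 6])); // "⠮" (U+282E, binary 101110)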

To get a feel for simple Nemeth braille math, consider the expression 12x²+7xy-10y². In ASCII Braille it displays as

#12x^2″+7xy-10y^2_4

In Nemeth Braille it displays as

[Image: the same expression rendered in Nemeth braille cells]

In the linear format and TeX, it displays as 12x^2+7xy-10y^2.

It’s tantalizing that the superscript code ⠘ has the ASCII braille code ‘^’ used by the linear format and [La]TeX. But the subscript code is ⠰, which has the ASCII braille code ‘;’ instead of the ‘_’ used by the linear format and TeX. These braille codes also work differently from the linear format and TeX superscript/subscript operators in that they are script level shifters that must be “cancelled” instead of being ended. So in the formula above, the Nemeth ‘^’ for the first square is cancelled by the ‘”’, while the ‘+’ terminates the superscript for the linear format and a TeX superscript consists of a single character or an expression of the form {…}. The following table compares how the three formats handle some nested superscripts and subscripts

[Table: nested subscripts and superscripts in Nemeth braille, the linear format, and TeX]

Here to keep the Nemeth braille code sequences simple, I’ve omitted the Nemeth math italic, English-letter prefix pair ⠨ ⠰ before each math variable. Hopefully there’s a way to make math italic the default, as it is in the linear format, MathML, and TeX, but I didn’t find such a mode in the full specification. A space before literary text terminates the current script level shift, that is, it initiates base level. This is also true for a space that indicates the next column in a matrix, but it’s not true for a function-argument separator as illustrated in the table below. Spaces can also be used for equation-array alignment (you need to think in terms of a fixed-width font).

Simple fractions are written in a fashion similar to TeX's {<numerator> \over <denominator>}. For example,

[Image: the fraction 1/2 in Nemeth braille cells]

or in ASCII braille as ?1/2#. The ⠹ and ⠼ work as the curly braces do in TeX fractions as in {1 \over 2}. In the linear format, the fraction is given by 1/2. Fractions can be laid out in a two-dimensional format emulating built-up fractions but using Nemeth braille. Nested fractions require additional prefix codes (solo dot 6). For single-line braille devices it seems worthwhile to use the linear display since the fraction delimiters can be nested to any depth. Stacked, slashed, and linear fractions can be encoded and correspond to those structures in the linear format and in TeX.

The Nemeth alphabets are similar to the Unicode math alphanumerics discussed in Sections 2.1 and 2.2 of Unicode Technical Report #25. One difference is that math script and math italic variants exist for English, Greek, Cyrillic, and German (Fraktur) alphabets, whereas in Unicode math script variants are only available for the English alphabet. We may need to generalize Unicode’s coverage in this area, since TeX also has the ability to represent more math alphabets (see, for example, Unicode Math Calligraphic Alphabets).

At some point, I hope to give a listing of correspondences between the linear format and Nemeth Braille. It’s a long topic, so as a start the following table gives some more examples. Note the spaces needed around the equals sign (and other relational operators), but the lack of a space between the ‘a’ and “sin” in “a sin x”. The Nemeth notation is ambiguous with respect to using asin for arc sine.

[Table: additional formulas in Nemeth braille and the linear format]

The Unified English Braille code can handle some mathematical notation, but it’s not general enough to deal with Office math zones. Some discussion on the differences is given here, and the Accessible Math Editor author Sam Dooley explained to me that more advanced math needs the power of the Nemeth encoding. One possible way to reduce the large number of rules governing Nemeth braille would be to use an 8-dot standard in which math operators could be encoded with the aid of bottom row dots. This would work with current technology since Braille displays let you read and enter all possible 8-dot Braille codes.

MSDN Blogs: Small Basic – Draw Moon Quiz


I’d like to introduce a new game program Draw Moon Quiz (BNN571).

A moon picture is displayed on the left. Select patterns at the top and type the sequence in the bottom text box, then push the [Draw] button. There are 8 questions. Have fun!

[Screenshot: the Draw Moon Quiz program]

MSDN Blogs: What Startups Can Learn from Large Corporations


Guest post by James Burbank, editor in chief at BizzMarkBlog

In many people’s minds there is this dichotomous image of startups and large corporations pitted against each other. For the majority, the story usually describes brave little companies trying to eke out an existence surrounded by the established big bad players in their industry.

It goes without saying that the reality is very different, with many large corporations actually supporting startups and innovation (like for instance, Microsoft’s BizSpark program, while we are on the subject) and with new players benefiting greatly from such support while also returning the favor through innovations that are later potentially adopted by the mythic Big Players.

Even though many people are aware of this, there are still those who believe the whole narrative is black-and-white and that startups have it figured out, while large corporations are these dinosaurs sloshing in some primordial corporate muck, destroying everything they touch.

While people have the right to such a one-sided view of things, it can actually prevent startups from learning some very useful lessons when they are run by people who share this single-minded opinion.

But, what exactly are these lessons?

Organization is Everything (or at least a lot)
There is something uplifting or even comforting in the idea that all you need is that one fantastic idea. It makes us all feel like we are just one sleepless night away from fame and riches and that feels great. In reality, however, things do not work this way, as so many startups have discovered over the years. For every Atlassian, for every Bigcommerce, for every Canva, there are innumerable startups that have failed because they thought ideas are everything and organization is nothing.

We are not saying that ideas do not mean anything. If they didn't, you would probably have contemplated opening yet another ordinary small business instead. Startups are based on ideas, but organization is still an incredibly important aspect of running a company, startup or not.

Large corporations know how to do organization. They have to. Poor organization within a large company can lead to a catastrophe, and it will. Large companies are aware of this and a huge part of their operation is making sure things keep functioning.

It may be difficult for startup founders to find the time or the expertise to be as organized as multi-billion-dollar companies, but they cannot afford to ignore organization. They need to know who is working on what, what the deadlines are, and how much money is going in and out.

Compromise is a Necessity
The world of business is the world of compromise. Without compromise, it would be impossible to make any deal that would benefit anyone. Large companies are built on compromises that allowed them to become large in the first place. Most of them had to make painful cuts or compromise certain short-term goals in order to keep the big picture intact.

In many cases, startup founders adopt a stance where they see any compromise as a betrayal of their own ideas. This single-mindedness can be extremely harmful, especially since compromises can actually turn a startup into a solid company that will grow in the coming years and decades.

An example of this could be a business intelligence company called Panorama which was founded back in the day when the term startup didn’t even exist. They developed their own OLAP technology and in less than three years, they were approached by none other than Microsoft who ended up purchasing their technology and making it an inseparable part of Microsoft SQL Server. Nowadays, that same company is still in operation, integrating their BI solutions with Microsoft’s, and thriving.

Back in the mid-1990s, someone could have simply said that selling their tech to Microsoft was too much of a compromise and they could have closed their doors before the new millennium rolled in.

Being Careful Is Not Cowardly
Startups are supposed to be fearless and brash, tearing down walls and kicking down the doors. The only problem is that this kind of behavior often results in the ceiling crashing down on the one doing the whole brash pulling and kicking down.

You are probably familiar with the story of Theranos, a biomedical startup that promised to revolutionize the way in which blood tests are done. Their new analysis was supposed to be able to provide all the necessary readings from a single drop of blood, as opposed to an entire vial. The initial results were promising and they soon attracted unprecedented funding. They kept the ball rolling by constantly appearing in the media and vying to change the world.

Then, all of a sudden, it turned out that their promises were not as realistic as they had seemed. Studies started showing that the results of their analyses were lacking and that people's lives were put in danger. They were careless with their clinical studies and they handled the fallout carelessly.

They are currently being investigated by pretty much every possible American authority, both financial and medical.

Large companies are usually much more careful than this, making sure their bases are covered and there is no possibility of catastrophic oversights. Startups should be aware of this. If Theranos had been more aware of it, maybe their story would have been a different one.

Closing Word
While there is a lot that big companies can learn from startups, this is a process that goes both ways. Every large company started off somewhere, and learning a lesson or two from them might make the difference between success and failure.

MSDN Blogs: Cool Ideas to get started with Office 365 in your classroom


Guest post by Trent Ray, Microsoft Teacher Ambassador and Expert Educator!

Anytime, anywhere access to resources and tools for learning is an essential element for today’s online learning environments. Thanks to the cloud (and some clever technical engineers) digital tools are enabling seamless connections between teachers and learners and transforming the what, when, how and where we work. In this post, I will share some of the productivity elements of Office 365 classroom that have helped make my teaching life easier!

Collaborate with Word Online

We have all lost track of file versions and are probably very familiar with the "Activity1_V2_Draft_Final.doc" scenario. Word Online allows you to create and collaborate in real time within the Word Web App without losing your formatting (or having to press 'save'). You can start a Word document straight from your OneDrive and share it with people in your organisation.


Collaborative Group Tasks: The ability for up to 10 users to collaborate on one Word document provides the ideal space for groups of students or project teams to develop ideas and share research. Students work on ONE document in the cloud, so even if one student is away, the others can continue accessing the learning.


Drafting and feedback: Before students submit their final essay, practical report or project, they can share their Word document directly from their OneDrive. Rather than receiving a duplicate copy as an attachment, you receive a share link from the student's OneDrive. Using 'comments' in Word Online you can add your feedback, and within moments your students can see it.


Fun Feature: When using Word Online, switching to the full desktop version is as simple as clicking 'Work in Word Offline'. This gives you the full Word experience without losing your formatting.

Microsoft Forms

Microsoft Forms is the newest service now available in Office 365 and allows any user to easily create surveys and self-marking quizzes with instant student feedback.

Microsoft Forms can be shared with students, colleagues and even parents by providing a website URL or QR code. The best part is all of the results generate straight into Excel which makes sharing, reviewing and evaluating the data you collect super seamless!

Exit Tickets or Lesson Reflection/Feedback – Capture what your students have gained from the day's lesson. Ask questions such as: did we meet today's learning objectives, what are you still working to understand, and what would you like to learn (or need to learn) next?


Fun Feature: Paste the Microsoft Forms URL into a page in OneNote and you will see that it has automatically embedded straight onto the page. This means students can complete a quiz without leaving OneNote!

Projects and Presentations with Sway

The newbie in town, Sway is currently Microsoft's 'Cool tool for School', changing the way we create and curate information online. Sway enables you to create snazzy single-page websites that do just that – 'sway'. With simple drag-and-drop 'cards' (aka elements) you can build professional and engaging resources in minutes.

Classroom Application: Sway has so many great uses, from collaborating on team projects, curating lesson resources, creating a class newsletter for parents to access or individual uses such as blog or story telling platform.

Fun feature: Not many people know that you can have co-authors in Sway, which means when working on team projects you can have up to 10 people adding information, media, embedding videos and even excel spreadsheets!

Here is a cool Sway created by students: Owl Pallet Dissection Report – Year 7 Biology


 

Staying organised with Outlook Tasks

One of the biggest bugbears for teachers is staying on track with homework, task due dates and even small items like 'return the field trip permission slip'. Tasks are a severely underutilised feature of Outlook that can help you and your students stay on top of your activities, tasks and to-dos!

Classroom Application – At the end of every class I always review the to-dos before the next session, such as upcoming due dates, things to prepare for next class, etc. I simply ask students to open Outlook, click on 'Tasks' and create a new item. This allows them to create a task that can be tracked. If a due date is set, it will be added to their calendar, with options to set reminders and track progress.

Fun fact – you can even create coloured personalised categories, such as subjects or task types.


OneNote Class Notebooks

OneNote Class Notebooks have become my one-stop curriculum shop. With a completely flexible canvas for creating curriculum, including embedded online videos, images, handwriting, and any existing documents, I can now quickly and easily share collaborative digital class notebooks with my students.


Classroom Application: With a Content Library for read-only resources, my students can access learning materials and copy these tasks/resources into their Student Section, which is private to them (and to me as their teacher). OneNote Class Notebooks also have a Collaborative Section, which is a perfect space for students to share work, peer assess, and collaborate on team projects.


Fun Fact: OneNote 2016 can now embed online videos including Office Mix, Vimeo and YouTube!

MSDN Blogs: CodeDom provider type could not be located


I deployed an MVC web app to an IIS server and received this error, also shown in Figure 1.

Server Error in ‘/’ Application.

Configuration Error
Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.

Parser Error Message: The CodeDom provider type "Microsoft.CodeDom.Providers.DotNetCompilerPlatform.CSharpCodeProvider, Microsoft.CodeDom.Providers.DotNetCompilerPlatform, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" could not be located.

Source Error:

Line 91:   <system.codedom>
Line 92:     <compilers>
Line 93:       <compiler language="c#;cs;csharp" extension=".cs" type="Microsoft.CodeDom.Providers.DotNetCompilerPlatform.CSharpCodeProvider, Microsoft.CodeDom.Providers.DotNetCompilerPlatform, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" warningLevel="4" compilerOptions="/langversion:6 /nowarn:1659;1699;1701" />
Line 94:       <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" type="Microsoft.CodeDom.Providers.DotNetCompilerPlatform.VBCodeProvider, Microsoft.CodeDom.Providers.DotNetCompilerPlatform, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" warningLevel="4" compilerOptions="/langversion:14 /nowarn:41008 /define:_MYTYPE=&quot;Web&quot; /optionInfer+" />
Line 95:     </compilers>

Source File:  C:\inetpub\CSharpGuitarBugs\web.config    Line:  93
Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.34274

[Figure 1: the configuration error shown in the browser]

Figure 1, C# 6 compilation error when deployed to IIS

After looking at a lot of possibilities, I realized that because I was deploying from a source repository that did not include the BIN directory, I did not have all the dependent binaries. Once I copied the BIN folder into the root of the web site on my IIS server, everything worked just fine.



MSDN Blogs: Node.js Tools 1.2 for Visual Studio 2015 available


 

Screenshot of the IntelliSense experience provided by Node.js Tools 1.2 for Visual Studio 2015 (NTVS). Graphic: Microsoft

Download: Node.js Tools 1.2.

The Visual Studio Engineering Team has announced the latest version of the Node.js Tools for Visual Studio (NTVS). Having used NTVS for developing an internal publishing tool, I can strongly recommend it for all of your Node.js needs.

What do you get in the latest update? Sara Itani provides details: Node.js Tools 1.2 for Visual Studio 2015 released.

  • Faster, better ES6 IntelliSense
  • More reliable debugging
  • Improved Performance
  • Improved Unit Testing Experiences

Go get it: Node.js Tools 1.2.

MSDN Blogs: Lesson Learned #6: Enabling SQL Auditing: what are the things that I need to be aware of?


When we enable SQL Auditing we need to be aware of three main considerations:

  • The connection will be redirected to another endpoint server.
  • There is some impact on connection time and execution time.
  • Some data providers have not implemented the redirection feature.

 

The connection will be redirected to another proxy server.

  • When you enable the SQL Auditing option, you need to know that the connection will no longer go through the endpoint datacenter.control.database.windows.net; instead, it will go through another endpoint called datasec-xxxxxx.cloudapp.net (where xxxxxx has a special format). See this example, captured using netsh.

[Screenshot: netsh output] Direct connection to the database.

[Screenshot: netsh output] Connection using the SQL Auditing endpoint.
  • If you have defined restrictive firewall settings, you need to allow your local applications to connect to these endpoints. You can find the IP addresses and other information at this URL.
  • If you have not enabled this firewall rule, you may receive the following error message when trying to connect to your Azure SQL Database: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 – Access is denied.)

 

SQL Auditing impact on connection time and execution time.

  • Thanks to the provider statistics for SQL Server, we can see the impact on connection time and command execution time after enabling SQL Auditing.
  • For example, I have created the following table and added 20,000 rows.

CREATE TABLE [dbo].[ValoresEjemplo](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [Nombre] [varchar](50) NULL,
    CONSTRAINT [PK_ValoresEjemplo] PRIMARY KEY CLUSTERED ([ID] ASC)
)

declare @p as int
set @p=0

begin transaction
WHILE @p<20000
begin
    set @p=@p+1
    insert into [ValoresEjemplo] (nombre) values ('Ejemplo: ' + convert(varchar(20), @p))
end
commit transaction

  • Using the following source code, I'm going to measure the time spent executing a T-SQL command against two databases, one with SQL Auditing enabled and one without it.

using (SqlConnection awConnection = new SqlConnection(connectionString))
{
    Stopwatch stopWatch = new Stopwatch();
    stopWatch.Start();

    awConnection.StatisticsEnabled = true;

    string productSQL = "SELECT top " + nRows.ToString() + " * FROM ValoresEjemplo";
    SqlDataAdapter productAdapter = new SqlDataAdapter(productSQL, awConnection);

    DataSet awDataSet = new DataSet();

    awConnection.Open();

    productAdapter.Fill(awDataSet, "ValoresEjemplo");

    IDictionary currentStatistics = awConnection.RetrieveStatistics();

    Console.WriteLine("Total Counters: " + currentStatistics.Count.ToString());
    Console.WriteLine();

    long bytesReceived = (long)currentStatistics["BytesReceived"];
    long bytesSent = (long)currentStatistics["BytesSent"];
    long selectCount = (long)currentStatistics["SelectCount"];
    long selectRows = (long)currentStatistics["SelectRows"];
    long ExecutionTime = (long)currentStatistics["ExecutionTime"];
    long ConnectionTime = (long)currentStatistics["ConnectionTime"];

    Console.WriteLine("BytesReceived: " + bytesReceived.ToString());
    Console.WriteLine("BytesSent: " + bytesSent.ToString());
    Console.WriteLine("SelectCount: " + selectCount.ToString());
    Console.WriteLine("SelectRows: " + selectRows.ToString());
    Console.WriteLine("ExecutionTime: " + ExecutionTime.ToString());
    Console.WriteLine("ConnectionTime: " + ConnectionTime.ToString());
}

  • The execution and connection times (first execution) that the query took:
    • Without SQL Auditing enabled. 
      • Execution Time: 312 ms.
      • Connection Time: 187 ms.
    • With SQL Auditing enabled and automatic redirection:
      • Execution Time: 600 ms.
      • Connection Time: 234 ms
    • With SQL Auditing enabled, but adding the word secure to the server name in the connection string, for example <servername>.database.secure.windows.net:
      • Execution Time: 482 ms.
      • Connection Time: 234 ms.

 

Some data providers have not implemented the redirection feature.

Please verify that the data client you are using implements TDS 7.4. If it does not (for example, JDBC 4.0 is not fully supported, nor is Tedious for Node.JS), you have to use the FQDN: <server name>.database.secure.windows.net
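For example, an ADO.NET connection string pointing at that endpoint would look roughly like this (server, database and credentials are placeholders):

Server=tcp:<server name>.database.secure.windows.net,1433;Initial Catalog=<database>;User ID=<user>@<server name>;Password=<password>;Encrypt=True;Connection Timeout=30;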

MSDN Blogs: Failures while restoring nuget packages in Visual Studio Team Services – 8/1 – Investigating


Initial Update: Monday, 1 August 2016 20:06 UTC

We are actively investigating issues with the packaging service where a subset of customers using the NuGet restore build task to restore packages from a Visual Studio Team Services package source will experience intermittent exceptions of the form "Unable to find version xx of package aaa".

We are working to resolve this issue and apologize for any inconvenience.

Sincerely,
Manjunath

MSDN Blogs: vNext Build: Using a variable for Server Path for repository is not supported.


Until TFS 2015 Update 1, it was possible to define and use a variable for Server Path to specify the repository to be used with a vNext build, as indicated below:

• Create a variable ("SolutionPath" in the example below) under the Variables section, and specify the server path (TFS path) that contains the solution as the value.


• Under the Repository tab, provide the variable created earlier, for the value of Server Path under Mappings.

This configuration was allowed until TFS 2015 Update 1. We have removed support for this configuration starting with Update 2, as we have seen cases where it could lead to severe workspace corruption issues.

If you see the build stuck in the “Waiting for an agent to be requested” state, please check the configuration to see if you have used a variable for Server Path. If yes, provide the value for Server Path directly, instead of using a variable.


We are also working on improving the message from the build agent.
Hope this helps!

Content: Sreeraj Rajendran
Review: Manoj Mohunthan

SPSCC Posts & Announcements: Skokomish Tribe makes strides in adult high school completion

MSDN Blogs: Analysing data in SQL Server 16, combining R and SQL


By Michele Usuelli, Data Scientist Consultant

Overview

R is the most popular programming language for statistics and machine learning, and SQL is the lingua franca for data manipulation. In an advanced analytics scenario, we need to pre-process the data and build machine learning models. A good solution consists of using each tool for the purpose it's designed for: standard data preparation with a tool supporting SQL, custom data preparation and machine learning models with R.

SQL Server 2016 has the option to include an extension: Microsoft R Services. It's based on MRS (Microsoft R Server), a tool designed by Revolution Analytics to scale R across large data volumes. In SQL Server 2016, R Services provides a direct interface between MRS and the SQL databases. In this way, it's possible to analyse the SQL tables and to create new tables using the MRS tools. In addition, the R package RODBC allows us to run SQL queries from R.

This article shows a simple example integrating R and SQL. To follow all the steps, some prior knowledge about the basic R functions is required.

Setting-up the environment

In this article, we are using a SQL Server 2016 VM. To set-up the environment, the steps are

  1. Log-in to Azure and set-up a SQL Server machine with R Services. To install R Services, refer to this: https://msdn.microsoft.com/en-us/library/mt696069.aspx
  2. Connect to SQL Server: from the Windows task bar, open SQL Server Management Studio and connect to it using your credentials.
  3. Create a new database: right-click on Databases and choose New Database….

  4. Give a name to the new database, e.g. testiris. Then, click on Add and OK.

  5. Open the query editor: right-click on the new database and choose New Query.

  6. Define the credentials to authenticate to the database: in the editor, paste and run the following query. You should replace the parameter testiris with the name of your database, and username and password with new credentials to access this database.

USE [testiris]
GO
CREATE LOGIN [username] WITH PASSWORD='password', CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF;
CREATE USER [username] FOR LOGIN [username] WITH DEFAULT_SCHEMA=[db_datareader]
ALTER ROLE [db_datareader] ADD MEMBER [username]
ALTER ROLE [db_ddladmin] ADD MEMBER [username]

  7. Build a new RStudio or Visual Studio project and create a new R script. See https://www.rstudio.com/products/RStudio/
  8. Define a list with the parameters to connect to the database. They should be the same as in the previous SQL query.

sql_par <- list(
  Driver = "SQL Server",
  Server = ".",
  Database = "testiris",
  Uid = "username",
  Pwd = "password"
)

  9. Starting from the list of parameters, define a connection string for the SQL database. For this purpose, you can use the function paste.

sql_connect <- paste(names(sql_par), unlist(sql_par),
                     sep = "=", collapse = ";")

  10. Define an MRS data source object containing the information to access the testiris database. The inputs are the table name, e.g. iris, and the connection string.

table_iris <- "iris"
sql_iris <- RxSqlServerData(connectionString = sql_connect,
                            table = table_iris)

  11. Import the data into a SQL table. For this purpose, we can use rxDataStep. Its inputs are the local dataframe iris, the data source object sql_iris, and overwrite = TRUE, specifying that we overwrite the table if it already exists. To avoid issues in the SQL queries, we also rename the columns of iris, replacing the dots with underscores.

names(iris) <- gsub("[.]", "_", names(iris))
rxDataStep(iris, sql_iris, overwrite = TRUE)

## Rows Read: 150, Total Rows Processed: 150, Total Chunk Time: Less than .001 seconds

  12. Install and load the package RODBC and initialise it using the connection string. We define the object channel containing the information to access the database.

library(RODBC)
channel <- RODBC::odbcDriverConnect(sql_connect)

Exploring the data

To explore the data, we can either use MRS or SQL. In MRS, there is the option to quickly access the metadata using rxGetVarInfo.

rxGetVarInfo(sql_iris)

## Var 1: Sepal_Length, Type: numeric, Low/High: (4.3000, 7.9000)
## Var 2: Sepal_Width, Type: numeric, Low/High: (2.0000, 4.4000)
## Var 3: Petal_Length, Type: numeric, Low/High: (1.0000, 6.9000)
## Var 4: Petal_Width, Type: numeric, Low/High: (0.1000, 2.5000)
## Var 5: Species
## 3 factor levels: setosa versicolor virginica

The output contains the name and type of each field. Also, it’s possible to compute a quick summary of the table using rxSummary. The inputs are

  • Formula: what variables we want to summarise, using the formula syntax. For more info about the formula, have a look at the related material by typing ?formula into the R console. To include all the variables, we can use a dot (.).
  • Data: the data source object, containing the information to access the data.

summary_iris <- rxSummary(~ ., sql_iris)

## Rows Read: 150, Total Rows Processed: 150, Total Chunk Time: Less than .001 seconds
## Computation time: 0.005 seconds.

summary_iris$sDataFrame

## Name Mean StdDev Min Max ValidObs MissingObs
## 1 Sepal_Length 5.843333 0.8280661 4.3 7.9 150 0
## 2 Sepal_Width 3.057333 0.4358663 2.0 4.4 150 0
## 3 Petal_Length 3.758000 1.7652982 1.0 6.9 150 0
## 4 Petal_Width 1.199333 0.7622377 0.1 2.5 150 0
## 5 Species NA NA NA NA 150 0

This summary contains some basic statistics like the mean and the standard deviation.

Using RODBC, we can run any SQL query. The inputs are the connection object channel, defined previously, and the query string. Let’s see a simple example.

RODBC::sqlQuery(channel, "select count(*) from iris")

As expected, the output is the number of records.

R contains good string manipulation tools that allow us to build complex queries and to write extensions. To show the approach, we perform a simple group by operation. The steps are

  1. Define the skeleton of the query, leaving the column names as %s parameters
  2. Define the parameters
  3. Define the query inclusive of its parameters using sprintf
  4. Run the query

The R code is

query_sql <- "select %s, avg(%s) as avg_sl from iris group by %s"
col_group <- "Species"
col_value <- "Sepal_Length"
query_count <- sprintf(query_sql, col_group, col_value, col_group)

df_avg_value <- RODBC::sqlQuery(channel, query_count)
df_avg_value

##
## Attaching package: ‘dplyr’

## The following objects are masked from ‘package:stats’:
##
## filter, lag

## The following objects are masked from ‘package:base’:
##
## intersect, setdiff, setequal, union

## Species avg_sl
## 1 setosa 5.006
## 2 versicolor 5.936
## 3 virginica 6.588

Using a similar approach, we can define more complex queries. Also, if we need to run similar queries many times, it’s possible to build R functions that build the query depending on some given parameters.

Processing the data

To process the data, it's possible to use both MRS and SQL. With MRS, we can process the data using the function rxDataStep. This function allows us to build custom data processing operations. A simple example consists of defining a new column Sepal_Size with simple maths operations. The steps are

  1. Define a new SQL data source object
  2. Run rxDataStep to populate it. Its arguments are the input and output data objects, and overwrite = TRUE, similarly to before. In addition, we define the new column using the transforms input.
  3. Have a look at the new table using rxGetVarInfo

This is the code:

table_iris_2 <- "iris2"
sql_iris_2 <- RxSqlServerData(connectionString = sql_connect,
                              table = table_iris_2)
rxDataStep(inData = sql_iris,
           outFile = sql_iris_2,
           transforms = list(Sepal_Size = Sepal_Length * Sepal_Width),
           overwrite = TRUE)
rxGetVarInfo(sql_iris_2)

## Rows Read: 150, Total Rows Processed: 150, Total Chunk Time: Less than .001 seconds

## Var 1: Sepal_Length, Type: numeric, Low/High: (4.3000, 7.9000)
## Var 2: Sepal_Width, Type: numeric, Low/High: (2.0000, 4.4000)
## Var 3: Petal_Length, Type: numeric, Low/High: (1.0000, 6.9000)
## Var 4: Petal_Width, Type: numeric, Low/High: (0.1000, 2.5000)
## Var 5: Species
## 3 factor levels: setosa versicolor virginica
## Var 6: Sepal_Size, Type: numeric, Low/High: (10.0000, 30.0200)

As expected, we have a new column: Sepal_Size.

Using the same approach, it’s possible to build complex custom operations, making use of the open-source R functions.

Using RODBC, it’s possible to run an SQL query defining a new table. For instance, we can join two tables. Using an approach similar to the previous SQL query, we build the related string and execute it using RODBC.

table_avg_value <- "avg_value"
sql_avg_value <- RxSqlServerData(connectionString = sql_connect,
                                 table = table_avg_value)

rxDataStep(df_avg_value, sql_avg_value, overwrite = TRUE)
query_join <- "select iris.Sepal_Length, iris.Species, avg_value.avg_sl from iris
  left join avg_value
  on iris.Species = avg_value.Species"
df_join <- RODBC::sqlQuery(channel, query_join)
df_join <- df_join[, c("Sepal_Length", "Species", "avg_sl")]
head(df_join)

## Joining by: “Species”

## Sepal_Length Species avg_sl
## 1 5.1 setosa 5.006
## 2 4.9 setosa 5.006
## 3 4.7 setosa 5.006
## 4 4.6 setosa 5.006
## 5 5.0 setosa 5.006
## 6 5.4 setosa 5.006

After having processed the data, it's possible to build machine learning models starting from the SQL tables. In this way, we don't need to pull the data in-memory and we can deal with a large data volume. For instance, to build a linear regression, we can use rxLinMod. Similarly to rxSummary, the arguments are formula, defining the variables to include, and data, defining the data source. After having built the model, we can explore it using summary, similarly to open-source R.

model_lm <- rxLinMod(formula = Petal_Length ~ Sepal_Length + Petal_Width,
                     data = sql_iris)

## Rows Read: 150, Total Rows Processed: 150, Total Chunk Time: Less than .001 seconds
## Computation time: 0.000 seconds.

summary(model_lm)

## Call:
## rxLinMod(formula = Petal_Length ~ Sepal_Length + Petal_Width,
## data = sql_iris)
##
## Linear Regression Results for: Petal_Length ~ Sepal_Length +
## Petal_Width
## Data: sql_iris (RxXdfData Data Source)
## File name: iris.xdf
## Dependent variable(s): Petal_Length
## Total independent variables: 3
## Number of valid observations: 150
## Number of missing observations: 0
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -1.50714 0.33696 -4.473 1.54e-05 ***
## Sepal_Length 0.54226 0.06934 7.820 9.41e-13 ***
## Petal_Width 1.74810 0.07533 23.205 2.22e-16 ***
## —
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ‘ 1
##
## Residual standard error: 0.4032 on 147 degrees of freedom
## Multiple R-squared: 0.9485
## Adjusted R-squared: 0.9478
## F-statistic: 1354 on 2 and 147 DF, p-value: < 2.2e-16
## Condition number: 9.9855

The output is the same as using open-source R function lm. The difference is that we can apply this function on a large table without needing to pull it in-memory.

Conclusions

SQL and MRS are designed for different purposes. Having them in the same environment, it's possible to use each of them in the most appropriate context: SQL to prepare the data, MRS to build advanced machine learning models. Also, the string manipulation tools provided by R allow us to build SQL queries, making it possible to write code extensions. Another option is to include R code within the SQL queries, although we haven't dealt with that in this article.

MSDN Blogs: 8/1 – Errata added for [MS-RMPR]: Rights Management Services (RMS): Client-to-Server Protocol


MSDN Blogs: 8/1 – Errata added for [MS-SAMR]: Security Account Manager (SAM) Remote Protocol (Client-to-Server)

MSDN Blogs: 8/1 – Errata added for [MS-WSDS]: WS-Enumeration Directory Services Protocol Extensions

MSDN Blogs: Microsoft rewards the best student projects of the Imagine Cup 2016 competition



On July 28 in Seattle, USA, the winners of Imagine Cup 2016 were chosen, presenting the best technology solutions in the Games, Innovation, and Social Projects categories. The 35 best teams from around the world took part in the international final, including the game designers of InfinitePizza from Russia with their game Partycles.

 

Moscow, July 29, 2016 – The Imagine Cup technology project competition was held for the 14th time; more than 110 talented young developers from different countries took part in the international final in Seattle. The best of them were awarded grants totaling $50,000, as well as the chance to meet Microsoft CEO Satya Nadella in person. The winners were chosen by an authoritative jury that included well-known experts such as Kiki Wolfkill, formerly an art director at Microsoft Studios and one of the most influential women in the game industry, and Mark Russinovich, programmer, author, and expert on the internals of Microsoft Windows.

 

The winner in the Games category was team PH21 from Thailand. They presented Timelie, a stealth puzzle game with unique gameplay. Its plot centers on Merza, a mysterious woman who broke into a secret laboratory and stole a device that helps foresee the near future. During her mission she met Alpha, a little girl with the ability to control time who had been used to create the device. They must work together to get out of the situation.

 

The Russian team InfinitePizza also took part in the final with their project Partycles, a simulator of elementary particle interactions. In each of the previous two years, teams from Russia won this same category: Brainy Studio from Perm with the game TurnOn, and the OVIVO project by team IzHard.

 

"Our team from Saint Petersburg looked competitive next to the others; by the end of their presentation every member of the jury was playing Partycles! Russian teams won the Games category in each of the previous two years, and this time our country was again represented by talented game developers. This confirms the high level of professionalism and motivation of our participants," said Dmitry Soshnikov, academic program coordinator in the strategic technologies department at Microsoft Russia. "We always support talented young people and run a whole range of educational initiatives for school and university students as part of the YouthSpark program. We hope that next year Russian teams will again take part in the final."

In the Innovation category, the winner was team ENTy from Romania with a project of the same name: a high-tech wearable device that tracks the balance of the inner ear and checks the position of the spine in real time.

 

In the Social Projects category, team AMANDA from Greece took the prize. The project's main goal is ICT-based behavioral analysis of children to detect bullying and other negative interference in their lives.

 

Microsoft runs a large number of programs aimed at the professional development of young specialists and at supporting promising technology projects. For 14 years, Imagine Cup has given talented students the chance to prove that their project works, obtain funding for its development, and win the endorsement of respected IT experts. For many, the competition has been the first step toward building their own startup.

 

More detailed information about the competition winners can be found at this link.

MSDN Blogs: Submissions using Windows 10, version 1607 and Windows Server 2016 are now being accepted!


Submissions covered by Windows 10, version 1607

The Windows Hardware Lab Kit (HLK) has been updated to support Windows 10, version 1607 and Windows Server 2016.

The HLK is available for download on the Hardware Dev Center:

HLK version 1607 enforces the Windows 10 hardware requirements and policies posted at https://aka.ms/compatreq and is designed for testing the following Windows 10 and Windows Server releases:

  • Client version 1607
  • Client version 1511
  • Client version 1507
  • Windows Server 2016

New Features in the HLK

This release of the HLK introduces two new features to the infrastructure:

Exporting failed HLK jobs

Failed jobs can now be exported and re-run on machines that are not connected to an HLK controller, enabling driver developers to easily diagnose and fix issues that cause the HLK test to fail. This feature supports all tests running on Desktop and Server operating systems except for multi-machine or manual tests.

Improved ability to diagnose failed HLK tests

If an HLK test fails due to a system crash on the client under test, there is now an eye-catching indicator (sad smiley face icon) in the Results tab. The information that is displayed will be the Bugcheck summary from the associated Bug Check along with a link to the HLK help file for debugging information. This feature supports tests running only on Desktop and Server operating systems.

Errata fixes and Expiration date

110 errata were fixed and are set to expire on 10/31/2016. Please open a CSS case for any errata that needs further investigation. All other errata have been reviewed and have been set to expire on 7/1/2017.

Playlists to support the incremental Windows releases

As Windows 10 continues to evolve with new features, there is an increasing need to differentiate the incremental versions of Windows 10 within the HLK. HLK version 1607 incorporates OS metadata to aid in determining which Windows 10 version is presently attached and which tests should target the specific version identified. These incremental identification changes are also reflected in the playlist for the HLK version 1607. There will now be two playlists available to support the incremental Windows 10 releases; one playlist is to be used with the HLK version 1511 and the other for HLK version 1607. In order to successfully test for Windows 10 compatibility and Windows Server 2016 certification, you will need the latest available Windows Hardware Compatibility Program Playlist for the correct OS version under test.

Both playlists are available here:

Compatibility Policy for Windows 10, version 1507/Windows 10, version 1511 drivers Factory-Installing on Windows 10, version 1607 image

Per Windows Hardware Compatibility Policy, all partners must transition to HLK version 1607 and use the HLK version 1607 Compatibility playlist no later than October 31, 2016. After this date, logs from HLK version 1511 and corresponding playlist will no longer be accepted for Compatibility submissions.

You must continue to use the Windows Hardware Certification Kit (HCK) version 2.1 to certify hardware for following operating systems:

  • Windows 7
  • Windows 8
  • Windows 8.1
  • Windows Server 2008 R2 (x64)
  • Windows Server 2012
  • Windows Server 2012 R2

For the first 90 days after the release of Windows 10, version 1607, OEMs looking to achieve Compatibility for systems shipping Windows 10, version 1607 may factory-install drivers for components that achieved Compatibility with Windows 10, version 1511. After 90 days, OEMs looking to achieve Compatibility for systems shipping Windows 10, version 1607 can factory-install only drivers for components that achieved Compatibility with Windows 10, version 1607. This is not applicable to OEMs certifying for Windows Server 2016, as all components within the system must be certified for Windows Server 2016 in order to be considered compatible.

Submitting test results for Windows 10, version 1607

As previously mentioned, submissions for Windows 10, version 1607 must be done using HLK version 1607. One important item to remember is that if you are submitting results for Windows 10 version 1607, the results must be packaged for submission using a controller with HLK version 1607 installed. If a controller with HLK version 1511 is used to package and submit results for Windows 10, version 1607, the submission will fail due to the data in the package not being in the correct format. Submissions affected by this will appear to be stuck in the Validating HCK/HLK Submission Package step. We will monitor for this scenario and reach out to partners with affected submissions. A controller with HLK version 1607 installed can be used to submit results from previous kits (HCK/HLK).

OEM down level system testing for Windows 10

OEMs looking to achieve compatibility for systems shipping Windows 10, version 1511 must continue to use HLK version 1511 to achieve compatibility for Windows 10, version 1511 until the OS version 1607 is required to be factory installed on systems.

MSDN Blogs: July 2016 release notes


The Microsoft Dynamics Lifecycle Services team is happy to announce the immediate availability of the July release of Lifecycle Services.

NEW FEATURES

SharePoint integration improvements in LCS (Preview)

With the July release, we have made significant improvements to simplify the LCS SharePoint integration for AX7 projects. The new framework uses user OAuth for performing operations in SharePoint.

Set up SharePoint in an LCS project

  1. To set up a SharePoint site in an LCS project, go to your LCS AX7 Project, scroll to the right, and click the Project Settings tile.
  2. On the Project settings page, click the SharePoint Online tab.
  3. Enter your SharePoint site URL and then click Next. Make sure the account you are using has access to the SharePoint that you are trying to setup.


  4. Select the folder you want to use for the project. If you don't have a folder, go to SharePoint and create a new folder. Then select that folder from the selection dropdown and click Save.


After the setup is successfully complete, you will see a screen similar to the following:

[Screenshot: SharePoint Online setup completed]

When you go to the SharePoint Online library in the project, you can see the list of documents in the folder.


Add additional project documents by clicking +.


LCS project team members who have access to SharePoint folder can directly download files from LCS and update new project documents.

In SharePoint, you can see the document in the SharePoint folder.


LCS solution improvements

If you are an ISV who is working on an AX solution, you can now add the Power BI report model to the solution from solution management.


Customers looking for ISV solutions on Dynamics AX can now find them at Microsoft AppSource. Learn more about AppSource here.


If you are interested in publishing your Dynamics AX solution to Microsoft AppSource, find more information about the listing process here.
