Channel: Randy Riness @ SPSCC aggregator

MSDN Blogs: Connectivity issues with Packaging service – 11/15 – Investigating


Update: Tuesday, 15 November 2016 23:11 UTC

Our DevOps team continues to investigate issues with the Packaging service. The root cause is not fully understood at this time. Some customers who installed the extension after 11/2 continue to experience connectivity issues. We are working to establish the start time for the issue; initial findings indicate that the problem began at 15 Nov 2016 19:15 UTC. We currently have no estimated time for resolution.

  • Work Around: You can reach our support team at https://www.visualstudio.com/team-services/support/ to unblock your issue 
  • Next Update: Before 02:00 16 Nov 2016 UTC


Sincerely,
Bapayya


MSDN Blogs: Microsoft Translator launching Neural Network based translations for all its speech languages


Microsoft Translator is now powering all speech translation through state-of-the-art neural networks.

All speech translation apps that use this service, such as Skype Translator and the Microsoft Translator app for mobile devices, are now using neural network technology.  Furthermore, the technology is available to all developers and end-users who want to use the Microsoft Translator speech API to integrate the technology into their favorite apps and services.

In addition to the nine languages supported by the Microsoft Translator speech API, namely Arabic, Chinese Mandarin, English, French, German, Italian, Brazilian Portuguese, Russian and Spanish, neural networks also power Japanese text translations. These ten languages together represent more than 80% of the translations performed daily by Microsoft Translator.

Neural network technology has been used for the last few years in many artificial intelligence scenarios, such as speech and image processing. Many of these capabilities are available through Microsoft Cognitive services. Neural networks are making in-roads into the machine translation industry, providing major advances in translation quality over the existing industry-standard Statistical Machine Translation (SMT) technology. Because of how the technology functions, neural networks better capture the context of full sentences before translating them, providing much higher quality and more human-sounding output.

Even though Microsoft’s use of neural networks for speech and text translation is still at an early stage, it is producing superior translations to what SMT provides. As with any new technology (we’re in the so-called ascent phase of the s-curve), we know the quality improvements neural networks provide today are only a first step towards future improvements. You can learn more about neural network-powered translation here.

By leveraging the scale and power of Microsoft’s AI supercomputer and the Microsoft Cognitive Toolkit, the team was able to release ten languages simultaneously; additional languages will be supported over time.

The ten languages are in production today, and available for users for translated Skype calls, for Windows desktop and the Skype Preview app for Windows 10, and in the conversation feature of the Microsoft Translator app for iOS and Android.

Anyone can directly try and compare the improved quality of these new neural network models by testing translations at http://translator.microsoft.com

In addition, for developers and enterprise customers of the Microsoft Translator API, the deployment into production of these new models comes as Microsoft Translator becomes available for test and purchase on the Azure portal. To get started with the Microsoft Translator API using your Azure subscription, click here. If you are already a Microsoft Translator subscriber and want to learn how to move your subscription to Azure, click here.

Neural network-powered translation is available for developers using both the speech and text APIs:

  • All speech API calls are neural network-powered beginning today.
  • Text API calls using the “generalnn” standard category are powered by neural networks for translations between the above 10 languages. Learn more about standard categories here.
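If you want to try the neural models from code rather than the web page, here is a minimal PowerShell sketch of a Text API call using the "generalnn" category. It assumes you have a Translator subscription key, and it uses the token and v2 Translate endpoints as they existed at the time of this post; treat the exact URIs and the response-parsing line as assumptions to verify against the current documentation.

# Placeholder: a Microsoft Translator (Cognitive Services) subscription key.
$subscriptionKey = '<your-translator-key>'

# Exchange the subscription key for a short-lived access token.
$token = Invoke-RestMethod -Method Post `
    -Uri 'https://api.cognitive.microsoft.com/sts/v1.0/issueToken' `
    -Headers @{ 'Ocp-Apim-Subscription-Key' = $subscriptionKey }

# Ask the Text API for a translation, requesting the neural models via the "generalnn" category.
$text = 'Neural networks better capture the context of full sentences.'
$uri  = 'https://api.microsofttranslator.com/v2/Http.svc/Translate' +
        '?text=' + [uri]::EscapeDataString($text) +
        '&from=en&to=de&category=generalnn'

$result = Invoke-RestMethod -Method Get -Uri $uri -Headers @{ 'Authorization' = "Bearer $token" }

# The v2 endpoint returns a small XML <string> payload containing the translation.
$result.string.'#text'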

 

Learn More:

MSDN Blogs: Maintenance Mode for OMS Alerts


 

Azure Automation Runbook to enable and disable OMS Alerts

OMS is a hyper-scale, hybrid and heterogeneous monitoring system which can alert on thresholds from any system, anywhere. The alerting can be an email notification, a webhook or even a runbook.

Now what happens when you want to suspend alerts during a maintenance window? SCOM has the ability to pause workflows and suspend alerts for a period. In OMS you would have to disable the alerts one by one:


Or you can trigger or schedule a runbook to do it for you!

This blog takes you step by step through setting up runbooks to start and stop a maintenance window.

First things first. You’ll need:

  • OMS workspace with alerts configured
  • Azure Automation

That’s it!

Step 1 – Create your SPN for authentication:

I use a service principal to get a token for authentication.

You can find more details here: https://docs.microsoft.com/en-us/azure/resource-group-authenticate-service-principal

You can create it in the new portal, or via PowerShell:

$app = New-AzureRmADApplication -DisplayName "{app-name}" -HomePage "https://{your-domain}/{app-name}" -IdentifierUris "https://{your-domain}/{app-name}" -Password "{your-password}"
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId
New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $app.ApplicationId.Guid

Or via the portal:

Click on Azure Active Directory, then choose “App Registrations”:


Click on Add, enter a name for the app, choose “Web App / API” and choose a Sign-on URL, then click on Create.


Click on the app, then Settings and then "Keys". Create a new key and click on Save. Make sure you copy the key before you close the blade.


Take note of the Application ID and run this PowerShell line (if you created the app in the portal rather than with the script above, substitute the Application ID you noted for $app.ApplicationId.Guid, since the $app variable will not be defined):

New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $app.ApplicationId.Guid
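
Optionally, you can verify the service principal before wiring it into Automation; a minimal sketch assuming the AzureRM module, with the Application ID, key and tenant ID you noted above as placeholders:

# Placeholders: the Application ID and key created above.
$appId  = '<application-id>'
$appKey = ConvertTo-SecureString '<application-key>' -AsPlainText -Force
$cred   = New-Object System.Management.Automation.PSCredential ($appId, $appKey)

# Sign in as the service principal and confirm the role assignment took effect.
Add-AzureRmAccount -ServicePrincipal -TenantId '<tenant-id>' -Credential $cred
Get-AzureRmRoleAssignment -ServicePrincipalName $appId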

Step 2 – Add Assets to your Automation Account:

Add a connection asset for your SPN called 'AzureRunAsSPN', containing your Subscription ID, your Tenant ID, the SPN Application ID and the application key (entered in the certificate thumbprint field):


Add a variable for your OMS workspace details called “OMSWorkspaceName”:


And another one for the name of the resource group for your OMS called “OMS-Resource-Group-Name”:


Step 3 – Create your runbooks:

Create a PowerShell runbook called "Start-OMS-MaintenanceMode" with the following code:


$AlertsEnabled = "false"

$OMSResourceGroupId = Get-AutomationVariable -Name 'OMS-Resource-Group-Name'
$OMSWorkspaceName = Get-AutomationVariable -Name 'OMSWorkspaceName'

$SPNConnection = Get-AutomationConnection -Name 'AzureRunAsSPN'
$SubscriptionID = $SPNConnection.SubscriptionId
$TenantID = $SPNConnection.TenantID
$AzureUserNameForOMS = $SPNConnection.ApplicationId
$AzureUserPasswordForOMS = $SPNConnection.CertificateThumbprint

#region Get Access Token
$TokenEndpoint = "https://login.windows.net/{0}/oauth2/token" -f $TenantID
$ARMResource = "https://management.core.windows.net/"

$Body = @{
    'resource'      = $ARMResource
    'client_id'     = $AzureUserNameForOMS
    'grant_type'    = 'client_credentials'
    'client_secret' = $AzureUserPasswordForOMS
}

$params = @{
    ContentType = 'application/x-www-form-urlencoded'
    Headers     = @{'accept' = 'application/json'}
    Body        = $Body
    Method      = 'Post'
    URI         = $TokenEndpoint
}

$token = Invoke-RestMethod @params -UseBasicParsing
$Headers = @{'authorization' = "Bearer $($Token.access_token)"}
#endregion

# Get all saved searches in the workspace
$savedSearches = (([string] (Invoke-WebRequest -Method Get -Uri "https://management.azure.com/subscriptions/$SubscriptionID/Resourcegroups/$OMSResourceGroupId/providers/Microsoft.OperationalInsights/workspaces/$OMSWorkspaceName/savedsearches?api-version=2015-03-20" -Headers $Headers -ContentType 'application/x-www-form-urlencoded' -UseBasicParsing).Content) | ConvertFrom-Json).Value.id

foreach ($savedSearch in $savedSearches)
{
    # Call for schedules associated with the saved search
    $schedules = ([string] (Invoke-WebRequest -Method Get -Uri "https://management.azure.com/$savedSearch/schedules?api-version=2015-03-20" -Headers $Headers -ContentType 'application/x-www-form-urlencoded' -UseBasicParsing).Content) | ConvertFrom-Json

    # Check if the saved search has a schedule (i.e. an alert)
    if ($schedules -ne $null)
    {
        # Flip the Enabled property and write it back to the schedule
        $schedules.Properties.Enabled = $AlertsEnabled
        $scheduleurl = $schedules.id + "?api-version=2015-03-20"
        $body = $schedules | ConvertTo-Json

        Invoke-WebRequest -Method Put -Uri "https://management.azure.com/$scheduleurl" -Headers $Headers -ContentType 'application/json' -Body $Body -UseBasicParsing
    }
}

You can now associate whatever schedule suits you.
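
For example, the runbook can be linked to a schedule, or started on demand, from PowerShell; a hedged sketch using the AzureRM.Automation cmdlets, with placeholder resource group, account and schedule names:

# Placeholders for your Automation account.
$rg      = '<automation-resource-group>'
$account = '<automation-account-name>'

# Create a one-time schedule for the start of the maintenance window...
New-AzureRmAutomationSchedule -ResourceGroupName $rg -AutomationAccountName $account `
    -Name 'MaintenanceStart' -StartTime (Get-Date).AddHours(1) -OneTime

# ...and link it to the Start-OMS-MaintenanceMode runbook.
Register-AzureRmAutomationScheduledRunbook -ResourceGroupName $rg -AutomationAccountName $account `
    -RunbookName 'Start-OMS-MaintenanceMode' -ScheduleName 'MaintenanceStart'

# Or simply trigger maintenance mode on demand.
Start-AzureRmAutomationRunbook -ResourceGroupName $rg -AutomationAccountName $account `
    -Name 'Start-OMS-MaintenanceMode'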

To stop maintenance mode, create another runbook called “Stop-OMS-MaintenanceMode”, changing the following line in the code:

From $AlertsEnabled = "false"

To $AlertsEnabled = "true"

MSDN Blogs: The day Microsoft’s CEO met ordinary kids doing extraordinary things from Sydney Secondary College


Satya Nadella meets with kids from Sydney Secondary College

By Pip Cleaves, head teacher, learning innovation, at Sydney Secondary College – Leichhardt Campus

Sydney Secondary College Leichhardt Campus works hard at being future focused by experimenting with new ways to learn that encourage students to dream big. So when we heard we would meet Satya Nadella at the flagship Microsoft Store in Sydney, plenty of our students said they were keen to show him what they've achieved. We asked students to showcase their ideas so we could decide on two groups to take to the store.

We have about 910 students in years 7 to 10, and we think of ourselves as a pretty ordinary school with a technology edge. We emphasise one-on-one, future-focused learning, which is a wonderful way for our students – average Aussie kids in Sydney’s Inner West – to unleash their natural curiosity about the world around them.

Among the students who wanted to show their work to Satya were kids in years 9 and 10 who participate in a project-based learning subject, ACCORD, in which they choose a topic and study independently. Like all of our subjects, ACCORD uses OneNote and many students also make imaginative use of the digital storytelling application Sway.

Students participating in ACCORD are working on a variety of projects. These include a healthy snack food business from one of our girls, offering awesome vegan chocolate cheesecake. Another had her guitar on hand to play some of the music she has made into an album.

In addition to these culinary and musical delights, we also had a few flying drones coded by one of our lunchtime technology groups, the Tech Ninjas; our school media group, Leichhardt TV; and a video developed by a student that has just won an Australia-wide competition.

In addition, we also have kids from a year 8 Japanese class who play Minecraft and have constructed a medieval Japanese village in the program.

My own kids introduced me to Minecraft and I had long wanted to use it to create a learning experience in class. So when Minecraft for Education became available, I structured a unit around the question ‘What would Musashi Miyamoto do if he woke up in Minecraft?’

To work out what the famous samurai would do, we divided the class into different social groups: samurai, shogun, daimyo (feudal lords), peasants and merchants. We loaded structured content into OneNote and the kids created the village, working out rules and consequences.

We also have a class in our support unit for students with autism. These students enjoy exploring robotics and coding with Sphero app-enabled robots and littleBits, which are cool modular electronics components that snap together.

After much deliberation, we decided to take some Minecraft players and some students from the support unit to the Microsoft Store.

Once we were logged in and playing in the store, Satya dropped in to chat.

In the collaboration space of OneNote, the students explained their research and what they were building. It was a fantastic success. The learning, engagement and fun were there to see. And we thought Satya would like to see what they had built.

Satya spent some time with the special needs students, sharing the infectious joy they experience when learning comes through doing.

As you walk around our campus, you see how technology is part of school life. We are a bring-your-own-device school, which works incredibly well for us, mainly because 85 per cent of teachers use OneNote to deliver lessons. That’s up from 0 per cent in March 2015. Part of my job is coaching teachers in learning innovation, working with them one on one. They have all cooperated because OneNote gives them what they need, and it works well with the educational website Edmodo for communication and collaboration.

If you empower teachers to try new technology at a level they are comfortable with, you lay the foundation for success. We have a lovely, blended environment that suits each teacher’s skills and capabilities; one where they feel confident enough to keep trying new things.

For kids, trying new things comes naturally. Their curiosity and fearlessness should be celebrated. It was great to see Satya do just that with our kids.

MSDN Blogs: Photosynth viewer code now available as open source


Ahead of our February 6th shutdown we released an offline viewer for most of the content on the Photosynth site last week.

Today we are following through by open sourcing this viewer to make sure it is maintainable by any interested party in the future. Anyone can build their own version of the current viewer or fork the repository and make changes. Start here: https://github.com/photosynth/offlineViewer

 

MSDN Blogs: Data Exposed: Azure Data Lake GA!


This week’s episode of Data Exposed welcomes Saveen Reddy and Rajesh Dadhia to the show to make the important announcement that Azure Data Lake Store and Azure Data Lake Analytics are now generally available! This is exciting news! Barely a minute into the show, Saveen and Rajesh, both of whom are GPMs in the Big Data team at Microsoft, give us the great news. For those who are unfamiliar with Azure Data Lake, Rajesh and Saveen give us a quick overview of what both Azure Data Lake Store and Azure Data Lake Analytics are all about.

Rajesh focuses on Azure Data Lake Store, a hyper-scale data repository for big data analytic workloads. He explains how ADLS compares to other cloud storage options, and talks about how enterprises can leverage ADLS for storage of any type of data without limitations on size. Saveen then takes over to discuss Azure Data Lake Analytics and shows how ADLA relates to HDInsight and the other compute components. Both Rajesh and Saveen talk about how easy it is to get started and create an ADLS and ADLA account, and the ease with which you can develop massively parallel programs to gain insights into your data. They provide insight into the architecture and how compute and data work together to provide highly optimized and scalable analytical solutions.
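
To give a sense of how little is needed to get started, here is a minimal PowerShell sketch that creates the two account types discussed in the episode, using the AzureRM.DataLakeStore and AzureRM.DataLakeAnalytics cmdlets; the resource group, account names and region below are placeholders, not values from the show.

# Placeholders: resource group, account names and region.
$rg = 'myAdlResourceGroup'
New-AzureRmResourceGroup -Name $rg -Location 'East US 2'

# Azure Data Lake Store account (the hyper-scale repository).
New-AzureRmDataLakeStoreAccount -ResourceGroupName $rg -Name 'myadlsaccount' -Location 'East US 2'

# Azure Data Lake Analytics account, bound to the store as its default data source.
New-AzureRmDataLakeAnalyticsAccount -ResourceGroupName $rg -Name 'myadlaaccount' `
    -Location 'East US 2' -DefaultDataLakeStore 'myadlsaccount'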

Learn more about Azure Data Lake Store: https://azure.microsoft.com/en-us/services/data-lake-store/

Learn more about Azure Data Lake Analytics: https://azure.microsoft.com/en-us/services/data-lake-analytics/

MSDN Blogs: How to build a sales force with an Ironman attitude


How to build a sales force with an Ironman attitude

An Ironman event consists of a 3.8 km swim, 180 km on a bike and a 42.2 km run. Written like that, it’s a straightforward statement, but to complete one requires commitment and the right attitude.

What’s described above is the same as what goes on in your business. Let’s say you have a target of AUD $5m to achieve this fiscal year – you need to know you can count on your team to make it happen, because if you can’t, you may as well go home. So, using some of the knowledge and insights from the Incredibleresults Sales Academy, and my personal experiences of Ironman, here are five quick tips on how to build a sales force with the right kind of attitude.

It’s ok to have different personalities

People who participate in Ironman are just like a sales team. Some will brag incessantly and bore you to death with tales of what makes them so successful. Others will quietly get on with it and accept awards with humility. Some just want to get there faster than the guy 10 years younger than them. These personalities each have the same goal, but they’ll address it in different ways depending on their motivation; the challenge for you is to make sure you understand what that is.

Understand the personalities, and you’ll understand their motivators

Not everyone wants a medal

If you acknowledge that everyone’s drivers are different, increasing every sales person’s quota by 10% may not give you the desired results. Think about who wants to beat their personal best, who just wants to finish the race and who craves the top spot on the podium. Giving each of your sales people a challenge that is truly personal to them will get you the biggest buy-in.

Make the success of the business a truly personal matter

Do your team understand what’s required?

It’s unlikely that anyone’s turned up for an Ironman and been surprised that they have to swim, but lots of people accept sales roles without understanding what’s required of them, and plenty of sales managers hire people expecting them to instinctively know what’s needed. Make sure you are crystal clear when it comes to targets – especially if your sales people sell a range of products or services.

Be clear and check understanding

Have your team got the right skills?

There’s a whole myriad of skills involved in the selling process, so make a list of what you expect your team to be able to do – e.g. negotiate, influence, prospect, close. Then look at your team and assess what they can do, and what they need help with. Be honest – it’s not possible to improve if you can never pinpoint areas for adjustment or growth.

Regularly assess your team – even elite sales people benefit from coaching

Practice and predictability create permanency

Anyone participating in an Ironman will have a series of smaller competitions planned in to improve their speed, strength, stamina and (crucially) keep them in the correct mindset. Help your team get there by making sure you have reviews, training and coaching scheduled in at regular intervals throughout the year – they should not be a surprise. Doing so helps create an atmosphere of continuous growth through personal commitment to development. This generates positive habits in your sales people (and helps you to establish which ones aren’t bringing the right attitude to the table).

Creating a framework for growth encourages the right behaviours

 

That’s just a few thoughts on how you can create the right kind of attitude in your sales force; to find out more about the Sales Academy (or learn a bit about my Ironman experiences), get in touch.


 

MSDN Blogs: Management Agent Configuration – Part 1: Active Directory Management Agent


Hey party people – long time, no see! You may have noticed (or maybe you didn’t) that I took a hiatus from blogging for a while. Well I’m back and kicking off my return with another post series, this time about management agents. Although eventually I’d like to cover the configuration of all commonly used management agents, I’m going to start with the few core MAs you’re probably going to need (ADMA, MIMMA, SQLMA). Bear with me as some of the screenshots I’m using are recycled and I will need to update them. That being said, all the technical details are the same and I want to go ahead and get this out there where it might help someone.

 

So, without further ado, let’s get rollin’…

 

Before we can manipulate users and/or groups with the FIM Synchronization Engine, it is necessary that we create Management Agents. Here, we will create a Management Agent for connecting to Active Directory.

 

Begin by opening the Synchronization Engine


 

In the menu on the top right-hand corner, select “Create”


 

This will open the “Create Management Agent” wizard. For “Management agent for:”, select “Active Directory Domain Services”. Enter a name for this MA, then click “Next” to continue


 

Enter your Forest name, as well as an administrative user account, its password and domain, then click “Next”


 

Select the partition you wish to manage. Next, click on “Containers”


 

This will open a list of available containers. Select the ones you wish to manage with FIM, then click “OK”


 

This will return you to the previous window. Click “Next” to continue.


 

For the time being, we will leave this default. Click “Next” to continue.


 

Provisioning hierarchy, in case you’re wondering, gives us the ability to create OUs that currently do not exist and bring them into scope based on a defined path in the DN. For example, if I am attempting to build a user DN like:

 

CN=Abe Lincoln,OU=Republicans,OU=Presidents,DC=Contoso,DC=Local

 

And if there is no “Republicans” OU under “Presidents” and I have provisioning hierarchy configured, it will automagically create it and bring it into scope for me. While this can be very handy in certain circumstances (i.e., acquisitions and mergers), it can also get you into some serious trouble if you’re building malformed DNs.

 

Under “Object Types”, place a check in the box next to “user” and click “Next”


 

Select the attributes you wish to manage, then click “Next”


 

For the “Configure Connector Filter” tab, we’re going to leave these default (blank) and click “Next”.


 

For “Join and Projection Rules”, select the “User” and click “New Join Rule”.


 

Under “Data Source Attribute” and “Metaverse Attribute”, select the corresponding (unique) values you’d like to attempt a join on. Here, we see I am mapping “employeeNumber” in AD to the custom attribute “PoliticianID” in my metaverse. Once you have made your selection, click “Add Condition”.

 


 

You may then receive the following warning:


 

Click “OK” to continue.

 

Here we see our join rule:


 

At this point, you may follow the same steps to add additional join criteria, as such:


 

It is worth noting that you may have any number of join conditions here, as we would prefer a join over the possible projection of a duplicate object. Also of interest: these conditions are evaluated as an “or” – it starts with the first condition and, if a join cannot occur, continues down the list attempting joins until there are no more criteria. At that point a projection happens.

 

For “Attribute Flow”, you may leave this default and click “Next” to continue.


 

For “Deprovisioning”, you may leave this default and click “Next” to continue.


 

For an explanation of these options, please see this post for disconnections, this post for explicit disconnections and this post for deletions.

 

For “Extensions”, you may leave this default. Now, click “Finish”


 

 

Questions? Comments? Love FIM so much you can’t even stand it?

EMAIL US >WE WANT TO HEAR FROM YOU!<

## https://blogs.msdn.microsoft.com/connector_space##


MSDN Blogs: Docker containers – Should I use .NET Core or .NET Framework?



 

The short answer is: “For Docker containers, use .NET Core whenever possible”. But below is a summary decision table depending on your architecture or application type and the server operating system you are targeting for your Docker containers.

Take into account that if you are targeting Linux containers you will need Linux based Docker hosts (VMs or Servers) and in a similar way, if you are targeting Windows containers you will need Windows Server based Docker hosts (VMs or Servers).


 

Architecture / App Type | Linux containers | Windows containers
Microservices | .NET Core | .NET Core
Monolithic deployment App | .NET Core | .NET Framework or .NET Core
Best-in-class performance and scalability | .NET Core | .NET Core
Windows Server “brown-field” migration to containers | -- | .NET Framework
Containers “green-field” | .NET Core | .NET Core
ASP.NET Core | .NET Core | .NET Core recommended; .NET Framework is possible
ASP.NET 4 (MVC 5, Web API 2) | -- | .NET Framework
SignalR services | .NET Core in upcoming releases | .NET Framework; .NET Core in upcoming releases
WCF, WF and other traditional frameworks | -- | .NET Framework
Consumption of Azure services | .NET Core (eventually all Azure services will provide client SDKs for .NET Core) | .NET Framework; .NET Core (eventually all Azure services will provide client SDKs for .NET Core)

 

And here’s a written summary about it:

You should use .NET Core for your containerized Docker server application when:
•    You have cross-platform needs. You want to use Linux containers and Windows containers.
•    Your application architecture is based on microservices.
•    You need best-in-class high performance and hyper-scale.

You should use .NET Framework for your containerized Docker server application when:
•    Your application currently uses .NET Framework and has strong dependencies on Windows.
•    You need to use third-party .NET libraries or NuGet packages not available for .NET Core.
•    You need to use .NET technologies that are not available for .NET Core.
•    You need to use a platform that doesn’t support .NET Core.
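
To make the table concrete, here is a hedged sketch of pulling matching base images with the Docker CLI; the repository names reflect the public Docker Hub images at the time of writing and are assumptions rather than part of this guidance:

# .NET Core base image (Linux by default; Windows Nano Server variants are published under separate tags).
docker pull microsoft/dotnet

# .NET Framework / ASP.NET 4.x base image for Windows Server Core containers.
docker pull microsoft/aspnet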

I’m writing further details about this subject in a separate and longer document, but I thought it would be good to share this summary table with the community and get some feedback. Please post your feedback here on the blog post or send me direct feedback to cesardl at microsoft.com.

MSDN Blogs: Management Agent Configuration–Part 2: FIM Service Management Agent


This is the second post in a series on management agent configuration. In Part 1, I covered configuration of an Active Directory management agent. In this post, I’d like to step through the FIM Service Management Agent (FIMMA). While you don’t necessarily have to have a FIMMA, you cannot move data between the Portal and Sync service without one.

 

Before we can manipulate users and/or groups with the FIM Synchronization Engine, it is necessary that we create Management Agents. Here, we will create a Management Agent for connecting the Synchronization Engine with the FIM Service Portal.

 

Begin by opening the Synchronization Engine


 

In the menu on the top right-hand corner, select “Create”


 

This will open the “Create Management Agent” wizard. For “Management agent for:”, select “FIM Service Management Agent”. Enter a name for this MA, then click “Next” to continue


 

Enter the name of the server, database and FIM Service base address. Next, select “Windows Integrated Authentication” and enter the previously created service account, password and domain, then click “Next” to continue.


 

In the “Object Types” window, be sure to select “Person” and then click “Next” to continue.


 

In the “Attributes” window, you may select as many (or as few) attributes as you wish. Please note, however, that only attributes selected here will be available in the FIM Portal.


 

For “Connector Filter”, you may configure these using the same steps found under this tab on the ADMA, found here. In my environment, I filter two accounts: administrator and the Built-in Synchronization Account. Administrator is the default portal admin account (typically, the account you were logged in as when you installed the service/portal). The Built-in Synchronization Account is a default account (and very important one!) that gets created during the install. This is the account which fires workflows, performs modifications and generally does work for you in FIM; break it and everything goes off the rails.


 

For “Configure Object Type Mappings”, as a best practice, there are two things we should do. First, select “Group”, click on “Add Mapping” and in the drop-down menu next to “Metaverse object type:”, select “group”. Click “OK”.


 

Next, select “Person”, click on “Add Mapping”, and in the drop-down menu next to “Metaverse object type:”, select “person”. Click “OK”, and then click “Next” to continue.


 

For “Attribute Flow”, you may leave these default. Please note, if you wish to flow custom attributes, you will need to create an associated flow here. Click “Next” to continue.


 

For “Deprovisioning”, you may choose the default, choose to make explicit disconnectors or choose to stage a deletion. Click “Next” to continue.


 

“Extensions” may be left default. To complete configuration and build the Management Agent, click “Finish”


 

Questions? Comments? Love FIM so much you can’t even stand it?

EMAIL US !

>WE WANT TO HEAR FROM YOU<

## https://blogs.msdn.microsoft.com/connector_space##

MSDN Blogs: Management Agent Configuration – Part 3: SQL Management Agent


This is the third part in a series dealing with basic configuration of commonly used management agents. In Part 1 we looked at configuration of an Active Directory Management Agent (ADMA), and in Part 2 we looked at the configuration of the FIM Service Management Agent (FIMMA). With this post, we’ll take a look at another common type, a MSFT SQL Management Agent (SQLMA).

 

Now we are going to create a management agent (MA) for Microsoft SQL Server. This type of MA may be used for any connected data source which utilizes Microsoft SQL Server as the underlying data system.

 

To begin, open the Synchronization Engine. In the right-hand menu, select “Create”.


 

For “Management agent for:”, select “SQL Server” from the drop-down list. Enter a name for this MA and click “Next” to continue.


 

Next, enter the connection information for the server, database, table/view and authentication, then click “Next” to continue.


 

Once successfully authenticated to the database/table/view, the MA will pull in the column names automatically. Review that the data is correct, then click “Set Anchor…”. The anchor attribute value must be unique to each user object and exist in both the connected data source (in this case, Microsoft SQL Server) as well as the metaverse.


 

For “Configure Connector Filter”, the defaults may be used. Click “Next” to continue.


 

Now we must configure join rules. These rules will allow a user object in the connected data source to join its associated object in the metaverse, rather than creating another (duplicate) object. For a join to occur, the attribute in the join rule must be unique to each individual user object. Also, multiple join rules can be created to ensure a join occurs. In this scenario, joins are attempted between the “EmployeeID” and “PoliticianID” attributes. To create a join rule, click on “New Join Rule…”. Click “Next” when finished.


 

For “Configure Attribute Flow”, we may leave this default. Much like an ADMA, we want to put everything into FIM and then provision intelligently based on what we need. Click “Next” to continue.


 

For “Configure Deprovisioning”, you may leave this default (to create disconnectors), or select accordingly based on your environment. Click “Next” to continue.


For an explanation of these options, please see this post for disconnections, this post for explicit disconnections and this post for deletions.

 

For “Configure Extensions”, we will leave this default.


 

 

 

Questions? Comments? Love FIM so much you can’t even stand it?

EMAIL US !

>WE WANT TO HEAR FROM YOU<

## https://blogs.msdn.microsoft.com/connector_space##

MSDN Blogs: AX client cannot connect to AOS or crash at start up when AX database restored from another environment


Two symptoms have been detected so far:

  1. The AX client may not crash, but you will see a "failed to connect to AOS" dialog a second or two after launching Ax32.exe
  2. The AX client will crash with the call stack below:

    00 Ax32!std::_Tree<std::_Tmap_traits<formMessageSecurityRule::Message,formMessageSecurityRule *,std::less<formMessageSecurityRule::Message>,std::allocator<std::pair<formMessageSecurityRule::Message const ,formMessageSecurityRule *> >,0> >::_Lbound

    01 Ax32!std::_Tree<std::_Tmap_traits<formMessageSecurityRule::Message,formMessageSecurityRule *,std::less<formMessageSecurityRule::Message>,std::allocator<std::pair<formMessageSecurityRule::Message const ,formMessageSecurityRule *> >,0> >::lower_bound

    02 Ax32!std::_Tree<std::_Tmap_traits<formMessageSecurityRule::Message,formMessageSecurityRule *,std::less<formMessageSecurityRule::Message>,std::allocator<std::pair<formMessageSecurityRule::Message const ,formMessageSecurityRule *> >,0> >::find

    03 Ax32!formMessageSecurity::DoMessageSecurity

    04 Ax32!AxWnd::AxWndBase::ProcessViaMSGInterface

    05 Ax32!AxWnd::AxMainFrame::ProcessWindowMessage

    06 Ax32!WTL::CMDIFrameWindowImpl<AxWnd::AxMainFrame,WTL::CMDIWindow,ATL::CWinTraits<114229248,262400> >::MDIFrameWindowProc

    07 user32!_InternalCallWinProc

    08 user32!InternalCallWinProc

    09 user32!UserCallWinProcCheckWow

    0a user32!CallWindowProcAorW

    0b user32!CallWindowProcW

    0c Ax32!ATL::CContainedWindowT<ATL::CWindow,ATL::CWinTraits<1442840576,0> >::DefWindowProcW

    0d Ax32!ATL::CContainedWindowT<ATL::CWindow,ATL::CWinTraits<1442840576,0> >::WindowProc

    0e user32!_InternalCallWinProc

    0f user32!InternalCallWinProc

    10 user32!UserCallWinProcCheckWow

    11 user32!DispatchClientMessage

    12 user32!__fnINDEVICECHANGE

    13 ntdll!KiUserCallbackDispatcher

    14 user32!_PeekMessage

    15 user32!PeekMessageW

    16 combase!CCliModalLoop::MyPeekMessage

    17 combase!CCliModalLoop::HandlePendingMessage

    18 combase!CCliModalLoop::HandleWakeForMsg

    19 combase!CCliModalLoop::BlockFn

    1a combase!ClassicSTAThreadWaitForHandles

    1b combase!CoWaitForMultipleHandles

    1c clr!MsgWaitHelper

    1d clr!Thread::DoAppropriateAptStateWait

    1e clr!Thread::DoAppropriateWaitWorker

    1f clr!Thread::DoAppropriateWait

    20 clr!CLREventBase::WaitEx

    21 clr!CLREventBase::Wait

    22 clr!WaitForEndOfShutdown_OneIteration

    23 clr!WaitForEndOfShutdown

    24 clr!EEShutDown

    25 clr!HandleExitProcessHelper

    26 clr!EEPolicy::HandleExitProcess

    27 clr!ForceEEShutdown

    28 clr!ExternalShutdownHelper

    29 clr!ShutdownRuntimeWithoutExiting

    2a mscoreei!RuntimeDesc::ShutdownAllActiveRuntimes

    2b mscoreei!CorExitProcess

    2c msvcr90!__crtCorExitProcess

    2d msvcr90!doexit

    2e msvcr90!exit

    2f Ax32!ExceptionFilterFunc

    30 KERNELBASE!UnhandledExceptionFilter

    31 ntdll!RtlpThreadExceptionFilter

    32 ntdll!__RtlUserThreadStart

    33 ntdll!_RtlUserThreadStart

But the root cause behind this is dirty data in the SYSSERVERCONFIG table: if the SERVERID field is incorrect, or the table holds more records than there are actual AOS instances, the client has a chance of connecting to the wrong SERVERID and failing.
The actual error call stack in the AX client is as below:

00 kernelbase!RaiseException

01 rpcrt4!RpcpRaiseException

02 rpcrt4!RpcRaiseException

03 rpcrt4!NdrGetBuffer

04 rpcrt4!NdrClientCall2

05 ax32!ServerGetLoadBalanceServers

06 ax32!GetLoadBalanceServers

07 ax32!CSessionMgr::GetLoadBalanceServers

08 ax32!Srv_DBSessionGet

09 ax32!CSessionMgr::CreateNewClientSession

0a ax32!CSessionMgr::CreateMainSession

0b ax32!gopts

0c ax32!xApp::init

0d ax32!AxWnd::AxMainFrame::OnXMInit

0e ax32!AxWnd::AxMainFrame::ProcessWindowMessage

0f ax32!WTL::CMDIFrameWindowImpl<AxWnd::AxMainFrame,WTL::CMDIWindow,ATL::CWinTraits<114229248,262400> >::MDIFrameWindowProc

10 user32!_InternalCallWinProc

11 user32!InternalCallWinProc

12 user32!UserCallWinProcCheckWow

13 user32!CallWindowProcAorW

14 user32!CallWindowProcW

15 ax32!ATL::CContainedWindowT<ATL::CWindow,ATL::CWinTraits<1442840576,0> >::DefWindowProcW

16 ax32!ATL::CContainedWindowT<ATL::CWindow,ATL::CWinTraits<1442840576,0> >::WindowProc

17 user32!_InternalCallWinProc

18 user32!InternalCallWinProc

19 user32!UserCallWinProcCheckWow

1a user32!DispatchMessageWorker

1b user32!DispatchMessageW

1c ax32!AxWnd::AxApp::go

1d ax32!AxWnd::AxApp::main

1e ax32!wWinMain

1f ax32!__tmainCRTStartup

20 ax32!DomainBoundILStubClass.IL_STUB_PInvoke()

21 clr!CallDescrWorkerInternal

22 clr!CallDescrWorkerWithHandler

23 clr!MethodDescCallSite::CallTargetWorker

24 clr!MethodDescCallSite::Call_RetArgSlot

25 clr!RunMain

26 clr!Assembly::ExecuteMainMethod

27 clr!SystemDomain::ExecuteMainMethod

28 clr!ExecuteEXE

29 clr!_CorExeMainInternal

2a clr!_CorExeMain

2b mscoreei!_CorExeMain

2c mscoree!ShellShim__CorExeMain

2d mscoree!_CorExeMain_Exported

2e kernel32!BaseThreadInitThunk

2f ntdll!__RtlUserThreadStart

30 ntdll!_RtlUserThreadStart

Solution:

Check the records in the SYSSERVERCONFIG table and make sure the SERVERID values are correct and that there is one record per actual AOS instance. After making changes to that table, restart the AOS for them to take effect.
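
If you prefer to script the check, here is a minimal sketch using Invoke-Sqlcmd from the SQL Server PowerShell module; the server, database and SERVERID values are placeholders, and the DELETE is only an example of the kind of cleanup that may be needed in your environment.

# Placeholders: the SQL Server instance and AX database name.
$server = '<sql-server-instance>'
$db     = '<AX-database>'

# List the AOS records the client will read; there should be exactly one row
# per real AOS instance, each with the correct SERVERID.
Invoke-Sqlcmd -ServerInstance $server -Database $db `
    -Query 'SELECT SERVERID FROM dbo.SYSSERVERCONFIG'

# Remove stale rows carried over from the source environment (adjust the WHERE
# clause to the server IDs that no longer exist), then restart the AOS service.
Invoke-Sqlcmd -ServerInstance $server -Database $db `
    -Query "DELETE FROM dbo.SYSSERVERCONFIG WHERE SERVERID = '<stale-server-id>'"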

MSDN Blogs: Management Agent Configuration – Part 4: Delimited Text File Management Agent


This is Part 4 in a series of commonly used management agents. In Part 1 we looked at configuration of an Active Directory Management Agent (ADMA); in Part 2 we looked at the configuration of the FIM Service Management Agent (FIMMA), and in Part 3 we looked at the configuration of a Microsoft SQL Management Agent (SQLMA). With this post, we’ll take a look at what I consider to be the last of the most commonly used management agent types: delimited text files.

 

As before, to begin navigate to your Synchronization server.


 

From here, click “Create”. Using the “Management agent for:” drop down menu, select “Delimited text file”. Enter a name, then click “Next”.


 

It is necessary to input a “template” file so that FIM may know the formatting. It is perfectly acceptable here to use the actual file containing your user data as the template. To locate it, click “Browse”.


 

Select the text file you wish to use and click “Open”, then click “Next” to continue.


 

If your input text file has a header row (such as the below example), place a check in the box next to “Use first row for header names”. Select the delimiter type and text qualifier. In this scenario, we are delimiting with commas and using an apostrophe as a qualifier. Click “Next” to continue.
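
For reference, here is a sketch of what such an input file might look like, written out with PowerShell; the column names (including PoliticianID, which is used for the join later in this walkthrough) and the file path are illustrative only.

# Illustrative sample file: header row, comma-delimited, apostrophe as the text qualifier.
$sample = @'
PoliticianID,FirstName,LastName,Party
'1','Abe','Lincoln','Republican'
'2','George','Washington','None'
'@
Set-Content -Path 'C:\Temp\HRExport.txt' -Value $sample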


 

Here we should see all the attributes being read in from the file. If any of these are multi-valued and need to be changed, select “Edit”. Also, under “Configure special attributes”, select “Set Anchor…”


 

This will display the “Set Anchor” dialogue. Select an anchor attribute from the left-hand list and click “Add”. Note that an anchor attribute must be present and unique on each side (i.e. text file as well as inside FIM). Once selected, click “OK”.


 

For “Define Object Types”, we may leave these default. Click “Next” to continue.


 

For “Configure Connector Filter”, we will leave these default. Click “Next” to continue.


 

For “Configure Join and Projection Rules”, you may wish to include join logic. If so, click “New Join Rule..”


 

In this scenario, I am creating a “Direct” join mapping of “PoliticianID” to “PoliticianID”. This is because I know (in this environment) it is unique for every user object. Click on “Add Condition” and then click “OK”. Click “Next” to continue.


 

For “Configure Attribute Flow”, we will leave this default. By doing so, we will pull all available attributes from the source file into FIM and then manipulate them there (based on workflows, sets and synchronization rules). Click “Next” to continue.


 

For “Configure Deprovisioning”, we may leave this default or select another option based on your environment. Click “Next” to continue.


For an explanation of these options, please see this post for disconnections, this post for explicit disconnections and this post for deletions.

 

Click “Finish” to complete.


 

We should now see our newly created management agent.


 

 

 

 

Questions? Comments? Love FIM so much you can’t even stand it?

EMAIL US !

>WE WANT TO HEAR FROM YOU<

## https://blogs.msdn.microsoft.com/connector_space##

MSDN Blogs: Issues with Hosted Build in Visual Studio Team Services – 11/17 – Investigating


Initial Update: Thursday, 17 November 2016 19:12 UTC

We are actively investigating issues with the Hosted Build Service. A subset of customers in the South Central US and South Brazil regions may experience build requests waiting for agent allocation longer than usual.

  • Next Update: Before 21:00 UTC.


Sincerely,
Krishna


MSDN Blogs: In-Memory OLTP in Standard and Express editions, with SQL Server 2016 SP1


We just announced the release of Service Pack 1 for SQL Server 2016. With SP1 we made a push to bring a consistent programming surface area across all editions of SQL Server. One of the outcomes is that In-Memory OLTP (aka Hekaton), the premier performance technology for transaction processing, data ingestion, data load, and transient data scenarios, is now available in SQL Server Standard Edition and Express Edition, as long as you have SQL Server 2016 SP1.

In this blog post we recap what the technology is. We then describe the resource/memory limitations in Express and Standard Edition. We go on to describe the scenarios for which you’d want to consider In-Memory OLTP. We conclude with a sample script illustrating the In-Memory OLTP objects, and some pointers to get started.

How does In-Memory OLTP work?

In-Memory OLTP can provide great performance gains, for the right workloads. One of our customers managed to achieve 1.2 Million requests per second with a single machine running SQL Server 2016, leveraging In-Memory OLTP.

Now, where does this performance gain come from? In essence, In-Memory OLTP improves performance of transaction processing by making data access and transaction execution more efficient, and by removing lock and latch contention between concurrently executing transactions: it is not fast because it is in-memory; it is fast because it is optimized around the data being in-memory. Data storage, access, and processing algorithms were redesigned from the ground up to take advantage of the latest enhancements in in-memory and high concurrency computing.

Now, just because data lives in-memory does not mean you lose it when there is a failure. By default, all transactions are fully durable, meaning that you have the same durability guarantees you get for any other table in SQL Server: as part of transaction commit, all changes are written to the transaction log on disk. If there is a failure at any time after the transaction commits, your data is there when the database comes back online. In addition, In-Memory OLTP works with all high availability and disaster recovery capabilities of SQL Server, like AlwaysOn, backup/restore, etc.

To leverage In-Memory OLTP in your database, you use one or more of the following types of objects:

  • Memory-optimized tables are used for storing user data. You declare a table to be memory-optimized at create time.
  • Non-durable tables are used for transient data, either for caching or for intermediate result set (replacing traditional temp tables). A non-durable table is a memory-optimized table that is declared with DURABILITY=SCHEMA_ONLY, meaning that changes to these tables do not incur any IO. This avoids consuming log IO resources for cases where durability is not a concern.
  • Memory-optimized table types are used for table-valued parameters (TVPs), as well as intermediate result sets in stored procedures. These can be used instead of traditional table types. Table variables and TVPs that are declared using a memory-optimized table type inherit the benefits of non-durable memory-optimized tables: efficient data access, and no IO.
  • Natively compiled T-SQL modules are used to further reduce the time taken for an individual transaction by reducing CPU cycles required to process the operations. You declare a Transact-SQL module to be natively compiled at create time. At this time, the following T-SQL modules can be natively compiled: stored procedures, triggers and scalar user-defined functions.

In-Memory OLTP is built into SQL Server, and starting with SP1, you can use all these objects in any edition of SQL Server. And because these objects behave very similarly to their traditional counterparts, you can often gain performance benefits while making only minimal changes to the database and the application. Plus, you can have both memory-optimized and traditional disk-based tables in the same database, and run queries across the two. You will find a Transact-SQL script showing an example for each of these types of objects towards the end of this post.

Memory quota in Express and Standard Editions

In-Memory OLTP includes memory-optimized tables, which are used for storing user data. These tables are required to fit in memory. Therefore, you need to ensure you have enough memory for the data stored in memory-optimized tables. In addition, in both Standard Edition and Express Edition, each database has a quota for data stored in memory-optimized tables.

To estimate memory size required for your data, consult the topic Estimate Memory Requirements for Memory-Optimized Tables.

These are the per-database quotas for In-Memory OLTP for all SQL Server editions, with SQL Server 2016 SP1:

SQL Server 2016 SP1 Edition | In-Memory OLTP quota (per DB)
Express | 352 MB
Web | 16 GB
Standard | 32 GB
Developer | Unlimited
Enterprise | Unlimited

The following items count towards the database quota:

  • Active user data rows in memory-optimized tables and table variables. Note that old row versions do not count toward the cap.
  • Indexes on memory-optimized tables.
  • Operational overhead of ALTER TABLE operations, which can be up to the full table size.

If an operation causes the database to hit the cap, the operation will fail with an out-of-quota error:

Msg 41823, Level 16, State 171, Line 6
Could not perform the operation because the database has reached its quota for in-memory tables. See 'http://go.microsoft.com/fwlink/?LinkID=623028' for more information.

* Note: at the time of writing, this link points to an article about In-Memory OLTP in Azure SQL Database, which shares the same quota mechanism as SQL Server Express and Standard edition. We’ll update that article to discuss quotas in SQL Server as well.

If this happens, you will no longer be able to insert or update data, but you can still query the data. Mitigation is to delete data or upgrade to a higher edition. In the end, how much memory you need depends to a large extent on how you use In-Memory OLTP. The next section has details about usage patterns, as well as some pointers to ways you can manage the in-memory footprint of your data.

You can monitor memory utilization through DMVs as well as Management Studio. Details are in the topic Monitor and Troubleshoot Memory Usage. Note that memory reported in these DMVs and reports can become slightly higher than the quota, since they include memory required for old row versions. Old row versions do count toward the overall memory utilization and you need to provision enough memory to handle those, but they do not count toward the quota in Express and Standard editions.
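
For example, a quick way to see per-table memory use is the sys.dm_db_xtp_table_memory_stats DMV; below is a minimal sketch run through PowerShell, with placeholder server and database names.

# Report memory allocated/used by each memory-optimized table in the database.
Invoke-Sqlcmd -ServerInstance '<server>' -Database '<database>' -Query @'
SELECT OBJECT_NAME(object_id)            AS table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb,
       memory_allocated_for_indexes_kb,
       memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats;
'@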

Usage scenarios for In-Memory OLTP

In-Memory OLTP is not a magic go-fast button, and is not suitable for all workloads. For example, memory-optimized tables will not really bring down your CPU utilization if most of the queries are performing aggregation over large ranges of data – Columnstore helps for that scenario.

Here is a list of scenarios and application patterns where we have seen customers be successful with In-Memory OLTP.

High-throughput and low-latency transaction processing

This is really the core scenario for which we built In-Memory OLTP: support large volumes of transactions, with consistent low latency for individual transactions.

Common workload scenarios are: trading of financial instruments, sports betting, mobile gaming, and ad delivery. Another common pattern we’ve seen is a “catalog” that is frequently read and/or updated. One example is where you have large files, each distributed over a number of nodes in a cluster, and you catalog the location of each shard of each file in a memory-optimized table.

Implementation considerations

Use memory-optimized tables for your core transaction tables, i.e., the tables with the most performance-critical transactions. Use natively compiled stored procedures to optimize execution of the logic associated with the business transaction. The more of the logic you can push down into stored procedures in the database, the more benefit you will see from In-Memory OLTP.

To get started in an existing application, use the transaction performance analysis report to identify the objects you want to migrate, and use the memory-optimization and native compilation advisors to help with migration.

Data ingestion, including IoT (Internet-of-Things)

In-Memory OLTP is really good at ingesting large volumes of data from many different sources at the same time. And it is often beneficial to ingest data into a SQL Server database compared with other destinations, because SQL makes running queries against the data really fast, and allows you to get real-time insights.

Common application patterns are: Ingesting sensor readings and events, to allow notification, as well as historical analysis. Managing batch updates, even from multiple sources, while minimizing the impact on the concurrent read workload.

Implementation considerations

Use a memory-optimized table for the data ingestion. If the ingestion consists mostly of inserts (rather than updates) and the In-Memory OLTP storage footprint of the data is a concern, consider offloading older data out of the memory-optimized table, as the following sample does.

The following sample is a smart grid application that uses a temporal memory-optimized table, a memory-optimized table type, and a natively compiled stored procedure, to speed up data ingestion, while managing the In-Memory OLTP storage footprint of the sensor data: release and source code.

Caching and session state

The In-Memory OLTP technology makes SQL really attractive for maintaining session state (e.g., for an ASP.NET application) and for caching.

ASP.NET session state is a very successful use case for In-Memory OLTP. With SQL Server, one customer was able to achieve 1.2 Million requests per second. In the meantime, they have started using In-Memory OLTP for the caching needs of all mid-tier applications in the enterprise. Details: https://blogs.msdn.microsoft.com/sqlcat/2016/10/26/how-bwin-is-using-sql-server-2016-in-memory-oltp-to-achieve-unprecedented-performance-and-scale/

Implementation considerations

You can use non-durable memory-optimized tables as a simple key-value store by storing a BLOB in a varbinary(max) column. Alternatively, you can implement a semi-structured cache with JSON support in SQL Server. Finally, you can create a full relational cache through non-durable tables with a full relational schema, including various data types and constraints.

Get started with memory-optimizing ASP.NET session state by leveraging the scripts published on GitHub to replace the objects created by the built-in session state provider.

Tempdb object replacement

Leverage non-durable tables and memory-optimized table types to replace your traditional tempdb-based #temp tables, table variables, and table-valued parameters.

Memory-optimized table variables and non-durable tables typically reduce CPU and completely remove log IO, when compared with traditional table variables and #temp tables.

Case study illustrating benefits of memory-optimized table-valued parameters: https://blogs.msdn.microsoft.com/sqlserverstorageengine/2016/04/07/a-technical-case-study-high-speed-iot-data-ingestion-using-in-memory-oltp-in-azure/

Implementation considerations

To get started see: Improving temp table and table variable performance using memory optimization.

ETL (Extract Transform Load)

ETL workflows often include load of data into a staging table, transformations of the data, and load into the final tables.

Implementation considerations

Use non-durable memory-optimized tables for the data staging. They completely remove all IO, and make data access more efficient.

If you perform transformations on the staging table as part of the workflow, you can use natively compiled stored procedures to speed up these transformations. If you can do these transformations in parallel you get additional scaling benefits from the memory-optimization.

Getting started

Before you can start using In-Memory OLTP, you need to create a MEMORY_OPTIMIZED_DATA filegroup. In addition, we recommend using database compatibility level 130, and setting the database option MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT to ON.

You can use the script at the following location to create the filegroup in the default data folder, and set the recommended settings:
https://raw.githubusercontent.com/Microsoft/sql-server-samples/master/samples/features/in-memory/t-sql-scripts/enable-in-memory-oltp.sql
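
If you prefer to set this up by hand, the essence of that script is adding the filegroup and a container and raising the compatibility level, roughly as in the following sketch (the database name and file path are placeholders).

# Add a MEMORY_OPTIMIZED_DATA filegroup and a container to an existing database.
Invoke-Sqlcmd -ServerInstance '<server>' -Query @'
ALTER DATABASE MyDatabase
    ADD FILEGROUP MyDatabase_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE MyDatabase
    ADD FILE (NAME = N'MyDatabase_mod', FILENAME = N'C:\Data\MyDatabase_mod')
    TO FILEGROUP MyDatabase_mod;
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 130;
'@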

The following script illustrates In-Memory OLTP objects you can create in your database:

-- configure recommended DB option
ALTER DATABASE CURRENT SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=ON
GO

-- memory-optimized table
CREATE TABLE dbo.table1
( c1 INT IDENTITY PRIMARY KEY NONCLUSTERED,
  c2 NVARCHAR(MAX))
WITH (MEMORY_OPTIMIZED=ON)
GO

-- non-durable table
CREATE TABLE dbo.temp_table1
( c1 INT IDENTITY PRIMARY KEY NONCLUSTERED,
  c2 NVARCHAR(MAX))
WITH (MEMORY_OPTIMIZED=ON,
      DURABILITY=SCHEMA_ONLY)
GO

-- memory-optimized table type
CREATE TYPE dbo.tt_table1 AS TABLE
( c1 INT IDENTITY,
  c2 NVARCHAR(MAX),
  is_transient BIT NOT NULL DEFAULT (0),
  INDEX ix_c1 HASH (c1) WITH (BUCKET_COUNT=1024))
WITH (MEMORY_OPTIMIZED=ON)
GO

-- natively compiled stored procedure
CREATE PROCEDURE dbo.usp_ingest_table1
  @table1 dbo.tt_table1 READONLY
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC
  WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT,
        LANGUAGE=N'us_english')

  DECLARE @i INT = 1

  WHILE @i > 0
  BEGIN
    INSERT dbo.table1
    SELECT c2
    FROM @table1
    WHERE c1 = @i AND is_transient=0

    IF @@ROWCOUNT > 0
      SET @i += 1
    ELSE
    BEGIN
      INSERT dbo.temp_table1
      SELECT c2
      FROM @table1
      WHERE c1 = @i AND is_transient=1

      IF @@ROWCOUNT > 0
        SET @i += 1
      ELSE
        SET @i = 0
    END
  END
END
GO

-- sample execution of the proc
DECLARE @table1 dbo.tt_table1
INSERT @table1 (c2, is_transient) VALUES (N'sample durable', 0)
INSERT @table1 (c2, is_transient) VALUES (N'sample non-durable', 1)
EXECUTE dbo.usp_ingest_table1 @table1=@table1
SELECT c1, c2 FROM dbo.table1
SELECT c1, c2 FROM dbo.temp_table1
GO

A perf demo using In-Memory OLTP can be found at: in-memory-oltp-perf-demo-v1.0.

Try In-Memory OLTP in SQL Server today!

Resources to get started:

MSDN Blogs: Azure Container Registry User Accounts; Single, Multi, Admin and Service Principals


The Azure Container Registry went into public preview yesterday. We’re excited to add this core platform feature for the breadth of container deployments being added to Azure. These include Azure Container Service, Azure Service Fabric, Azure App Service and Azure Batch, …as of now. More are coming…

When designing the Azure Container Registry, we felt it important to maintain common CLI support, such as using Docker login, push and pull.

Docker login is a basic auth flow, requiring a username and password. What we want to do, and will do, is support your Azure login credentials so you can manage access to the registry with Azure Active Directory groups. For example, your development team could have read/write access, while people outside your org have no access, or read-only access. To use Active Directory identities, we need a token flow, and because many companies require 2 factor auth, it gets even more complex. At this point the docker client doesn't yet support 2 factor auth flows. Rather than wrap the Docker API in an Azure-specific API, we chose to stay true to the docker CLI, which means things got a bit more complex and/or limited in the short term.

Shipping a Public Preview – managing time/resources/quality

When we decided to land a public preview for Connect() 2016, we needed to decide which top problems we were going to take on.

  • Fully Qualified Image Names – that won’t change
    We felt the top priority was to design the service such that fully qualified image names won't change as we add features or go GA. We went through several designs for how to name the registry URL. We want to provide geo-replication features, so that an image tag like stevelas-microsoft.azurecr.io/helloworld can be made available in whichever regions I decide; and I can decide later, not only at the time of creation.
    We also want to support several groups' autonomy from each other; for instance, keeping warranty-contoso.azurecr.io/web separate from marketing-contoso.azurecr.io/web, and from contoso.azurecr.io/aspnetcore for corporate images.
  • Headless access to the registry for Build/Deploy and Production Systems
    When VSTS, Jenkins or other build systems need to push/pull images, they need authentication. This includes your production hosting solution, such as ACS or App Services. While we want to provide token flows with Service Principals over time, the current docker flows use basic auth.
  • Multi-Region Support
    While we will support geo-replication in the future, we first need to support multiple regions. Deploying a core Azure service and resource provider turns out to be a lot of work.
  • Azure CLI
    An open source CLI for managing the container registry, such as az acr
  • Azure Portal Integration 
    Enabling developers to easily provision in the portal
  • Reliable Availability
    What good is an Azure service if it isn't reliably available for your build and deploy scenarios?

When we considered these priorities, along with our backlog, we realized we couldn't fit the token flows in reliably in time for the public preview at Connect(), so, what to do…?

The Azure Container Registry Admin Account and Service Principals

The docker CLI supports basic auth. You use this for Docker Hub and other private registries. We felt this was the first important goal.

We also wanted to support more than one account, and we knew we needed to support headless scenarios. Azure AD service principals are great for this: you create a service principal, and we can use the App ID as the username and its password as the password. You can regenerate the passwords and manage them over time, with Azure AD providing secured storage.

At the same time, the AD integration can be tricky. We don’t yet have great AD support in the new Azure Portal.

Until we get full AD / Individual Identity and 2 factor token flows working with the Azure Container Registry, we chose to add a special Admin Account.

When you create an ACR instance, you have the option of creating it with the Admin Account enabled/disabled. We debated the default as we want this to go away over time.

When the admin account is enabled, you get a single username/password combination you can immediately use to interact with the registry. Simply issue docker login myregistry-company.azurecr.io -u [admin username] -p [admin password] and you're good to go.

However, when you want multiple users to have access to the registry, you really don't want to give out these credentials. Everyone would be sharing a single account, which doesn't scale: it becomes nearly impossible to reset the password without breaking someone or something you want to keep running.

Until the AD identity work is complete, you can add service principals. The easiest way is to use the az CLI.

az resource group create -l westus -n myregistry-acr

az acr create -n myregistry -l westus -g myregistry-acr --admin-enabled true

Once the registry is created, the az acr cli will provide some helpful commands:

Create a new service principal and assign access:
az ad sp create-for-rbac --scopes /subscriptions/[your subscription id]/resourcegroups/myregistry-acr/providers/Microsoft.ContainerRegistry/registries/myregistry --role Owner --password <password>

You can also assign existing service principals to the registry:

az role assignment create --scope /subscriptions/[your subscription id]/resourcegroups/myregistry-acr/providers/Microsoft.ContainerRegistry/registries/myregistry --role Owner --assignee <app-id>
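Once a service principal has been given access, it can be used with a standard docker login; the registry name, app id, and password below are placeholders:

docker login myregistry.azurecr.io -u <app-id> -p <password>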

You can of course use other roles as well.

Summing it up:

We’re excited to bring you the Azure Container Registry to support your Azure and even On Prem Container workloads. We’re working to prioritize the features you need, expect, want and value. …in that approximate order.

We will be adding AD individual identity with 2 factor auth flows, but wanted to get customers something they can easily use today.

We do not recommend using the admin account for anything other than some basic testing. Please use the service principal flows, even for individual users, until we complete the AD individual identity flow.

Steve

 

MSDN Blogs: Relay Hybrid Connections .NET Core support – 0.1.2-preview release

MSDN Blogs: Prológus


I am a member of the generation that fell in love with computing by playing PC games in their youth. Through Settlers, Civilization, Colonization, X-Com and their kind, we grew into the world of desktop applications and client-server architecture. It was an exciting era, with great software.

Today, however, we stand in the middle of a change of era. An era in which the digital transformation of companies and the rise of cloud technologies hold countless business and technological opportunities and challenges for traditional software development companies. Indeed, these organizations themselves often have to transform digitally in order to survive.

In this blog I will present the opportunities and challenges offered by this new era from a practical perspective.

You can find an article that clearly explains digital transformation here.

MSDN Blogs: SCP.Net with HDInsight Linux Storm clusters


SCP.Net is now available on HDInsight Linux clusters 3.4 and above.

Versions

[Image: storm-scp-version-matrix]

Note: The HDInsight Storm team recommends HDI 3.5 clusters for users looking to migrate their SCP.Net topologies from Windows to Linux.

 

Development of SCP.Net Topology

Pre-Steps

 

Azure Datalake Tools for Visual Studio

HDInsight tools for Visual Studio does not support submission of SCP.Net topologies to HDI Linux Storm clusters.

The latest Azure Datalake Tools for Visual Studio is needed to develop and submit SCP.Net topologies to HDI Linux Storm clusters.

The tools are available for Visual Studio 2013 and 2015.

 

Please note: Azure Datalake Tools has compatibility issues with other/older Visual Studio extensions.

One known issue is where no clusters are shown in the drop-down for topology submission.

If you encounter this issue, please uninstall all Visual Studio extensions and re-install Azure Datalake Tools.

 

Java

SCP.Net generates a zip file consisting of the topology DLLs and dependency jars.

It uses Java (if found in the PATH) or .NET to generate the zip. Unfortunately, zip files generated with .NET are not compatible with Linux clusters.

Java installation requirements:

  • Java (JDK 1.7 or later) should be installed on the machine (example: C:\JDK1.7)
  • The JAVA_HOME system variable should be set to the installation path (C:\JDK1.7)
  • The PATH system variable should include %JAVA_HOME%\bin (see the example commands below)
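For example (a sketch only; the installation path is illustrative, and a new command window is needed afterwards for the changes to take effect), the system variables can be set from an elevated command prompt with:

setx /M JAVA_HOME "C:\JDK1.7"
setx /M PATH "%PATH%;C:\JDK1.7\bin"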

One way to verify that your Java setup is good is to:

  1. Launch a command window
  2. Execute: java -version
    The above command should print the installed Java version.
  3. Execute: jar
    The above command should output jar usage

 

Creating a SCP.Net topology

    • Open Visual Studio; the Create New Project dialog should show the Storm templates. [Screenshot: visualstudio2015_newproject]

 

    • Select Storm Project from the Azure Datalake templates. [Screenshot: visualstudio2015_newstormproject]

 

    • For HDI 3.5, the SCP.Net NuGet package needs to be updated to version 1.0.0.1 (or the latest available). [Screenshots: visualstudio2015_stormproject_managenugetpackages, visualstudio2015_newstormproject_updatescpnet_package_version]

 

  • After the update, packages.config should look as below. [Screenshot: visualstudio2015_stormproject_updatedpackageconfig]

 

Submission of Topology

    • Right-click the project and select the Submit to Storm on HDInsight… option. [Screenshot: visualstudio2015_stormproject_submission_menu]

 

  • Choose the Storm Linux cluster from the drop-down. [Screenshot: visualstudio2015_stormproject_submission_dialog]
    Java File Path

    If you have Java jar dependencies, you can include their full paths as a semicolon-separated (;) string, or you can use * to include all jars in a given directory (see the example below).
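For example (both paths are illustrative), the Java File Path value could be either of:

C:\jars\dependency-a.jar;C:\jars\dependency-b.jar
C:\jars\*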

Investigating Submission failures

 
Topology submissions can fail for many reasons:

  • Required java dependencies are not included
  • Incompatible Java jar dependencies. Example: Storm-eventhub-spouts-9.jar is incompatible with Storm 1.0.1. If you submit a jar with that dependency, topology submission will fail.
  • Duplicate names for topologies

 

Logs

Topology submission operations are logged to /var/log/hdinsight-scpwebapi/hdinsight-scpwebapi.out on the active head node.
Users can inspect this file on the head node to identify the cause of submission failures (in cases where the output from the tool is not helpful).
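For example, after connecting to the active head node over SSH, the tail end of the log can be inspected with a standard command such as:

tail -n 200 /var/log/hdinsight-scpwebapi/hdinsight-scpwebapi.out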
