Channel: Randy Riness @ SPSCC aggregator

MSDN Blogs: Small Basic – Exploring New Logos – Part 1


In the Small Basic team, we have a great designer, Ram, who is helping us explore new logos.

Please keep in mind that these logos are currently being used for internal Microsoft projects. We aren’t finished yet.

That said/written, we welcome your feedback! Please leave a comment below!

Here is our current logo:

Logo_CurrentSBLogo

Our Small Basic Community Council member, Liam, had a fantastic futuristic version that modernizes the ideogram/symbol:

Logo_SmallBasic_444x144_LiamMcSherry

Small Basic carries on the Logo turtle graphics tradition, and so we wanted to combine that into our new logo. Our first version of that centered on four colors, to map to the Microsoft Windows ideogram:

Logo01_Round4Colors

The next version focused on the blue and green (since red and yellow were in the text and line on the right):

Logo02_Pentagon2Colors

The next experiment was to see how it looked with the original building-block pastel colors:

Logo02a_Current3Pastels

That was a little bright and cheerful. =^)

Next we looked at some mixes of the Microsoft colors again. Here the emphasis is on red and yellow:

Logo02b_MSColors-RedYellow

And here the emphasis is on red and green:

Logo02c_MSColors-RedGreen

And this one focuses on green and blue:

Logo02d_MSColors-BlueGreen

And this one focuses on yellow and blue:

Logo02e_MSColors-YellowBlue

And next we were up for something a bit different. This one pushes back more toward the Microsoft logo’s ideogram/symbol:

Logo03_4OuterScales

That one looks a little bit target-like, and we wanted the turtle aspect. Here is a version of that, with the “SB” in white:

Logo04_GrayBorders

And here it is with the “SB” in black:

Logo05_WhiteBorders

And that leads us to where we’re currently at! Next we’ll play with the text part on the right some more.  Regardless, we’re getting some great use out of it at Microsoft! Special thanks go out to Ram.

Please leave a comment with your thoughts!

 

Small and Basically yours,

– Ninja Ed


MSDN Blogs: Changing SharePoint farm passwords


Another recent case from a customer.

What they had was a farm deployed with AutoSPInstaller (https://autospinstaller.codeplex.com/), so they had quite a number of accounts for various SharePoint services, Windows services and application pools.

 

So how should we change the accounts?

The easiest way (the one that was implemented at this customer) is to have the accounts registered as managed accounts.

This way you or SharePoint can automatically change passwords for the accounts, and update all relevant records.

A managed account password change can update the password in AD or just update the SharePoint records. In our case there were some errors we had not resolved, so we let the AD admins change the passwords and then we changed things in SharePoint.

 

So what happens after you change the AD account password?

If you do nothing, there will be problems. Most likely though you will only notice the problems after you restart the server.

 

The reason is that there are:

Windows Service registrations on farm computers containing copies of the passwords

IIS Pools registrations containing copies of the passwords

In some cases (like search or workflow) other entities contain the reference to the password.

Managed accounts

The benefit of managed accounts is that SharePoint can automate some of these actions across the farm, namely changing the Windows service registrations and IIS pool registrations.

If you choose to change the passwords yourself (and not let them be changed automatically), there are basically two ways to do it.

Option one – Central Admin


 

Press the edit button

Enter new password and press ok below


 

Note that option one – Set account password to a new value will try to change the password in AD first. Option two will just update the relevant services and IIS pools.

In some cases you would prefer to use PowerShell. If the account you are changing is also used to run the Central Administration application pool, your command may fail in the middle, because it runs under the very pool that is about to be reset!

In this case you can use the Set-SPManagedAccount command

https://technet.microsoft.com/en-us/library/ff607617(v=office.16).aspx

If you want this command to change the AD password, use this format:

Set-SPManagedAccount -Identity $username -NewPassword $newpassword -ConfirmPassword $newpassword

If you want to use an existing password, use this one:


Set-SPManagedAccount -Identity $username -ExistingPassword $newpassword  -UseExistingPassword:$true

I have made a script that reads accounts and new passwords from a CSV file and updates them in bulk.


<#
.SYNOPSIS
Changes managed account passwords at the farm.
.DESCRIPTION
Changes accounts using the provided CSV file.
.EXAMPLE
.\changepasswords.ps1 -inputFile "yourfile.csv" -newPasswords:$false
.NOTES
Author: Marat Bakirov
Date: 05 July 2016
#>
[cmdletbinding()]
param(
[string] $InputFile = "accountsandpasswords.csv",
[switch] $newPasswords = $true
)

####################################################
# Configurables
####################################################

Add-PSSnapin Microsoft.Sharepoint.Powershell

####################################################
# Main
####################################################

function Main
{

$passwords = Import-Csv $InputFile
$passwords | foreach {
$username = $_.Username
$newpwd1 = $_.NewPassword
$newpassword =  ConvertTo-SecureString -String $newpwd1 -AsPlainText -Force
$newpwd1
if ($newpasswords)
{
Write-Host "changing password for  {$username} to a new one"
Set-SPManagedAccount -Identity $username -NewPassword $newpassword -ConfirmPassword $newpassword -Confirm:$false
}
else
{
Write-Host "changing password for  {$username} to an existing one"
Set-SPManagedAccount -Identity $username -ExistingPassword $newpassword -Confirm:$false -UseExistingPassword:$true
}
}
}
Main
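For reference, the CSV the script expects has one row per managed account, with the column names Username and NewPassword (matching the $_.Username and $_.NewPassword properties the script reads). The accounts and passwords below are made-up examples:

Username,NewPassword
contoso\sp_farm,NewP@ssw0rd1
contoso\sp_services,NewP@ssw0rd2
contoso\sp_webapp,NewP@ssw0rd3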

How to change other passwords

If the account participates in user profile sync, search, or the workflow farm, you might need to run additional scripts.

User profile sync

These accounts are managed and are changed within SharePoint but are also used for the User Profile Sync. So an additional configuration might be required.

A good reference can be found here:

https://blog.zubairalexander.com/managing-passwords-for-service-accounts-in-sharepoint-sql-server/ (section 5, “User Profile Synchronization Connection Account”)

or https://blogs.msdn.microsoft.com/charliechirapuntu/2013/01/16/sharepoint-2010-service-accounts-passwords-change-guide/

 

Search crawler account

This has an additional impact: the search content account has to be updated in Active Directory first and then updated in the Search Center.

https://technet.microsoft.com/en-au/library/dn178512.aspx

 

Workflow and service bus farm accounts

 

 

On each server in the farm that has workflow installed, run the Service Bus PowerShell console in elevated mode. (Note: if the Service Bus PowerShell console is missing, skip this procedure for that server.)

Run the changewfpassword.ps1 script.

The script will prompt for the new password for the svcInsiteWfProd / svcInsiteWfTest account.

 


Write-Host "Please enter a new password"
$passwordText = Read-Host
$AccountPassword = ConvertTo-SecureString -String $passwordText -AsPlainText -Force

Stop-WFHost -Verbose
Update-WFHost -RunAsPassword $AccountPassword -Verbose
Start-WFHost -Verbose

Stop-SBHost -Verbose
Update-SBHost -RunAsPassword $AccountPassword -Verbose
Start-SBHost -Verbose

 

Source code

The scripts can be found here:

 

https://1drv.ms/f/s!AguWtH15ywzQhI5kUYLXI1Jcmv4Y6Q

 

MSDN Blogs: [Sample Of Jul. 29] How to share data across multiple devices in Win10 UWP App


Sample : https://code.msdn.microsoft.com/How-to-share-data-across-d492cc0b

This sample demonstrates how to share data across multiple devices in Win10 UWP app with roaming data.
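As a rough illustration of the roaming-data approach (a minimal sketch, not the code from the linked sample; the "HighScore" key is just an example name), a UWP app can write to ApplicationData.Current.RoamingSettings on one device and read the same value on another device where the user is signed in with the same Microsoft account:

// Write a value into roaming settings on one device.
var roamingSettings = Windows.Storage.ApplicationData.Current.RoamingSettings;
roamingSettings.Values["HighScore"] = 42;

// Read it back on another device signed in with the same Microsoft account.
object value;
if (roamingSettings.Values.TryGetValue("HighScore", out value))
{
    int highScore = (int)value;
}

// Optionally, react when roamed data arrives from another device.
Windows.Storage.ApplicationData.Current.DataChanged += (sender, args) =>
{
    // Re-read roamingSettings.Values here.
};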

You can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If it is the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.

MSDN Blogs: Office 365 E5 Nuggets of weeks 29 + 30


Back from my vacation in Munich and ready to share latest news:

  • Bring in business 24/7 with Microsoft Bookings | Official Blogpost
  • Episode 100 with Corporate Vice President Jeff Teper on the Future of SharePoint | Office 365 Developer Podcast
  • Take your teamwork to the next level with Office 365 and Windows 10 | Official Blogpost 
  • Power BI license assignments | Official Blogpost 
  • Using the new On-premises Data Gateway for Power BI | Official Blogpost 
  • New features for the Power BI Dataset API | Official Blogpost
  • Power BI Natural language Q&A – Type a question and create a report | Efficiency365 Blogpost
  • Optimize your phone experience with a new customizable Power BI dashboard | Official Blogpost 
  • Yammer updates—from new user experiences to new IT controls | Official Blogpost
  • Automatic albums, improved search, Pokémon and more updates to the OneDrive photos experience | Official Blogpost 
  • Office 365 news roundup | Official Blogpost 
  • Modern SharePoint lists are here—including integration with Microsoft Flow and PowerApps | Official Blogpost  
  • New to Office 365 in July—new intelligent services Researcher and Editor in Word, Outlook Focused Inbox for desktop and Zoom in PowerPoint | Official Blogpost 
  • Take a systematic approach to security and information protection with Office 365 | Official Blogpost 
  • Is your company’s data secure? | Official Blogpost 
  • Skype for Business Video Broadcast: Ep. 21 Skype Operations Framework
  • Register for “From Demo to Action: Tips from everyday users” on Modern Workplace | Official Blogpost 
  • Sharegate Online: Stay in Control with Instant Alerts for Office 365 with their new Preview | Blogpost
  • Layer2 Cloud Connector Dynamic Columns Feature – How does it work | Blogpost
  • Microsoft Office 365 Planner Teil 3 – Projektverwaltung | German Blogpost from Kom4Tec
  • Aus Formularen PDFs erzeugen mit Nintex Forms für Office 365 | German Blogpost from HanseVision
  • How to improve the SharePoint search center with some easy and fast tricks | Blogpost from HanseVision
  • SharePointPodcast (German) SPPD351 about  WPC 2016
  • Infographics: Top Collaboration Add-Ons and Apps for SharePoint (Online)  | by Communardo

 

MSDN Blogs: Setting up Calculation parameters for Time & Attendance


A long time ago I was responsible for what is commonly known as the Shop Floor module, which at the time we renamed to the Time & Attendance, Time & Attendance Payroll and Manufacturing Execution modules. Recently I was thrown into these modules again, specifically in the area of Time & Attendance payroll on a customer implementation.

When setting up Time & Attendance and Time & Attendance payroll, a key component is the Calculation parameters, and these are in my mind quite poorly documented.

In AX “7” the calculation parameters are located under Time and attendance > Setup > Calculation parameters; in AX 2012 they are located under HRM > Setup > Time and Attendance.

image001

The Calculation parameters consist of four elements which are read from left to right. The initial one is the Registration specification.

The standard documentation reads as follows, with an additional comment column from me:

 

Registration type / Description / Comment
Working time: The employee is at work.

Legal absence: The employee is not at work, but the selected absence group does not deduct from their overtime or reduce their flex balance.

For example, the absence could be due to illness or to taking an external training course.

So the absence group (T&A>Setup>Groups>Absence groups) has No in both Reduce flex and Deduct overtime:

image003

Illegal absence: The employee is not at work, but the selected absence group deducts from their overtime. However, the flexible hours balance is not reduced.

For example, this can be used if an employee is late for work but plans to make up the hours (if this is allowed).

So absence group has Yes in Deduct overtime and No in Reduce flex:

image005

Flex-reducing absence: The employee is not at work. The selected absence group does not deduct from their overtime, but reduces their flex balance.

For example, this can be used if an employee takes a whole day off to reduce their flexible hour’s balance.

So the absence group has No in Deduct overtime and Yes in Reduce flex:

image007

Flex-reducing illegal absence: The employee is not at work. The absence group deducts from the employee’s overtime and reduces the flex balance.

For example, this can be used if an employee is allowed to work flexible hours but clocks in after the Flex-period tolerance is over without having a valid excuse for the absence.

So absence group has Yes in Deduct overtime and Yes in Reduce flex:

image009

Work-free flex zone: The employee is not at work in a flex-reducing time zone. The employee gets paid and the flex balance is reduced. This one does not have anything to do with the absence group; the employee has not clocked in, but the profile states “Flex-”:

If the employee clocked in at 8:40, the period between 8:00 and 8:40 would be a work-free flex zone.

image011

 

In order to validate the above, I suggest personalizing the form to include the log book field “Reg specification” in the Approve or Calculate forms and then trying out the different options:

image012


The second column in the Calculation parameters are the Profile Specification types:

image014

These match how the Time profile has been set up. Remember to make sure the profile type has been set up correctly.

image015

(I couldn’t capture more rows – but after 4 PM we reach Flex+ again).

image016

So to recap what has been defined above: is the worker within a set of clock in/out registrations? If yes, then we are in Working time. Is the worker outside clock in/out? Then an absence code must be in place and we have the four absence options. Lastly, if we are outside clock in/out but the profile specifies Flex-, then we are in the work-free flex zone.

The time profile has six options based on the “time types” defined for the valid profile.

Now we get to the “Calculation” group on the Calculation parameters. These are associated with what is calculated in the Approve/Calculate forms (or in the batch job). The outcome is presented on the Times tab in these forms.

image018

I have marked the fields on the Times tab that correspond to the check boxes/sliders in the Calculation group on the Calculation parameters, here in an example with a number of fun deviations.

Lastly we get to the “Paid” group on the Calculation parameters. The Paid group corresponds to what will be made available in the calculation of pay items. However, pay items get generated as a result of the pay agreement lines, so if there is no pay agreement line to capture the “Output time” of the “Day’s total” calculation, no pay item will be generated.

So the Paid group corresponds to the different types of pay agreement lines:

image020

The default calculation parameters, for example, have a Yes indicating that a premium can be paid out for a standard hour:

image022

This means that one or more of these could potentially be paid out if the time profile reads “Standard time” (hard for the shown profile, but the scenario here is that the worker has a “Late shift” time profile and therefore gets these premiums):

image023

I hope the above helps in understanding the Time and Attendance Calculation parameters.

 

MSDN Blogs: Azure Automation Sample Scripts

MSDN Blogs: Ironman or Pokémon: Both are helping make the virtual real!


This post is a collaboration between David S. Lipien, Director in Microsoft’s Premier Services; Jeremy Rule, Practice Manager in Microsoft’s Premier Services; and Dan Simmons, Technical Manager. They are all part of the Enterprise Services business, where they enable clients to leverage IT for business success in the cloud and on the ground.


Ironman and Pokémon both are telling the story of our future reality – virtual or otherwise.

It is gratifying when films and games allow for a better understanding of something we do as developers in the world of IT. Often the work we do, the bits and the bytes, are not necessarily exciting dialogue at a dinner party (maybe the bytes). Recently, movies like Ironman and apps like Pokémon Go are helping make augmented reality well understood, literally overnight.

To start let’s cover the four types of reality:

  1. True: the life we live every day.
  2. Virtual (VR): the computer-generated simulation of a three-dimensional image or environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as a helmet with a screen inside and/or gloves fitted with sensors. The HTC Vive and Oculus are two commercial examples. Think about Total Recall when Arnold Schwarzenegger went on a virtual vacation.
  3. Augmented (AR): technology that superimposes a computer-generated image on a user’s view of the real world, thus providing a composite view. Two words – Pokémon Go!
  4. Mixed: a merging of real and virtual worlds where the physical and digital objects co-exist and interact in real time. One word – HoloLens. Ironman movies are a great way to communicate this powerful technology. MSHoloLens_Skype_03495_3x2_RGB

We are at another new technological shift with VR/AR; think about the mainframe, client-server, the internet and smartphones. The VR/AR Association stated that analysts have yet to agree on the market size; however, estimates range from $16B to $30B for VR by 2020 and up to $120B when you include AR. Gartner predicts “…sales of HMDs (head-mounted displays) for both AR and VR applications to increase from less than 500,000 units in 2015 to nearly 40 million by 2020. By 2030, immersive interfaces will have replaced all other user experience paradigms in 80% of use cases that require human-to-machine interactions.”

Hologram experience at Lowe's. Business processes and traditional customer experiences can change, and there are already examples: Lowe’s is changing the home renovation experience by using HoloLens to visualize and interact with a designed space.

Another example is GM’s bet on virtual stores for their Cadillac brand as noted in the WSJ. GM as part of their extensive retail strategy “Project Pinnacle” is looking to retail locations where there will not be inventory on hand, but a VR headset and an amazing customer experience.

Think about safety. We have seen the heads-up display from BMW, where they claim drivers are able to process driving information 50% faster. The SKULLY AR-1 is the first augmented reality helmet to feature a built-in 180° blind spot camera and heads-up display for unparalleled situational awareness and safety. Anyone who has been on a bike or on the road with bikers can appreciate the safety and opportunity that brings.

The above explains how the technology can be used; now let’s see how it will impact the way technology is supported. The implications are numerous: all aspects of people, process and technology are impacted.

People

Help wanted! Developers no doubt will play a huge role, as shown by a first-of-its-kind capstone course at the University of Washington where students used 25 devices over 10 weeks to build VR/AR applications. This trend will only continue to amplify. Software, hardware, design and business skills are critical, and this work is the confluence of all those skills. Think about VR/AR as a recruiting tool, impacting people and the way we recruit talent. According to a WSJ article, GE is using a VR headset to attract millennials to experience a ride aboard a GE locomotive or GE’s subsea oil-and-gas recovery machine. Think about it as a training tool; Japan Airlines and Microsoft recently partnered with the following outcome: “Using Microsoft HoloLens, Japan Airlines (JAL) has developed two proof-of-concept programs to provide supplemental training for engine mechanics, and for flight crew trainees who want to be promoted to co-pilot status.” Developers, business analysts and project managers, like today, all have a role to play across all functions and industries.

Process

The SDLC (software development lifecycle) will need to be enhanced and adjust for the VR/AR paradigm. For example, for a Requirements Management process, wireframes and user stories will need to be enhanced to communicate intent and business outcomes. Testing cross-devices will need to be woven into test cases and processes, not unlike cross browser and mobile device.

D.A.R.E, an acronym for Data Collection, Analysis, Regression and Excel, is a regression-based framework. The challenge we have is that there is no historical data, so project management approaches and experience in estimating techniques are critical. A recent PMI article reported on the challenges of a team building a VR theme park: “…they anticipate that building three different 3-minute virtual experiences will cost roughly three times the budget of a feature film, but no one knows for sure.”

Technology

Existing technology as we know it will be enhanced. Satya Nadella puts on a Microsoft HoloLens and looks at a virtual interactive calendar seemingly projected on a wall of his house. Microsoft Office takes on a whole new life of ideas from a development perspective and a support perspective.

From a speed to market perspective, Vive, Oculus and Hololens run on Windows. This allows for developers to work with a familiar operating system. Other familiar programming/scripting languages such as HTML 5/WinJS, Unity 3d, UWP (Universal Windows Platform), XAML/C# and XAML/C++ will enable an accelerated transition.

Explore the devices at a local Microsoft Store; until you do, it is challenging to really appreciate the possibilities.

This is an exciting time in our industry and we only scratched the surface. Microsoft Premier Support has experts who can help your organization make the technical and people/process transitions described above and partner in your VR/AR journey. Contact us at PSADM@microsoft.com or contact your Microsoft Application Development Manager for more information.

MSDN Blogs: What’s in a PDB file? Use the Debug Interface Access SDK


It’s easy to use C# code and MSDia140.dll from the Debug Interface Access SDK to examine what’s inside a PDB.

A PDB is a Program Database file, which is generated when an executable such as an EXE or DLL is built. It includes a lot of information about the file that is very useful for a debugger. This includes names and addresses of symbols.

Managed code PDB contents are somewhat different from native code: a lot of the managed code information can be obtained from other sources. For example, the Type of a symbol can be obtained from the Metadata of the binary.

Below is some sample code that uses the DIA SDK to read a PDB and display its contents.

See also

Write your own Linq query viewer

Use DataTemplates and WPF in code to create a general purpose LINQ Query results display

<code>

using Dia2Lib;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
// File->New->Project->C# Windows WPF Application.
// Replace MainWindow.Xaml.cs with this content.
// Add a reference to C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Packages\Debugger\msdia140.dll
namespace WpfApplication1
{
  public partial class MainWindow : Window
  {
    class SymbolInfo
    {
      public int Level { get; set; } // recursion level
      public string SymbolName { get; set; }
      public uint LocationType { get; set; }
      public ulong Length { get; set; }
      public uint AddressOffset { get; set; }
      public uint RelativeAddress { get; set; }
      public string SourceFileName { get; set; }
      public uint SourceLineNo { get; set; }
      public SymTagEnum SymTag { get; set; }
      public string SymbolType { get; set; }
      public override string ToString()
      {
        return $"{SymbolName}{SourceFileName}({SourceLineNo})  {SymbolType}";
      }
    }
    public MainWindow()
    {
      InitializeComponent();
      this.Loaded += (ol, el) =>
        {
          try
          {
            this.WindowState = WindowState.Maximized;
            var pdbName = System.IO.Path.ChangeExtension(
                      Assembly.GetExecutingAssembly().Location, "pdb");
            this.Title = pdbName;
            var lstSymInfo = new List<SymbolInfo>();
            using (var diaUtil = new DiaUtil(pdbName))
            {
              Action<IDiaEnumSymbols, int> lamEnum = null; // recursive lambda
                    lamEnum = (enumSym, lvl) =>
                    {
                      if (enumSym != null)
                      {
                        foreach (IDiaSymbol sym in enumSym)
                        {
                          var symbolInfo = new SymbolInfo()
                          {
                            Level = lvl,
                            SymbolName = sym.name,
                            Length = sym.length,
                            LocationType = sym.locationType,
                            SymTag = (SymTagEnum)sym.symTag,
                            AddressOffset = sym.addressOffset,
                            RelativeAddress = sym.relativeVirtualAddress
                          };
                          var symType = sym.type;
                          if (symType != null)
                          {
                            var symtypename = symType.name;
                            symbolInfo.SymbolType = symtypename;
                          }
                          lstSymInfo.Add(symbolInfo);
                          if (sym.addressOffset > 0 && sym.addressSection > 0 && sym.length > 0)
                          {
                            try
                            {
                              IDiaEnumLineNumbers enumLineNums;
                              diaUtil._IDiaSession.findLinesByAddr(
                                        sym.addressSection,
                                        sym.addressOffset,
                                        (uint)sym.length,
                                        out enumLineNums
                                        );
                              if (enumLineNums != null)
                              {
                                foreach (IDiaLineNumber line in enumLineNums)
                                {
                                  var linenumber = line.lineNumber;
                                  symbolInfo.SourceFileName = line.sourceFile.fileName;
                                  symbolInfo.SourceLineNo = line.lineNumber;
                                  break;
                                }
                              }
                            }
                            catch (Exception)
                            {
                            }
                          }
                          switch (symbolInfo.SymTag)
                          {
                            case SymTagEnum.SymTagFunction:
                            case SymTagEnum.SymTagBlock:
                            case SymTagEnum.SymTagCompiland:
                              IDiaEnumSymbols enumChildren;
                              sym.findChildren(SymTagEnum.SymTagNull, name: null, compareFlags: 0, ppResult: out enumChildren);
                              lamEnum.Invoke(enumChildren, lvl + 1);
                              break;
                          }
                        }
                      }
                    };
                    /* query by table of symbols
                    IDiaEnumTables enumTables;
                    diaUtil._IDiaSession.getEnumTables(out enumTables);
                    foreach (IDiaTable tabl in enumTables)
                    {
                        var tblName = tabl.name;
                        if (tblName == "Symbols")
                        {
                            IDiaEnumSymbols enumSyms = tabl as IDiaEnumSymbols;
                            lamEnum.Invoke(enumSyms, 0);
                        }
                    }
                    /*/
                    // query by global scope
                    var globalScope = diaUtil._IDiaSession.globalScope;
              IDiaEnumSymbols enumSymGlobal;
              globalScope.findChildrenEx(SymTagEnum.SymTagNull, name: null, compareFlags: 0, ppResult: out enumSymGlobal);
              lamEnum.Invoke(enumSymGlobal, 0);
                    //*/
                  }
            var gridvw = new GridView();
            foreach (var mem in typeof(SymbolInfo).GetMembers().
                        Where(m => m.MemberType == MemberTypes.Property)
                  )
            {
              var gridCol = new GridViewColumn();
              gridvw.Columns.Add(gridCol);
              gridCol.Header = new GridViewColumnHeader()
              {
                Content = mem.Name
              };
              var template = new DataTemplate(typeof(SymbolInfo));
              var factTblk = new FrameworkElementFactory(typeof(TextBlock));
              factTblk.SetBinding(TextBlock.TextProperty, new Binding(mem.Name));
              // for wide columns let's set the tooltip too
              factTblk.SetBinding(TextBlock.ToolTipProperty, new Binding(mem.Name));
              factTblk.SetValue(TextBlock.MaxWidthProperty, 300.0);
              var factSP = new FrameworkElementFactory(typeof(StackPanel));
              factSP.SetValue(StackPanel.OrientationProperty, Orientation.Horizontal);
              factSP.AppendChild(factTblk);
              template.VisualTree = factSP;
              gridCol.CellTemplate = template;
            }

            var lv = new ListView()
            {
              ItemsSource = lstSymInfo,
              View = gridvw
            };
            lv.DataContext = lstSymInfo;
            this.Content = lv;

          }
          catch (Exception ex)
          {
            this.Content = ex.ToString();
          }
        };
    }
  }
  public class DiaUtil : IDisposable
  {
    public IDiaDataSource _IDiaDataSource;
    public IDiaSession _IDiaSession;
    public DiaUtil(string pdbName)
    {
      _IDiaDataSource = new DiaSource();
      _IDiaDataSource.loadDataFromPdb(pdbName);
      _IDiaDataSource.openSession(out _IDiaSession);
    }

    public void Dispose()
    {
      Marshal.ReleaseComObject(_IDiaSession);
      Marshal.ReleaseComObject(_IDiaDataSource);
    }
  }
}

</code>


MSDN Blogs: SQL Updates Newsletter – July 2016


Recent Releases and Announcements

 

Recent Whitepapers/E-books/Training/Tutorials

 

Monthly Script Tips

 

Windows Server 2016 – Get Started

 

Issue Alert

 

Fany Carolina Vargas | SQL Dedicated Premier Field Engineer | Microsoft Services

 

 

 

MSDN Blogs: .NET 4.6.2 and long paths on Windows 10


The Windows 10 Anniversary update is almost out the door. .NET 4.6.2 is in the update (as we’ve looked at in the past few posts). I’ve talked a bit about what we’ve done in 4.6.2 around paths, and how that is targeted at both allowing access to previously inaccessible paths and opens up the door for long paths when the OS has support. Well, as people have discovered, Windows 10 now has started to open up support. In this post I’ll talk about how to enable that support.

Enabling Win32 Long Path Support

Long paths aren’t enabled by default yet. You need to set a policy to enable the support. To do this you want to “Edit group policy” in the Start search bar or run “gpedit.msc” from the Run command (Windows-R).

In the Local Group Policy Editor navigate to “Local Computer Policy: Computer Configuration: Administrative Templates: All Settings“. In this location you can find “Enable Win32 long paths“.

Enabling Win32 long paths in the policy editor.
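If you prefer not to use the Group Policy editor, the same switch can also be flipped in the registry (this is just my shorthand for the policy above, not an additional requirement); for example, from an elevated PowerShell prompt:

# Equivalent of the "Enable Win32 long paths" group policy setting
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "LongPathsEnabled" -Value 1 -Type DWord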

After you’ve turned this on you can fire up a new instance of PowerShell and free yourself from the constraints of MAX_PATH! The key File and Directory Management APIs respect this and now allow you to skip the check for MAX_PATH without having to resort to using “\\?\” (look back to my earlier posts on path formats to understand how this works). This is also possible as PowerShell has opted into the new .NET path support (being that it is a .NET application).

If you look carefully at the description in the setting you’ll see “Enabling Win32 long paths will allow manifested win32 applications…“. That’s the second gate to getting support: your app must have a specific manifest setting. You can see what this is by opening C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe in Visual Studio or some other manifest viewer. Doing so you’ll see the following section in its manifest:

<application xmlns="urn:schemas-microsoft-com:asm.v3">
  <windowsSettings>
    <longPathAware xmlns="http://schemas.microsoft.com/SMI/2016/WindowsSettings">true</longPathAware>
  </windowsSettings>
</application>

These two gates will get you the native (Win32) support for long paths. In a managed app you’ll also need the new behavior in .NET. The next section covers this.

Configuring a Simple Long Path .NET Console App

This example uses a new C# Console Application in Visual Studio 2015.

The first thing to do after creating a new console app is edit the App.Config file and add the following after the <startup> end tag:

<runtime>
  <AppContextSwitchOverrides value="Switch.System.IO.UseLegacyPathHandling=false;Switch.System.IO.BlockLongPaths=false" />
</runtime>

Once the 4.6.2 Targeting Pack is released you can alternatively select 4.6.2 as your target framework in the project properties instead of using the app.config setting. The defaults for these two values are true if the target framework is 4.6.1 or earlier.

The second thing to do is add the Application Manifest File item to your project. After doing so add the windowsSettings block I shared above. In the default template there is already a commented-out section for windowsSettings, you can uncomment this and add this specific longPathAware setting.
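For orientation, here is roughly what the relevant part of app.manifest looks like once the longPathAware element is in place (a sketch; the rest of the default template content stays as it is):

<?xml version="1.0" encoding="utf-8"?>
<assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1">
  <!-- ...other default template content... -->
  <application xmlns="urn:schemas-microsoft-com:asm.v3">
    <windowsSettings>
      <longPathAware xmlns="http://schemas.microsoft.com/SMI/2016/WindowsSettings">true</longPathAware>
    </windowsSettings>
  </application>
</assembly>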

Here is a sample block to add to your Main() method to test it out:

string reallyLongDirectory = @"C:\Test\abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
reallyLongDirectory = reallyLongDirectory + @"\abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
reallyLongDirectory = reallyLongDirectory + @"\abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

Console.WriteLine($"Creating a directory that is {reallyLongDirectory.Length} characters long");
Directory.CreateDirectory(reallyLongDirectory);
You can open up PowerShell and go and look at the directory you created! Yayyyyyy!

This is the start of what has been a very long journey to remove MAX_PATH constraints. There is still much to do, but now the door is finally open. The rest can and will come; keep your feedback coming in to keep us on track!

Note that in this initial release CMD doesn’t support long paths. The Shell doesn’t add support either, but previously had limited support utilizing 8.3 filename trickery. I’ll leave it to the Windows team for any further details.

MSDN Blogs: Packaging issues with Visual Studio Team Services – 7/30 – Investigating


Initial Update: Saturday, 30 July 2016 02:01 UTC

We are actively investigating issues with Visual Studio Team Services. Some customers may see a build failure with NuGet errors if they meet the conditions below.

1) You have at least one VSTS NuGet package source
2) You have more than one NuGet restore task in the build definition

The symptom is that the second NuGet restore build task fails.

You will see an error message like the one below:
Unable to find version ‘1.8.1’ of package ‘Microsoft.Cosmos.Client’.
##[debug]rc:1
##[debug]success:false
##[error]Error: C:BAagentWorkerToolsnuget.exe failed with return code: 1
##[error]Packages failed to install
##[debug]task result: Failed

Workaround: If you have the above repro, go to your agent pool in the web UI, right-click, and choose “Update All Agents”.

Next Update: Before 30 July 2016 06:00 UTC

We are working to resolve this issue and apologize for any inconvenience.

Sincerely,
Bapayya

MSDN Blogs: The Imagine Cup 2016 World Finals Have Come to a Close!


DSC_0407

The Japanese team unfortunately missed out on a prize, but we realized first-hand that Imagine Cup is not just about how impressive the technology is: the business elements matter a great deal too. How do you turn the technology into a business? What is the go-to-market strategy? How is the technology better than existing alternatives? How big is the market potential, and how many people will it help?

DSC_0505

Asakura-san and Uehara-san from the University of Tsukuba declared, “We’ll get our revenge next year!” We hope they work hard and make it onto the final stage next year!

 20160727_105555_HoloLens (1)

After the award ceremony, the students who competed in Imagine Cup and those who attended the MSP Summit went to the Holographic Academy session using HoloLens. They came back buzzing, saying it was amazing!

 DSC_0434

At Friday’s World Championship, the winning teams from the Games, Innovation and World Citizenship categories each gave a three-minute pitch on stage. At the final, held at Garfield High School in Seattle, Star Wars characters greeted the attendees, and the ceremony featured Microsoft Executive Vice President Judson Althoff, Corporate Vice President of Developer Platform & Evangelism and Chief Evangelist Steve Guggenheimer, the 2014 Imagine Cup winners, Microsoft Computer Science Curriculum Developer Kasey Champion, and a Hollywood star. The winner was ENTy, the Romanian team from the Innovation category, who developed a handy medical device that monitors body balance and posture in real time. In their presentation they also made a strong point of the fact that it has already been used by several doctors and hundreds of patients.

DSC_0452

After the award ceremony, the MSPs did a great job as mentors for local kids at the Robo World Cup Hackathon venue!

To the Biomachine Industrial team from the University of Tsukuba, Japan’s Imagine Cup representatives, and to our two MSPs: thank you for all your hard work!!

MSDN Blogs: Using SQL Server Stored Procedures to Dynamically Drop and Recreate Indexes


Recently, I’ve been working on a project that of necessity involves periodically updating data in some reasonably large tables that exist in an Operational Data Store (ODS). This particular ODS is used for both reporting via SQL Server Reporting Services and staging data for use in a SQL Server Analysis Services database. Since a number of the tables in the ODS are used for reporting purposes, it’s not entirely surprising that the report designers have created a few indexes to help report performance. I don’t have a problem with indexes, but as any experienced DBA is well aware, the more and larger the indexes the greater the impact on the performance of inserts and updates.

The magnitude of the performance impact was really brought home when a simple update on a 12 million row table that normally completed in roughly three minutes had to be killed at the two hour mark. On further investigation, it was found that over 30 indexes had been added to the table in question. So to address the immediate problem and allow the update to complete in a reasonable time period, DROP INDEX and CREATE INDEX commands were scripted out and added to a stored procedure which would first drop the indexes then run the update statement and finally recreate the indexes. That worked well for a couple of days and then performance again began to degrade. When this episode of performance degradation was investigated, it was found that of the indexes that had been scripted out and added to the stored procedure only one remained and several additional indexes had been created.

Not wishing to revise a rather lengthy stored proc on an almost daily basis, after a brief bit of research, I found a blog posting by Percy Reyes entitled Script out all SQL Server Indexes in a Database using T-SQL. That was great, but only covered NONCLUSTERED indexes and since we were seeing both CLUSTERED and NONCLUSTERED indexes, it would need a bit of revision. Coincidentally, at about the same time there was some serious talk of adding COLUMNSTORE indexes on one or two of these tables, which would essentially cause any update statement to fail. The possibility of having to contend with COLUMNSTORE indexes in addition to CLUSTERED and NONCLUSTERED indexes would necessitate a reasonably significant revision to the T-SQL presented in Percy’s blog, especially since it would be necessary to dynamically drop and then recreate indexes. With those bits of information, it was time to formulate a plan, which would mean accomplishing the following:

  1. Capturing and storing the names of tables, their associated indexes and the definitions of those indexes
  2. Dropping the indexes after the definitions had been safely stored
  3. Recreating the indexes from stored definitions using correct syntax

A relatively simple three-step task, the first of which was to create a stored proc that would capture the names of the tables, associated indexes and the definitions of those indexes. That led to the creation of the following SQL Server stored procedure:

CREATE PROCEDURE [dbo].[sp_GetIndexDefinitions]
as

IF NOT EXISTS (SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME='WORK')
BEGIN
EXEC sp_executesql N'CREATE SCHEMA WORK'
END

IF OBJECT_ID('[WORK].[IDXDEF]','U') IS NOT NULL DROP TABLE [WORK].[IDXDEF]

CREATE TABLE WORK.IDXDEF (SchemaName NVARCHAR(100), TableName NVARCHAR(256), IndexName NVARCHAR(256), IndexDef NVARCHAR(max))

DECLARE @SchemaName VARCHAR(100)
DECLARE @TableName VARCHAR(256)
DECLARE @IndexName VARCHAR(256)
DECLARE @ColumnName VARCHAR(100)
DECLARE @is_unique VARCHAR(100)
DECLARE @IndexTypeDesc VARCHAR(100)
DECLARE @FileGroupName VARCHAR(100)
DECLARE @is_disabled VARCHAR(100)
DECLARE @IndexOptions VARCHAR(max)
DECLARE @IndexColumnId INT
DECLARE @IsDescendingKey INT
DECLARE @IsIncludedColumn INT
DECLARE @TSQLScripCreationIndex VARCHAR(max)
DECLARE @TSQLScripDisableIndex VARCHAR(max)

DECLARE CursorIndex CURSOR FOR
SELECT schema_name(st.schema_id) [schema_name], st.name, si.name,
CASE WHEN si.is_unique = 1 THEN 'UNIQUE ' ELSE '' END
, si.type_desc,
CASE WHEN si.is_padded=1 THEN 'PAD_INDEX = ON, ' ELSE 'PAD_INDEX = OFF, ' END
+ CASE WHEN si.allow_page_locks=1 THEN 'ALLOW_PAGE_LOCKS = ON, ' ELSE 'ALLOW_PAGE_LOCKS = OFF, ' END
+ CASE WHEN si.allow_row_locks=1 THEN  'ALLOW_ROW_LOCKS = ON, ' ELSE 'ALLOW_ROW_LOCKS = OFF, ' END
+ CASE WHEN INDEXPROPERTY(st.object_id, si.name, 'IsStatistics') = 1 THEN 'STATISTICS_NORECOMPUTE = ON, ' ELSE 'STATISTICS_NORECOMPUTE = OFF, ' END
+ CASE WHEN si.ignore_dup_key=1 THEN 'IGNORE_DUP_KEY = ON, ' ELSE 'IGNORE_DUP_KEY = OFF, ' END
+ 'SORT_IN_TEMPDB = OFF'
+ CASE WHEN si.fill_factor>0 THEN ', FILLFACTOR =' + cast(si.fill_factor as VARCHAR(3)) ELSE '' END  AS IndexOptions
,si.is_disabled , FILEGROUP_NAME(si.data_space_id) FileGroupName
FROM sys.tables st
INNER JOIN sys.indexes si on st.object_id=si.object_id
WHERE si.type>0 and si.is_primary_key=0 and si.is_unique_constraint=0 --and schema_name(tb.schema_id)= @SchemaName and tb.name=@TableName
and st.is_ms_shipped=0 and st.name<>'sysdiagrams'
ORDER BY schema_name(st.schema_id), st.name, si.name

open CursorIndex
FETCH NEXT FROM CursorIndex INTO  @SchemaName, @TableName, @IndexName, @is_unique, @IndexTypeDesc, @IndexOptions, @is_disabled, @FileGroupName

WHILE (@@fetch_status=0)
BEGIN
DECLARE @IndexColumns VARCHAR(max)
DECLARE @IncludedColumns VARCHAR(max)

SET @IndexColumns=''
SET @IncludedColumns=''

DECLARE CursorIndexColumn CURSOR FOR
SELECT col.name, sic.is_descending_key, sic.is_included_column
FROM sys.tables tb
INNER JOIN sys.indexes si on tb.object_id=si.object_id
INNER JOIN sys.index_columns sic on si.object_id=sic.object_id and si.index_id= sic.index_id
INNER JOIN sys.columns col on sic.object_id =col.object_id  and sic.column_id=col.column_id
WHERE si.type>0 and (si.is_primary_key=0 or si.is_unique_constraint=0)
and schema_name(tb.schema_id)=@SchemaName and tb.name=@TableName and si.name=@IndexName
ORDER BY sic.index_column_id

OPEN CursorIndexColumn
FETCH NEXT FROM CursorIndexColumn INTO  @ColumnName, @IsDescendingKey, @IsIncludedColumn

WHILE (@@fetch_status=0)
BEGIN
IF @IsIncludedColumn=0
SET @IndexColumns=@IndexColumns + @ColumnName  + CASE WHEN @IsDescendingKey=1  THEN ' DESC, ' ELSE  ' ASC, ' END
ELSE
SET @IncludedColumns=@IncludedColumns  + @ColumnName  +', '

FETCH NEXT FROM CursorIndexColumn INTO @ColumnName, @IsDescendingKey, @IsIncludedColumn
END

CLOSE CursorIndexColumn
DEALLOCATE CursorIndexColumn
SET @IndexColumns = substring(@IndexColumns, 0, len(ltrim(rtrim(@IndexColumns))))
SET @IncludedColumns = CASE WHEN len(@IncludedColumns) >0 THEN substring(@IncludedColumns, 0, len(@IncludedColumns)) ELSE '' END

SET @TSQLScripCreationIndex =''
SET @TSQLScripDisableIndex =''
SET @TSQLScripCreationIndex='CREATE '+ @is_unique  + @IndexTypeDesc + ' INDEX ' +QUOTENAME(@IndexName)+' ON ' +QUOTENAME(@SchemaName) +'.'+ QUOTENAME(@TableName)+
CASE WHEN @IndexTypeDesc = 'NONCLUSTERED COLUMNSTORE' THEN ' ('+@IncludedColumns+') '
WHEN @IndexTypeDesc = 'CLUSTERED COLUMNSTORE' THEN ' '
ELSE ' ('+@IndexColumns+') '
END  +
CASE WHEN @IndexTypeDesc = 'NONCLUSTERED COLUMNSTORE' and len(@IncludedColumns)>0 THEN ''
when @IndexTypeDesc = 'CLUSTERED COLUMNSTORE' THEN ''
ELSE
CASE WHEN LEN(@IncludedColumns)>0 THEN CHAR(13) +'INCLUDE (' + @IncludedColumns+ ')' ELSE '' END
END  +
CASE WHEN @IndexTypeDesc not like ('%COLUMNSTORE%') THEN CHAR(13) + 'WITH (' + @IndexOptions + ') ' + ' ON ' + QUOTENAME(@FileGroupName) ELSE '' END  + ';'

INSERT INTO [WORK].[IDXDEF] (SchemaName,TableName,IndexName,IndexDef) values (@SchemaName, @TableName, @IndexName, @TSQLScripCreationIndex)

FETCH NEXT FROM CursorIndex INTO  @SchemaName, @TableName, @IndexName, @is_unique, @IndexTypeDesc, @IndexOptions, @is_disabled, @FileGroupName

END
CLOSE CursorIndex
DEALLOCATE CursorIndex
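Before dropping anything, a quick sanity check of what was captured is worthwhile; something like:

SELECT SchemaName, TableName, IndexName, IndexDef
FROM WORK.IDXDEF
ORDER BY SchemaName, TableName, IndexName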

When that tested out, it was time for the next step of the task: dynamically dropping the indexes. But I wanted to ensure that when the indexes were dropped, the index definitions would be safely stored (I’m like most other DBAs and sort of enjoy being employed). That resulted in creation of the following stored proc:

CREATE PROCEDURE [dbo].[sp_DropIndexes] as

EXEC sp_GetIndexDefinitions

DECLARE @DropIndex NVARCHAR(max)
DECLARE @SchemaName NVARCHAR(256)
DECLARE @TableName NVARCHAR(256)
DECLARE @IndexName NVARCHAR(256)
DECLARE CursorIDXDrop CURSOR FOR
SELECT DISTINCT ss.name AS schemaname, st.name AS tblname, si.name AS indexnname
FROM sys.tables st
INNER JOIN sys.schemas ss ON st.schema_id=ss.schema_id
INNER JOIN sys.indexes si ON st.object_id=si.object_id
WHERE si.type<>0 AND st.is_ms_shipped=0 AND st.name<>'sysdiagrams' AND
(is_primary_key=0 AND is_unique_constraint=0)

OPEN CursorIDXDrop
FETCH NEXT FROM CursorIDXDrop INTO @SchemaName, @TableName, @IndexName
WHILE @@FETCH_STATUS=0
BEGIN
SET @DropIndex= 'DROP INDEX ' + QUOTENAME(@IndexName) + ' ON ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName)
EXEC sp_executesql @DropIndex
FETCH NEXT FROM CursorIDXDrop INTO  @SchemaName, @TableName, @IndexName
END
CLOSE CursorIDXDrop
DEALLOCATE CursorIDXDrop

After that worked, with the index definitions safely stored so that I could manually re-create them if necessary, it was time to move on to the third step of dynamically re-creating the indexes. That resulted in creation of the following stored proc:

CREATE PROCEDURE [dbo].[sp_RebuildIndexes]
as

IF OBJECT_ID('[WORK].[IDXDEF]','U') IS NOT NULL
BEGIN

DECLARE @Command nvarchar(max)
DECLARE @IndexCmd nvarchar(max)
DECLARE @NumRows int
DECLARE CursorIndexes CURSOR for
SELECT DISTINCT CAST(IndexDef as nvarchar(max)) as IndexDef from work.IDXDEF
SET @NumRows=0
OPEN CursorIndexes
FETCH NEXT FROM CursorIndexes into @Command
WHILE @@FETCH_STATUS=0
BEGIN
EXEC sp_executesql @Command
FETCH NEXT FROM CursorIndexes into @Command
SET @NumRows=@NumRows+1
END;
CLOSE CursorIndexes
DEALLOCATE CursorIndexes
--PRINT LTRIM(RTRIM(CAST(@NumRows as varchar(10)))) + ' Indexes Recreated from stored definitions'
END

By having my update script call sp_DropIndexes before the update and then call sp_RebuildIndexes afterwards, it was very easy to drop the indexes, run the updates and then re-create the indexes in a reasonable time period, without having to continually revise code.
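As a sketch of that pattern (the table and update below are placeholders, not the actual ODS tables), the update script ends up looking like this:

EXEC dbo.sp_DropIndexes;        -- definitions saved to WORK.IDXDEF, then indexes dropped

UPDATE dbo.SomeLargeTable       -- the actual update work goes here
SET SomeColumn = 'NewValue'
WHERE SomeOtherColumn = 42;

EXEC dbo.sp_RebuildIndexes;     -- indexes recreated from the stored definitions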

MSDN Blogs: Froggy goes to Seattle: the World Championship and getting ready to head home


On the final day of Imagine Cup 2016, the first-place winners from each category competed once more in the World Championship, which selects a single overall winner, the World Champion. The three teams that took the stage in turn were the Games winner from Thailand, the Innovation winner from Romania, and the World Citizenship winner from Greece.


Three judges were tasked with choosing the World Champion: Jennifer Tang (Imagine Cup 2014 World Champion), Kasey Champion (Software Engineer at Microsoft) and John Boyega (the actor who plays Finn in Star Wars: The Force Awakens). The three judges chose ENTy from Romania as the World Champion!

With the World Champion chosen, the entire Imagine Cup 2016 season around the world has come to an end. Now we await the official announcement about Imagine Cup 2017.

After the World Championship was over, the None Developers team took some time to walk around downtown Seattle and shop for souvenirs. After that, the whole group returned to the dorms for the Imagine Cup 2016 Closing Party, attended by all of the World Finalists.


Tonight the whole group will start packing, because the main group departs on a 9 AM flight, which means leaving for the airport by bus at 6 AM. The second group will follow by bus at 10 AM for a 2 PM flight. Please keep us in your prayers so that our journey goes smoothly!

MSDN Blogs: BizTalk 2013: Suspended Error Message – The message found multiple request response subscriptions. A message can only be routed to a single request response subscription


BizTalk 2013: if you find the suspended messages below in your application:

  1. The message found multiple request response subscriptions. A message can only be routed to a single request response subscription.

  2. This service instance exists to help debug routing failures for instance “{84685AE1-3D71-49E0-BB16-87B6A3049AFD}”. The context of the message associated with this instance contains all the promoted properties at the time of the routing failure.

This can happen when multiple ports/orchestrations are trying to listen to the same request message. Check in your application whether a previous version of the orchestration is in the “Stopped” state or is running; it should be in the unenlisted state, or remove the old versions. Also check the receive ports and send ports for ports that are listening to the same type of message.


MSDN Blogs: Adding/Updating SharePoint O365 Calendar Event


To add or update a calendar event in SharePoint O365:

First connect to the SharePoint site

Connect to SharePoint site

 

Now if the listItemCollection already has data, update it; otherwise insert a new calendar event.

SharePoint2
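Since the screenshots above do not reproduce well here, the following is a minimal sketch of the same idea using the SharePoint Online CSOM from PowerShell (the site URL, credentials, list name and field values are placeholders; the assembly paths assume the SharePoint Online Client Components SDK is installed):

Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"

# Connect to the SharePoint Online site
$ctx = New-Object Microsoft.SharePoint.Client.ClientContext("https://contoso.sharepoint.com/sites/team")
$password = Read-Host "Password" -AsSecureString
$ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials("user@contoso.onmicrosoft.com", $password)

# Add a new event to the calendar list
$list = $ctx.Web.Lists.GetByTitle("Calendar")
$itemInfo = New-Object Microsoft.SharePoint.Client.ListItemCreationInformation
$item = $list.AddItem($itemInfo)
$item["Title"]     = "Team meeting"
$item["EventDate"] = [DateTime]::Now.AddDays(1)
$item["EndDate"]   = [DateTime]::Now.AddDays(1).AddHours(1)
$item.Update()
$ctx.ExecuteQuery()

To update an existing event instead, load the list items, pick the one you want, set its fields and call Update() followed by ExecuteQuery() in the same way.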

 

 

MSDN Blogs: Unable to start debugging on the web server. Operation not supported. Unknown error. 0x80004005


I was trying to run my application hosted in IIS from Visual Studio. On pressing F5, it gave me the following error: “Unable to start debugging on the web server. Operation not supported. Unknown error. 0x80004005”

 

 

Go to the application pool in IIS under which your application is running. Right-click -> Advanced Settings, and set Enable 32-Bit Applications to True.
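The same setting can also be flipped from PowerShell with the WebAdministration module (a sketch; replace "MyAppPool" with the name of the application pool your site actually runs under):

Import-Module WebAdministration

# Set "Enable 32-Bit Applications" to True for the application pool
Set-ItemProperty -Path "IIS:\AppPools\MyAppPool" -Name enable32BitAppOnWin64 -Value $true

# Recycle the pool so the change takes effect
Restart-WebAppPool -Name "MyAppPool"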

 

MSDN Blogs: August 11 workshop: “The Internet of Things from the field: from hardware to Azure”


What will the meetup cover?

When building an IoT solution you have to take care of everything, starting with the devices. We will have Raspberry Pi 2 IoT kits (https://www.adafruit.com/product/2733), as well as an assortment of sensors, wires and other goodies!

After looking at how to work with the device, we will show how to collect data from devices, send it to the cloud and, time permitting, visualize it and do other useful things. We will use Windows 10 IoT Core, Azure IoT Hub and Power BI. We will also talk about security, the .NET Micro Framework and anything else the attendees want to discuss.

Language of the meetup: Russian/English

What attendees need:

A Windows 10 computer with Visual Studio installed (with the UWP extensions) (https://developer.microsoft.com/en-us/windows/iot/win10/kitsetuppcrpi). You can download Visual Studio Community Edition: it is free and has everything you need.

Speakers:

  • Catalin Gheorghiu

Solution architect, I Computer Solutions, MVP, Romania

Catalin Gheorghiu is a solution architect from Timisoara. He currently develops applications for mobile and cloud platforms. Catalin is not only a developer and solution architect but also a trainer and consultant. In his free time he is active in the technical community, publishes articles and blog posts for several user groups (MrSmersh), speaks all over Romania and abroad, and leads the RONUA Timisoara user group.

He has received the Microsoft MVP Award every year since 2011.

  • Alexander Surkov

He has been developing microcontroller-based devices and the software for them since 2006. Since 2011 he has led the security systems development department at a startup. Besides microcontrollers he also works on video analytics systems. In his free time he works on a hobby project, a 2D soccer game, and writes articles on Habr.

Register now, it will be interesting!

MSDN Blogs: WCF: POC for SAML Token Creation and Consumption


WCF: POC for SAML Token Creation and Consumption

Agenda:

We will understand how we can create a custom SAML token from code and how it can be used to test against a WCF service.

Importance:

This comes in handy when we need to work on interop scenarios and handle SAML tokens received from Java clients.

Security Requirement:

Service:

Protocol: Http

Client Credential at soap envelope level: SAML Token

Client:

         Technology: Java Client (For testing, we will use Fiddler)

         SAML Token: Client can get it from some STS or it can be hard coded.

Signing:

Although we need the credential at the SOAP envelope level, we cannot use pure message security, because the client will never sign the BODY; it only sends a signed SAML token.

Expectation:

WCF should do signature verification for the received signed SAML token and then parse the SAML token to perform authentication/authorization.

Challenges:

The biggest challenge is to parse the received SAML token for authentication and authorization.

Incoming Request:

Untitled

Service Security, achieved from custom binding:

static Binding GetBinding()

{

CustomBinding result = new CustomBinding();

TextMessageEncodingBindingElement myEncoding = new TextMessageEncodingBindingElement();

myEncoding.MessageVersion = MessageVersion.Soap12;

//myEncoding.MessageVersion = MessageVersion.Soap12WSAddressing10;

result.Elements.Add(myEncoding);

//SecurityBindingElement mySec = SecurityBindingElement.CreateMutualCertificateBindingElement(MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10);

IssuedSecurityTokenParameters myTokenReq = new IssuedSecurityTokenParameters();

myTokenReq.TokenType = "urn:oasis:names:tc:SAML:2.0:assertion";

myTokenReq.KeyType = System.IdentityModel.Tokens.SecurityKeyType.BearerKey;

SecurityBindingElement mySec= SecurityBindingElement.CreateIssuedTokenOverTransportBindingElement(myTokenReq);

mySec.AllowInsecureTransport = true;

mySec.EnableUnsecuredResponse = true;

mySec.MessageSecurityVersion = MessageSecurityVersion.Default;

//mySec.MessageSecurityVersion = MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10;

mySec.SecurityHeaderLayout = SecurityHeaderLayout.Lax;

mySec.LocalServiceSettings.DetectReplays = false;

mySec.IncludeTimestamp = false;

result.Elements.Add(mySec);

HttpTransportBindingElement myTransport = new HttpTransportBindingElement();

result.Elements.Add(myTransport);

return result;

}

Please Note:

I have intentionally left a few commented lines in place, to make it easy to switch to WS-Addressing, or to client certificate credentials, as well.

Service Host:

public static void Test()
{
    string baseAddress = "http://" + "saurabswin7.fareast.corp.microsoft.com" + ":8000/Service";
    ServiceHost host = new ServiceHost(typeof(Service), new Uri(baseAddress));
    host.AddServiceEndpoint(typeof(ITest), GetBinding(), "");

    ServiceMetadataBehavior myMeta = new ServiceMetadataBehavior();
    myMeta.HttpGetEnabled = true;
    host.Description.Behaviors.Add(myMeta);

    host.Description.Behaviors.Remove<ServiceCredentials>();
    host.Description.Behaviors.Add(new MyCred());

    host.Open();
    Console.WriteLine("Host opened");
    Console.Write("Press ENTER to close the host");
    Console.ReadLine();
    host.Close();
}

 

To parse the SAML token, I need to add my custom credentials:

 

Custom Credentials:

 [Screenshot: MyCred custom ServiceCredentials class]
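The screenshot above is not reproduced here. As a minimal sketch, assuming the usual shape of such a class, MyCred (the behavior added in the service host code) derives from ServiceCredentials and hands WCF a custom token manager; the name MySecurityTokenManager and the wiring below are assumptions, and the full version is in the OneDrive share linked further down.

using System.IdentityModel.Selectors;
using System.ServiceModel.Description;

public class MyCred : ServiceCredentials
{
    public MyCred() { }
    protected MyCred(MyCred other) : base(other) { }

    // WCF calls this to get the token manager that creates the
    // authenticator and serializer used for incoming tokens.
    public override SecurityTokenManager CreateSecurityTokenManager()
    {
        return new MySecurityTokenManager(this);
    }

    protected override ServiceCredentials CloneCore()
    {
        return new MyCred(this);
    }
}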

 

 

 

 

Custom Security Token Manager:

[Screenshot: custom security token manager class]
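Again, the screenshot is not available. A minimal sketch of the token manager (the class name MySecurityTokenManager is assumed) extends ServiceCredentialsSecurityTokenManager and returns the custom authenticator for the SAML 2.0 token type configured on the binding, plus the custom serializer described below.

using System.IdentityModel.Selectors;
using System.ServiceModel.Security;

public class MySecurityTokenManager : ServiceCredentialsSecurityTokenManager
{
    public MySecurityTokenManager(MyCred credentials) : base(credentials) { }

    // Hand out our authenticator when WCF asks for one for the SAML 2.0 token type;
    // everything else falls back to the default behavior.
    public override SecurityTokenAuthenticator CreateSecurityTokenAuthenticator(
        SecurityTokenRequirement tokenRequirement,
        out SecurityTokenResolver outOfBandTokenResolver)
    {
        if (tokenRequirement.TokenType == "urn:oasis:names:tc:SAML:2.0:assertion")
        {
            outOfBandTokenResolver = null;
            return new MyCustomAuthenticator();
        }
        return base.CreateSecurityTokenAuthenticator(tokenRequirement, out outOfBandTokenResolver);
    }

    // Hand out our serializer so ReadTokenCore gets a chance to parse the raw SAML assertion.
    public override SecurityTokenSerializer CreateSecurityTokenSerializer(SecurityTokenVersion version)
    {
        return new MyTokenSerializer();
    }
}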

 

 

MyCustomAuthenticator:

[Screenshot: MyCustomAuthenticator class]
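Since the screenshot is not shown, here is a minimal sketch of what MyCustomAuthenticator could look like. It assumes the serializer below produces a Saml2SecurityToken; the issuer check and the claims it emits are placeholders, not values from the original post.

using System;
using System.Collections.ObjectModel;
using System.IdentityModel.Claims;
using System.IdentityModel.Policy;
using System.IdentityModel.Selectors;
using System.IdentityModel.Tokens;

public class MyCustomAuthenticator : SecurityTokenAuthenticator
{
    protected override bool CanValidateTokenCore(SecurityToken token)
    {
        return token is Saml2SecurityToken;
    }

    // Authentication/authorization of the parsed SAML token happens here.
    protected override ReadOnlyCollection<IAuthorizationPolicy> ValidateTokenCore(SecurityToken token)
    {
        var samlToken = (Saml2SecurityToken)token;

        // Placeholder issuer check; real issuer/audience/lifetime rules go here.
        if (samlToken.Assertion.Issuer.Value != "urn:expected-issuer")
        {
            throw new SecurityTokenValidationException("Unexpected issuer.");
        }

        // Surface the subject as a name claim so the service can authorize on it.
        var claims = new DefaultClaimSet(
            Claim.CreateNameClaim(samlToken.Assertion.Subject.NameId.Value));

        return new ReadOnlyCollection<IAuthorizationPolicy>(
            new IAuthorizationPolicy[] { new SamlAuthorizationPolicy(claims) });
    }

    // Minimal policy that pushes the extracted claims into the evaluation context.
    private class SamlAuthorizationPolicy : IAuthorizationPolicy
    {
        private readonly ClaimSet claims;
        private readonly string id = Guid.NewGuid().ToString();

        public SamlAuthorizationPolicy(ClaimSet claims) { this.claims = claims; }

        public string Id { get { return id; } }
        public ClaimSet Issuer { get { return ClaimSet.System; } }

        public bool Evaluate(EvaluationContext evaluationContext, ref object state)
        {
            evaluationContext.AddClaimSet(this, claims);
            return true;
        }
    }
}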

 

 

MyTokenSerializer:

 [Screenshot: MyTokenSerializer class]
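Again the screenshot is not available. A minimal sketch of MyTokenSerializer might extend WSSecurityTokenSerializer and use a Saml2SecurityTokenHandler to parse the incoming assertion; how the issuer signing certificate is resolved is an assumption here, and the full version is in the OneDrive share.

using System.IdentityModel.Selectors;
using System.IdentityModel.Tokens;
using System.ServiceModel.Security;
using System.Xml;

public class MyTokenSerializer : WSSecurityTokenSerializer
{
    private const string Saml2Namespace = "urn:oasis:names:tc:SAML:2.0:assertion";

    protected override bool CanReadTokenCore(XmlReader reader)
    {
        return reader.IsStartElement("Assertion", Saml2Namespace) || base.CanReadTokenCore(reader);
    }

    protected override SecurityToken ReadTokenCore(XmlReader reader, SecurityTokenResolver tokenResolver)
    {
        if (reader.IsStartElement("Assertion", Saml2Namespace))
        {
            // Parse the raw SAML 2.0 assertion into a Saml2SecurityToken.
            // The default handler configuration supplies an issuer token resolver
            // for signature verification; adjust it to trust your STS certificate.
            var handler = new Saml2SecurityTokenHandler
            {
                Configuration = new SecurityTokenHandlerConfiguration()
            };
            return handler.ReadToken(reader);
        }
        return base.ReadTokenCore(reader, tokenResolver);
    }
}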

 

Useful methods to remember:

protected override System.Collections.ObjectModel.ReadOnlyCollection<System.IdentityModel.Policy.IAuthorizationPolicy> ValidateTokenCore(System.IdentityModel.Tokens.SecurityToken token)

protected override SecurityToken ReadTokenCore(XmlReader reader, SecurityTokenResolver tokenResolver)

Creating/Getting SAML Token:

To test the service, we need to create or obtain a SAML token and then send it to the WCF service created with the above configuration. I struggled a lot trying to create a sample STS and get the token, so I am sharing the relevant code to create a SAML token easily.
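The actual creator classes are in the OneDrive share below. As a rough sketch of the idea, assuming a signing certificate in a local .pfx file and placeholder issuer/claim values, a signed SAML 2.0 assertion can be produced with Saml2SecurityTokenHandler like this:

using System;
using System.IdentityModel.Protocols.WSTrust;
using System.IdentityModel.Tokens;
using System.Security.Claims;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Xml;

static class SamlTokenCreator
{
    public static string CreateSamlToken()
    {
        // Certificate used to sign the assertion (file name and password are placeholders).
        var signingCert = new X509Certificate2("SigningCert.pfx", "password");

        var descriptor = new SecurityTokenDescriptor
        {
            TokenIssuerName = "urn:sample-issuer",   // placeholder issuer
            AppliesToAddress = "http://saurabswin7.fareast.corp.microsoft.com:8000/Service",
            Subject = new ClaimsIdentity(new[] { new Claim(ClaimTypes.Name, "TestUser") }),
            Lifetime = new Lifetime(DateTime.UtcNow, DateTime.UtcNow.AddHours(1)),
            SigningCredentials = new X509SigningCredentials(signingCert)
        };

        var handler = new Saml2SecurityTokenHandler();
        SecurityToken token = handler.CreateToken(descriptor);

        // Serialize the signed assertion so it can be pasted into the Fiddler request body.
        var sb = new StringBuilder();
        using (var writer = XmlWriter.Create(sb))
        {
            handler.WriteToken(writer, token);
        }
        return sb.ToString();
    }
}

The resulting XML string is what goes inside the Security header of the SOAP request body.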

 

OneDrive share (SAML token creator and WCF service authenticator classes):

https://1drv.ms/f/s!ArgnWb8iHXB6gpRsElvkqkuVjbaAvA

 

To test the service, we can use Fiddler:

Composer view:

[Screenshot: Fiddler Composer view of the test request]

 

Here I have added extra headers to support the addressing of the request:

Host: saurabswin7.fareast.corp.microsoft.com:8000

Content-Type: application/soap+xml; charset=utf-8; action="http://tempuri.org/ITest/Echo"

For the body, get the XML file from the OneDrive share.

Hope this helps when working on SAML tokens!

Thanks

Saurabh Somani

