Retrocomputing – Installing localized USB drivers on Windows 98SE (nUSB 3.6)

I was playing around with my recently acquired Pixel x86 mini computer. This machine can essentially run its hard drive from an SD card.

For nostalgic reasons I also wanted to run a localized version of Windows 98 Second Edition (in my case Dutch). As this computer has built-in USB 1.1 and 2.0, better drivers are needed: out of the box, Windows 98SE only has USB 1.1 support and lacks proper device drivers for various USB devices.

The unofficial nUSB36 package solves this by providing improved USB 1.1 support and adding USB 2.0 support. However, it has one disadvantage: it does not work well with localized versions of Windows, as it is based on the English version. It overwrites various Windows core components, causing for example the start menu to revert to English, which is unfortunate when running a localized version. In this article I will describe how you can modify the driver package into a localized version with the same functionality.

Note: Windows 98 First Edition is not supported, as it lacks WDM (Windows Driver Model) USB support, which is required to install these drivers.

Prerequisites

First make sure to download both the original nUSB36e package and the Q242975 update (the last update package Microsoft released for Windows 98SE, containing patches to the standard USB drivers and utilities). The update package contains all available localized variants.

The update package is hosted here; it is hardly hosted anywhere nowadays, as interest in these kinds of packages is very limited, so I bundled all the languages I could find online.

Preparing the driver package

First extract the nUSB36e.exe file to a new folder using a tool like 7-Zip. The folder should have contents similar to those shown below (Note: make working with these files in Windows easier by making sure hidden and system files are shown and file extensions are enabled):

The folder contains a few system files which overwrite the default Windows system files with English versions.

Next extract the Q242975 update package. First determine which exe file you need: the format is 242975<threeletterlanguagecode>8.exe. For example, for Dutch the abbreviation is dut, so the file you need is 242975dut8.exe. Extract this file to a new folder, again with a tool like 7-Zip. The folder contents should look like this:
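As a side note: the 7-Zip command-line version can usually unpack these self-extracting archives as well. A rough sketch, assuming 7-Zip is installed and on the PATH, and with output folder names of my own choosing:

7z x nUSB36e.exe -onusb36e
7z x 242975dut8.exe -o242975dut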

Now make sure to copy the following files from the extracted update folder into the earlier extracted nUSB36e package (a command-line recap of these copy steps follows after the CD-ROM step below):

  • user.exe
  • user32.exe
  • qfecheck.exe
  • qfecheck.hlp
  • hardware.hlp
  • systray.exe
  • hotplug.dll

Next, open a copy of the Windows 98SE CD-ROM (nowadays you can find ISOs on WinWorld) and open the WIN98/WIN98_25.CAB file using a tool like 7-Zip. Copy the sysdm.cpl file and overwrite the one in the nUSB36e folder. The package overwrites this file, which would otherwise cause Windows to identify itself as Windows ME in the computer properties screen.
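To recap the preparation, here is a rough sketch of the same steps typed at an interactive cmd prompt (the folder names and the D: drive letter for the CD-ROM/ISO are assumptions based on my setup above):

rem Copy the localized system files over the English ones in the nUSB package
for %f in (user.exe user32.exe qfecheck.exe qfecheck.hlp hardware.hlp systray.exe hotplug.dll) do copy /y 242975dut\%f nusb36e\

rem Extract the original localized sysdm.cpl from the Windows 98SE CD-ROM
7z e D:\WIN98\WIN98_25.CAB sysdm.cpl -onusb36e -y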

Installation

Copy the newly created driver folder to the target machine. This can be done in multiple ways: an SMB share, floppy disks, or by copying it directly onto, for example, an SD card like the one I use in my system.

Then boot Windows 98SE on the target system and open the folder with the prepared package in Explorer:

Double-click the _start.bat file. It automatically installs the whole package and asks to reboot the machine. Reboot to finish installing the new driver library.

After the reboot, open the Device Manager. The USB hubs are probably not recognized immediately. Right-click the unrecognized device and choose Properties, go to the Device Driver tab and choose to update the device driver. In the wizard, select searching for a better device driver and untick all options. The Windows device driver library then gets rebuilt with the newly copied inf files, and Windows will propose to install the USB drivers. This may need to be repeated a couple of times, as the Mass Storage Device driver must also be installed for devices such as external thumb drives or CD-ROM players. After completion, the Device Manager will look something like this:

When a device is plugged in, you will also notice the Eject icon in the taskbar tray. Clicking it should show a localized version of the Hotplug menu:

After this the installation is fully complete. Don't expect fast USB transfer rates like those of modern Windows devices, or even of older computers running Windows XP, which has far better USB support including native USB 2.0.

Azure DevOps pipeline issues: GLIBCXX 3.4.20 in combination with Bicep 0.25

Recently I was working on deploying infrastructure in Azure using a shared agent pool running on Red Hat Enterprise Linux (RHEL) 7.9 images. From one day to the next, my pipelines started failing with errors that were not there before:

ERROR: /var/adoagent/_work/_temp/.azclitask/bin/bicep: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /var/adoagent/_work/_temp/.azclitask/bin/bicep)

After analysis, it turned out to be related to Bicep version 0.25, which implicitly upgraded its runtime library dependencies. The error is about GLIBCXX_3.4.20, a symbol version in libstdc++; as the libstdc++ shipped with RHEL 7.9 does not provide it, deployments fail.

This behavior can occur in existing pipelines without any recent changes, when relying on Azure CLI tasks in combination with the az deployment command. This command implicitly downloads the Bicep CLI to parse Bicep templates. By default it uses the latest version, so when a new version with different dependencies is released, it can suddenly break your pipeline and prevent it from working.

Short term fix

As a short term fix, there is an easy way to overcome this issue: adjust your pipeline scripts to pin a Bicep version that still runs on your agents. Before executing az deployment in your script, add the following line:

az bicep install --version v0.24.24

This small adjustment is, however, not a permanent fix: you will no longer receive new Bicep updates, which might introduce other issues in the future.
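For reference, a minimal sketch of what this could look like in a YAML pipeline using the Azure CLI task (the service connection, resource group and template file names are placeholders of my own, and v0.24.24 is the pinned version from above):

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-arm-service-connection'  # placeholder name
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # Pin Bicep instead of letting az deployment download the latest version
      az bicep install --version v0.24.24
      az deployment group create \
        --resource-group my-rg \
        --template-file main.bicep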

Long term fix

The long term fix is to make sure your agents run a version of Linux which ships newer GLIBC/libstdc++ libraries. This allows you to keep up to date with new Bicep features and stay aligned with new Azure technology that is launched over time.
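As a quick diagnostic, you can check which GLIBCXX symbol versions the libstdc++ on an agent actually provides (assuming the usual /lib64 location on RHEL):

strings /lib64/libstdc++.so.6 | grep GLIBCXX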

Home Assistant and Somfy RTS with RFXCom

Time for a new post about a completely different subject. Last year we bought a new house, so it was time to start with a more serious smart home setup. I decided to go for Home Assistant, installed on a Raspberry Pi 4B paired with an SSD for reliability (SD cards wear out over time when a lot of data is written to them).

We installed electric roller blinds at various windows to keep heat out during summer, and wanted a way to control them with Home Assistant as well. The roller blinds are equipped with Somfy RTS motors, which can be controlled using a remote. The advantage of RTS motors is that they can easily be controlled with external hardware; the downside is that communication is one-directional only. The motors do not report the state of components back, so when you open a shutter, the system never knows whether it actually opened, nor how far. Somfy io (also called io-homecontrol) does support this, but it is a proprietary protocol and requires specific hardware (like a TaHoma switch) which also needs to be connected to the cloud, and is just another brick in your house that needs to be powered 24/7. Note: make sure to check which motor type you have, or check before you buy!

To integrate Somfy RTS into Home Assistant, I decided to buy a specific piece of hardware that can send RF signals to the Somfy RTS motors around the house: the RFXtrx433XL, which connects over USB to a Raspberry Pi or other device and transmits RF signals (on 433 MHz) to various devices. However, I struggled with configuring the Somfy RTS motors the first time and couldn't find an easy guide, so here I describe the steps to configure RFXCom properly together with Somfy RTS. Later I also explain how to configure the devices in Home Assistant, so that you can use them in dashboards, automations, etc.

Programming RFXtrx with Somfy RTS

The first step is to download RFXmngr. Check the link to find the latest available version and install it. Then connect the RFXtrx to your computer and open RFXmngr.

Main screen of RFXmngr on startup

You will see a screen like the one above; click File -> Connect. When an RFXtrx is connected, the COM port is usually selected automatically:

Note: There are also RFXtrx versions with network capabilities which are not directly connected to a computer. In that case it should be possible to connect over Ethernet using TCP/IP with the right IP address and port number.

Then navigate to the RFY tab:

The RFY tab; this section allows programming of Somfy RTS motors (and other similar systems)

Here a few things are important:

  • ID
    The unique ID of the (virtual) remote that the RFXtrx will emulate
  • Unit Code
    Each unique ID supports up to 5 devices, which are called units
  • Command
    The command to send to the device. In this case we must set it to Program, as we're going to pair some shutters so we can control them with the RFXtrx.

My suggestion is to keep a spreadsheet of all devices and unique IDs. I started with ID 00 00 00 and Unit Code 0. For the second device I increased the Unit Code, and after Unit Code 4 I increased the ID to 00 00 01 and started again at Unit Code 0, and so on. This makes it easier to program all the devices in Home Assistant with the right code later. Also prefix the Unit Code with a 0 in your spreadsheet, as you will need it in that format later when adding devices in Home Assistant.

The next step is to put the devices into programming mode and pair them with the RFXtrx. To do this you need a few items:

  • Remotes of all devices you want to program
  • A pen or small screwdriver to put the remote in programming mode

First, in RFXmngr, set the ID and Unit Code to the desired configuration and make sure Command is set to Program. Then walk to the device you want to program, taking its remote with you. On the back of the remote is a small hole; short-circuit it with a pen or screwdriver to put the device into programming mode. The device should respond by briefly opening and closing. Sometimes this is a bit fiddly and takes a few tries.

When the device has indicated it is in programming mode, click the Transmit button in RFXmngr. The device should again briefly open and close, indicating it has been programmed. Write down in your spreadsheet which device it is, so you can rename the device properly in Home Assistant later. You can now also easily test the device: change the Command to Down, click Transmit again, and the device should move to the down position.

Repeat this process until all devices have been programmed, and make sure each device has a unique combination of ID and Unit Code! When finished, close RFXmngr and disconnect the RFXtrx from the computer.

Configure RFXCOM in Home Assistant

Connect the programmed RFXtrx to the device running Home Assistant. Next, open Home Assistant and go to Settings -> Integrations. Click Add Integration and search for rfxcom:

Add the integration and also check the box to automatically add new devices. For some reason this did not always work in my case, so you might need to add devices to Home Assistant manually. I will describe here how to add the earlier programmed devices manually.

As this guide describes, device codes work in a fixed way within RFXCOM. All Somfy RTS devices need to be prefixed with 071a0000, followed by the ID and the Unit Code. So the code should look like this: 071a0000[id][unit_code]

Refer to your spreadsheet and look up each ID and Unit Code. The Unit Code should always be prefixed with a 0: when the Unit Code is, for example, 0 (as shown in the RFXmngr UI), it becomes 00. In our example, the code for the first device is: 071a000000000000
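To avoid mistakes when building these codes by hand, you could generate them with a small script. A minimal Python sketch, assuming the fixed 071a0000 prefix and the ID/Unit Code conventions described above:

def rts_event_code(device_id, unit_code):
    """Build an RFXCOM event code for a Somfy RTS device.

    device_id: six hex digits as noted in the spreadsheet, e.g. "00 00 01"
    unit_code: 0-4, as shown in the RFXmngr UI
    """
    return "071a0000" + device_id.replace(" ", "").lower() + "%02d" % unit_code

# Example: first device, ID 00 00 00, Unit Code 0
print(rts_event_code("00 00 00", 0))  # prints 071a000000000000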

In Home Assistant, on the Integrations screen, click the Configure link on RFXCOM. A window like the one below should open:

In the middle textbox you can manually enter the event code, in this case the code we determined earlier. Enter it and click Save; a new entity should then be created in Home Assistant. Make sure to rename the device to a proper name. You can also assign it to a specific room, which allows you to use it later in automations for that room, etc. It should then look like this:

You can now test the device directly by clicking the up and down icons; the device should move up or down if possible. When that works, the device has been configured correctly. Repeat these steps until all programmed devices have been added to Home Assistant.

After configuring, you can use the entities in dashboards, or in automations, scenes or scripts, just like with any other integration.

Using Select2 and SharePoint 2013 REST API with large lists and infinite scroll

Recently I was involved in a project creating a custom JavaScript application inside SharePoint. One of the requirements was to load data from a large list (> 5000 items) into a Select2 dropdown control. Select2 is a very handy dropdown control for custom applications: it features out-of-the-box search, infinite scroll and many more nice features. However, I struggled for a few days with some specific problems in combination with SharePoint. In this article I'll describe how to use Select2 in combination with the SharePoint REST API and large lists.

Out of the box, Select2 binds itself to an HTML select or input control. Because we need to specify a custom ajax function, Select2 will only work in combination with a hidden input control. Add an input of type hidden to your page, like this:

<input id="selectData" type="hidden" />

We need to fulfill a few important requirements:
– Able to select from a very large list
– Support searching on title
– Fast
– Reliable

To meet these requirements we need to overcome a few challenges. The first is large list support: you need indexed columns for any columns you want to query on. Be aware that you must set the indexes before filling the list with data! The second challenge is how to use Select2 with the SharePoint REST API. Documentation on this specific subject is very rare, and you'll also run into a few problems with OData and SharePoint 2013. The following sample shows how to bind Select2 with search and infinite scroll; it is also fast and reliable. The sample uses a list named Books and filters on the Title field.

var webAbsoluteUrl = _spPageContextInfo.webAbsoluteUrl;

$('#selectData').select2({
    placeholder: 'Select a book title',
    id: function (data) { return data.Title; },
    ajax: {
        url: webAbsoluteUrl + "/_api/web/lists/GetByTitle('Books')/items",
        dataType: "json",
        data: function (term, page, context) {
            var query = "";
            if (term) {
                query = "substringof('" + term + "', Title)";
            }
            if (!context) {
                context = "";
            }
            return {
                "$filter": query,
                 "$top": 100,
                 "$skiptoken": "Paged=TRUE&amp;p_ID=" + context
            };
        },
        results: function (data, page) {
            var context = "";
            var more = false;
            if (data.d.__next) {
                context = getURLParam("p_ID", data.d.__next);
                more = true;
            }
            return { results: data.d.results, more: more, context: context };
        },
        params: {
            contentType: "application/json;odata=verbose",
            headers: {
                "accept": "application/json;odata=verbose"
            }
        }
    },
    formatResult: function (item) {
        return item.Title;
    },
    formatSelection: function (item) {
        return item.Title;
    },
    width: '100%'
});

// Function to get parameter from url
function getURLParam(name, url) {
    // get query string part of url into its own variable
    url = decodeURIComponent(url);
    var query_string = url.split("?");

    // make array of all name/value pairs in query string
    var params = query_string[1].split("&");

    // loop through the parameters
    var i = 0;
    while (i < params.length) {
        // compare param name against arg passed in
        var param_item = params[i].split("=");
        if (param_item[0] == name) {
            // if they match, return the value
            return param_item[1];
        }
        i++;
    }
    return "";
}

The first thing we do is get the current web URL, which is needed to construct the path to the REST API. Then we bind the Select2 control using jQuery. We first define a placeholder: the text shown when nothing is selected. We also specify a function to determine the id. By default, Select2 can't handle the data format coming back from SharePoint, so we manually set the Title field of the book as the identifier. The same applies to formatResult and formatSelection, which make sure the title is shown in the dropdown list.

The most important part is the ajax parameter. First the URL to the REST API is specified; then we set dataType to JSON, because we want all data as JSON rather than XML. We also need to set this explicitly in the contentType and headers of the HTTP calls we make. In the data parameter we specify the logic to handle the queries and make sure the skiptoken is passed. Because of a bug/designed feature in the SharePoint REST API, the skip parameter doesn't work; the linked article describes an alternative using the skiptoken. We store part of this skiptoken in the context variable so that Select2 tracks it for each next call.

The next important part is the results function. It parses the result and grabs the p_ID value to store in the context variable. When the __next value is set, we know more data is available, so we also set the more variable to true. Select2 uses the more variable to determine whether a next page with more items is available for infinite scroll.

How to gather data from SharePoint using the REST API on an external website

Recently I was struggling to gather data from SharePoint in an external web application. I was not able to use the App Model, as the customer didn't have the proper app infrastructure in place. In my example I wanted to call the SharePoint REST API from an external web application to gather search results. This is possible using a service account, but I didn't want that approach because I wanted security-trimmed search results. Passing through credentials is impossible, as the environment was using NTLM authentication, which leads to the double-hop problem. Activating Kerberos would solve this, because it features delegation, but that was also not feasible, as it requires modifying the farm configuration, which was not allowed. The next idea was to do it client side!

Client side means, in our case, using JavaScript: the browser performs the request to SharePoint and then processes the results to display them in the external web application. The advantage is that the end user's credentials are used to authenticate against SharePoint, so security-trimmed results are retrieved. Starting with this idea, I immediately ran into browser security restrictions. The web application runs on a different domain than SharePoint, so you'll face cross-domain issues. There are a few approaches to solve this:
– CORS (Cross-origin resource sharing)
– JSONP (JSON with padding)
– IFrame with PostMessage API

CORS is quite easy to implement in combination with jQuery. However, it is not usable with SharePoint, because you need to change the server configuration: CORS only works when the server sends Access-Control-Allow-Origin headers. This requires server modifications and is therefore not a solution in our case.

JSONP is an alternative for doing web service requests: instead of doing a normal JSON request, you load the web service request as a script, with a callback method that gets the data as an argument. This technique is, however, also not usable in combination with SharePoint. SharePoint sends the HTTP header X-Content-Type-Options: nosniff with all responses. Because of browser security improvements, the browser performs an additional check to see whether the MIME type is correct; SharePoint doesn't send a MIME type that is allowed in script tags, so the browser will block the script. There is an article on MSDN about these security measures in Internet Explorer. The conclusion is that this technique is not usable in our case either.

Another option is to use IFrames with PostMessage. PostMessage is an API introduced as part of HTML5, and most browsers support it (Internet Explorer since IE8). The idea is to embed a SharePoint page in an IFrame in the web application. The SharePoint page contains some JavaScript that calls the REST API and sends back the results using PostMessage. When using IFrames you'll need to tackle two problems. First, both the web application and SharePoint must use the same scheme (HTTP or HTTPS); mixing them gives security warnings in the browser. Second, SharePoint sends an HTTP header by default which blocks SharePoint content from being used in IFrames. Luckily there is a solution for this: include the following control on your ASPX page to allow framing:

<% Register Tagprefix="WebPartPages" Namespace="Microsoft.SharePoint.WebPartPages" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<WebPartPages:AllowFraming runat="server"/>

When added to the page, this control removes the X-FRAME-OPTIONS HTTP header which prevents displaying SharePoint content in an IFrame/frame. Be aware that this only works on ASPX pages! Otherwise you'll still need server modifications by configuring IIS. The ASPX page can, for example, be uploaded to the Style Library of a site collection. This library normally has read permissions for all authenticated users, which is typically useful for scenarios like this.

In the following sample I will show some code to send information cross-domain using the SharePoint REST API. The first step is to create a web application outside SharePoint, or just an HTML page with some JavaScript. Add an IFrame with the URL of the helper page we will create later, and add the jQuery script library. Then use the following code to make sure we can receive messages from the helper page in SharePoint:

$(document).ready(function () {
    if (typeof window.addEventListener !== "undefined") {
        window.addEventListener("message", receiveMessage, false);
    } else if (typeof window.attachEvent !== "undefined") {
        window.attachEvent("onmessage", receiveMessage);
    }
});

function receiveMessage(event) {
    var eventData;
    try {
        eventData = JSON.parse(event.data);
        // Implement your logic here!
    } catch (error) {
        // Implement some error handling here!
    }
}

In the sample above we use jQuery to wait until the page is loaded. Then we add an event listener to the window object to receive messages via the PostMessage API. When a message is received, the receiveMessage function is called; there we parse the JSON message into a JavaScript object and can implement our logic.

Now that we have the application page in place, we need to create the helper page to upload to SharePoint. Create a new ASPX page (you can adapt a standard publishing page in SharePoint), add the AllowFraming control to it, and add a script reference to a JavaScript file we will upload as well. In the new JavaScript file, add the following code:

var origin = "http://urltowebapplication.com"; // Make sure that origin matches the URL of your web application with the iframe!

// Start executing rest call when ready
$(document).ready(function() {
  callRestAPI();
});

function callRestAPI() {
    // Construct REST API call to retrieve details about the current web
    var restUrl = _spPageContextInfo.webAbsoluteUrl + "/_api/web";
    $.ajax({
        url: restUrl,
        method: "GET",
        headers: {
            "accept": "application/json;odata=verbose"
        },
        success: onSuccess,
        error: onError
    });
}

function onSuccess(data) {
    if (data) {
        var response = {
            header: "response",
            message: data
        };
        parent.postMessage(JSON.stringify(response), origin);
    }
}

function onError(err) {
    var response = {
        header: "error",
        message: err
    };
    parent.postMessage(JSON.stringify(response), origin);
}

The code sample above again uses jQuery to wait until the page is loaded. It then constructs a URL to call the SharePoint REST API and retrieve details about the Web object. You can extend this, for example by passing parameters via the query string, to make it more dynamic. It then calls the REST API using an ajax call, with headers to make sure JSON is returned instead of XML. Finally, postMessage is used to send a JSON message to the parent, both when the call was successful and when it failed. On the other side, the header field of the response can be checked to see what kind of data has been sent. This concept can be extended with things like two-way communication using PostMessage.

The disadvantage of all this is that everything runs in the browser, and automatic logon in the browser needs to be supported. Otherwise an access denied page will be loaded instead of the helper page, resulting in no messages being sent. You can't detect this properly from code, because access to the DOM of cross-domain IFrame content is blocked. The only thing you can do is implement a timeout mechanism and always send a lifesign signal using PostMessage once the page has loaded, to let the other page know that everything is ok.
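As a rough sketch of such a timeout mechanism (the "lifesign" header name and the 5-second limit are my own choices, not part of any API), the hosting page could do something like this:

// In the hosting (external) application:
var helperLoaded = false;
var watchdog = setTimeout(function () {
    if (!helperLoaded) {
        // No lifesign received: assume access denied or a load failure
        alert("SharePoint helper page did not respond in time.");
    }
}, 5000);

function receiveMessage(event) {
    var eventData = JSON.parse(event.data);
    if (eventData.header === "lifesign") {
        helperLoaded = true;
        clearTimeout(watchdog);
        return;
    }
    // ... handle "response" and "error" messages here ...
}

// In the SharePoint helper page, send the lifesign immediately on load:
// $(document).ready(function () {
//     parent.postMessage(JSON.stringify({ header: "lifesign" }), origin);
// });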

Installing SharePoint 2013 Apps programmatically: what's possible and what's not?

Last week I was looking for a solution to install SharePoint Apps using the Server Side Object Model (SSOM) or the Client Side Object Model (CSOM). Microsoft does not really support this scenario, as it is not the preferred way to use Apps: normally, end users install Apps themselves on the sites they want. In this post I will cover the possibilities of provisioning Apps using code; I investigated the API to see if there is a way to somehow support the sample scenario below.

Scenario: I want a web service which enables me to provision Apps from the App Catalog site collection, on request, to a web in a specific site collection. The web service should be a full trust code solution which grabs the App from the App Catalog and provisions it. The App Catalog should be used to make it easy to deploy new Apps from a central place and to support versioning in an easy way:
Deploying Apps

Now that we know what we want, we can look at the API present in SharePoint and see what the possibilities are. Microsoft does not offer many APIs for provisioning Apps. When browsing the internet you will probably find a method called LoadAndInstallApp on the SPWeb object. This method, however, only accepts a stream to a binary representation of the .app file (which is actually a zip file containing the application). The disadvantage of this approach is that you have to upload the App to every single web where you want to deploy it, losing the advantage of a central location like the App Catalog for distributing Apps and taking care of versioning. A sample of how to install apps using SSOM can be found on MSDN Blogs.
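For illustration, a minimal SSOM sketch of this approach (the site URL and file path are placeholders of my own):

using System;
using System.IO;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

// Minimal sketch: install an .app package directly on a web
using (SPSite site = new SPSite("http://sharepoint/sites/demo"))
using (SPWeb web = site.OpenWeb())
using (Stream appPackage = File.OpenRead(@"C:\Packages\SampleApp.app"))
{
    // LoadAndInstallApp only accepts a stream to the .app package itself
    SPAppInstance instance = web.LoadAndInstallApp(appPackage);
    Console.WriteLine(instance.Status);
}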

When digging further into the SharePoint DLLs using ILSpy, you will notice that there is no publicly available API to install Apps directly from the App Catalog. Internally some methods are available, as the UI makes it possible to add an app from a corporate catalog to a web. A possible workaround is to retrieve the App file from the App Catalog (internally it is just a list) and forward the stream to the LoadAndInstallApp method. This works, but SharePoint then doesn't register that the App came from the corporate catalog. When a new version of the App is installed in the App Catalog, SharePoint doesn't know that the App installed on that specific web needs updating and can't notify the site owner that an update is available (SharePoint won't push new versions automatically!).

Another thing you'll notice is that when you open the Site Contents page and open the context menu on the installed App, the About option is missing. This page is very important, as it normally allows an end user to see whether an update is available and to install it. So the end user can't update the App properly from the UI.

Conclusion: the API is not mature enough to use in a professional environment for scenarios like this. We have to hope that Microsoft will come with an API to support such scenarios in the future. Currently it does not seem possible to programmatically install an App from the App Catalog into a web.

I also want to mention the CSOM variant of the LoadAndInstallApp method. It is present in both the .NET managed CSOM and the JavaScript library. However, when you call it, you will get an error that it is only supported when sideloading is enabled, and sideloading should only be used in development environments. Therefore CSOM doesn't provide a way to deploy Apps programmatically on production SharePoint environments. The reason is probably to reduce the risk of installing corrupt Apps, or Apps which delete or malform existing data in the web.

SharePoint Search & Migration Part 4: Search Result Types and Display Templates

In a series of posts I am discussing changes in SharePoint Search which might affect you when migrating from SharePoint 2010 to 2013. Search is one of the most changed components in SharePoint 2013, so I have split the posts into four parts covering subjects you need to know before migrating. All parts will be available via the links below once the posts are published:

Part 1: Search Settings
Part 2: Search Web Parts
Part 3: Search Results Sources
Part 4: Search Result Types and Display Templates

In this post I'll discuss Search Result Types and Display Templates. Both are new features coming with the new Search platform in SharePoint 2013; you'll need them when you want to customize the way search results are displayed. In the past, XSLT was the way to go, by editing the search results page and changing the Web Part's XSLT. In 2013 this functionality is no longer available, and Result Types and Display Templates are the way to go.

Let's focus first on Result Types. Microsoft provides a nice article on MSDN which gives you a basic understanding of what they are. In short: when searching, you get back results of different types. A site is different from a library; a Word document is different from an Excel document or an image, etc. Each type of result is defined as a Result Type. Out of the box, a set of result types is predefined: all types of Office documents, pages, sites, libraries, etc. Each type is coupled to a Display Template, which determines how a result of that type is rendered in the Search Results Web Part. Result Types are managed per site collection and web: in the Site Settings menu, select Search Result Types to manage them at site collection level, or Result Types to manage them at web level. After opening the page you'll see a screen with all result types on the site collection or web:

Manage Result Types page for Site Collection Administrator

All out-of-the-box Result Types are read-only. This is similar to Result Sources, where the out-of-the-box settings are also read-only. When we open one of the Result Types, we can see how it's configured and what you can configure:

View Result Type page

For each Result Type you can specify the name and conditions. The conditions filter first on Result Sources; you can also select the option to use all of them if you don't want to filter on it. Next you can filter on the type of result, a predefined set of types which SharePoint recognizes. When you expand "Show more conditions" you have the ability to create filters based on Managed Property values; if you have added additional fields to specific content types and want to filter on them, this is where you do it. As the last option you need to select a Display Template. This list is built from the approved Display Templates in the Master Page Gallery (in the Display Templates subfolder); this is the template used to render a search result matching this Result Type. When created, it will appear in the top section of available Result Types.

Now that we have discussed Result Types, we can focus on Display Templates. Display Templates are the replacement for XSLT and provide a powerful way to create templates within SharePoint. Microsoft again has a good article about what a Display Template is. Display Templates are stored in the Master Page Gallery in a subfolder called Display Templates; inside that folder are subfolders for specific categories of Display Templates. In our case we need to open the Search folder, which should look like the screenshot below:

List of Search Display Templates in the Master Page Gallery.

In case the Publishing Infrastructure feature is not activated on the site collection, you'll only see .js files. Be aware of this! If you don't want to activate it but still want to create or change display templates, read this article from Martin Dreyer, where he describes how to change display templates on non-publishing sites. On publishing sites you should ignore the .js files; they are automatically updated by SharePoint when you make changes to the HTML files. The MSDN article I shared already describes how you can change Display Templates. You can adjust them, for example, to display additional Managed Properties or to change the styling. On the internet you can find a bunch of examples of pretty cool stuff built with display templates. A small selection is listed below:

Add twitter links using Display Templates
Image Slider
Customize Display Templates and deploy them using a solution

Setting up SharePoint 2013 devbox for provider hosted apps

While preparing another article, I tried to debug an empty provider-hosted app solution on my fairly straightforward dev box. However, I experienced some problems setting up the box to allow provider-hosted apps to run on the same machine using IIS Express. I started with a blank SharePoint 2013 App solution in Visual Studio 2012 and tried to deploy it to my machine (using an IIS Express instance to run the actual contents of the app). The first deployment failed because the services needed to run apps were not running. Make sure the appropriate services are running, as shown in the following screenshots:

The App Management Service Application should be started.

The App Management Service should also be started.

After starting the services and performing an iisreset, I could successfully deploy the app to SharePoint. A window opens asking whether to trust the app. After that, I got the following exception:

A token error which was occurring…

After googling around a bit on this error, I found an article which explains why it isn't working. The app tries to authenticate using the so-called low-trust model and expects Access Control Service (ACS) as a trust broker. I decided to use high-trust instead, because I didn't want the dev box to rely on an internet connection to Office 365 ACS. The advantage is that you don't need Office 365 for ACS; the disadvantage is the bunch of configuration work to do. First you need to do some preparation work, which Microsoft describes in the following article. You probably already have a farm installed, so you can start at step 6. To make it easier I have included the full PowerShell script from the article here, with some useful comments:

#Start SharePoint Services
net start spadminv4
net start sptimerv4

#Set App Domain, change name if you want to
Set-SPAppDomain "App-Domain"

#Start the App Management and Subscription Settings service instances and verify they are running
Get-SPServiceInstance | where{$_.GetType().Name -eq "AppManagementServiceInstance" -or $_.GetType().Name -eq "SPSubscriptionSettingsServiceInstance"} | Start-SPServiceInstance
Get-SPServiceInstance | where{$_.GetType().Name -eq "AppManagementServiceInstance" -or $_.GetType().Name -eq "SPSubscriptionSettingsServiceInstance"}

#Create new managed account, remove this line if you already have one!
$account = New-SPManagedAccount

#Create services for app domain, please change domainname\username to correct user
$account = Get-SPManagedAccount "domain\user" 
$appPoolSubSvc = New-SPServiceApplicationPool -Name SettingsServiceAppPool -Account $account
$appPoolAppSvc = New-SPServiceApplicationPool -Name AppServiceAppPool -Account $account
$appSubSvc = New-SPSubscriptionSettingsServiceApplication -ApplicationPool $appPoolSubSvc -Name SettingsServiceApp -DatabaseName SettingsServiceDB
$proxySubSvc = New-SPSubscriptionSettingsServiceApplicationProxy -ServiceApplication $appSubSvc
$appAppSvc = New-SPAppManagementServiceApplication -ApplicationPool $appPoolAppSvc -Name AppServiceApp -DatabaseName AppServiceDB
$proxyAppSvc = New-SPAppManagementServiceApplicationProxy -ServiceApplication $appAppSvc

#Change tenant name if you want to
Set-SPAppSiteSubscriptionName -Name "app" -Confirm:$false

After executing the script you'll see new service applications pop up in Central Administration:

New service applications and proxies have been added by the script.

Then you can start with the preparations to create your high-trust provider-hosted app. Microsoft describes this again in an article. Again, I'm sharing the PowerShell posted along with the article:

#Path to exported certificate, change if needed
$publicCertPath = "C:\Certs\HighTrustSampleCert.cer"

#Read certificate and create trustrootauthority
$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($publicCertPath)
New-SPTrustedRootAuthority -Name "HighTrustSampleCert" -Certificate $certificate

#Create Trusted Security Token Issuer 
$realm = Get-SPAuthenticationRealm
$specificIssuerId = "11111111-1111-1111-1111-111111111111"
$fullIssuerIdentifier = $specificIssuerId + '@' + $realm
New-SPTrustedSecurityTokenIssuer -Name "High Trust Sample Cert" -Certificate $certificate -RegisteredIssuerName $fullIssuerIdentifier -IsTrustBroker
iisreset

#Configure to use without HTTPS, don't use this on NON-DEV boxes!!!
$serviceConfig = Get-SPSecurityTokenServiceConfig
$serviceConfig.AllowOAuthOverHttp = $true
$serviceConfig.Update()

The article also describes what to do when creating the solution in Visual Studio. You need a .pfx export of your certificate, including the private key, to make it work. When running the default sample, you should see a page with the site title displayed on it at the end:

Now you can start creating your app! Happy coding!

SharePoint 2013 Lazy loading Javascript

In SharePoint 2013 the JavaScript loading mechanism seems to have changed a bit. When porting a SharePoint 2010 solution to 2013, I found that some weird script errors were sometimes occurring when calling SharePoint JavaScript libraries. On some pages in SharePoint 2013, not all SharePoint JavaScript libraries are loaded, because of the built-in lazy loading mechanism. This reduces bandwidth when loading pages, since no unneeded libraries are downloaded to the client, but it causes issues when you want to use libraries that are not loaded. The following sample shows how you can load some JavaScript libraries and then automatically call your function that uses them:

//Register SOD's
SP.SOD.registerSod('core.js', '\u002f_layouts\u002fcore.js');
SP.SOD.executeFunc('core.js', false, function(){});
SP.SOD.registerSod('sp.js', '\u002f_layouts\u002fsp.js');
SP.SOD.executeFunc('sp.js', false, function(){});
SP.SOD.registerSod('sp.core.js', '\u002f_layouts\u002fsp.core.js');
SP.SOD.executeFunc('sp.core.js', false, function(){});

function doSomething() {
   //Your Logic here which calls sp core libraries
}

// Load asynchronous all needed libraries
ExecuteOrDelayUntilScriptLoaded(function() { ExecuteOrDelayUntilScriptLoaded(doSomething, 'sp.core.js') }, 'sp.js');

In the example above we're using the SP.SOD library provided with SharePoint. It already existed in SharePoint 2010 and is still present in 2013. With the SOD library it is possible to lazy load JavaScript files and load them at the moment you need them. The sample script consists of three parts. First we register SODs (Script On Demand), defining a key and, as value, the relative path to the JavaScript file; we also call executeFunc with a dummy function to trigger loading of each file. Second, we create a custom function: this is the function in which you want to call specific methods of the loaded JavaScript libraries. Third, we call ExecuteOrDelayUntilScriptLoaded. Because in this sample we want both sp.core.js and sp.js loaded, we nest it in another call to ExecuteOrDelayUntilScriptLoaded and finally let the callback invoke the function that needs the libraries loaded before executing.

This method of loading scripts seems to work well in SharePoint 2013 and can also be used for other out-of-the-box libraries, like the Taxonomy JavaScript library. When your site is still running in SharePoint 2010 mode, however, this doesn't work properly: registering SODs seems to break the 2010 way of loading the out-of-the-box JavaScript files, so there you only need the ExecuteOrDelayUntilScriptLoaded calls. If you need to detect the mode in JavaScript, you can use the SP.Site.get_compatibilityLevel() function to retrieve that information using JSOM and then dynamically decide which method of loading to use.
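For illustration, a minimal JSOM sketch of such a detection (error handling kept trivial; this assumes the CompatibilityLevel property can be loaded like any other JSOM property):

var ctx = SP.ClientContext.get_current();
var site = ctx.get_site();
ctx.load(site, 'CompatibilityLevel');
ctx.executeQueryAsync(function () {
    if (site.get_compatibilityLevel() === 15) {
        // 2013 mode: register SOD's as shown above
    } else {
        // 2010 mode: rely on ExecuteOrDelayUntilScriptLoaded only
    }
}, function (sender, args) {
    // Handle the error here
});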

SharePoint Migration & Search Part 3: Result Sources

In a series of posts I am discussing changes in SharePoint Search which might affect you when migrating from SharePoint 2010 to 2013. Search is one of the most changed components in SharePoint 2013, so I have split the posts into four parts covering subjects you need to know before migrating. All parts will be available via the links below once the posts are published:

Part 1: Search Settings
Part 2: Search Web Parts
Part 3: Search Results Sources
Part 4: Search Result Types and Display Templates

In this post I will cover the new Result Sources functionality in SharePoint 2013 and the impact on migrations from SharePoint 2010. When upgraded to SharePoint 2013, your site collections will initially remain in 2010 mode. In that mode, all functionality present in 2010 keeps working, including Search Scopes. When you upgrade your site collection to 2013 mode, Search Scopes become read-only: editing and deleting is blocked in the UI and in the API, and trying to modify Search Scopes from the API throws an exception. When creating Search Scopes using custom code, you therefore need to check which mode your site collection is running in. You can easily implement that by checking the CompatibilityLevel property of the SPSite object:

using(SPSite&nbsp;site = new SPSite(siteUrl)
{
    if(site.CompatibilityLevel == 14)
    { // Add your 2010 mode code here

    }
    else if(site.CompatibilityLevel == 15)
    { // Add your 2013 mode code here

    }
}

As I explained in Part 2 of this Search series, the Search Web Parts have also changed, which implies that you need to migrate your Search Scopes. Result Sources are their replacement in SharePoint 2013. Result Sources can be managed on three levels: farm, site collection and web. Out of the box, SharePoint provides 16 Result Sources. You can't edit or delete the default Result Sources, but you can create new ones based on them. Creating can be done on all three levels where search settings can be managed, but the new Result Source will only be available within that scope. When adding a new Result Source, a form opens where you select the source to search in, and there is the ability to create custom query transformations using a Query Builder.

With the Query Builder it is possible to create custom query transformations using the User Interface.

Within the custom queries you can include managed properties, which makes Result Sources very useful for creating scopes based on custom fields and content types. Once you have added a Result Source, you can use it in customized search pages and make it available to end users by adding it to the Search Navigation. There's a TechNet blog post which describes this process.
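As an illustrative example (the Invoice content type is made up), a query transformation that narrows results to a specific content type while keeping the user's search terms could look like this:

{searchTerms} ContentType:"Invoice"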