Reducing SharePoint Framework Code Smells: 2 – Setting up a sample for unit testing

This is a series on how to set up SonarQube as a Quality Gate in your SharePoint Framework development process. The end goal is to add SonarQube to your build and release process through DevOps. These articles will explain:

  1. How to set up a sample SonarQube server in Azure
  2. How to set up a unit test sample locally
  3. How to run a sonar-scanner review manually
  4. How to integrate the code review into your Azure DevOps build and release process

It is apparently going to take more blog posts than I expected. But I like to spread these things out – easier to maintain and easier to find what people are looking for.

Introduction

In the previous article we saw how to create a sample SonarQube server in Azure. In this article we will set up a local sample project with unit tests, ready to be scanned against the server we created. The results might be smelly.

In this example we are going to use the SharePoint PnP sample for creating unit tests with React in SPFx.

Setting up the repository locally

Create a new folder for the repo locally and then clone the repository through the terminal with "git clone https://github.com/SharePoint/sp-dev-fx-webparts". We have to take the whole repo, but we are only going to use the React unit testing sample.

Then, once that is complete, navigate to the ./samples/react-jest-testing folder.

Run npm install and we are ready to go.
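Putting those steps together in the terminal:

# clone the full PnP web parts repo (we only need one sample from it)
git clone https://github.com/SharePoint/sp-dev-fx-webparts

# move into the unit testing sample
cd sp-dev-fx-webparts/samples/react-jest-testing

# restore the dependencies
npm install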

Initial npm test

Immediately after the install you can run npm test and see how the sample code holds up under testing. You will get one intentional failure and some code which is not covered by tests. This is to be expected.
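The sample's tests run on Jest, so as well as the plain test run you can ask for a coverage report; the extra flag is passed through to Jest, assuming the sample's test script invokes Jest directly:

# run the test suite; expect one intentional failure
npm test

# pass --coverage through to Jest to see which code is uncovered
npm test -- --coverage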

The reason we add unit tests to a project is ultimately to improve the quality of the code. This reduces maintenance costs, lifts the confidence of the development team, and allows continuous integration builds to identify where breaking code has been introduced within a team's build process.

This project is a great place to start to learn how to unit test within React and the SharePoint Framework.

What we want to be able to do ultimately is collect all of this information on the SonarQube server. We will get to that 🙂

In the next article we will look at sonar scanner and how we hook that up to the SonarQube server.

Reducing SharePoint Framework Code Smells: 1 – Setting up SonarQube in Azure

This is a three part series on how to set up SonarQube as a Quality Gate in your SharePoint Framework development process. The end goal is to add SonarQube to your build and release process through DevOps. These three articles will explain:

  1. How to set up a sample SonarQube server in Azure
  2. How to run a code review manually
  3. How to integrate the code review into your Azure DevOps build and release process.

As part of a quality development process, developers should not only be linting their code, running unit tests and so forth; another step which can be added is a "Code Quality" check using the open source project SonarQube.

In this article we will see how to create a standalone sample SonarQube server in Azure (and locally, if you really want, as well).

Introduction

“SonarQube provides the capability to not only show health of an application but also to highlight issues newly introduced. With a Quality Gate in place, you can fix the leak and therefore improve code quality systematically.” 

In practice, what it means is an additional tool which developers can use to write better, more maintainable code. This increases quality and reduces overall maintenance costs when implemented as part of a continuous build and deploy process.

There are plugins for JavaScript and TypeScript, which makes this very applicable to SharePoint Framework development.

Setting up the server

The first step is to create a SonarQube server upon which your code can be reviewed. Some VERY nice person by the name of vanderby has created an ARM template to “Deploy Sonar Cube to Azure“. It is limited by using an embedded database, but it will at least show you the basics before you are ready to scale this properly.
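If you prefer the command line to the portal's "Deploy to Azure" button, deploying the template from a local clone should look roughly like this (the resource group name, location and template file name are assumptions, not taken from the template repo):

# create a resource group for the sample server (name and location are illustrative)
az group create --name sonarqube-demo --location eastus

# deploy the ARM template from a local clone of the template repo
az deployment group create \
  --resource-group sonarqube-demo \
  --template-file azuredeploy.json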

As the GitHub page states, it does take a while to get started, but once it is up you can start to use it.

To log into the server I used admin/admin. As this is a sample setup it doesn’t really matter.

Creating a project

Once you are set up and running you can create a project and a key which can then be used to access the server from a command line interface (CLI).

Under the administration section, create a new project and, once that is complete, generate a key for your project.

Using these credentials we can test our code from the command line.
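With the project key and token in hand, a manual scan from a project's root folder looks roughly like this (the project key, server URL and token below are placeholders):

# run a scan against the sample server; all values are placeholders
sonar-scanner \
  -Dsonar.projectKey=my-spfx-project \
  -Dsonar.sources=src \
  -Dsonar.host.url=http://my-sonarqube.azurewebsites.net \
  -Dsonar.login=<generated-token>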

Conclusion

Setting up a sample SonarQube server in Azure is pretty simple. As the template page states, though, this will not scale, and if you are going to use it in an enterprise it will need a better setup. But for the sake of demonstration, it's just fine.

In the next article we will look at how to apply this to an Azure DevOps build and deploy process for SPFx.

Note

You can just as easily set up your own local SonarQube server by following the "Get Started in 2 minutes" installation instructions.
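If you have Docker available, the quickest local route is the official image (the default admin/admin login applies here too):

# run a local SonarQube server on port 9000
docker run -d --name sonarqube -p 9000:9000 sonarqube

# then browse to http://localhost:9000 and log in with admin/admin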

Securing your Azure DevOps SharePoint tenant credentials with an Azure Key Vault

If you are following an automated Build and Release process for your SharePoint Framework projects, then you will have come across the need to store your tenant's SharePoint admin username and password as variables in the pipeline.

While this works, and I believe the credentials are encrypted, it is not going to fly with enterprise corporate security. They are going to insist that the credentials are kept centrally in a secure KeyVault. Conveniently for us, a KeyVault is available for us to use in Azure.

Using the process described by the Azure DevOps Labs team, you can set up a KeyVault and integrate it into your pipeline.
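The lab walks through the portal experience, but the equivalent Azure CLI steps are short. A hedged sketch, with vault, resource group and secret names as illustrative placeholders:

# create a Key Vault in an existing resource group (names are placeholders)
az keyvault create --name spfx-pipeline-kv --resource-group my-rg --location eastus

# store the tenant admin password as a secret the pipeline can pull in
az keyvault secret set --vault-name spfx-pipeline-kv \
  --name SharePointAdminPassword --value '<password>'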

I am adding the KeyVault pipeline into an older version of an SPFx release (for the most up to date doc check this post out).

Once that is run, the new password is successfully utilized instead of the variable I had stored within Azure DevOps.

Fixing SPFx node-sass binding error on ADO release pipeline

When trying to run gulp upload-to-sharepoint, we encountered the following issue while creating a release pipeline for an SPFx web part: node-sass had no binding available for the agent's environment.

[command]C:\NPM\Modules\gulp.cmd upload-to-sharepoint --gulpfile D:\a\r1\a\build\release\gulpfile.js --ship --username *** --password *** --tenant mckinseyandcompany --cdnsite sites/apps/ --cdnlib ClientSideAssets
2019-06-12T14:51:53.5954467Z [14:51:53] Working directory changed to D:\a\r1\a\build\release
2019-06-12T14:51:54.5490645Z D:\a\r1\a\build\release\node_modules\node-sass\lib\binding.js:15
2019-06-12T14:51:54.5497252Z throw new Error(errors.missingBinary());
2019-06-12T14:51:54.5498022Z ^
2019-06-12T14:51:54.5498662Z
2019-06-12T14:51:54.5499258Z Error: Missing binding D:\a\r1\a\build\release\node_modules\node-sass\vendor\win32-x64-48\binding.node
2019-06-12T14:51:54.5499538Z Node Sass could not find a binding for your current environment: Windows 64-bit with Node.js 6.x
2019-06-12T14:51:54.5499731Z
2019-06-12T14:51:54.5499883Z Found bindings for the following environments:
2019-06-12T14:51:54.5500034Z - Windows 64-bit with Node.js 8.x

The error was actually staring us in the face: bindings were only available for Node 8.

The solution, just as for the build process, is to add an agent task to ensure the correct version of Node is used for the release process.
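As an aside, if you cannot pin the agent's Node version for some reason, rebuilding the binding on the agent is a commonly used workaround for this error (this is an alternative, not the agent-task fix described above):

# confirm which Node version the agent is actually running
node --version

# rebuild node-sass so it fetches or compiles a binding for that version
npm rebuild node-sass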

Using npm ci as part of the SPFx CI/CD process through Azure DevOps

During the automated Build and Deploy process for a SharePoint Framework web part (as documented here), one of the steps used to install the application on the build server is the familiar npm install.

This works just fine locally, as it should, but it is inefficient as part of an automated build process.

For a good explanation of why, check out this Stack Overflow answer: https://stackoverflow.com/questions/52499617/what-is-the-difference-between-npm-install-and-npm-ci/53325242#53325242

npm install reads package.json to create a list of dependencies and uses package-lock.json to inform which versions of these dependencies to install. If a dependency is not in package-lock.json it will be added by npm install.

npm ci (named after Continuous Integration) installs dependencies directly from package-lock.json and uses package.json only to validate that there are no mismatched versions. If any dependencies are missing or have incompatible versions, it will throw an error.

In my experience this can speed up the build process by more than 50%, and as npm install is the rate-determining step for the overall build, this is very helpful.

The install step in the build process then simply swaps npm install for npm ci:
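# in the build definition, the install command becomes
npm ci

Note that npm ci requires a package-lock.json to be checked into the repository, and it deletes any existing node_modules folder before installing exactly what the lock file specifies.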

I have submitted a pull request to update the documentation and we will see if it is worthy 🙂

Correcting SPFx gulp --ship Uglify Errors: Unexpected token: punc (()

We came across the following problem when trying to execute a gulp --ship on our SPFx development.


[15:03:34] Starting subtask 'webpack'...
[15:03:49] Error - [webpack] 'dist':
list-view-demo-webpart-web-part.js from UglifyJs
Unexpected token: punc (() [list-view-demo-webpart-web-part.js:962,7]

Researching this, it turns out the issue stems from a problem with the webpack uglify plugin (uglify-webpack-plugin), which historically does not work with ES6 code.

From what I read, the current SPFx 1.8 does use versions of webpack and the uglify plugin that should avoid this issue, but it was still coming up.

We solved the issue by implementing a suggestion from a related issue on GitHub:

https://github.com/SharePoint/sp-dev-docs/issues/2782

The key was presented in one of the responses to the issue:

https://github.com/SharePoint/sp-dev-docs/issues/2782#issuecomment-475519680

By replacing the uglify plugin with the terser plugin for webpack, the issue was resolved and we were able to build and deploy.
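For reference, the change amounted to adding the terser plugin as a dependency and then swapping it into the generated webpack configuration in gulpfile.js, as the linked comment describes (the wiring itself lives in that comment, so treat this as a pointer rather than a full recipe):

# add the terser webpack plugin as a dev dependency, then wire it into
# the webpack config in gulpfile.js as shown in the issue comment above
npm install terser-webpack-plugin --save-dev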

PSC Labs 2018 review

PSC Labs was founded in 2015 to provide unbiased, vendor-agnostic technology insights. Our mission is to ensure client delivery excellence and new solution offerings through the adoption of emerging technologies.

For more information check us out at https://labs.psclistens.com

2018 review

PSC Labs undertook a wide variety of projects in 2018. From Robotic Process Automation to Event Driven Architecture, seven projects were undertaken to improve our understanding of these technologies and capabilities.

Blockchain

The team looked into how Blockchain worked and then on a more practical level looked specifically into Ethereum and the ability to incorporate Smart Contracts into the chain. We looked at the services provided by various cloud vendors and found that at the time, the examples for implementations were on a very large scale.

Blockchain is not difficult to understand technically, but the broad questions about scalability, long-term viability and adoption are still quite open ended.

Custom Vision API

The team looked at the newly released Azure Cognitive Services image processing capabilities and built a custom app capable of recognizing everyday images. The application, built as a Xamarin iOS mobile app, provides a user with the ability to take a number of pictures of an object, from different angles, and store them within the application.

The Azure Cognitive Services are used to generate a Machine Learning model which can then be downloaded back to the device. The application is then capable of using the camera to identify objects with a predicted level of accuracy.

[Screenshots: grouping models, training the model, running locally]

The investigation team successfully demonstrated the ability to build a real-world application around the Azure Cognitive Service.

Azure/AWS IoT

The team investigated the IoT services available in Azure and AWS. To build on the previous work the Labs had done with GE’s Predix platform, these investigation teams were focused on using the available abstraction services from the cloud vendors and not on the low level device/data interaction.

We discovered that both platforms made it very easy to set up data ingestion from devices, and configuring a device for secure, authenticated transmission of data was simple and easy to understand in both cases.

The Azure platform stood out, however, with its Azure IoT Suite and Remote Monitoring. Once the data ingestion was set up, the IoT Suite enabled us to create a monitoring dashboard and set controls for performance monitoring. The ability to configure limits for data and automate notifications based on those limits promises considerable potential.

The Azure IoT Suite highlighted how far IoT as a service has come in a short period of time and is a viable solution for any company seeking to start taking advantage of the burgeoning IoT landscape.

Fly.io

The investigation team looked at the intriguing concept of a programmable CDN and the promise of being able to enhance website performance without having to change any of the code on the site directly. An example of this capability would be adding a watermark to an image. The Fly.io server would proxy in between the image server and the user and programmatically add the watermark. The watermarked version of the image would then be cached for the next user at the CDN, closer to the user than the original image on the server.

The Fly.io Edge Application runtime is an open-core JavaScript environment built for proxy servers. It gives developers powerful caching, content modification, and routing tools.

The runtime is based on V8, with a proxy-appropriate set of JavaScript libraries. There are built-in APIs for manipulating HTML and image content, low-level caching, and HTTP requests/responses. When possible, it uses WhatWG standards (like fetch, Request, Response, Cache, ReadableStream).

The team found that implementing Fly.io as a developer was not complex, and the examples provided were easy to set up and run. Overall, though, this capability feels more like a solution waiting for a problem.

Event Driven Architecture

Kafka

At the start of 2018, as part of its 10 technologies to watch, Gartner declared "Event Driven Architectures" as something to pay attention to. With this in mind, the Labs team looked specifically into Kafka, although there are others (Azure Event Hubs being one). Kafka was originally a project created by LinkedIn to handle their massive data volume and was subsequently open sourced through the Apache foundation.

The team created a demo application which ingested data from an HR application managing people and their records. From the input of the data, multiple complex processes were initiated and executed by the event driven architecture. The responsiveness of the application, even running locally, was very impressive.

Robotic Process Automation

UiPath

While Robotic Process Automation (RPA) is not a new technology, it is rapidly coming to the forefront of business. The growing prevalence of VC funding for major RPA vendors (Automation Anywhere, UiPath) demonstrates the capacity for the market to absorb this technology quickly.

RPA as an industry is all about the automation of repetitive mundane tasks, such as manual data entry into multiple systems. Many companies have long established manual business processes, mainly due to the cost to automate the process. RPA can help address this problem by accurately and repeatedly following the same steps a person would.

We looked at UiPath as a vendor for RPA and investigated the more advanced capabilities of the platform. We made it possible for a code check-in within Azure DevOps to trigger a build process chain and instruct the RPA robot to automate a UI test through a browser. If the robot found a failure, it created a bug within Azure DevOps linked to the failing test.

RPA is mature and already being used across many industries; there is significant opportunity for cost-effective savings for companies that use RPA.

GraphQL

GraphQL is a technology created by Facebook in response to a problem they found themselves facing with a growth model built on a service-based architecture. As Facebook pages grew in complexity and functionality, the number of services being called increased and caused various performance issues. The PSC Labs team set out to investigate whether or not GraphQL would be applicable to the projects we were planning to work on in the future.

The investigation team took an existing mobile application where the load time was in excess of 10 seconds and, using GraphQL, was able to reduce the load time of the page by over 50%. In a case where the user was on a mobile network with high latency, the loading speed was improved by over 65%.

GraphQL has many advantages for a developer and project team considering a services architecture: from the creation of a standard endpoint to the reduction in network calls and page load time, it proved itself very valuable.

Conclusion

PSC Labs had another successful year investigating many broad technology innovations. As in previous years, some of the projects show great promise and we will be working on new iterations of them in 2019.

If you want to find out more about PSC Labs, and/or have an interesting project you would like to discuss, please contact info@psclistens.com for more information.