PSC Labs 2018 review

PSC Labs was founded in 2015 to provide unbiased, vendor-agnostic technology insights. Our mission is to ensure client
delivery excellence and new solution offerings through the adoption of emerging technologies.

For more information check us out at https://labs.psclistens.com

2018 review

PSC Labs undertook a wide variety of projects in 2018. From Robotic Process Automation to Event-Driven Architecture, seven projects were carried out to improve our understanding of these technologies and capabilities.

Blockchain

The team looked into how Blockchain works and then, on a more practical level, looked specifically into Ethereum and the ability to incorporate Smart Contracts into the chain. We reviewed the services provided by various cloud vendors and found that, at the time, the example implementations were on a very large scale.

Blockchain is not difficult to understand technically, but the broader questions about scalability, long-term viability and adoption remain quite open-ended.

Custom Vision API

The team looked at the newly released Azure Cognitive Services image-processing capabilities and built a custom app capable of recognizing everyday images. The application, built as a Xamarin iOS mobile app, lets a user take a number of pictures of an object from different angles and store them within the application.

The Azure Cognitive Services are used to generate a Machine Learning model which can then be downloaded back to the device. The application is then capable of using the camera to identify objects with a predicted level of accuracy.

Grouping Models – Training the Model – Running Locally

The investigation team successfully demonstrated the ability to build a real-world application around the Azure Cognitive Services.
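
For a sense of what the service looks like from code, below is a hedged sketch of calling the Custom Vision prediction endpoint over REST from Node.js. The Labs application actually exported the trained model to run on-device in the Xamarin iOS app, so this cloud-side call is purely illustrative; the endpoint region, API version, project ID, iteration name and the node-fetch dependency are assumptions, not details from the project.

// Hedged sketch: classify an image against a published Custom Vision iteration.
// All identifiers below are placeholders.
const fs = require('fs');
const fetch = require('node-fetch');

async function classifyImage(imagePath) {
  const endpoint = 'https://southcentralus.api.cognitive.microsoft.com'; // placeholder region
  const projectId = '<project-id>';
  const iteration = '<published-iteration-name>';

  const res = await fetch(
    endpoint + '/customvision/v3.0/Prediction/' + projectId +
      '/classify/iterations/' + iteration + '/image',
    {
      method: 'POST',
      headers: {
        'Prediction-Key': process.env.CUSTOM_VISION_PREDICTION_KEY,
        'Content-Type': 'application/octet-stream',
      },
      body: fs.readFileSync(imagePath), // raw image bytes
    }
  );

  const { predictions } = await res.json();
  // Each prediction carries a tag name and a probability, e.g. "coffee mug: 97.3%".
  return predictions.map(p => p.tagName + ': ' + (p.probability * 100).toFixed(1) + '%');
}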

Azure/AWS IoT

The team investigated the IoT services available in Azure and AWS. Building on the previous work the Labs had done with GE’s Predix platform, this investigation focused on the abstraction services offered by the cloud vendors rather than on low-level device and data interaction.

We discovered that both platforms make it easy to set up data ingestion from devices, and in both cases configuring a device for secure, authenticated transmission of data was simple to understand.

The Azure platform stood out, however, with its Azure IoT Suite and Remote Monitoring. Once data ingestion was set up, the IoT Suite enabled us to create a monitoring dashboard and set controls for performance monitoring. The ability to configure limits for the data and automate notifications based on those limits shows considerable potential.

The Azure IoT Suite highlighted how far IoT as a service has come in a short period of time and is a viable solution for any company seeking to start taking advantage of the burgeoning IoT landscape.
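
To illustrate how little device-side code the ingestion path needs, here is a minimal sketch using the Azure IoT device SDK for Node.js (azure-iot-device and azure-iot-device-mqtt). The connection string and telemetry fields are placeholders and are not taken from the Labs project.

// Minimal sketch: send simulated telemetry to an Azure IoT Hub over MQTT.
const { Client, Message } = require('azure-iot-device');
const { Mqtt } = require('azure-iot-device-mqtt');

// Device connection string from the IoT Hub device registry (placeholder).
const connectionString = process.env.IOT_DEVICE_CONNECTION_STRING;
const client = Client.fromConnectionString(connectionString, Mqtt);

client.open((err) => {
  if (err) {
    console.error('could not connect: ' + err.message);
    return;
  }
  // Send a telemetry message every 10 seconds.
  setInterval(() => {
    const telemetry = { temperature: 20 + Math.random() * 10, humidity: 60 + Math.random() * 20 };
    const message = new Message(JSON.stringify(telemetry));
    client.sendEvent(message, (sendErr) => {
      if (sendErr) console.error('send error: ' + sendErr.toString());
      else console.log('telemetry sent');
    });
  }, 10000);
});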

Fly.io

The investigation team looked at the intriguing concept of a programmable CDN and the promise of being able to enhance website performance without having to change any of the code on the site directly. An example of this capability would be adding a watermark to an image: the Fly.io server sits as a proxy in front of the image server and programmatically adds the watermark. The watermarked version of the image is then cached for the next user at a CDN edge closer to the user than the original image on the origin server.

The fly.io Edge Application runtime is an open core Javascript environment built for proxy servers. It gives developers powerful caching, content modification, and routing tools.

The runtime is based on V8, with a proxy-appropriate set of JavaScript libraries. There are built-in APIs for manipulating HTML and image content, low-level caching, and HTTP requests/responses. Where possible, WhatWG standards are used (fetch, Request, Response, Cache, ReadableStream).
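
To give a flavor of the model, here is a minimal sketch of an edge proxy handler written against the WhatWG-style APIs described above. The fly.http.respondWith registration call, the origin hostname and the added header are assumptions for illustration rather than code from the investigation.

// Sketch of a Fly Edge Application handler: proxy to the origin, modify the
// response at the edge, and return it so it can be cached close to the user.
fly.http.respondWith(async (request) => {
  const url = new URL(request.url);
  url.hostname = 'origin.example.com'; // assumed origin image/content server

  const originResponse = await fetch(url.toString(), {
    method: request.method,
    headers: request.headers,
  });

  // Modify the response before it is returned (a watermarking step would go here).
  const headers = new Headers(originResponse.headers);
  headers.set('x-processed-at-edge', 'true');

  return new Response(originResponse.body, {
    status: originResponse.status,
    statusText: originResponse.statusText,
    headers,
  });
});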

The team found that implementing Fly.io as a developer was not complex and the examples provided were easy to set up and run. Overall, however, this capability feels more like a solution waiting for a problem.

Event Driven Architecture

Kafka

At the start of 2018, Gartner named “Event-Driven Architecture” among its ten technologies to watch. With this in mind, the Labs team looked specifically into Kafka, although there are alternatives (Azure Event Hubs being one). Kafka was originally created by LinkedIn to handle its massive data volume and was subsequently open-sourced through the Apache Software Foundation.

The team created a demo application which ingested data from an HR application managing people and their records. From the input of that data, multiple complex processes were initiated and executed by the event-driven architecture. The responsiveness of the application, even running locally, was very impressive.
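
For context, the core pattern in the demo – publish an event when a record changes and let independent consumers react to it – looks roughly like the following sketch using the kafkajs client. The topic, broker address, consumer group and payload are illustrative and not taken from the demo application.

// Sketch of the produce/consume pattern behind an event-driven HR workflow.
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'hr-demo', brokers: ['localhost:9092'] });

// Producer: publish an event whenever an HR record is created or updated.
async function publishEmployeeUpdate(employee) {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: 'employee-updates',
    messages: [{ key: String(employee.id), value: JSON.stringify(employee) }],
  });
  await producer.disconnect();
}

// Consumer: each downstream process (payroll, provisioning, notifications...)
// subscribes with its own group and reacts independently.
async function startPayrollConsumer() {
  const consumer = kafka.consumer({ groupId: 'payroll-service' });
  await consumer.connect();
  await consumer.subscribe({ topic: 'employee-updates', fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const employee = JSON.parse(message.value.toString());
      console.log('payroll update for employee', employee.id);
    },
  });
}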

Robotic Process Automation

UiPath

While Robotic Process Automation (RPA) is not a new technology, it is rapidly coming to the forefront of business. The increasing VC funding for major RPA vendors (Automation Anywhere, UiPath) demonstrates the market’s capacity to absorb this technology quickly.

RPA as an industry is all about automating repetitive, mundane tasks, such as manual data entry into multiple systems. Many companies have long-established manual business processes, mainly because of the cost of automating them. RPA can help address this problem by accurately and repeatedly following the same steps a person would.

We looked at UiPath as an RPA vendor and investigated the more advanced capabilities of the platform. We set up a code check-in process within Azure DevOps to trigger a build process chain and instruct the RPA robot to automate a UI test through a browser. If the robot found a failure, it created a bug within Azure DevOps related to the failing test.
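
For reference, creating a bug in Azure DevOps programmatically comes down to a call like the one sketched below against the work item REST API. The robot itself did this through UiPath activities; the organization, project, field values, API version and node-fetch dependency shown here are placeholders.

// Hedged sketch: create a Bug work item in Azure DevOps via the REST API.
const fetch = require('node-fetch');

async function createBug(title, reproSteps) {
  const org = 'my-org';          // placeholder organization
  const project = 'my-project';  // placeholder project
  const pat = process.env.AZURE_DEVOPS_PAT; // personal access token

  const url = 'https://dev.azure.com/' + org + '/' + project +
    '/_apis/wit/workitems/$Bug?api-version=5.0';

  // Work items are created with a JSON Patch document describing the fields to set.
  const body = [
    { op: 'add', path: '/fields/System.Title', value: title },
    { op: 'add', path: '/fields/Microsoft.VSTS.TCM.ReproSteps', value: reproSteps },
  ];

  const res = await fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json-patch+json',
      Authorization: 'Basic ' + Buffer.from(':' + pat).toString('base64'),
    },
    body: JSON.stringify(body),
  });
  return res.json();
}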

RPA is mature and already being used across many industries; there is a significant opportunity for companies to achieve cost-effective savings by using it.

GraphQL

GraphQL is a technology created by Facebook in response to a problem it found itself facing as it grew on a service-based architecture. As Facebook pages grew in complexity and functionality, the number of services being called increased and caused various performance issues. The PSC Labs team set out to investigate whether or not GraphQL would be applicable to the projects we were planning to work on in the future.

The investigation team took an existing mobile application where the load time was in excess of 10 seconds and, using GraphQL, was able to reduce the page load time by over 50%. Where the user was on a high-latency mobile network, the improvement was over 65%.

GraphQL has many advantages for a developer and project team considering a services architecture: a single standard endpoint, fewer network calls, and faster page loads. It proved itself very valuable.
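
A minimal sketch of the pattern behind those savings: the client issues a single POST to one /graphql endpoint and receives exactly the fields the page needs, instead of making several REST calls. The endpoint, schema fields and node-fetch dependency are illustrative assumptions, not the project's actual schema.

// One request to one endpoint, returning only the data the screen requires.
const fetch = require('node-fetch');

const query = `
  query MobileHomeScreen($userId: ID!) {
    user(id: $userId) {
      name
      notifications { id title }
      recentDocuments { id title modified }
    }
  }
`;

async function loadHomeScreen(userId) {
  const res = await fetch('https://api.example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables: { userId } }),
  });
  const { data } = await res.json();
  return data.user; // one round trip, shaped exactly as the page needs it
}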

 

Conclusion

PSC Labs had another successful year investigating many broad technology innovations. As in previous years, some of the projects show great promise and we will be working on new iterations of them in 2019.

If you want to find out more about PSC Labs, or have an interesting project you would like to discuss with us, please contact info@psclistens.com for more information.

 


PSC Tech Talk: AI in Action – Azure Cognitive Language Services

In this short but interesting presentation Mark Roden and Jalal Benali talked about how they had used Azure Cognitive Language Services (Translation API) to elegantly solve a client’s need to have their intranet available in multiple languages across multiple territories.

Background

One of PSC’s clients has sites in multiple countries, and as part of an intranet consolidation they wanted a way for their corporate messages to be translated for the various countries and languages. The intranet was hosted on the DotNetNuke CMS and needed to be configurable, flexible and, above all, easy for the business to use. The business had looked at publicly available services like Google Translate, but determined that, on a private intranet site, they did not want any corporate information being taken outside of their control.

The Azure Cognitive Language Services Translation API was selected as the solution because it is secure, private, easy to use, and surprisingly flexible when it comes to converting web-based content.

Solution

PSC created a custom module for DotNetNuke (DNN) which allowed content managers to create translated versions of the content at the click of a button. The solution was tied into the out-of-the-box language capabilities of DNN, whereby the languages available for translation were those enabled in the DNN core configuration. In this manner, if a new site in a new country was acquired, the administrators need only turn on the new language for it to become available.

Because the Azure translations need to be reviewed for accuracy by a local admin in-country, PSC created the ability to have the new message held back for administrative approval. Once approved it is then published on the appropriate language version of the intranet.

Once the translations were created, the global administrators could monitor which ones were subsequently modified by the local content manager. In this manner, later corrections to the original English content would not automatically be re-translated and overwrite the locally corrected versions.

Limitations

  • The number of characters which can be translated at any one time is 10,000
  • No automated translation will be perfect, but for normal conversational English we found it to be better than we expected. For technical documentation the results were not as successful.
  • Accuracy varied by language when the translations were reviewed by the in-country testing teams

Pricing

Depending on which service you use and your usage, pricing varies from free to $4.50 per million characters translated (as of May 2018).

Retention of HTML formatting

One surprising and significant benefit of using the Translation API was that when it is fed an HTML string, the HTML tags are left alone and only the text content is translated. This meant that the formatting of the returned translation was identical to the original. While this does increase the number of characters submitted for translation, it did not come close to the character limits for this effort.
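
As an illustration only (the client's DNN module was written in .NET rather than Node.js), a Translator Text API v3 call with textType=html looks roughly like the sketch below; the subscription key, target language and sample content are placeholders.

// Hedged sketch: translate an HTML fragment while leaving the tags untouched.
const fetch = require('node-fetch');

async function translateHtml(html, toLanguage) {
  const url = 'https://api.cognitive.microsofttranslator.com/translate'
    + '?api-version=3.0&textType=html&to=' + encodeURIComponent(toLanguage);

  const res = await fetch(url, {
    method: 'POST',
    headers: {
      'Ocp-Apim-Subscription-Key': process.env.TRANSLATOR_KEY,
      // Some resource types also require an Ocp-Apim-Subscription-Region header.
      'Content-Type': 'application/json',
    },
    body: JSON.stringify([{ Text: html }]), // each item is subject to the character limit above
  });

  const result = await res.json();
  return result[0].translations[0].text; // translated HTML with the original tags preserved
}

// translateHtml('<p>Welcome to the <strong>intranet</strong></p>', 'fr').then(console.log);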

Conclusion

The solution PSC implemented allowed the client to securely translate sections of their intranet and then manage the translated pages once they were published. Overall our experience with the Translation API was a very good one; we found it very easy to set up and simple to implement.

PSC Tech Talk: Azure API Management

In this presentation Alex Zakhodin (@AZakhodin) talked about his experience implementing Azure API Management for a large client.

The situation

The client is a globally focused organization providing certification services to its customers. It wanted to offer a new service so that those customers could access their certification data in real time through a consumable, monetized service.

Client challenges

The client’s main application and multiple data sources are on premises and would not be moved to the cloud, so a hybrid application needed to be created and managed.

The client wanted to be able to securely manage traffic accessing the APIs. They needed to not only track the number of users calling the API but also control the amount of access over time.

The payment model proposed for this service also needed a way to track everything at a granular level: the number of hits and the volume of data provided.

PSC solution

PSC implemented a solution using Azure API Management which enabled the client to abstract the data, govern the process, monitor usage, and flexibly onboard new services at any time.

The Azure API Management platform creates an API proxy model to facilitate the monitoring of API traffic through a centralized set of endpoints. This allows developers to expose their internally hosted services without the risk of exposing a direct connection. It allows administrators to configure access to the data (down to individual users), set limits on the amount of data accessible over a period of time, and then create accurate usage reports for billing.
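
From the consumer's side, calling an API fronted by API Management is an ordinary HTTP call carrying a subscription key, which is what ties usage back to a specific consumer for throttling and billing. The gateway URL and API path below are placeholders, not the client's actual service.

// Sketch of a consumer calling an APIM-fronted endpoint with its subscription key.
const fetch = require('node-fetch');

async function getCertification(certId) {
  const res = await fetch('https://contoso-certs.azure-api.net/certifications/' + certId, {
    headers: {
      // Each consumer has its own key; APIM records usage against that subscription.
      'Ocp-Apim-Subscription-Key': process.env.APIM_SUBSCRIPTION_KEY,
    },
  });

  if (res.status === 429) {
    // APIM returns 429 when a configured rate limit or quota for the subscription is exceeded.
    throw new Error('Rate limit exceeded for this subscription');
  }
  return res.json();
}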

The platform provides the ability to track traffic geographically and determine volumes and accessibility. For a global application, the endpoints and data can be made available via geo-replication.

For developers the API management portal provides the ability to not only track usage but also see how the APIs are performing.

To take advantage of the consumption-based pricing models available in Azure, Azure Functions were used wherever possible. In this way the client is only billed for usage, and the direct cost per transaction means that the cost billed to the end client per transaction is easily manageable and competitive.

Conclusion

The Azure API Management platform is a mature, enterprise-ready capability which allows for the creation of a hybrid cloud/on-premises architecture, enabling companies to monitor, track and monetize their services in a secure and consistent manner.

 

Calling an external service from your chat bot

In this article I will show how to integrate simple commands (intents) into your bot so that it can then call an external service.

Introduction

In previous articles we have looked at how to create a sample Azure bot, and in this article we will look at how “intents” work. The Microsoft documentation on IntentDialogs can be found here (for the Node bot).

URL Shortener

In an effort to learn more about running a Node service in Azure and Azure SQL, and to help brand some of my tweets a little better, I created my own little URL-shortening service running at https://marky.co. The details are not so important for this article, but I took the Coligo.io article on creating a Node URL shortener with Mongo and modified it to work with Azure SQL instead of Mongo. The service is called very simply by POSTing the appropriate parameters at the appropriate endpoint on my website, and what is returned is a shortened URL. Simple process really.

IntentDialog

The IntentDialog class lets you listen for the user to say a specific keyword or phrase. We call this intent detection because you are attempting to determine what the user is intending to do. IntentDialogs are useful for building more open ended bots that support natural language style understanding.

So in a sense I am looking at this as an opportunity to treat my bot as a CLI client for my own laziness. While the “intent” is to allow for natural language understanding I want to look at it as an opportunity to make my bot into a monkey butler of sorts. Do my work for me, I am lazy (or productive).

I am not a fan of the CLI in the programming world. I have never historically been a Unix kind of guy, never had to be, and I prefer a point-and-click tooling approach to things like git and the Azure CLI. That said, this is not programming – this is bots 😉

Setting up your dialogIntent to listen for -sh 

The following code snippet is the basis for my simple bot connector – it listens for the “-sh ” command, then takes the following argument and processes it.

The http module is used for the communication with the shortener service. Once the response is returned from the service the shortened URL is sent back to the user. The process is simple and the intent of this is just to show an example of how to use a command in a bot to make it do something.

	"use strict";
	var builder = require("botbuilder");
	var botbuilder_azure = require("botbuilder-azure");
	var azure = require('azure-storage');

	var http = require('http');

	var data = "";

	// Placeholder environment variables for the shortener endpoint path and for the
	// secret header value which keeps random bots from using the service.
	var thepath = process.env['SHORTENER_PATH'];
	var theSercretKeyWhichStopsBots = process.env['SHORTENER_SECRET'];

	var options = {
		host: 'marky.co',
		port: '80',
		path: thepath,
		method: 'POST',
		headers: {
			'secret': theSercretKeyWhichStopsBots,
			'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8'
		}
	};

	var dotenv = require('dotenv').config();

	var useEmulator = (process.env.NODE_ENV == 'development');

	var connector = useEmulator ? new builder.ChatConnector({
			appId: process.env['MicrosoftAppId'],
			appPassword: process.env['MicrosoftAppPassword']
		}) : new botbuilder_azure.BotServiceConnector({
		appId: process.env['MicrosoftAppId'],
		appPassword: process.env['MicrosoftAppPassword'],
		stateEndpoint: process.env['BotStateEndpoint'],
		openIdMetadata: process.env['BotOpenIdMetadata']
	});

	var bot = new builder.UniversalBot(connector);

	var intents = new builder.IntentDialog();
	bot.dialog('/', intents);

	intents.matches(/-sh /i, function (session, args) {
		var input = args.matched.input
		
		//input will be in the format "-sh http://www.xomino.com"
		//better validation would probably be appropriate at some point

		//split the input into "-sh" and theURL
		data="url="+input.split(" ")[1]; 
		//match the POST length to the incoming URL
		options.headers['Content-Length'] = data.length 
		session.sendTyping(); //...typing

		var req = http.request(options, function(res) {
			res.setEncoding('utf8');
			res.on('data', function (chunk) {
				session.send('Your short URL is: '+JSON.parse(chunk).shortUrl);
			});
		});

		req.on('error', function(e) {
			console.log('problem with request: ' + e.message);
		});

	// write data to request body
		req.write(data);
		req.end();
	});

	intents.onDefault(function (session, args, next) {
		session.send("I'm sorry "+session.dialogData.name+". I didn't understand.");
	});

	if (useEmulator) {
		var restify = require('restify');
		var server = restify.createServer();
		server.listen(3978, function() {
			console.log('test bot endpont at http://localhost:3978/api/messages');
		});
		server.post('/api/messages', connector.listen());    
	} else {
		module.exports = { default: connector.listen() }
	}

 

Using this code I can start to use my bot as a URL shortener rather than having to go to my website and post the URL into a form.

Note – in case you were curious this does not work in the Skype client itself because a URL is automagically transformed into something else for preview. Ah well that’s not quite the point really 🙂

Conclusion

In this article we have seen that, using the bot IntentDialog class, we are able to listen for predetermined commands which allow the bot to take input, act on it, and return an answer. In this case a URL shortener is a simple application, but it would be much more useful if I could securely look into my enterprise and extract information relating to my business…

Setting up the sample Azure bot to work locally with the bot emulator

In this article I will demonstrate how to configure your local development environment to work with the environment variables set up within your Azure environment in the sample bot previously discussed.

Introduction 

In the previous article we looked at how to create a sample Azure bot and then how to configure it in VSTS for continuous integration. If you want to develop with this sample locally you will need to set it up to work with the local bot emulator (What is Bot Builder for Node.js and why should I use it?). To be able to do this you will have to configure your local development environment to use the process.env variables which are picked up within the Azure runtime environment.

process.env

process.env is how environment variables are passed into a Node development environment. They are especially important when it comes to keeping secret keys secret: we have to make sure that they are not included in the git repository and are not available to the end user. You can learn more about process.env in the Node.js documentation.

Using dotenv

I like to use the dotenv Node.js package to handle local env variables, reading them from a .env file. If you look at the package.json for the example project, it turns out so does Microsoft 😉

...
  "dependencies": {
    "azure-storage": "^1.3.2",
    "botbuilder": "^3.4.2",
    "botbuilder-azure": "0.0.1",
    "dotenv": "^4.0.0"
  },

When I clone the VSTS repo locally and run it – nothing seems to happen…..and that’s because I do not have any local env variables….


var useEmulator = (process.env.NODE_ENV == 'development');

......
if (useEmulator) {
    var restify = require('restify');
    var server = restify.createServer();
    server.listen(3978, function() {
        console.log('test bot endpont at http://localhost:3978/api/messages');
    });
    server.post('/api/messages', connector.listen());    
} else {
    module.exports = { default: connector.listen() }
}

So we can use a .env file in the root of the project and then require it in the code. The .env file contains nothing more than NODE_ENV='development' right now.

NOTE: To make sure this is not pushed up to the repo – add *.env to your .gitignore file.

We now have a web server running (nice typo MS 😉 ).

Bot Emulator

Once you download and install the bot emulator you can configure it as per the instructions to run a simple bot without issue. That alone does not work once you have a bot running in Azure, though. The sample I created uses multiple Azure services, and you can see them being referenced through the environment variables in the code.

 

var connector = useEmulator ? new builder.ChatConnector() : new botbuilder_azure.BotServiceConnector({
    appId: process.env['MicrosoftAppId'],
    appPassword: process.env['MicrosoftAppPassword'],
    stateEndpoint: process.env['BotStateEndpoint'],
    openIdMetadata: process.env['BotOpenIdMetadata']
});

Unfortunately when I load up the emulator – as you can see above “new builder.ChatConnector” does not pass in any environmental variables. I then get the following error in the console

The appId is undefined as it is not being passed into the application.

The ApplicationId and secret key were given to you when you created the bot in the first place – if you didn’t write them down, you’re going to be regretting that decision right now….

We need to modify the ChatConnector request to pass in the necessary variables, and we also need to add those variables to the .env file.

 

var connector = useEmulator ? new builder.ChatConnector({
        appId: process.env['MicrosoftAppId'], //new param
        appPassword: process.env['MicrosoftAppPassword']  //new param
    }) : new botbuilder_azure.BotServiceConnector({
    appId: process.env['MicrosoftAppId'],
    appPassword: process.env['MicrosoftAppPassword'],
    stateEndpoint: process.env['BotStateEndpoint'],
    openIdMetadata: process.env['BotOpenIdMetadata']
});
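
With those additions, the local .env file looks something like this (placeholder values only):

NODE_ENV='development'
MicrosoftAppId='<your bot application id>'
MicrosoftAppPassword='<your bot application password>'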

Refreshing the bot emulator, we start to get somewhere

and then it dies when we try and talk to it (doh)

Azure Storage Credentials

The problem is that this chat example uses the Azure Storage service (to make the Azure Function part of the example work), so we have to add the AzureWebJobsStorage environment variable to our .env file.

 

var queueSvc = azure.createQueueService(process.env.AzureWebJobsStorage);

You can find this connection string in your Azure portal > Storage account > Access keys

The format for the .env file entry is as follows (Running Azure Functions Locally with the CLI and VS Code): AzureWebJobsStorage='DefaultEndpointsProtocol=https;AccountName=storagename;AccountKey=secretKey'

Once that is in place – we have a bot running locally talking to the bot framework – unfortunately we do not get a response back from the Azure Function….

Azure Function debugging…..

The problem quite simply is that the message sent to the Azure Function contains a serviceUrl to which it should respond – and in this case it is serviceUrl: 'http://localhost:63136', which the Azure Function of course has no idea how to reach.

For the sake of this blog post that is OK – we are at least up and running with the “bot” part of the emulator working, although it is somewhat disappointing that the solution can't be fully developed in this environment.

Conclusion

As part of this process we have seen how to connect the local bot emulator to a service running in Azure and how to incorporate a connection to Azure Functions.

 

 

Adding your bot code to VSTS for source control and configuring continuous integration

In this article I will walk through the steps to enable source control and continuous integration for your bot

Introduction

In previous articles we have seen how to create a sample bot and how to link it into Skype. In this article we are going to look at how to use Visual Studio Team Services (VSTS) to manage source control and eventually continuous integration. We are going to set up an environment such that when you check your code into the repository, a code deployment is triggered to the Node.js server (our bot in this case), which then restarts. In short: code checks in and everything refreshes automagically.

What is going to happen is that we are taking the code out of the Azure sample's web-based IDE environment and moving it to something a little more robust.

Selecting your Source Control

From your bot interface select settings and scroll down to the continuous integration section.

Note: it is really important to read the instructions – if you do NOT download the zip file of your bot first you will end up with a blank look on your face and a need to rebuild your bot from scratch – take it from this idiot (facepalm).


Download your zip file and then head over to your favorite source control repo and create one.

In my case I used Visual Studio Team Services (VSTS) to set up a git repo.

I unzipped the file locally and associated it with my new project in VSTS. I use JetBrains WebStorm as my JavaScript development environment, but this would work just as well in Visual Studio or VS Code.


 

Setting up continuous integration

Back within the Azure bot development website, click the Continuous Integration button. This will take you through a number of screens to configure the CI. For this to work you have to have your Azure environment linked with your VSTS environment; here's a great blog post describing how to do just that (Linking your VSTS Account to Azure).

So “Setup”

Select VSTS


and then your project within VSTS


Which branch (master just to keep things simple for now)


et voila


Configuring a performance test

Following the prompts you can set up the perf test (to run once the CI is complete)


 

and we now have the Azure bot being deployed from code sitting within VSTS. Cool 🙂

Example

I have my bot hooked up in Skype


I make a quick change to the message in WebStorm (my JavaScript IDE)


Commit and push


Head back to Skype and within about 60 seconds (it has to restart after all) we have a new message


What’s really interesting though is that I started my conversation before the restart was complete – but the message was queued, and when the server woke up it finally responded. That was very cool.

Conclusion

In this article we have looked at how to set up a VSTS source control repo for our new test bot and then configure continuous integration through the bot framework. I check in my new code and within seconds the new functionality is deployed to the live bot.

 

 

 

Adding your Azure bot framework bot into Skype

In this article I will show how simple it is to add your newly created bot into your Skype contacts.

Introduction

In the previous article we looked at how to create your first sample Azure bot which uses Azure Functions in the background. This article will continue from where that left off.

Channels

The bot framework has a number of pre-configured “Channels” which allow your bot to be easily added to other chat mediums.

If you aren’t already there, go into your bot and select Channels. Conveniently there is an “Add to Skype” button right there….(can’t imagine what that does)


Adding to Skype

You click on the link to Add to Contacts


you then “Add to contacts” – having to log in during the process (I need to get a better logo for my bot)


about 30 seconds later my bot appeared in my Skype client as a contact (bonus if you recognize the song lyrics)


Once it appeared in my contacts, I could talk to it – and while this is pretty simple stupid, this is VERY cool 🙂


I have to say though, the “response” from the Azure functions seems very slow for what should be a trivial task. I have no idea why or how it is even working right now – I’ll get to that 🙂

Conclusion

Adding your bot to the pre-configured channels is as easy as following the instructions and we now have a bot running in Skype! It’s not quite getting me a beer from the fridge yet, but we’ll work on that 🙂