PSC Tech Talk: Using AzureDevOps to Automate the Build and Deployment of SharePoint Framework widgets

During this PSC Tech Talk, Mark Roden gave a preview run-through of his SharePointFest Chicago 2018 presentation on automating the build and deployment of SharePoint Framework widgets.

What is AzureDevOps?

Mark briefly walked through why AzureDevOps is PSC’s tool of choice for managing Agile projects. During an Agile project we build and deploy every two weeks so that progress can be demonstrated to clients and to ensure that the process is tested and working. Azure DevOps allows us to manage the whole process, including:

  • Requirements Management (Backlog)
  • Project Management (Sprint Boards)
  • Code Source Control (git)
  • Automated Build and Deploy pipelines
  • Automated Testing

Quality

Having a quality control process that is transparent and visible to the client generates trust, not only in the development process but also in the deployment process. PSC uses AzureDevOps capabilities to run unit tests and, where appropriate, load tests of projects in development. SharePoint Framework is no exception: we want to make sure that anything being developed does not break existing code or the user interface. Traditionally, testing would be done at the end of the project; in an Agile project it is done every two weeks.

What is SharePoint Framework?

Traditionally, SharePoint on premises allowed an organization to customize functionality using a “trusted-code” model, whereby the organization was in complete control of the code going into its environment. When SharePoint Online came out, though, this model was not available. Because of the shared-tenant model, and the resulting lack of access to modify SharePoint in the same manner as on premises, Microsoft created the front-end-based SharePoint Framework model.

SharePoint Online, and therefore the SharePoint Framework (SPFx), is based on the React JavaScript framework. Developers create components which are directly integrated into the SharePoint Online experience as if they were first-class members of the site.

Hello World

Mark’s presentation used the Hello World example provided by Microsoft as a simple demonstration of how to build and deploy an SPFx widget locally. Mark then walked through the process of adding the widget manually to his SharePoint Online development tenant. Done manually, this process takes a couple of hours to set up and then about 10-15 minutes for every subsequent deployment.
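
For context, the scaffolded Hello World web part is only a small amount of TypeScript. A minimal sketch, following Microsoft’s standard sample rather than Mark’s exact demo code, looks roughly like this:

```typescript
// Sketch of the SPFx "Hello World" web part the Yeoman generator scaffolds.
// Class and property names follow the standard Microsoft sample.
import { BaseClientSideWebPart } from "@microsoft/sp-webpart-base";
import { escape } from "@microsoft/sp-lodash-subset";

export interface IHelloWorldWebPartProps {
  description: string;
}

export default class HelloWorldWebPart extends BaseClientSideWebPart<IHelloWorldWebPartProps> {
  public render(): void {
    // SPFx gives each web part a DOM element to render into; the sample simply writes HTML.
    this.domElement.innerHTML = `
      <div>
        <h2>Hello World!</h2>
        <p>${escape(this.properties.description)}</p>
      </div>`;
  }
}
```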

Using AzureDevOps

Mark walked through the “build” and “deployment” processes provided by Microsoft in the AzureDevOps tool. The build process manager provides the ability to create separate tasks which replicate the manual steps for creating the deployable code, as explained in the Hello World example. The build process is triggered by checking the code into the master branch.

The deployment process is similar and automates the process of taking the code and moving it out to the SharePoint tenant. The deployment is triggered on the completion of a successful “build”.

The build and deployment process takes approximately five minutes, and Mark showed how to track progress and view the console logging provided. Mark’s example also produced code coverage reports and testing dashboards.

Summary

When working on Agile projects, PSC recommends AzureDevOps as the management tool of choice and, as Mark demonstrated in this Tech Talk, building, testing and deploying SharePoint Framework widgets can be automated.

PSC Tech Talk: Using Chrome Profiles for Office365 developers

In this Tech Talk, one of our Directors, @MarkyRoden, gave a short presentation on using Chrome profiles to help manage your life as a developer.

Mark gave the example of having a Chrome profile for each of his customers and how he takes advantage of the features the profiles provide.

Using Chrome profiles you can:

  • Store separate bookmarks per profile
    • This means that you can create specific bookmarks just for that customer and/or that development environment.
  • Different homepages
    • You can create a unique set of pages which always opens for each individual profile. For his Salesforce profile, Mark has the Salesforce tenant open automatically; for his Azure dev profile, he has portal.azure.com open automatically.
  • It remembers you
    • When you log into a website like Office 365, for example, you can allow the system to remember who you are and not ask you for authentication every time. These credentials are remembered separately for each profile. Using this, Mark can log into his development instance of Office 365 simply by opening that profile, and when he opens Office 365 in his Work profile, he is logged into the PSC Office 365 tenant.
  • Chrome extensions
    • Each profile has its own set of Chrome extensions, so those which are useful for Salesforce development are only available in the Salesforce profile.
  • Microsoft Teams
    • Right now you cannot log into two separate tenants within one Teams client instance. Mark uses his client profiles so that he can access Teams for each organization within its own tenant, while keeping them all separate from each other.

PSC Tech Talks – Securing AWS Lambdas

During this PSC Tech Talk, Roger Tucker gave an in-depth technical talk on how to create the signatures and HTTP headers necessary for secure, authenticated access when calling and executing Amazon Web Services (AWS) Lambdas.

What are AWS Lambdas?

AWS Lambda is a serverless service that runs your code in response to events and automatically manages the underlying compute resources.

PSC uses Lambdas for a number of AWS projects which we create and manage for our clients. The advantages of Lambdas are that they self-scale, and the client is only billed for the time the Lambdas are running. The presentation was not about the right way (there are many) to architect a Lambda-based application; that would be a topic for another day.

Sample Lambdas

Roger showed how to create a simple Lambda function and call it via an unsecured API Gateway endpoint. Using the Postman tool he demonstrated how the response can be triggered from that endpoint in a development environment.
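
For illustration, a “hello world” Lambda of this kind can be as small as a single handler. The sketch below is written in TypeScript for Node.js and uses the standard API Gateway proxy response shape; it is not Roger’s actual function:

```typescript
// Minimal Lambda handler sketch for an API Gateway proxy integration.
// The optional "name" query parameter is purely illustrative.
export const handler = async (event: { queryStringParameters?: { name?: string } }) => {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```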

Roger set up a different API Gateway endpoint to use IAM (Identity and Access Management) security. He allowed the TestLambda user access to one resource – the secure demo API.

His initial attempt to call the endpoint using Postman with no Authorization header provided resulted in a “missing authentication token” message.

In the next attempt the AWS Signature option was selected in Postman. He provided the appropriate AWS credentials, region and service name and resubmitted the request. We saw how Postman allows us to create the AWS signature and HTTP header for the sake of testing. Now that the test user had access, we could see the proper response from the endpoint.

Creating the signature

Roger demonstrated how to create the signature necessary for the HTTP header. Starting with a canonical request, your AWS credentials, the endpoint region, and the service name, you can create the signature, which is then used to create the HTTP header.

The hashed response to the canonical request (“e3boc……”) then becomes an input to the “string to sign”, which is needed to generate the signature.

Using the string to sign (which contains the credential scope), we then create the signature, and this is what is attached to the API call in the Authorization header.

There are multiple levels of hashing; this is intentionally complicated so as to make the Lambdas as secure as possible. The signature must match the hash generated on the server for the request to be authorized.
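
The sketch below walks through that flow in TypeScript using Node’s crypto module: build the canonical request, hash it into the string to sign, derive the signing key, and emit the Authorization header. The endpoint, region and service values are placeholders, and real code would normally use an AWS SDK or signing library rather than hand-rolling this:

```typescript
// Illustrative AWS Signature Version 4 signing for a GET request to an API Gateway endpoint.
import { createHash, createHmac } from "crypto";

// Hypothetical request details -- substitute your own endpoint, region and keys.
const method = "GET";
const service = "execute-api";
const region = "us-east-1";
const host = "abc123.execute-api.us-east-1.amazonaws.com";
const path = "/dev/secure-demo";
const accessKey = process.env.AWS_ACCESS_KEY_ID!;
const secretKey = process.env.AWS_SECRET_ACCESS_KEY!;

const sha256Hex = (data: string) => createHash("sha256").update(data, "utf8").digest("hex");
const hmac = (key: Buffer | string, data: string) => createHmac("sha256", key).update(data, "utf8").digest();

// Timestamps in the formats SigV4 expects, e.g. 20180525T123600Z and 20180525.
const amzDate = new Date().toISOString().replace(/[:-]|\.\d{3}/g, "");
const dateStamp = amzDate.slice(0, 8);

// 1. Canonical request: method, path, query string, headers, signed header list, payload hash.
const canonicalHeaders = `host:${host}\nx-amz-date:${amzDate}\n`;
const signedHeaders = "host;x-amz-date";
const canonicalRequest = [method, path, "", canonicalHeaders, signedHeaders, sha256Hex("")].join("\n");

// 2. String to sign: algorithm, timestamp, credential scope, hashed canonical request.
const credentialScope = `${dateStamp}/${region}/${service}/aws4_request`;
const stringToSign = ["AWS4-HMAC-SHA256", amzDate, credentialScope, sha256Hex(canonicalRequest)].join("\n");

// 3. Derive the signing key through successive HMACs, then sign the string.
const kDate = hmac("AWS4" + secretKey, dateStamp);
const kRegion = hmac(kDate, region);
const kService = hmac(kRegion, service);
const kSigning = hmac(kService, "aws4_request");
const signature = createHmac("sha256", kSigning).update(stringToSign, "utf8").digest("hex");

// 4. The Authorization header attached to the API call.
const authorization =
  `AWS4-HMAC-SHA256 Credential=${accessKey}/${credentialScope}, ` +
  `SignedHeaders=${signedHeaders}, Signature=${signature}`;

console.log(authorization);
```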

Roger then closed out the presentation by showing how his Python code creates these separate credentials and how he can then call the secure Lambda directly.

Conclusion

Securing Lambdas is a complex process; this is by design, to ensure that an open endpoint cannot be called by a rogue service.

PSC Tech Talk: AI in Action – Azure Cognitive Language Services

In this short but interesting presentation Mark Roden and Jalal Benali talked about how they had used Azure Cognitive Language Services (Translation API) to elegantly solve a client’s need to have their intranet available in multiple languages across multiple territories.

Background

One of PSC’s clients has sites in multiple countries, and as part of their intranet consolidation they wanted a way for their corporate messages to be translated for the various countries/languages. The intranet was hosted on the DotNetNuke CMS and needed to be configurable, flexible and, above all, easy to use for the business. The business had looked at publicly available services like Google Translate, but determined that on a private intranet site they did not want any corporate information being taken outside of their control.

Azure Cognitive Language Services Translation API was selected as the solution because it is secure, private, easy to use and surprisingly flexible when it comes to converting web based content.

Solution

PSC created a custom module for DotNetNuke (DNN) which allowed content managers to create translated versions of the content at the click of a button. The solution was tied into the out-of-the-box language capabilities of DNN, whereby the languages available for translation were those enabled in the DNN core configuration. In this manner, if a new site in a new country was acquired, the administrators need only turn on the new language for it to become available.

Because the Azure translations need to be reviewed for accuracy by a local admin in-country, PSC created the ability to have the new message held back for administrative approval. Once approved it is then published on the appropriate language version of the intranet.

Once the translations were created, the global administrators would be able to monitor which ones were subsequently modified by the local content manager. In this manner, corrections to the original English content would not automatically be re-translated and overwrite the locally corrected translated versions.

Limitations

  • The number of characters which can be translated at any one time is 10,000
  • No automated translation will be perfect, but for normal conversational English we found it to be better than we expected. For technical documentation the results were not as successful.
  • Some languages were more accurate than others when reviewed by the in-country testing teams

Pricing

Depending on which service you use and usage, the pricing varies from free to $4.50 per million characters translated (as of May 2018).

Retention of HTML formatting

One surprising and significant benefit of using the Translation API was that when fed an HTML string, the HTML tags were left untouched. This meant that the formatting of the returned translation was identical to the original. While this does increase the size and number of characters translated, it would not approach the limits for this effort.
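
As a rough illustration, a call to the Translator Text API (v3) with textType=html looks like the sketch below; the key, target language and markup are placeholders, and production code would batch requests and handle errors:

```typescript
// Minimal sketch of calling the Translator Text API (v3) with HTML content.
// Assumes a global fetch (Node 18+ or a browser); the key and markup are illustrative.
const endpoint = "https://api.cognitive.microsofttranslator.com/translate";
const subscriptionKey = process.env.TRANSLATOR_KEY!;

async function translateHtml(html: string, toLanguage: string): Promise<string> {
  // textType=html tells the service to translate text nodes and leave the tags alone.
  const url = `${endpoint}?api-version=3.0&to=${toLanguage}&textType=html`;
  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": subscriptionKey,
      "Content-Type": "application/json",
    },
    body: JSON.stringify([{ Text: html }]),
  });
  const result = await response.json();
  // Each input element returns an array of translations; take the first one.
  return result[0].translations[0].text;
}

// Example: the markup is preserved, only the readable text is translated.
translateHtml("<h2>Welcome</h2><p>Our quarterly results are in.</p>", "fr").then(console.log);
```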

Conclusion

The solution PSC implemented allowed the client to securely translate sections of their intranet and then manage the translated pages once they were published. Overall our experience with the Translation API was a very good one. We found it very easy to set up and simple to implement.

PSC Tech Talk: Change Management

“It’s all in the way we listen”. It’s the PSC motto and especially relevant when it comes to helping clients migrate platforms. With our knowledge and experience we’ve seen that technology is often not the problem in our projects; people are. Humans are masters at making systems serve our needs, but in general people don’t change quite so easily.

In this light-hearted presentation with a serious message, one of our most experienced consultants, John Bigenwald (@john_bigenwald), talked about how he approaches change management when helping clients migrate platforms. While this is not a “Technology Talk”, it is something we do every day at PSC and it is relevant to every technology project we do.

Tech is easy – People are hard

Not to minimize how hard developers have it, but code can be refactored and changed without having to hurt its feelings or its years-long emotional attachment to the application.

Why do projects fail?

Based on research done by a French company, of their own internal projects:

  • 11% of large projects fail because of quality
  • 39% of projects fail due to a lack of planning resources and activities
  • 57% of projects fail due to “breakdown in communications”

The problems are with “soft skills” and not “hard skills”. Staying in contact with the program managers as we go through a project is critical and essential to its success. Communicating with the users not only buys us goodwill, but gives us better insight into how they work. During this time we also uncover many issues which, left unaddressed, could cause the failure of the project.

Lewin’s Change Management Model

No one starts a project intending to fail. There are three stages to Lewin’s Change Management Model:

  • Unfreeze
  • Change
  • Freeze

In a survey, 75% of project managers expected that there would be some form of failure in their IT projects. That expectation exists from the outset, with history and experience telling them not to expect success. Even 15 years ago, the Harvard Business Review stated in an article that:

“Managers expect to be able to plan for all
variables in advance, but they can’t.
Nobody is that smart or has a crystal ball that clear….”

Unfreeze is the key to the change

How you prepare the organization for change will be the driving force behind the whole process. You have to create a sense of urgency, build coalitions and create a vision for change.

  • Preparing the organization to accept the necessity and/or desirability of a change. It involves breaking down the existing status quo before you can build up a new way of operating. You need to develop a compelling message showing why the existing way of doing things cannot continue.

John then went on to draw an analogy between “change management experts” on TV and real life.

In “Property Brothers” on HGTV, the brothers work with people buying a new house and renovating it. As they go through the show, issues are uncovered and they all need to be handled within budget. The news is not always good or pleasant to deal with, but the homeowner has to determine what is most important to implement in order to call the project a success. They never know what’s going to be behind a wall at the start of a project.

In “The Walking Dead” TV show, people are living in a post-apocalyptic world where they have to survive by using their experience and wits to avoid becoming one of the horde.

There is a point to this….I assure you 🙂

Creating Urgency

The status quo is unfrozen only by creating a sense of urgency. Stalling and slowing the process is not only a high risk, but also gives people who want to be obstacles leverage not to change.

In the Walking Dead a zombie horde in the camp really creates a sense of urgency. But that urgency is driven by fear, which often leads to bad decisions. A poorly thought through decision causes someone to do something unexpected which puts the whole group at risk.

In Property Brothers they create urgency by painting a vision of the future. This provides the impetus for change. Besides designing and building a new home, an integral part of the brothers’ job is to paint a vision of the future. This allows the client to better handle the unplanned nightmare behind the wall. The overall goal remains the overall goal, and that which is in front of us right now is only an obstacle on the way to it.

So how do we create urgency? Hire some zombies to crash the office?? No…..

Urgency is created by a desire to achieve a vision. The greater the vision, and the better the promised future, the greater the urgency to reach that future. Losing a ton of money in a certain area does not necessarily sway the budget lines for the overall plan. This is only one example, but the thought process is the same. Without a sense of why this needs to be done now, why is it important enough to move to the front of the queue? Without a sense of why a change will improve the program, your job, your life, there is no urgency to make the change – inertia will win.

Building coalitions

As projects evolve your change agents can switch and they aren’t always the people you think they will be. In the Walking Dead the group’s leader was a former deputy. Others in leadership are a housewife and a survivalist loner. You would not have chosen them as a team to start with but they need to work together for survival. Over time people are lost from the group and new members join as needs change. In the same manner, hidden structure, long term friendships and who is impacted by the change can drive the dynamics of the coalition.

Create a vision for change

Going to The Walking Dead again: they are always talking about a better life; it drives them forward. They don’t talk about which direction has fewer zombies, they talk about the better life when they get there.

In Property Brothers they do not talk about the size of the house, they talk about entertaining, having dinner with the kids and how that will give you a better life.

Unless you have a clearly defined vision of where you are going, some people to help you get there, and a sense of urgency to do it, your chances of failure are higher.

Big thinking precedes great achievement

– Wilfred Peterson

Conclusion

Understand what is driving the change. Use that knowledge to paint a vision of the future. Logic drives technology; Emotion drives people.

PSC Tech Talks – A Journey to the Programmable Data Center

During this PSC Tech Talk, Geremy Reiner gave us an overview of his “Journey to the Programmable Data Center”. The emphasis of the presentation was not on the technologies involved, but on the concepts and processes which enable infrastructure to be deployed as code and, from there, on the business solutions that such infrastructure enables.

Background

There is more to innovation than technology for the sake of technology. When asking why we should build programmable datacenters, the answer is much more than “because the technology is better”. We need to consider how a modern datacenter:

  • Provides a business focused approach to infrastructure
  • Simplifies datacenter management
  • Increases speed of delivery
  • Extends benefits of automation and orchestration

Datacenter Ascendancy

As technology has evolved, so has the way we use it to solve business problems. But technology is not the only thing which has to evolve to maximize the cost reduction and productivity gains which a modern datacenter can provide. The organization has to embrace the new capabilities as well.

A traditional datacenter is stable, secure and reliable, but to achieve that it has a large footprint, is generally utilized at only 20% of capacity on average, has a high management cost and is very expensive to scale.

A virtual datacenter has increased scalability, can be managed from a computer rather than at a rack terminal, is generally utilized at 50% of capacity or greater, and is much quicker at standing up new capabilities.

Cloud computing, or “IT as a service”, uses highly automated self-service portals, the abstraction of infrastructure creation, and “click of a button” deployment of managed services. With a global footprint, the capacity-on-demand model now allows a business to plan for the future without having to make large CAPEX investments based on its projected needs for the next five years.

Organizational maturity

As the organization matures so can the technology. When the needs of the business can be reflected in a truly self service manner where everything from a new site to a new templated service can be deployed with nothing more than a set of configuration parameters and a button, the automated datacenter comes into its own.

Software defined datacenter

Geremy went into more depth about what a programmable datacenter is composed of. From application, to automation, to infrastructure, all with business oversight, the modern architected datacenter provides visibility at all levels.

So then what?

With all this in place, Geremy then got into the real business benefits, with examples, of where the modern data center enables business flexibility, cost saving, speed to market etc.

Process automation

When we talk process automation at a high level we are generally talking about frameworks like ITIL which cover the best practices for delivering IT services. How we respond to the needs of the users, outages and other unplanned issues requires the ability to know what is going on at any time and to be able to respond to it in a repeatable manner.

In a modern datacenter that is generally an amazingly well-defined, automated process.

If a service looks like it is not responding as expected, a new instance of the service is spun up, the necessary configuration changes are made to direct traffic to the new service, and the old one is turned off – automatically. The end goal is for this to be seamless to the end user.

Continuous Delivery 

The modern datacenter enables us to create business-enabling “DevOps” capabilities whereby not only is code tested automatically, but the infrastructure necessary to run the tests on is created programmatically at testing time. Servers and test suites are stood up and then torn down (or turned on and then turned off) as necessary. This level of automation allows high productivity but keeps costs down for the business.

Azure Resource Management (ARM) templates

There are configuration standards for describing how your infrastructure should be created, deployed, sized and run. This can make a sizeable difference in being able to deploy capabilities for your business.

As an example, if you wanted to go from zero capability to a deployed SharePoint farm with SQL Server and supporting services, you would be looking at a quarter to half a million dollars’ worth of capital investment in hardware and infrastructure, months of planning, service creation, setup and configuration, and then installation of the software.

With ARM you can literally deploy the entire SharePoint stack, hosted on Azure and using nine servers, within 15 minutes at the click of a button. At the time of the presentation this build would have cost approximately $5,000 a month. The cost benefits are clear and significant.

To help organizations get started with Azure, Microsoft has created many open-source ARM templates and posted them on GitHub for general consumption and improvement. They can be downloaded and configured for your needs, and you can be up and running within hours, not months.
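
As a hedged illustration of “infrastructure as code”, the sketch below deploys one of those quickstart templates programmatically using the Azure SDK for JavaScript. The resource group, deployment name and parameter names are placeholders, and the exact packages and method names may differ from the tooling Geremy used in the talk:

```typescript
// Illustrative sketch: deploying a quickstart ARM template from code.
// Assumes the @azure/identity and @azure/arm-resources packages (recent versions).
import { readFileSync } from "fs";
import { DefaultAzureCredential } from "@azure/identity";
import { ResourceManagementClient } from "@azure/arm-resources";

const subscriptionId = process.env.AZURE_SUBSCRIPTION_ID!;
const client = new ResourceManagementClient(new DefaultAzureCredential(), subscriptionId);

async function deploySharePointFarm(): Promise<void> {
  // A quickstart template downloaded from GitHub (e.g. a SharePoint farm template).
  const template = JSON.parse(readFileSync("azuredeploy.json", "utf8"));

  await client.deployments.beginCreateOrUpdateAndWait("sharepoint-demo-rg", "spfarm-deployment", {
    properties: {
      mode: "Incremental",
      template,
      parameters: { adminUsername: { value: "spadmin" } }, // parameter names vary per template
    },
  });
  console.log("Deployment complete");
}

deploySharePointFarm().catch(console.error);
```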

Working in the real world

PSC worked with one of our clients to create a 19 server, repeatably deployable process for them, whereby they could sell their services to end customers. Through a web interface, the client team could answer questions on a form which in turn built the custom ARM template. The ARM template was programmatically used to automate the deployment of the necessary environment for the end client based on their requirements.

Conclusion

A modern datacenter is designed around the business needs it can flexibly solve for end users, now and in the future, rather than how it can rigidly support the business needs of the present and past. PSC has proven experience in deploying infrastructure as a service using ARM templates, automated deployment and management of virtual infrastructure, and utilizing modern datacenters to help our customers future-proof their technology needs.

PSC Tech Talk: UX Design – Not just making things pretty

Many companies used to treat design and user experience as a second-class requirement when creating technical innovation (most people my age will know what I am talking about). Functionality was more important than how easy a product was to use. As has been the case for many years now, though, this is certainly no longer the norm. The expectation for design, function and “how it makes me feel” is assumed in the same manner that requirements need to be met.

Our Senior Design Consultant Jay Kasturi gave a presentation on User Experience design, which to a room full of hard core developers was a challenge in itself.

UX Design – Not just making things pretty

Jay spoke about the history of user experience which has its roots in industrial design/human centered design practices and precedes the world of web development.

“UX is the design of everything
independent of medium or across media
with human experience as an explicit outcome
and human engagement as an explicit goal.”
– Jesse James Garrett

The goals of UX are to improve customer satisfaction and loyalty through the utility, ease of use, and engagement provided in the interaction with a product.

UX focuses on the user, paying attention to how the user is engaging with the senses, the body, the mind and the emotions. Beginning from these considerations we work to gain a better picture of the user’s capabilities, constraints, and their experiential context. This is the foundation that user research provides and from which iterative design and development can proceed.

The emotional capacity of UX has only really been addressed in the last two decades. In the book The Experience Economy, the authors talk about how we have long discounted experiences as entertainment only, and that is not an accurate description.

How does UX and design thinking work together?

Design thinking is a simple solution framework that anyone can use for innovation. Using it we can empathize, define, ideate, prototype and test our assumptions generated during the process and ensure that we are meeting the needs of the users.

Jay went on to show her vision for how UX is composed of many facets which together build the bigger picture of creating a website for a client.

UX and development

In development, users and roles serve a specific purpose in mapping the architecture needs and flow of an application. But through UX we can develop personas and use cases to uncover user tasks, edge cases, and unmet needs that could be translated into features and find/address usability concerns.

Simply put, developers can get pretty clear requirements of how an application works – but often times how stakeholders envision the application working for themselves is not how others would use it. Just because a functional requirement is met, does not mean that an end user will find it intuitive to use. This is the gap that a thorough UX process will bridge.

UX and UI are not interchangeable

They do touch on each other and share a great deal of information, but it’s important to delineate what each term is referencing. User interface design is concerned with layout, typography, and the visual flow and consistency of elements/components. UI without UX can create products that are visually great, but may not serve specific user needs or may actively confound users’ tasks.

Examples

Jay went on to show a persona and journey map she created for a client, which spoke to the user journey prior to, through, and beyond the app. Going through this exercise with the client helped contextualize and prioritize key moments in the user experience.

We also saw examples of wireframes with callouts on the page for a different client.

Conclusion

Jay’s UX experience and knowledge has had a significant positive impact on our projects since she joined PSC last year. Being able to show some of the cool things she has done, and to spread the word internally about how she does it, made for a very powerful and well-received presentation.

PSC Tech Talk – Microsoft Bot Framework

Starting in early 2016, cloud vendors began to promote the concept of bots as a cool new feature and a new way for users to interact with their applications within the enterprise. Understanding the general acceptance of a bot as a user interface, the push to gain traction in the space began in earnest. Seeing this shift in emphasis in the vendor landscape prompted PSC Labs to create an investigation team for a short-term project.

In this presentation Adam Lepley (@AdamLepley) presented the first of a number of talks (here, here and here) he has given on the MS Bot Framework, how it works, why it was created and how easy it is to use.

What is the Microsoft Bot Framework? 

The Bot Framework is, as the name implies, a framework for building “bots”. What this means for a developer is that Microsoft has created C# and JavaScript libraries containing methods and functions that simplify the creation of an interactive chat bot. Once the bot is created, the framework also provides the ability to publish it to many chat “channels” such as Facebook, Slack, Teams, Skype, SMS and others.

Chat bots are not a new concept. Various websites and chat clients have been leveraging various forms of bots for many years, but mostly with the emphasis on consumer-facing applications. Targeting chat applications also comes with the benefit of building on a platform the user is already familiar with. It removes the friction of learning a new application and lessens the burden on developers of creating complex custom UI.

The Plan

Within the lab we always like to learn about a new technology and then make a plan to better understand it and demonstrate capability in it. The plan was initially to download the examples, install them, learn, and then expand on what we learned to make our own examples with broader applicability to PSC clients.

We looked into:

  • How to create a bot
  • How to deploy a bot to different channels
  • How to add Artificial intelligence (LUIS)
  • How can we build something applicable to our clients / What else can we play with?

What did we find?

Adam discussed and demonstrated how easy it is to create a bot using the framework. He was able to build a hello world bot in about 10 minutes and publish it to a point where we could actually interact with it in the meeting itself.
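
To give a feel for how little code a “hello world” bot needs, here is a minimal echo-bot sketch using the Bot Framework SDK for Node/TypeScript (the v4 botbuilder package; Adam’s original demo may have used an earlier SDK version, and the app ID/password come from the bot’s Azure registration):

```typescript
// Minimal echo bot: replies with whatever the user typed.
import * as restify from "restify";
import { BotFrameworkAdapter, ActivityHandler, TurnContext } from "botbuilder";

class HelloWorldBot extends ActivityHandler {
  constructor() {
    super();
    // Handle every incoming message activity by echoing it back.
    this.onMessage(async (context: TurnContext, next) => {
      await context.sendActivity(`You said: "${context.activity.text}"`);
      await next();
    });
  }
}

const adapter = new BotFrameworkAdapter({
  appId: process.env.MicrosoftAppId,
  appPassword: process.env.MicrosoftAppPassword,
});
const bot = new HelloWorldBot();

// The channels (Teams, Skype, web chat, ...) all POST activities to this endpoint.
const server = restify.createServer();
server.post("/api/messages", (req, res) => {
  adapter.processActivity(req, res, async (context) => bot.run(context));
});
server.listen(3978, () => console.log("Bot listening on http://localhost:3978"));
```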

The investigation team created three bots aimed at demonstrating increased productivity gains and enhanced user experiences:

  • Common data capture – The ability to quickly and easily view and create timesheets from a bot.
  • Predictive Analytics – Using Machine Learning techniques to return projected sales results back to users based on Product information hosted in an external database.
  • Cognitive Services – Using cognitive services and natural language processing to demonstrate free text entry in a bot to create task logging on an external site.

The common assumption is that text is the primary interaction when using chat clients. This is mostly true when two humans communicate over chat, but as it relates to bots, we have a variety of options that Microsoft provides through its abstraction.

The bot framework supports text (plain and rich), images (up to 20 MB), video (up to 1 minute), buttons and the following rich content cards…

In addition to the rich content cards, Microsoft has released a separate service which enables you to build more complex card content layouts which can be rendered from data coming from the bot framework. This also exposes more native, platform-specific custom rendering of cards.

Timesheet Bot

We set out to build a bot which would help our consultants fill out their weekly timesheets. Our bot has two main features: displaying and creating a weekly timesheet. For displaying the previous week’s timesheet, we used a carousel which can display a collection of cards representing the days of the week. Each card also has a set of buttons which can either link to additional actions within the bot, or to external links on an existing website.
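
The carousel idea can be sketched with the SDK’s card helpers. The day names, hours and button URL below are illustrative placeholders, not the actual timesheet bot code:

```typescript
// Sketch of a carousel of hero cards, one per day of the previous week.
import { ActivityHandler, CardFactory, MessageFactory } from "botbuilder";

class TimesheetBot extends ActivityHandler {
  constructor() {
    super();
    this.onMessage(async (context, next) => {
      const days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"];
      const cards = days.map((day) =>
        CardFactory.heroCard(
          day,
          "8.0 hours - Client Project", // card text (placeholder data)
          undefined,                    // no images
          [{ type: "openUrl", title: "Edit in timesheet site", value: "https://example.com/timesheets" }]
        )
      );
      // Send all the cards as a single, horizontally scrollable carousel.
      await context.sendActivity(MessageFactory.carousel(cards));
      await next();
    });
  }
}
```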

Product information Bot

We created a bot demonstrating the ability to search a product database, which in turn triggered an external API call to an associated Azure machine learning service. Users can interact with the bot via a series of questions and answers, e.g. “What product are you searching for? Please select one of the following”. The results are then fed back to the bot in the form of a chart graphic. This bot demonstrates a powerful way to access a variety of on-demand reports right within a chat client.

Productivity Bot using Natural Language interpretation

We used the Azure LUIS service (Language Understanding Intelligent Service), which is part of Microsoft’s cognitive services and uses machine learning to help derive intent from text. Users can make an unstructured text request such as “create a task”, “create new task” or “I want a new task”, from which the LUIS service derives the intent to be “Create a Task”. Using a secure integration with an external task tracking service (Trello), the bot is then able to ask the user the necessary questions to create a task based on user inputs.
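
A rough sketch of that intent-derivation step, using the LuisRecognizer from the botbuilder-ai package, might look like this (the app ID, key, endpoint and the “CreateTask” intent name are placeholders, not the actual bot configuration):

```typescript
// Sketch: derive the user's intent from free text with LUIS, then branch on it.
import { TurnContext } from "botbuilder";
import { LuisRecognizer } from "botbuilder-ai";

const recognizer = new LuisRecognizer({
  applicationId: process.env.LUIS_APP_ID!,
  endpointKey: process.env.LUIS_KEY!,
  endpoint: "https://westus.api.cognitive.microsoft.com",
});

async function handleMessage(context: TurnContext): Promise<void> {
  // "create a task", "create new task" and "I want a new task" should all map to the same intent.
  const result = await recognizer.recognize(context);
  const intent = LuisRecognizer.topIntent(result);

  if (intent === "CreateTask") {
    // A real bot would start a dialog to gather the task details and then call the Trello API.
    await context.sendActivity("Sure - what should the new task be called?");
  } else {
    await context.sendActivity("Sorry, I didn't understand that.");
  }
}
```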

Conclusion

Bots are being used today by startups and some commercial enterprises trying to break into the corporate enterprise space. Our time spent with the Microsoft Bot Framework has convinced us that bots are ready for the enterprise and that there are use cases for their effective implementation today.

PSC Tech Talk: How does blockchain work and what is cryptomining?

This week one of the Labs team members, Toby Samples (@tsamples), gave a presentation on how blockchain works and what cryptomining is. We are looking at blockchain in the Labs right now and, with the considerable press around cryptomining and how you can even hack a website to do it, we figured it would be good to educate everyone internally and also come up with some policy around preventing this as part of our delivery excellence to clients.

What is blockchain?

Well, simply put, it is a distributed digital record which makes it possible to prove that every transaction within the “chain” is correct and has not been tampered with. Most people know blockchain through its association with Bitcoin.

Blockchain works by “hashing” the contents of a transaction and adding them to the “chain”. Once the chain is started, the next link in the chain is created using the hash from the previous link. If the contents of any link are changed, the hashes will no longer match and the chain is broken.
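
A toy example makes the chaining idea concrete: each block’s hash covers both its own contents and the previous block’s hash, so tampering with any earlier block breaks every later link. (This is an illustrative sketch, not how a real blockchain node is implemented.)

```typescript
// Toy hash chain: each block's hash depends on its data and the previous block's hash.
import { createHash } from "crypto";

interface Block {
  data: string;
  previousHash: string;
  hash: string;
}

const hashBlock = (data: string, previousHash: string): string =>
  createHash("sha256").update(previousHash + data).digest("hex");

function addBlock(chain: Block[], data: string): void {
  const previousHash = chain.length ? chain[chain.length - 1].hash : "0";
  chain.push({ data, previousHash, hash: hashBlock(data, previousHash) });
}

function isChainValid(chain: Block[]): boolean {
  return chain.every((block, i) => {
    const previousHash = i === 0 ? "0" : chain[i - 1].hash;
    return block.previousHash === previousHash && block.hash === hashBlock(block.data, previousHash);
  });
}

const ledger: Block[] = [];
addBlock(ledger, "Alice pays Bob 5");
addBlock(ledger, "Bob pays Carol 2");
console.log(isChainValid(ledger)); // true
ledger[0].data = "Alice pays Bob 500"; // tamper with an earlier block
console.log(isChainValid(ledger)); // false - the hashes no longer match
```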

The implication for Bitcoin transactions on a massive scale is that every transaction is recorded in the chain, which makes the chain large, which in turn makes validating the chain expensive and processor intensive. (One Bitcoin transaction costs as much energy as a house uses in a week.)

In a financial ledger it is critical to the confidence of the company/investor/buyer that bank records are accurate and no one is faking the numbers for their own personal gain. But there are many other potential usages with less “volume” but just as much value.

Bitcoin and other distributed cryptocurrencies allow transactions to happen all over the globe and, more importantly, transaction validation can be a distributed process. Transactions are not validated instantaneously.

When a digital transaction is carried out, it is grouped together in a cryptographically protected block with other transactions that have occurred in the last 10 minutes and sent out to the entire network.

Miners (members in the network with high levels of computing power) then compete to validate the transactions by solving complex coded problems. The first miner to solve the problem and validate the block receives a reward. (In the Bitcoin blockchain network, for example, a miner would receive Bitcoin.) This is a really nice article explaining how proof of work works:

Explaining How Proof of Stake, Proof of Work, Hashing and Blockchain Work Together
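
In miniature, proof of work is a brute-force search for a nonce that makes the block hash meet a difficulty target, as in the toy sketch below (real networks use difficulties that require enormous amounts of computation):

```typescript
// Toy proof-of-work: find a nonce that makes the block hash start with N zeros.
// The difficulty here is tiny so it runs instantly; real mining is vastly harder.
import { createHash } from "crypto";

function mine(blockData: string, difficulty: number): { nonce: number; hash: string } {
  const target = "0".repeat(difficulty);
  let nonce = 0;
  let hash = "";
  do {
    nonce++;
    hash = createHash("sha256").update(blockData + nonce).digest("hex");
  } while (!hash.startsWith(target));
  return { nonce, hash };
}

const result = mine("transactions from the last 10 minutes", 4);
console.log(`Found nonce ${result.nonce} giving hash ${result.hash}`);
```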

So what is cryptomining?

Cryptomining is using a computer to do the coin mining processing. This is generally cost prohibitive to run as an individual; unless you have a powerful gaming PC and are making a long-term investment, it is not really a financially viable thing to do. The process is relatively simple: you create an online account to process financial transactions (so you get paid), sign up to a service which will give you transactions to process, and install a program to churn through validations. Once you sign up to a service, the validations are transmitted to your computer for processing.

It becomes illegal (cryptojacking) when you commandeer someone else’s machine to do the mining for you. Why not have someone else pay for the mining while you reap the profits from the validation?

This becomes especially nefarious when services like Coinhive allow you to make your website visitors do the mining for you. Some people are starting to use this as income from their websites rather than advertising. Coinhive offers a service whereby you add a Coinhive JS file to your website, and anyone who visits that site gets a JavaScript payload of coin mining assigned to their computer, which churns away while they are on the page.

In February 2018 it became international news when a remote third-party JS library used by UK and Australian government sites was hacked, and those .gov sites started to behave like Coinhive processing sites. See this great blog for more details (The JavaScript Supply Chain Paradox: SRI, CSP and Trust in Third Party Libraries).

There are ways and means to prevent your site becoming victim to this JavaScript attack as the article describes. The tale is cautionary and it is important that awareness of this kind of behavior is out there.

Conclusion

Blockchain is not just for financial transactions; there are many other real-world applications for it. Understanding how cryptocurrency works in principle, and the coin mining it necessitates, gives us better preparedness to prevent its illegal usage.

PSC Tech Talk: Azure API Management

In this presentation Alex Zakhodin (@AZakhodin) talked about his experience implementing Azure API Management within a large client.

The situation

The client is a globally focused company currently providing certification services to their customers. They wanted to be able to provide a new service so that those customers could access their certification data in real time through a consumable, monetized service.

Client challenges

The client’s main application and multiple data sources are on premises and would not be moved to the cloud, so a hybrid application needed to be created and managed.

The client wanted to be able to securely manage traffic accessing the APIs. They needed to be able not only to track the number of users calling the API but also to control the amount of access over time.

The payment model proposed for this service also needed a way to track everything at a discrete level: the number of hits and the volume of data provided.

PSC solution

PSC implemented a solution using Azure API Management which enabled the client to abstract the data, govern the process, monitor the usage, and have the flexibility to on-board new services at any time.

The Azure API Management platform creates an API proxy model to facilitate the monitoring of API traffic through a centralized set of endpoints. This allows developers to expose their internally hosted services without the risk of exposing a direct connection. It allows administrators to configure access to the data (down to individual users), set limits on the amount of data accessible over a period of time, and create accurate reports on the volume of usage for billing.
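
From the consumer’s side, calling an API fronted by API Management typically means adding a subscription key header to the request, which is how the gateway identifies and meters the caller. The gateway URL and key below are placeholders:

```typescript
// Sketch of a consumer-side call to an API Management-fronted endpoint.
const apimBase = "https://contoso-certs.azure-api.net"; // API Management gateway URL (illustrative)
const subscriptionKey = process.env.APIM_SUBSCRIPTION_KEY!;

async function getCertification(certId: string) {
  const response = await fetch(`${apimBase}/certifications/${certId}`, {
    headers: {
      // API Management identifies (and meters) the caller via this subscription key header.
      "Ocp-Apim-Subscription-Key": subscriptionKey,
    },
  });
  if (!response.ok) {
    throw new Error(`API call failed: ${response.status}`);
  }
  return response.json();
}

getCertification("12345").then(console.log).catch(console.error);
```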

The platform provides the ability to track traffic geographically and determine volumes and accessibility. For a global application, the endpoints and data can be made available via geo-replication.

For developers the API management portal provides the ability to not only track usage but also see how the APIs are performing.

To take advantage of the consumption pricing models available in Azure, Azure Functions were used wherever possible. In this way the client is only billed for usage. The direct cost per transaction means that the cost billed to the end client per transaction is easily manageable and competitive.
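
Behind the proxy, a consumption-billed endpoint can be as simple as an HTTP-triggered Azure Function. The sketch below uses the Node.js programming model from the @azure/functions package; the route parameter and certification data are placeholders, not the client’s actual implementation:

```typescript
// Sketch of an HTTP-triggered Azure Function that could sit behind the API Management proxy.
import { AzureFunction, Context, HttpRequest } from "@azure/functions";

const getCertification: AzureFunction = async (context: Context, req: HttpRequest): Promise<void> => {
  const certId = req.params?.certId || req.query?.certId;

  // A real implementation would query the on-premises data sources via a hybrid connection.
  const certification = { id: certId, status: "Valid", issued: "2018-01-15" };

  context.res = {
    status: 200,
    headers: { "Content-Type": "application/json" },
    body: certification,
  };
};

export default getCertification;
```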

Conclusion

The Azure API Management platform is a mature, enterprise-ready capability which allows for the creation of a hybrid cloud/on-premises architecture for companies to monitor, track and monetize their services in a secure and consistent manner.