PSC Tech Talks – A Journey to the Programmable Data Center

During this PSC Tech Talk, Geremy Reiner gave us an overview of his “Journey to the Programmable Data Center”. The emphasis of the presentation was not on the technologies involved, but on the concepts and processes which enable infrastructure to be deployed as code, and on the business solutions which that infrastructure then enables.

Background

There is more to innovation than technology for the sake of technology. When we ask why we should build programmable datacenters, the answer is much more than “because the technology is better”. We need to consider how a modern datacenter:

  • Provides a business focused approach to infrastructure
  • Simplifies datacenter management
  • Increases speed of delivery
  • Extends benefits of automation and orchestration

Datacenter Ascendancy

As technology has evolved, so has the way we use it to solve business problems. But technology is not the only thing which has to evolve to maximize the cost reduction and productivity gains which a modern datacenter can provide. The organization has to embrace the new capabilities as well.

A traditional datacenter is stable, secure and reliable, but to achieve that it has a large footprint, is generally utilized at only 20% of capacity on average, has a high management cost and is very expensive to scale.

A virtual datacenter has increased scalability, can be managed from a computer rather than at a rack terminal, is generally utilized at 50% or greater, and makes it much quicker to stand up a new capability.

Cloud computing, or “IT as a service”, uses highly automated self-service portals, the abstraction of infrastructure creation and “click of a button” deployment of managed services. With a global footprint, the capacity-on-demand model now allows a business to plan for the future without having to make large CAPEX investments against its projected needs for the next five years.

Organizational maturity

As the organization matures, so can the technology. When the needs of the business can be reflected in a truly self-service manner, where everything from a new site to a new templated service can be deployed with nothing more than a set of configuration parameters and a button, the automated datacenter comes into its own.


Software defined datacenter

Geremy went into more depth about what a programmable datacenter is composed of. From application to automation to infrastructure, all with business oversight, the modern architected datacenter provides visibility at all levels.


So then what?

With all this in place, Geremy then got into the real business benefits, with examples, of where the modern datacenter enables business flexibility, cost savings, speed to market and more.

Process automation

When we talk about process automation at a high level, we are generally talking about frameworks like ITIL which cover the best practices for delivering IT services. Responding to the needs of users, outages and other unplanned issues requires the ability to know what is going on at any time and to respond to it in a repeatable manner.

In a modern datacenter that is generally an amazingly well-defined, automated process.

If a service looks like it is not responding as expected, a new instance of the service is spun up, the necessary configuration changes are made to direct traffic to the new service, and the old one is turned off – automatically. The end goal is for this to be seamless to the end user.
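The shape of that loop is simple enough to sketch. Below is a minimal, hypothetical illustration in TypeScript, assuming Node 18+ for the global fetch; the orchestration hooks are placeholders for whatever platform API actually provisions instances and routes traffic, not a real product:

```typescript
// Hypothetical orchestration hooks – stand-ins for a real platform API.
declare function provisionInstance(): Promise<string>;            // spin up a replacement
declare function routeTrafficTo(instance: string): Promise<void>; // repoint the load balancer
declare function decommission(instance: string): Promise<void>;   // turn off the old instance

async function isHealthy(url: string): Promise<boolean> {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(5000) });
    return res.ok;
  } catch {
    return false; // a timeout or network error counts as unhealthy
  }
}

function watchService(serviceUrl: string): void {
  setInterval(async () => {
    if (!(await isHealthy(serviceUrl))) {
      const replacement = await provisionInstance();
      await routeTrafficTo(replacement);
      await decommission(serviceUrl);
    }
  }, 30_000); // re-check every 30 seconds
}
```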

Continuous Delivery 

The modern datacenter enables us to create business-enabling “DevOps” capabilities whereby not only is code tested automatically, but the infrastructure necessary to run the tests on is created programmatically at the time of testing. Servers and test suites are stood up and then broken down (or turned on and then turned off) as necessary. This level of automation allows high productivity but keeps costs down for the business.

Azure Resource Management (ARM) templates

There are configuration standards for describing how your infrastructure should be created, deployed, sized and run. This can make a sizeable difference to your ability to deploy capabilities for your business.

As an example, if you wanted to go from zero capability to a deployed SharePoint farm with SQL Server and supporting services, you would be looking at a quarter to half a million dollars’ worth of capital investment in hardware and infrastructure, plus months of planning, service creation, setup, configuration and then installation of the software.

With ARM you can literally deploy the entire SharePoint stack on Azure within 15 minutes, using 9 servers, with the click of a button. At the time of the presentation this build would have cost approximately $5,000 a month. The cost benefits are clear and significant.

To help organizations get started with Azure, Microsoft has created many open-source ARM templates and posted them on GitHub for general consumption and improvement. They can be downloaded and configured for your own needs, and you can be up and running within hours, not months.
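As a hedged illustration of what “infrastructure as code” means here, deploying one of those quickstart templates with the current Azure SDK for JavaScript/TypeScript looks roughly like this; the subscription, resource group, template URI and parameter values are all placeholders, and the talk itself predates this SDK version:

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { ResourceManagementClient } from "@azure/arm-resources";

async function deployQuickstart(): Promise<void> {
  // Placeholder subscription; credentials are picked up from the environment.
  const client = new ResourceManagementClient(new DefaultAzureCredential(), "<subscription-id>");

  // Hand Azure the template plus parameters and wait for the deployment to complete.
  await client.deployments.beginCreateOrUpdateAndWait("my-resource-group", "sharepoint-demo", {
    properties: {
      mode: "Incremental",
      // A template from the quickstart repository mentioned above (placeholder URI).
      templateLink: { uri: "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/..." },
      parameters: { adminUsername: { value: "demoadmin" } }, // template-specific values
    },
  });
}
```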


Working in the real world

PSC worked with one of our clients to create a repeatable deployment process for a 19-server environment, through which they could sell their services to end customers. Through a web interface, the client team could answer questions on a form which in turn built a custom ARM template. The ARM template was then used programmatically to automate the deployment of the necessary environment for the end client based on their requirements.

Conclusion

A modern datacenter is designed around the business needs it can flexibly solve for end users, now and in the future, rather than how it can rigidly support the business needs of the present and past. PSC has proven experience in deploying infrastructure as a service using ARM templates, automating the deployment and management of virtual infrastructure, and utilizing modern datacenters to help our customers future-proof their technology needs.

PSC Tech Talk: UX Design – Not just making things pretty

Many companies used to treat design and user experience as second-class requirements when creating technical innovation (most people my age will know what I am talking about). Functionality was more important than how easy a product was to use. That has not been the case for many years though, and it is certainly no longer the norm. The expectation for design, function and “how it makes me feel” is now assumed in the same manner that functional requirements need to be met.

Our Senior Design Consultant Jay Kasturi gave a presentation on user experience design, which to a room full of hard-core developers was a challenge in itself.

UX Design – Not just making things pretty

Jay spoke about the history of user experience, which has its roots in industrial design and human-centered design practices and predates the world of web development.

“UX is the design of everything
independent of medium or across media
with human experience as an explicit outcome
and human engagement as an explicit goal.”
– Jesse James Garrett

The goals of UX are to improve customer satisfaction and loyalty through the utility, ease of use, and engagement provided in the interaction with a product.

UX focuses on the user, paying attention to how the user is engaging with the senses, the body, the mind and the emotions. Beginning from these considerations we work to gain a better picture of the user’s capabilities, constraints, and their experiential context. This is the foundation that user research provides and from which iterative design and development can proceed.

The emotional capacity of UX has only really been addressed in the last two decades. In the book The Experience Economy, the authors talk about how we have long discounted experiences as entertainment only, and that this is not an accurate description.

How do UX and design thinking work together?

Design thinking is a simple solution framework that anyone can use for innovation. Using it we can empathize, define, ideate, prototype and test our assumptions generated during the process and ensure that we are meeting the needs of the users.

Jay went on to show her vision for how UX is composed of many facets which together build the bigger picture of creating a website for a client.

UX and development

In development, users and roles serve a specific purpose in mapping the architecture needs and flow of an application. But through UX we can develop personas and use cases to uncover user tasks, edge cases and unmet needs that can be translated into features, and to find and address usability concerns.

Simply put, developers can get pretty clear requirements of how an application works – but oftentimes how stakeholders envision the application working for themselves is not how others would use it. Just because a functional requirement is met does not mean that an end user will find it intuitive to use. This is the gap that a thorough UX process will bridge.

UX and UI are not interchangeable

They do touch on each other and share a great deal of information, but it’s important to delineate what each term references. User interface design is concerned with layout, typography, and the visual flow and consistency of elements and components. UI without UX can create products that are visually great but may not serve specific user needs, or may actively confound users’ tasks.

Examples

Jay went on to show a persona and journey map she created for a client, which spoke to the user’s journey prior to, through, and beyond the app. Going through this exercise with the client helped contextualize and prioritize key moments in the user experience:

We also saw examples of wireframes with callouts on the page for a different client:


Conclusion

Jay’s UX experience and knowledge have had a significant positive impact on our projects since she joined PSC last year. Being able to show some of the cool things she has done, and to spread the word internally about how she does it, made for a very powerful and well-received presentation.


PSC Tech Talk – Microsoft Bot Framework

Starting in early 2016, cloud vendors began to promote the concept of bots as a cool new feature and a new way for users to interact with their applications within the enterprise. Sensing the general acceptance of a bot as a user interface, they began in earnest to push for traction in the space. Seeing this shift in emphasis in the vendor landscape prompted PSC Labs to create an investigation team for a short-term project.

In this presentation Adam Lepley (@AdamLepley) presented the first of a number of talks (here, here and here) he has given on the MS Bot Framework, how it works, why it was created and how easy it is to use.

What is the Microsoft Bot Framework? 

The bot framework is, as the name implies, a framework for building “bots”. What this means for a developer is that Microsoft has created C# and JavaScript libraries containing methods and functions which simplify the creation of an interactive chat bot. Once the bot is created, the framework also provides the ability to publish it to many chat “channels” like Facebook, Slack, Teams, Skype, SMS and others.

Chat bots are not a new concept. Various websites and chat clients have been leveraging various forms of bots for many years, but mostly with an emphasis on consumer-facing applications. Targeting chat applications also comes with the benefit of building on a platform the user is already familiar with. It removes the friction of learning a new application and lessens the burden on developers of creating complex custom UI.

The Plan

Within the lab we always like to learn about a new technology and then make a plan to better understand it and demonstrate capability in it. The plan was initially to download the examples, install them, learn, and then expand on what we learned to make our own examples with broader applicability to PSC clients.

We looked into:

  • How to create a bot
  • How to deploy a bot to different channels
  • How to add Artificial intelligence (LUIS)
  • How can we build something applicable to our clients / What else can we play with?

What did we find?

Adam discussed and demonstrated how easy it is to create a bot using the framework. He was able to build a hello world bot in about 10 minutes and publish it to a point where we could actually interact with it in the meeting itself.
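For flavor, a hello-world bot in the Node.js SDK is little more than an echo handler. Here is a minimal sketch using the current botbuilder package (which postdates the SDK version shown in the talk):

```typescript
import { ActivityHandler } from "botbuilder";

// A minimal echo bot: replies to every message with the text it received.
export class HelloWorldBot extends ActivityHandler {
  constructor() {
    super();
    this.onMessage(async (context, next) => {
      await context.sendActivity(`You said: "${context.activity.text}"`);
      await next(); // let any downstream handlers run
    });
  }
}
```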

The investigation team created three bots aimed at demonstrating increased productivity gains and enhanced user experiences:

  • Common data capture – The ability to quickly and easily view and create timesheets from a bot.
  • Predictive Analytics – Using Machine Learning techniques to return projected sales results back to users based on Product information hosted in an external database.
  • Cognitive Services – Using cognitive services and natural language processing to demonstrate free text entry in a bot to create task logging on an external site.

The common assumption is that text is the primary form of interaction when using chat clients. This is mostly true when two humans communicate over chat, but as it relates to bots, Microsoft’s abstraction provides a variety of options.

The bot framework supports text (plain and rich), images (up to 20 MB), video (up to 1 minute), buttons and the following rich content cards…

In addition to the rich content cards, Microsoft has released a separate service which enables you to build more complex card content layouts which can be rendered from data coming from the bot framework. This also exposes more native, platform-specific custom rendering of cards.
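Assuming the service referred to here is Adaptive Cards, sending one of these richer layouts from a bot looks roughly like this sketch (the card content is invented for illustration):

```typescript
import { CardFactory, MessageFactory, TurnContext } from "botbuilder";

// Build an Adaptive Card from JSON and send it; channels that support the
// format render it natively, others fall back to a simpler representation.
async function sendSummaryCard(context: TurnContext): Promise<void> {
  const card = CardFactory.adaptiveCard({
    $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
    type: "AdaptiveCard",
    version: "1.0",
    body: [
      { type: "TextBlock", text: "Weekly summary", weight: "Bolder" },
      { type: "TextBlock", text: "38 hours logged across 4 projects.", wrap: true },
    ],
  });
  await context.sendActivity(MessageFactory.attachment(card));
}
```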

Timesheet Bot

We set out to build a bot which would help our consultants fill out their weekly timesheets. The bot has two main features: displaying and creating a weekly timesheet. For displaying the previous week’s timesheet, we used a carousel card which can display a collection of cards representing the days of the week. Each card also has a set of buttons which can either link to additional actions within the bot, or to external links on an existing website.
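A carousel along those lines can be assembled from hero cards. This is a sketch, not the actual project code; the edit URL is a placeholder:

```typescript
import { CardFactory, MessageFactory, TurnContext } from "botbuilder";

// One hero card per day, sent as a horizontally scrollable carousel.
async function showWeek(
  context: TurnContext,
  week: { day: string; hours: number }[]
): Promise<void> {
  const cards = week.map((d) =>
    CardFactory.heroCard(d.day, `${d.hours} hours logged`, undefined, [
      // Buttons can trigger bot actions or open an external site (placeholder URL).
      { type: "openUrl", title: "Edit", value: "https://timesheets.example.com" },
    ])
  );
  await context.sendActivity(MessageFactory.carousel(cards));
}
```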

Product information Bot

We created a bot demonstrating the ability to search a product database, which in turn triggered an external API call to an associated Azure machine learning service. Users interact with the bot via a series of questions and answers, e.g. “What product are you searching for? Please select one of the following”. The results are then fed back to the bot in the form of a chart graphic. This bot demonstrates a powerful way to access a variety of on-demand reports right within a chat client.

Productivity Bot using Natural Language interpretation

We used the Azure LUIS service (Language Understanding Intelligent Service), which is part of Microsoft’s cognitive services and uses machine learning to help derive intent from text. Users can make an unstructured text request such as “create a task”, “create new task” or “I want a new task”, from which the LUIS service derives the intent “Create a Task”. Using a secure integration with an external task-tracking service (Trello), the bot is then able to ask the user the necessary questions to create a task based on their inputs.
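Wiring LUIS into a bot is a thin layer over the SDK’s recognizer. Here is a sketch using the botbuilder-ai package; the application ID, key, endpoint and intent name are placeholders, not the project’s actual configuration:

```typescript
import { TurnContext } from "botbuilder";
import { LuisRecognizer } from "botbuilder-ai";

// Placeholder credentials for a trained LUIS application.
const recognizer = new LuisRecognizer({
  applicationId: "<luis-app-id>",
  endpointKey: "<endpoint-key>",
  endpoint: "https://<region>.api.cognitive.microsoft.com",
});

// Map free text like "I want a new task" onto a named intent.
async function routeMessage(context: TurnContext): Promise<void> {
  const result = await recognizer.recognize(context);
  if (LuisRecognizer.topIntent(result) === "CreateTask") {
    await context.sendActivity("OK – what should the new task be called?");
  }
}
```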

Conclusion

Bots are being used today by startups and some commercial enterprises trying to break into the corporate enterprise space. Our time spent with the Microsoft Bot Framework has convinced us that bots are ready for the enterprise and that there are use cases for their effective implementation today.


PSC Tech Talk: How does blockchain work and what is cryptomining?

This week one of the Labs team members, Toby Samples (@tsamples), gave a presentation on how blockchain works and what cryptomining is. We are looking at blockchain in the Labs right now, and with the considerable press around cryptomining, and how you can even hack a website to do it, we figured it would be good to educate everyone internally and also come up with a policy around preventing it as part of our delivery excellence to clients.

What is blockchain?

Simply put, it is a distributed digital record which makes it possible to prove that every transaction within the “chain” is correct and has not been tampered with. Most people know blockchain through its association with Bitcoin.

Blockchain works by “hashing” the contents of a transaction and adding them to the “chain”. Once the chain is started, the next link in the chain is created using the hash from the previous link. If the contents of any link are changed, the hashes will no longer match and the chain is broken.
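The mechanism can be sketched in a few lines. This is a toy illustration of the hash-chaining idea, not a production blockchain:

```typescript
import { createHash } from "crypto";

interface Block {
  data: string;
  prevHash: string;
  hash: string;
}

// Each block's hash covers its own data plus the previous block's hash, so
// altering any earlier block invalidates every hash that follows it.
function makeBlock(data: string, prevHash: string): Block {
  const hash = createHash("sha256").update(prevHash + data).digest("hex");
  return { data, prevHash, hash };
}

function isChainValid(chain: Block[]): boolean {
  return chain.every((block, i) => {
    const prevHash = i === 0 ? "0" : chain[i - 1].hash;
    const expected = createHash("sha256").update(prevHash + block.data).digest("hex");
    return block.prevHash === prevHash && block.hash === expected;
  });
}

const chain = [makeBlock("genesis", "0")];
chain.push(makeBlock("Alice pays Bob 5", chain[0].hash));
console.log(isChainValid(chain)); // true, until any block's data is edited
```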

The implication for Bitcoin transactions on a massive scale is that every transaction is recorded in the chain, which makes the chain large, which makes validating the chain expensive and processor-intensive. (A single Bitcoin transaction costs as much energy as a house uses in a week.)

In a financial ledger it is critical to the confidence of the company/investor/buyer that bank records are accurate and no one is faking the numbers for their own personal gain. But there are many other potential usages with less “volume” but just as much use.

Bitcoin and other distributed cryptocurrencies allow transactions to happen all over the globe and, more importantly, allow transaction validation to be a distributed process. Transactions do not take effect instantaneously.

When a digital transaction is carried out, it is grouped together in a cryptographically protected block with other transactions that have occurred in the last 10 minutes and sent out to the entire network.

Miners (members of the network with high levels of computing power) then compete to validate the transactions by solving complex coded problems. The first miner to solve the problem and validate the block receives a reward (in the Bitcoin blockchain network, for example, a miner would receive Bitcoins). This is a really nice article explaining how proof of work works:

Explaining How Proof of Stake, Proof of Work, Hashing and Blockchain Work Together
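The “complex coded problem” in Bitcoin’s case is proof of work: find a nonce that gives the block’s hash a required number of leading zeros. A toy version follows; real Bitcoin mining hashes a binary block header twice with SHA-256 at a vastly higher difficulty:

```typescript
import { createHash } from "crypto";

// Find a nonce such that sha256(blockData + nonce) starts with `difficulty` zeros.
function mine(blockData: string, difficulty: number): { nonce: number; hash: string } {
  const target = "0".repeat(difficulty);
  for (let nonce = 0; ; nonce++) {
    const hash = createHash("sha256").update(blockData + nonce).digest("hex");
    if (hash.startsWith(target)) {
      return { nonce, hash }; // hard to find, trivial for anyone else to verify
    }
  }
}

console.log(mine("block contents", 4)); // each extra zero multiplies the work ~16x
```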

So what is cryptomining?

Cryptomining is using a computer to do the coin mining processing. This is generally cost-prohibitive to run as an individual; unless you have a powerful gaming PC and are making a long-term investment, it is not really a financially viable thing to do. The process is relatively simple: you create an online account to process financial transactions (so you can get paid), sign up to a service which will give you transactions to process, and install a program to churn through validations. Once you sign up to a service, the validations are transmitted to your computer for processing.

It becomes illegal (“cryptojacking”) when you commandeer someone else’s machine to do the mining for you – why not have someone else pay for the mining while you reap the profits for the validation?

This becomes especially nefarious when services like Coinhive allow you to make your website visitors do the mining for you. Some people are starting to use this as income from their websites rather than advertising. Coinhive offers a service whereby you add a Coinhive JS file to your website, and anyone who visits the site gets a JavaScript payload of coin mining assigned to their computer, which churns away while they are on the page.

In February 2018 it became international news when a remote third-party JavaScript library used by UK and Australian government sites was hacked and those .gov sites started to behave like Coinhive processing sites. See this great blog for more details (The JavaScript Supply Chain Paradox: SRI, CSP and Trust in Third Party Libraries).

There are ways and means to prevent your site from becoming a victim of this JavaScript attack, as the article describes. The tale is cautionary, and it is important that awareness of this kind of behavior is spread.
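One of the defenses the article describes is Subresource Integrity (SRI): pin the hash of the third-party file so the browser refuses to execute a tampered copy. Here is a sketch of generating the integrity value; the file path and URL are placeholders:

```typescript
import { createHash } from "crypto";
import { readFileSync } from "fs";

// Compute an SRI integrity attribute for a known-good copy of a third-party script.
const body = readFileSync("vendor/third-party.js"); // placeholder path
const integrity = "sha384-" + createHash("sha384").update(body).digest("base64");

// Emit the script tag; the browser rejects the file if its hash no longer matches.
console.log(`<script src="https://cdn.example.com/third-party.js"
  integrity="${integrity}" crossorigin="anonymous"></script>`);
```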

Conclusion

Blockchain is not just for financial transactions; there are many other real-world applications for it. Understanding how cryptocurrency works in principle, and the necessity for coin mining it breeds, gives us better preparedness to prevent its illegal usage.