Using Deep Neural Networks for NLP Applications – MAS

Really enjoyed visiting the Monetary Authority of Singapore (MAS) and speaking about the applications of Deep Neural Networks for Natural Language Processing (NLP).

IMG_0993

During the talk there were some great questions from the audience. One of them was: “Can a character-level model capture the unique structure of words and sentences?” The answer is yes, and I hope the demo helped clarify it: a three-layer, 512-unit LSTM model, trained on publicly available Regulatory and Supervisory Framework documents downloaded from the MAS website, predicting the next character and repeating that prediction many times.
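For readers curious what such a model looks like in code, here is a minimal Keras sketch of a three-layer, 512-unit character-level LSTM trained to predict the next character. The corpus path, sequence length and training settings below are placeholders, not the exact configuration used in the demo.

# Minimal character-level LSTM sketch (Keras); illustrative only, not the exact demo model.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

SEQ_LEN = 100                                    # characters of context per sample (assumed)
text = open('corpus.txt').read()                 # hypothetical training corpus (small, for one-hot encoding)
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

# Build one-hot encoded (context, next character) training pairs
X, y = [], []
for i in range(len(text) - SEQ_LEN):
    X.append([char_to_idx[c] for c in text[i:i + SEQ_LEN]])
    y.append(char_to_idx[text[i + SEQ_LEN]])
X = np.eye(len(chars))[np.array(X)]              # shape: (samples, SEQ_LEN, vocab)
y = np.eye(len(chars))[np.array(y)]              # shape: (samples, vocab)

# Three stacked LSTM layers of 512 units, softmax over the vocabulary
model = Sequential([
    LSTM(512, return_sequences=True, input_shape=(SEQ_LEN, len(chars))),
    LSTM(512, return_sequences=True),
    LSTM(512),
    Dense(len(chars), activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(X, y, batch_size=128, epochs=10)
# Generation then repeatedly samples the next character and feeds it back in as new context.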

MAS Video Capture

Training the same model on Shakespeare’s works and running both models side by side was fun!  

LSTM

 

Install GPU TensorFlow on AWS Ubuntu 16.04

 TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.

On a typical system, there are multiple computing devices. In TensorFlow, the supported device types are CPU and GPU.  GPUs offer 10 to 100 times more computational power than traditional CPUs, which is one of the main reasons why graphics cards are currently being used to power some of the most advanced neural networks responsible for deep learning.

The environment setup is often the hardest part of getting a deep learning setup going, so hopefully you will find this step-by-step guide helpful.

Launch a GPU-enabled Ubuntu 16.04 AWS instance

Choose an Amazon Machine Image (AMI) – Ubuntu Server 16.04 LTS

AWS-Ubuntu

Choose an instance type

The smallest GPU-enabled instance type is p2.xlarge.

AWS-Ubuntu-GPUs

You can find more details here.

Configure Instance Details, Add Storage (choose the storage size), Add Tags, Configure Security Group, then Review Instance Launch and Launch.

launch-status

Open the terminal on your local machine and connect to the remote machine (ssh -i)

Update the package lists, picking up new versions of packages that need upgrading as well as new packages that have just arrived in the repositories:

sudo apt-get --assume-yes update

Install the newer versions of the packages

sudo apt-get --assume-yes upgrade

Install the CUDA 8 drivers

CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia. GPU-accelerated CUDA libraries enable drop-in acceleration across multiple domains such as linear algebra, image and video processing, deep learning and graph analytics.

Verify that you have a CUDA-Capable GPU

lspci | grep -i nvidia
00:1e.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)

Verify You Have a Supported Version of Linux

uname -m && cat /etc/*release

x86_64
DISTRIB_ID=Ubuntu
…..

The x86_64 line indicates you are running on a 64-bit system. The remainder gives information about your distribution.

 Verify the System Has gcc Installed

gcc --version

If you see the message “The program ‘gcc’ is currently not installed. You can install it by typing: sudo apt install gcc”, install it and check the version again:

sudo apt-get install gcc

gcc --version

gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609

….

Verify the System has the Correct Kernel Headers and Development Packages Installed

uname -r

4.4.0-1038-aws

CUDA support

Download the CUDA-8 driver (CUDA 9 is not yet supported by TensorFlow 1.4)

The driver can be downloaded from here:

CUDA-download-toolikit

CUDA-download-toolikit-installer

Or download it directly to the remote machine:

wget -O ./cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb https://developer.nvidia.com/compute/cuda/8.0/Prod2/local_installers/cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64-deb

Download patch 2 as well:

wget -O ./cuda-repo-ubuntu1604-8-0-local-cublas-performance-update_8.0.61-1_amd64.deb https://developer.nvidia.com/compute/cuda/8.0/Prod2/patches/2/cuda-repo-ubuntu1604-8-0-local-cublas-performance-update_8.0.61-1_amd64-deb

Install the CUDA 8 driver and patch 2

Extract, analyse, unpack and install the downloaded .deb files

sudo dpkg -i cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb

sudo dpkg -i cuda-repo-ubuntu1604-8-0-local-cublas-performance-update_8.0.61-1_amd64.deb

apt-key is used to manage the list of keys used by apt to authenticate packages. Packages which have been authenticated using these keys will be considered trusted.

sudo apt-key add /var/cuda-repo-8-0-local-ga2/7fa2af80.pub
sudo apt-key add /var/cuda-repo-8-0-local-cublas-performance-update/7fa2af80.pub

sudo apt-get update

Once completed (~10 min), reboot the system to load the NVIDIA drivers.

sudo shutdown -r now

Install cuDNN v6.0

The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers.

Download the cuDNN v6.0 driver

The driver can be downloaded from here; please note that you will need to register first.

cuDNN-download2

Copy the driver to the AWS machine (scp -r -i)

Extract the cuDNN files and copy them to the target directory

tar xvzf cudnn-8.0-linux-x64-v6.0.tgz  

sudo cp -P cuda/include/cudnn.h /usr/local/cuda/include

sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64

sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*

Update your bash file

nano ~/.bashrc

Add the following lines to the end of the bash file:

export CUDA_HOME=/usr/local/cuda

export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH

export PATH=${CUDA_HOME}/bin:${PATH}

bashrc

Save the file and exit.

Install TensorFlow

Install the libcupti-dev library

The libcupti-dev library is the NVIDIA CUDA Profile Tools Interface. This library provides advanced profiling support. To install this library, issue the following command:

sudo apt-get install libcupti-dev

Install pip

Pip is a package management system used to install and manage software packages written in Python which can be found in the Python Package Index (PyPI).

sudo apt-get install python-pip

sudo pip install --upgrade pip

Install TensorFlow

sudo pip install tensorflow-gpu

Test the installation

Run the following within the Python command line:

from tensorflow.python.client import device_lib

def get_available_gpus():

    local_device_protos = device_lib.list_local_devices()

    return [x.name for x in local_device_protos if x.device_type == 'GPU']

get_available_gpus()

The output should look similar to this:

2017-11-22 03:18:15.187419: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA

2017-11-22 03:18:17.986516: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero

2017-11-22 03:18:17.986867: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:

name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235

pciBusID: 0000:00:1e.0

totalMemory: 11.17GiB freeMemory: 11.10GiB

2017-11-22 03:18:17.986896: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7)

[u'/device:GPU:0']
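As an extra, optional sanity check you can run a small computation pinned to the GPU and ask TensorFlow to log device placement. This sketch assumes the TensorFlow 1.x API installed above.

# Optional GPU sanity check (TensorFlow 1.x)
import tensorflow as tf

# Place a small matrix multiplication explicitly on the first GPU
with tf.device('/device:GPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='a')
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]], name='b')
    c = tf.matmul(a, b)

# log_device_placement=True prints which device each op ran on
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))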

 

 

Twitter’s real-time stack: Processing billions of events with Heron and DistributedLog

On the first day of Strata+Hadoop, Maosong Fu, Tech Lead for Realtime Compute at Twitter, shared some details of Twitter’s real-time stack.

img_6462

There are many industries where optimizing in real-time can have a large impact on overall business performance, leading to instant benefits in customer acquisition, retention, and marketing.

valueofdata

But how fast is real-time? It depends on the context, whether it’s financial trading, tweeting, ad impression count or monthly dashboard.

what-is-real-time

 

Earlier Twitter messaging stack

twittermessaging

Kestrel is a message queue server we use to asynchronously connect many of the services and functions underlying the Twitter product. For example, when users update, any tweets destined for SMS delivery are queued in a Kestrel; the SMS service then reads tweets from this queue and communicates with the SMS carriers for delivery to phones. This implementation isolates the behavior of SMS delivery from the behavior of the rest of the system, making SMS delivery easier to operate, maintain, and scale independently.

Scribe is a server for aggregating log data streamed in real time from a large number of servers.

Some of Kestrel’s limitations are listed below:

  • Durability is hard to achieve
  • Read-behind degrades performance
  • Adding subscribers is expensive
  • Scales poorly as the number of queues increases
  • Cross DC replication

kestrellimitations

From Twitter Github:

We’ve deprecated Kestrel because internally we’ve shifted our attention to an alternative project based on DistributedLog, and we no longer have the resources to contribute fixes or accept pull requests. While Kestrel is a great solution up to a certain point (simple, fast, durable, and easy to deploy), it hasn’t been able to cope with Twitter’s massive scale (in terms of number of tenants, QPS, operability, diversity of workloads etc.) or operating environment (an Aurora cluster without persistent storage).

Kafka™ is used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.

Kafka relies on the file system page cache, with performance degradation when subscribers fall behind – too much random I/O.

kafkalimitations

Rethinking messaging

rethinkingmessaging

Apache DistributedLog (DL) is a high-throughput, low-latency replicated log service, offering durability, replication and strong consistency as essentials for building reliable real-time applications.

distributedlogs

Event Bus

eventbus

Features of DistributedLog at Twitter:

High Performance

DL is able to provide millisecond latency on durable writes with a large number of concurrent logs, and handle high-volume reads and writes per second from thousands of clients.

Durable and Consistent

Messages are persisted on disk and replicated to store multiple copies to prevent data loss. They are guaranteed to be consistent among writers and readers in terms of strict ordering.

Efficient Fan-in and Fan-out

DL provides an efficient service layer that is optimized for running in a multi-tenant datacenter environment such as Mesos or YARN. The service layer is able to support large-scale writes (fan-in) and reads (fan-out).

Various Workloads

DL supports various workloads from latency-sensitive online transaction processing (OLTP) applications (e.g. WAL for distributed database and in-memory replicated state machines), real-time stream ingestion and computing, to analytical processing.

Multi Tenant

To support a large number of logs for multi-tenants, DL is designed for I/O isolation in real-world workloads.

Layered Architecture

DL has a modern layered architecture design, which separates the stateless service tier from the stateful storage tier. To support large-scale writes (fan-in) and reads (fan-out), DL allows scaling storage independent of scaling CPU and memory.

 

 

distibutedlogs

Storm was no longer able to support Twitter’s requirements; although Twitter improved Storm’s performance, it eventually decided to develop Heron.

Heron is a realtime, distributed, fault-tolerant stream processing engine from Twitter. Heron is built with a wide array of architectural improvements that contribute to high efficiency gains.

heron

Heron has powered all realtime analytics across varied use cases at Twitter since 2014. Incident reports dropped by an order of magnitude, demonstrating proven reliability and scalability.

 

heronusecases

Heron has been in production for the last three years, reducing hardware requirements by 3x. It is highly scalable, both in its ability to execute a large number of components for each topology and in its ability to launch and track large numbers of topologies.

 

heronattwitter

Lambda architecture is a data-processing architecture designed to handle massive quantities of data by taking advantage of both batch- and stream-processing methods. This approach to architecture attempts to balance latency, throughput, and fault-tolerance by using batch processing to provide comprehensive and accurate views of batch data, while simultaneously using real-time stream processing to provide views of online data. The two view outputs may be joined before presentation.

The way this works is that an immutable sequence of records is captured and fed into a batch system and a stream processing system in parallel. You implement your transformation logic twice, once in the batch system and once in the stream processing system. You stitch together the results from both systems at query time to produce a complete answer.
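As a toy illustration of that description (not Twitter’s actual code), the sketch below implements the same counting logic twice – once over the full immutable event log and once incrementally over new events – and merges the two views at query time; the batch_view and realtime_view stores are hypothetical.

# Schematic Lambda-architecture sketch, illustrative only
def count_events_batch(all_events):
    # Batch layer: recompute a complete, accurate view from the immutable log
    counts = {}
    for e in all_events:
        counts[e['key']] = counts.get(e['key'], 0) + 1
    return counts

def count_events_stream(realtime_view, event):
    # Speed layer: the same logic, applied incrementally to each new event
    realtime_view[event['key']] = realtime_view.get(event['key'], 0) + 1

def query(key, batch_view, realtime_view):
    # Serving layer: stitch both views together at query time
    return batch_view.get(key, 0) + realtime_view.get(key, 0)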

lambda-16338c9225c8e6b0c33a3f953133a4cb

Lambda Architecture: the good

lambdathegood

The problem with the Lambda Architecture is that maintaining code that needs to produce the same result in two complex distributed systems is exactly as painful as it seems like it would be.

LambdaTheBad.png

Summingbird to the Rescue! Summingbird is a library that lets you write MapReduce programs that look like native Scala or Java collection transformations and execute them on a number of well-known distributed MapReduce platforms, including Storm and Scalding.

Summingbird.png

Curious to Learn More?

curioustolearnmore

 

Interested in Heron?

Code at: https://github.com/twitter/heron

http://twitter.github.io/heron/

 

inerestedinheron

Install MongoDB Community Edition and PyMongo on OS X

  • Install Homebrew, a free and open-source software package management system that simplifies the installation of software on Apple’s macOS operating system.

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

  • Ensure that you’re running the newest version of Homebrew and that it has the newest list of formulae available from the main repository

brew update

  • To install the MongoDB binaries, issue the following command in a system shell:

brew install mongodb

  • Create a data directory (-p creates nested directories, but only if they don’t exist already)

mkdir -p ./data/db

  • Before running mongodb for the first time, ensure that the user account running mongodb has read and write permissions for the directory

sudo chmod 765 data

  • Run MongoDB

mongod --dbpath data/db

  • To stop MongoDB, press Control+C in the terminal where the mongo instance is running

Install PyMongo

pip install pymongo

  • In a Python interactive shell:

import pymongo

from pymongo import MongoClient

RoboMongo

  • Create a Connection

client = MongoClient()

  • Access Database Objects

MongoDB creates new databases implicitly upon their first use.

db = client.test

  • Query for All Documents in a Collection

cursor = db.restaurants.find()

for document in cursor: print(document)

  • Query by a Top Level Field

cursor = db.restaurants.find({"borough": "Manhattan"})

for document in cursor: print(document)

  • Query by a Field in an Embedded Document

cursor = db.restaurants.find({"address.zipcode": "10075"})

for document in cursor: print(document)

  • Query by a Field in an Array

cursor = db.restaurants.find({"grades.grade": "B"})

for document in cursor: print(document)

 

  • Insert a Document

Insert a document into a collection named restaurants. The operation will create the collection if the collection does not currently exist.

from datetime import datetime   # needed for the date fields below

result = db.restaurants.insert_one(
    {
        "address": {
            "street": "2 Avenue",
            "zipcode": "10075",
            "building": "1480",
            "coord": [-73.9557413, 40.7720266]
        },
        "borough": "Manhattan",
        "cuisine": "Italian",
        "grades": [
            {
                "date": datetime.strptime("2014-10-01", "%Y-%m-%d"),
                "grade": "A",
                "score": 11
            },
            {
                "date": datetime.strptime("2014-01-16", "%Y-%m-%d"),
                "grade": "B",
                "score": 17
            }
        ],
        "name": "Vella",
        "restaurant_id": "41704620"
    }
)

result.inserted_id
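To verify the insert, you can read the document back by its generated id:

db.restaurants.find_one({"_id": result.inserted_id})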

 

Changing the Game with Data and Insights – Data Science Singapore

Another great Data Science Singapore (DSSG) event! Hong Cao from McLaren Applied Technologies shared his insights on applications of data science at McLaren.

The first project uses economical sensors for continuous monitoring of human conditions, including sleep quality, gait and activity, perceived stress and cognitive performance.

DataScience

Gait outlier analysis provides unique insight into fatigue levels while exercising, the probability of injury, and post-surgery performance and recovery.

Gait Analysis Data Science

DataScience(1)DataScience(3)

A related study looks into how biotelemetry can assist in patient treatment, such as monitoring ALS (Amyotrophic Lateral Sclerosis) disease progression. The prototype tools collect heart rate, activity and speech data to analyse disease progression.

DataScience(3)

HRV (Heart Rate Variability) features are extracted from both the time and frequency domains.
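As a rough illustration of what such features look like in code (not McLaren’s actual pipeline), the sketch below computes two standard time-domain measures, SDNN and RMSSD, plus a crude LF/HF frequency-domain ratio from a series of RR intervals.

# Illustrative HRV feature extraction, assuming RR intervals in milliseconds
import numpy as np

def hrv_features(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = np.std(rr)                             # time domain: overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))    # time domain: beat-to-beat variability
    # Very crude frequency-domain estimate, treating the RR series as evenly sampled
    psd = np.abs(np.fft.rfft(rr - rr.mean())) ** 2
    freqs = np.fft.rfftfreq(len(rr), d=rr.mean() / 1000.0)   # approximate sample spacing (s)
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum()         # low-frequency power
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum()         # high-frequency power
    return {'SDNN': sdnn, 'RMSSD': rmssd, 'LF/HF': lf / hf if hf > 0 else float('nan')}

# Example with a synthetic series of ~300 beats around 800 ms
print(hrv_features(800 + 40 * np.random.randn(300)))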

DataScience(4)

An activity score is derived from the three-axis accelerometer data.
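One simple way to derive such a score, shown below purely as an illustration (the sampling rate, window size and scaling are assumptions, not McLaren’s method), is to take the magnitude of the three-axis signal, subtract gravity and average the residual movement over fixed windows.

# Illustrative activity score from raw accelerometer axes (in g), sampled at fs Hz
import numpy as np

def activity_score(ax, ay, az, fs=50, window_s=60):
    mag = np.sqrt(np.asarray(ax) ** 2 + np.asarray(ay) ** 2 + np.asarray(az) ** 2)
    movement = np.abs(mag - 1.0)            # subtract the ~1 g static gravity component
    window = fs * window_s                  # samples per scoring window
    n = (len(movement) // window) * window
    per_window = movement[:n].reshape(-1, window).mean(axis=1)
    return np.clip(per_window * 100, 0, 100)   # arbitrary 0-100 scale per window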

DataScience(5)DataScience(6)

The second project was a predictive-failure proof of concept, to help determine the condition of haul trucks in order to predict when a failure might happen. The cost of having an excavator go down in the field is $5 million a day, while the cost of losing a haul truck is $1.8 million per day. If you can prevent it from going down in the field, that makes a huge difference.

DataScience(7)DataScience(8)DataScience(9)DataScience(10)DataScience(11)DataScience(12)

Enhance your Privacy: 3 Easy Steps To Setup Opera’s Unlimited and FREE VPN

Opera is the first major browser maker to integrate an unlimited and free VPN or virtual private network. Now, you don’t have to download VPN extensions or pay for VPN subscriptions to access blocked websites and to shield your browsing when on public Wi-Fi.

OperaVPNOverall

 

With a free, unlimited, native VPN that just works out-of-the-box and doesn’t require any subscription, Opera wants to make VPNs available to everyone.

According to Global Web Index, more than half a billion people (24 percent of the world’s online population) have tried or are currently using VPN services. According to the research, the primary reasons people use a VPN are for better access to entertainment content, browser anonymity, and the ability to access sites restricted by their workplace or country.

  1. Download & Install

To test this new feature, start by downloading and installing Opera’s Developer version.

After downloading OperaSetupDeveloper.zip, unzip it and double-click the Opera Installer.

 OperaInstallerInternet

Click Open

OperaInstaller

Complete the installation process.

  2. To enable the VPN function, click the Opera menu, then scroll down to Preferences…

 

OperaDeveloperPreferences

Under Privacy & Security, click the checkbox to enable the VPN function

OperaPrivacySecurityDeveloper

  3. Open a new browser tab or window in Opera, click the blue “VPN” button in the URL bar, and pull down the ‘Virtual Location’ menu to choose the IP region to mimic (currently Canada, Germany, United States).

OperaVPNOK

Now that the Opera VPN has been enabled, you can toggle the VPN off by clicking the VPN button and flipping the switch to the OFF position, and back on again by returning to the same menu and flipping it back to the ON position.

Prior to this new feature, Opera’s recommended VPN provider was SurfEasy VPN, a company that Opera bought about a year ago. The cheapest SurfEasy plan starts at $6.49 a month.

OperaSurfEasyPlan

Most of the time the VPN worked well, but occasionally I received the following error message: “VPN is temporarily unavailable. Opera is resolving this issue”. At this stage some instability is expected.

 

 

Data Scientists, With Great Power Comes Great Responsibility

It is a good time to be a data scientist.

With great power comes great responsibility

In 2012 the Harvard Business Review hailed the role of data scientist as “the sexiest job of the 21st century”. Data scientists work at both start-ups and well-established companies like Twitter, Facebook, LinkedIn and Google, receiving a total average salary of $98k ($144k for US respondents only).

Data – and the insights it provides – gives a business the upper hand in better understanding its clients, prospects and overall operation. Until recently, it was not uncommon for million- and billion-dollar deals to be accepted or rejected based on intuition and instinct. Data scientists add value to the business by enabling informed and timely decision-making based on quantifiable, data-driven evidence, and by translating data into actionable insights.

So you have a rewarding corporate day job, how about doing data science for social good?

You have been endowed with tremendous data science and leadership powers and the world needs them! Mission-driven organizations are tackling huge social issues like poverty, global warming and public health. Many have tons of unexplored data that could help them make a bigger impact, but don’t have the time or skills to leverage it. Data science has the power to move the needle on critical issues but organizations need access to data superheroes like you to use it

DataKind Blog 

There are a few programs that exist specifically to facilitate this; the United Nations #VisualizeChange challenge is the one I’ve just taken.

As the Chief Information Technology Officer, I invite the global community of data scientists to partner with the United Nations in our mandate to harness the power of data analytics and visualization to uncover new knowledge about UN related topics such as human rights, environmental issues, and political affairs.

Ms. Atefeh Riazi – Chief Information Technology Officer at United Nations

The United Nations UNITE IDEAS initiative has published a number of data visualization challenges. For the latest challenge, #VisualizeChange: A World Humanitarian Summit Data Challenge, we were provided with unstructured information from nearly 500 documents that the consultation process had generated as of July 2015. The qualitative data is categorized into emerging themes and sub-themes identified according to a developed taxonomy. The challenge was to process the consultation data and develop an original and thought-provoking illustration of the information collected through the consultation process.

Over the weekend I built an interactive visualization using open-source tools (R and Shiny) to help identify innovative ideas and technologies in humanitarian action, especially around communication and IT. Having made it to the top 10 finalists, the solution is showcased here, as well as on the Unite Ideas platform and at other related events worldwide, so I hope this visualization will be used to uncover new knowledge.

#VisualizeChange Top 10 Visualizations

Opening these challenges to the public helps raise awareness – while analysing the data and designing the visualization I learned about some of the most pressing humanitarian needs, such as damage and need assessment, communication and online payment, and about the most promising technologies, such as mobile, data analytics, social media and crowdsourcing.

#VisualizeChange Innovative Ideas and Technologies

Kaggle is another great platform where you can apply your data science skills for social good. How about applying image classification algorithms to automate the right whale recognition process using a dataset of aerial photographs of individual whales? With fewer than 500 North Atlantic right whales left in the world’s oceans, knowing the health and status of each whale is integral to the efforts of researchers working to protect the species from extinction.

Right Whale Recognition

There are other excellent programs.

The DSSG program, run by the University of Chicago, where aspiring data scientists take on real-world problems in education, health, energy, transportation, economic development and international development, working for three months on data mining, machine learning, big data, and data science projects with social impact.

DataKind brings together top data scientists and leading social change organizations to collaborate on cutting-edge analytics and advanced algorithms that maximize social impact.

Bayes Impact is a group of practical idealists who believe that, applied properly, data can be used to solve the world’s biggest problems.

Are you aware of any other organizations and platforms doing data science for social good? Feel free to share.

Tools & Technologies

R for analysis & visualization
Shiny.io for hosting the interactive R script
The complete source code and the data is hosted here

 

The Evolving Role of the Chief Data Officer

In recent years, there has been a significant rise in the appointments of Chief Data Officers (CDOs).

Although this role is still very new, Gartner predicts that 25 percent of organizations will have a CDO by 2017, with that figure rising to 50 percent in heavily regulated industries such as banking and insurance. Underlying this change is an increasing recognition of the value of data as an asset.

Last week the CDOForum held an event chaired by Dr. Shonali Krishnaswamy, Head of the Data Analytics Department at I2R, evaluating the role of the Chief Data Officer and looking into data monetization strategies and real-life Big Data case studies.

According to Debra Logan, Vice President and Gartner Fellow, the

Chief Data Officer (CDO) is a senior executive who bears responsibility for the firm’s enterprise wide data and information strategy, governance, control, policy development, and effective exploitation. The CDO’s role will combine accountability and responsibility for information protection and privacy, information governance, data quality and data life cycle management, along with the exploitation of data assets to create business value.

To succeed in this role, the CDO should never be “siloed” and should work closely with other senior leaders to innovate and to transform the business:

  • With the Chief Operating Officer (COO) and the Chief Marketing Officer (CMO) on creating new business models, including data-driven products and services and mass experimentation, and on ways to acquire, grow and retain customers, including personalization, profitability and retention.
  • With the COO on ways to optimize operations and counter fraud and threats, including business process operations, infrastructure and asset efficiency, counter-fraud, and public safety and defense.
  • With the Chief Information Officer (CIO) on ways to maximize insights, ensure trust and improve IT economics, including enabling the full spectrum of analytics and optimizing big data and analytics infrastructure.
  • With the Chief Human Resource Officer (CHRO) on ways to transform management processes, including planning and performance management, talent management, health and benefits optimization, incentive compensation management and human capital management.
  • With the Chief Risk Officer (CRO), CFO and COO on managing risk, including risk-adjusted performance, financial risk, and IT risk and security.

To unleash the true power of data, many CDOs are expanding their role as a way of broadening their scope and creating an innovation agenda: moving from the basics (data strategy, data governance, data architecture, data stewardship, data integration and data management) to advanced initiatives such as implementing machine learning and predictive analytics, building big data solutions, developing new products and services, and enhancing the customer experience.

Conclusion

Organizations have struggled for decades with the value of their data assets. Having a chief officer lead the enterprise-wide management of data assets helps ensure maximum benefit to the organization.

 

Agile Development of Data Products with R and Shiny: A Practical Approach

Many companies are interested in turning their data assets into products and services. This is no longer limited to online firms like LinkedIn or Facebook; a variety of companies in offline industries (GM, Apple, etc.) have started to develop products and services based on analytics.

But how do you succeed at developing and launching data products?

I would like to suggest a framework building on the idea of Lean Startup and the Minimum Viable Product (MVP), to support the rapid modelling and development of innovative data products. These principles can be applied when launching a new tech start-up, starting a small business, or when starting a new initiative within a large corporation.

Agile Development of Data Products

A minimum viable product has just those core features that allow the product to be deployed, and no more. The product is typically deployed to a subset of possible customers, such as early adopters that are thought to be more forgiving, more likely to give feedback, and able to grasp a product vision from an early prototype or marketing information
http://en.wikipedia.org/wiki/Minimum_viable_product

Some of the benefits of prototyping and developing an MVP:

1. You can get valuable feedback from the users early in the project.
2. Different stakeholders can check whether the model matches the specification.
3. It gives the model developer some insight into the accuracy of initial project estimates and whether the proposed deadlines and milestones can be met.

Our fully functional model will look like this:

Simulator Screenshot

Before we dive into the details, feel free to play around with the prototype and get familiar with the model http://benefits.shinyapps.io/BenefitsSimulation/

Ready to go?

Let’s go through the different stages of the process, following a simple example and using R and Shiny for modelling and prototyping. I’ve also published the code in my github repository http://github.com/ofirsh/BenefitsSimulation, feel free to fork it and play with it.

Ideas

This is where your creativity should kick in! What problem are you trying to solve with data?

Are you aware of a gap in the industry that you currently work in?

Let’s follow a simple example:

Effective employee benefits significantly reduce staff turnover, and companies with the most effective benefits use them to influence the behavior of their staff and their bottom line, as opposed to simply being competitive.

How can you find the optimal point, balancing the increasing cost of employee benefits against the need to retain staff and reduce turnover?

Model

We will assume a (simplified) model that links the attrition rates to the benefits provided to the employee.

I know, it’s simplified. I’m also aware that there are many other relevant parameters in a real-life scenario.

But it’s just an example, so let’s move on.

LinearModel

Our simplified model depends on the following parameters:

  1. Number of Employees
  2. Benefits Saturation ($): we assume a linear dependency between the attrition rate and the benefits provided by the company. As the benefits increase, attrition rate drops, to 0% attrition at the point of Benefits Saturation. Any increase of benefits above the Benefits Saturation point will not have an impact on the attrition rates.
  3. Benefits ($): benefits provided by the company
  4. Max Attrition (%): the maximal attrition rate at the lowest benefits level ($100)
  5. Training Period (months): number of months required to train a new employee
  6. Salary ($)

This model demonstrates the balance between increasing the benefits and the overall cost, and reducing the attrition rate and the associated cost related to hiring and training of new staff.
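Written out explicitly, as a compact restatement of the model above (with b for the benefits level, N for the number of employees, A_max for Max Attrition, b_sat for Benefits Saturation and T for the training period in months):

\mathrm{attrition}(b) = A_{\max}\cdot\max\!\left(0,\;\frac{b_{\mathrm{sat}}-b}{b_{\mathrm{sat}}-100}\right),
\qquad
\mathrm{cost}(b) = N\,b \;+\; N\cdot \mathrm{salary}\cdot T\cdot\frac{\mathrm{attrition}(b)}{100}

The simulator then looks for the benefits level b that minimises cost(b).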

We will use R to implement our model:

R is an open-source software environment for statistical computing and graphics. The R language is widely used among statisticians and data miners for developing statistical software and data analysis.
http://en.wikipedia.org/wiki/R_%28programming_language%29

We will use R-Studio, a powerful and productive user interface for R. It’s free and open source, and works great on Windows, Mac, and Linux.

Let’s create a file named server.R and write some code. You can find the full source code in my github repository:

cutoff is a function that models attrition vs. benefits, where cutoffx is the value of Benefits Saturation:

cutoff <- function(minx, maxx, maxy, cutoffx, x)
{
  ysat <- (x >= cutoffx) * 0
  slope <- (0 - maxy) / (cutoffx - minx)
  yslope <- (maxy + (x - minx) * slope) * (x < cutoffx)
  return(ysat + yslope)
}

Calculating the different cost components:

benefitsCost <- input$numberee * input$benefits

attritionCost <- input$salary * nTrainingMonths * input$numberee * (currentAttrition / 100)

overallCost <- benefitsCost + attritionCost

The values of the variables starting with input$ are retrieved from the sliders, which are part of the User Interface (UI):

NumberOfEESlider

When you change the value of the slider named “Number of Employees” from 200 to 300, the value of the variable input$numberee changes from 200 to 300 accordingly.

Benefits is a sequence of 20 numbers from 100 to 500, covering the range of benefits:

benefits <- round(seq(from = 100, to = 500, length.out = 20))

Let’s plot the benefits cost, the attrition cost and the overall cost as a function of the benefits:

Cost Vs Benefits

The different cost components are calculated below:

benefitsCostV <- input$numberee * benefits

attritionCostV <- input$salary * nTrainingMonths * input$numberee * (attrition / 100)

totalCostV <- benefitsCostV + attritionCostV

And now we can plot the different cost components:

plot(benefitsCostV ~ benefits, col = "red", main = "Cost vs. Benefits", xlab = "benefits($)", ylab = "cost($)")
lines(benefits, benefitsCostV, col = "red", lwd = 3)
points(benefits, attritionCostV, col = "blue")
lines(benefits, attritionCostV, col = "blue", lwd = 3)
points(benefits, totalCostV, col = "purple")
lines(benefits, totalCostV, col = "purple", lwd = 3)

Let’s find the minimal cost, and draw a nice orange circle around this optimal point:

minBenefitsIndex <- which.min(totalCostV)
minBenefits <- benefits[minBenefitsIndex]
minBenefitsCost <- totalCostV[minBenefitsIndex]
abline(v = minBenefits, col = "cyan", lty = "dashed", lwd = 1)
symbols(minBenefits, minBenefitsCost, circles = 20, fg = "darkorange", inches = FALSE, add = TRUE, lwd = 2)

Tip: Don’t spend too much time writing the perfect R code; your model might change a lot once your stakeholders provide their feedback.

Prototype

Shiny is a web application framework for R that turns your analyses into interactive web applications.

Let’s install Shiny and import the library functions:

install.packages("shiny")

library(shiny)

Once we have the model in place, we will create the user interface and link it back to the model.

Create a new file named ui.R with the User Interface (UI) elements.

For example, let’s write some text:

titlePanel("Cost Optimization (Benefits and Talent) – Simulation"),

h5("This interactive application simulates the impact of multiple …..

And let’s add a slider:

sliderInput("numberee",
            "Number of Employees:",
            min = 100,
            max = 1000,
            value = 200,
            step = 100),

The inputId of the slider (numberee) links the value of the UI control (Number of Employees) to the server-side computation.

Create a Shiny account, copy the token and secret into the command below and execute it in R-Studio:

shinyapps::setAccountInfo(name = 'benefits', token = 'xxxx', secret = 'yyyy')

And, deploy your code to the Shiny server:

deployApp()

Your prototype is live! http://benefits.shinyapps.io/BenefitsSimulation/

Send the URL to your stakeholders and collect their feedback. Iterate quickly and improve the model and the user interface.

Product

Once your stakeholders are happy with your prototype, it’s time to move on to the next stage and develop your data product.

The good news is that at this stage you should have a pretty good understanding of the requirements and the priorities, based on the feedback provided by your stakeholders.

It’s also the perfect time for you (or for your product development group) to focus more on hosting, architecture, design, the Software Development Life-cycle (SDLC), quality assurance, release management and more.

There are different technologies to consider when developing data products, which I will cover in future posts.

For now I will just mention an interesting option where you can reuse your server-side R code: using yhat you can expose your server-side R functionality via a set of web services, and consume these services from client-side JavaScript libraries, like d3.js.

Comments, questions?

Let me know.