Install GPU TensorFlow on AWS Ubuntu 16.04

TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.

On a typical system, there are multiple computing devices. In TensorFlow, the supported device types are CPU and GPU.  GPUs offer 10 to 100 times more computational power than traditional CPUs, which is one of the main reasons why graphics cards are currently being used to power some of the most advanced neural networks responsible for deep learning.
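
As a taste of what this enables, once TensorFlow is installed you can pin operations to a specific device. A minimal sketch using the TensorFlow 1.x API (the one this guide installs):

import tensorflow as tf

# Pin a small computation to the first GPU and log where each op actually runs
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0], name='a')
    b = tf.constant([4.0, 5.0, 6.0], name='b')
    c = a + b

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))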

The environment setup is often the hardest part of getting a deep learning setup going, so hopefully you will find this step-by-step guide helpful.

Launch a GPU-enabled Ubuntu 16.04 AWS instance

Choose an Amazon Machine Image (AMI) – Ubuntu Server 16.04 LTS

Choose an instance type

The smallest GPU-enabled instance type is p2.xlarge.

You can find more details on the AWS EC2 instance types page.

Continue through Configure Instance Details, Add Storage (choose a storage size), Add Tags, and Configure Security Group, then review and launch the instance.

Open a terminal on your local machine and connect to the remote machine with ssh -i.
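
For example (the key file and public DNS below are placeholders for your own; ubuntu is the default user on Ubuntu AMIs):

ssh -i my-key.pem ubuntu@ec2-XX-XX-XX-XX.compute-1.amazonaws.com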

Update the package lists so apt knows about the newest available versions of packages and their dependencies:

sudo apt-get --assume-yes update

Install the newest versions of all currently installed packages:

sudo apt-get --assume-yes upgrade

Install the CUDA 8 drivers

CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia. GPU-accelerated CUDA libraries enable drop-in acceleration across multiple domains such as linear algebra, image and video processing, deep learning and graph analytics.

Verify that you have a CUDA-Capable GPU

lspci | grep -i nvidia
00:1e.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)

Verify You Have a Supported Version of Linux

uname -m && cat /etc/*release

x86_64
DISTRIB_ID=Ubuntu
…..

The x86_64 line indicates you are running on a 64-bit system. The remainder gives information about your distribution.

Verify the System Has gcc Installed

gcc --version

If gcc is not installed, you will see the message "The program 'gcc' is currently not installed. You can install it by typing: sudo apt install gcc". In that case, install it and check the version again:

sudo apt-get install gcc

gcc --version

gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609

….

Verify the System has the Correct Kernel Headers and Development Packages Installed

uname -r

4.4.0-1038-aws

CUDA support

Download the CUDA-8 driver (CUDA 9 is not yet supported by TensorFlow 1.4)

The installer can be downloaded from the NVIDIA CUDA Toolkit downloads page, or fetched directly on the remote machine:

wget -O ./cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb https://developer.nvidia.com/compute/cuda/8.0/Prod2/local_installers/cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64-deb

Download patch 2 (the cuBLAS performance update) as well:

wget -O ./cuda-repo-ubuntu1604-8-0-local-cublas-performance-update_8.0.61-1_amd64.deb https://developer.nvidia.com/compute/cuda/8.0/Prod2/patches/2/cuda-repo-ubuntu1604-8-0-local-cublas-performance-update_8.0.61-1_amd64-deb

Install the CUDA 8 driver and patch 2

Extract, analyse, unpack and install the downloaded .deb files

sudo dpkg -i cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb

sudo dpkg -i cuda-repo-ubuntu1604-8-0-local-cublas-performance-update_8.0.61-1_amd64.deb

apt-key is used to manage the list of keys used by apt to authenticate packages. Packages which have been authenticated using these keys will be considered trusted.

sudo apt-key add /var/cuda-repo-8-0-local-ga2/7fa2af80.pub
sudo apt-key add /var/cuda-repo-8-0-local-cublas-performance-update/7fa2af80.pub

sudo apt-get update
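
The two repository packages above only register the local CUDA archives with apt; the toolkit and driver themselves are installed with:

sudo apt-get install cuda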

Once completed (~10 min), reboot the system to load the NVIDIA drivers.

sudo shutdown -r now
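
Once the machine is back up, a quick way to confirm the NVIDIA driver loaded is:

nvidia-smi

which should list the Tesla K80.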

Install cuDNN v6.0

The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers.

Download the cuDNN v6.0 library

The library can be downloaded from the NVIDIA cuDNN page; please note that you will need to register first.

Copy the archive from your local machine to the AWS machine with scp -i.
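
For example (again, key file and host are placeholders):

scp -i my-key.pem cudnn-8.0-linux-x64-v6.0.tgz ubuntu@ec2-XX-XX-XX-XX.compute-1.amazonaws.com:~/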

Extract the cuDNN files and copy them to the target directory

tar xvzf cudnn-8.0-linux-x64-v6.0.tgz  

sudo cp -P cuda/include/cudnn.h /usr/local/cuda/include

sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64

sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
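
Optionally, verify which cuDNN version is in place by inspecting the header:

cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2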

Update your bash file

nano ~/.bashrc

Add the following lines to the end of the bash file:

export CUDA_HOME=/usr/local/cuda

export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH

export PATH=${CUDA_HOME}/bin:${PATH}

Save the file and exit.
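
Reload the file so the new variables take effect in the current session:

source ~/.bashrc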

Install TensorFlow

Install the libcupti-dev library

The libcupti-dev library is the NVIDIA CUDA Profile Tools Interface. This library provides advanced profiling support. To install this library, issue the following command:

sudo apt-get install libcupti-dev

Install pip

Pip is a package management system used to install and manage software packages written in Python which can be found in the Python Package Index (PyPI).

sudo apt-get install python-pip

sudo pip install --upgrade pip

Install TensorFlow

sudo pip install tensorflow-gpu
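
A plain pip install fetches whatever release is latest, which may expect newer CUDA/cuDNN versions; since this guide targets CUDA 8 and cuDNN 6 (the combination TensorFlow 1.4 was built against), you can pin that release explicitly:

sudo pip install tensorflow-gpu==1.4.0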

Test the installation

Run the following within the Python command line:

from tensorflow.python.client import device_lib

def get_available_gpus():
    # List every device TensorFlow can see and keep only the GPUs
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']

get_available_gpus()

The output should look similar to this:

2017-11-22 03:18:15.187419: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA

2017-11-22 03:18:17.986516: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero

2017-11-22 03:18:17.986867: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:

name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235

pciBusID: 0000:00:1e.0

totalMemory: 11.17GiB freeMemory: 11.10GiB

2017-11-22 03:18:17.986896: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7)

[u'/device:GPU:0']

Twitter’s real-time stack: Processing billions of events with Heron and DistributedLog

On the first day of Strata+Hadoop, Maosong Fu, Tech Lead for Realtime Compute at Twitter, shared some details of Twitter's real-time stack.

There are many industries where optimizing in real-time can have a large impact on overall business performance, leading to instant benefits in customer acquisition, retention, and marketing.

But how fast is real-time? It depends on the context, whether it's financial trading, tweeting, ad impression counting or a monthly dashboard.

Earlier Twitter messaging stack

Kestrel is a message queue server used to asynchronously connect many of the services and functions underlying the Twitter product. For example, when users post updates, any tweets destined for SMS delivery are queued in Kestrel; the SMS service then reads tweets from this queue and communicates with the SMS carriers for delivery to phones. This implementation isolates the behavior of SMS delivery from the rest of the system, making SMS delivery easier to operate, maintain, and scale independently.

Scribe is a server for aggregating log data streamed in real time from a large number of servers.

Some of Kestrel's limitations are listed below:

  • Durability is hard to achieve
  • Read-behind degrades performance
  • Adding subscribers is expensive
  • Scales poorly as the number of queues increases
  • No cross-DC replication

From Twitter Github:

We’ve deprecated Kestrel because internally we’ve shifted our attention to an alternative project based on DistributedLog, and we no longer have the resources to contribute fixes or accept pull requests. While Kestrel is a great solution up to a certain point (simple, fast, durable, and easy to deploy), it hasn’t been able to cope with Twitter’s massive scale (in terms of number of tenants, QPS, operability, diversity of workloads etc.) or operating environment (an Aurora cluster without persistent storage).

Kafka™ is used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.

Kafka relies on the file system page cache, and performance degrades when subscribers fall behind: lagging reads generate too much random I/O.

Rethinking messaging

Apache DistributedLog (DL) is a high-throughput, low-latency replicated log service, offering durability, replication and strong consistency as essentials for building reliable real-time applications.

Event Bus

Features of DistributedLog at Twitter:

High Performance

DL is able to provide milliseconds latency on durable writes with a large number of concurrent logs, and handle high volume reads and writes per second from thousands of clients.

Durable and Consistent

Messages are persisted on disk and replicated to store multiple copies to prevent data loss. They are guaranteed to be consistent among writers and readers in terms of strict ordering.

Efficient Fan-in and Fan-out

DL provides an efficient service layer that is optimized for running in a multi-tenant datacenter environment such as Mesos or Yarn. The service layer is able to support large scale writes (fan-in) and reads (fan-out).

Various Workloads

DL supports various workloads from latency-sensitive online transaction processing (OLTP) applications (e.g. WAL for distributed database and in-memory replicated state machines), real-time stream ingestion and computing, to analytical processing.

Multi Tenant

To support a large number of logs for multi-tenants, DL is designed for I/O isolation in real-world workloads.

Layered Architecture

DL has a modern layered architecture design, which separates the stateless service tier from the stateful storage tier. To support large scale writes (fan-in) and reads (fan-out), DL allows scaling storage independent of scaling CPU and memory.

Storm was no longer able to support Twitter's requirements, and although Twitter improved Storm's performance, it eventually decided to develop Heron.

Heron is a realtime, distributed, fault-tolerant stream processing engine from Twitter. Heron is built with a wide array of architectural improvements that contribute to high efficiency gains.

Heron has powered all realtime analytics with varied use cases at Twitter since 2014. Incident reports dropped by an order of magnitude, demonstrating proven reliability and scalability.

Heron has been in production for the last three years, reducing hardware requirements by 3x. It is highly scalable, both in its ability to execute large numbers of components for each topology and in its ability to launch and track large numbers of topologies.

Lambda architecture is a data-processing architecture designed to handle massive quantities of data by taking advantage of both batch- and stream-processing methods. This approach to architecture attempts to balance latency, throughput, and fault-tolerance by using batch processing to provide comprehensive and accurate views of batch data, while simultaneously using real-time stream processing to provide views of online data. The two view outputs may be joined before presentation.

The way this works is that an immutable sequence of records is captured and fed into a batch system and a stream processing system in parallel. You implement your transformation logic twice, once in the batch system and once in the stream processing system. You stitch together the results from both systems at query time to produce a complete answer.

Lambda Architecture: the good

The problem with the Lambda Architecture is that maintaining code that needs to produce the same result in two complex distributed systems is exactly as painful as it seems like it would be.

Summingbird to the Rescue! Summingbird is a library that lets you write MapReduce programs that look like native Scala or Java collection transformations and execute them on a number of well-known distributed MapReduce platforms, including Storm and Scalding.
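
The canonical word-count example, lightly adapted from the Summingbird README (the tokenizer is inlined here), shows the idea: the same Producer logic can be bound to Storm for the streaming path or Scalding for the batch path, so the Lambda Architecture's two codebases collapse into one:

def wordCount[P <: Platform[P]](
    source: Producer[P, String],
    store: P#Store[String, Long]) =
  source
    .flatMap { sentence => sentence.split("\\s+").toList.map(_ -> 1L) }  // emit (word, 1) pairs
    .sumByKey(store)  // monoid-based aggregation into the platform's store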


Interested in Heron?

Code at: https://github.com/twitter/heron

http://twitter.github.io/heron/

Changing the Game with Data and Insights – Data Science Singapore

Another great Data Science Singapore (DSSG) event! Hong Cao from McLaren Applied Technologies shared his insights on applications of data science at McLaren.

The first project uses low-cost sensors for continuous human condition monitoring, covering sleep quality, gait and activities, perceived stress, and cognitive performance.

Gait outlier analysis provides unique insight into fatigue levels while exercising, the probability of injury, and post-surgery performance and recovery.

A related study looks into how biotelemetry can assist in patient treatment, such as monitoring the progression of ALS (Amyotrophic Lateral Sclerosis). The prototype tools collect heart rate, activity and speech data to analyse disease progression.

HRV (Heart Rate Variability) features are extracted from both the time and the frequency domains.
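
The talk didn't include code, but as a rough illustration (with made-up numbers), the classic time-domain HRV features are only a few lines of Python:

import numpy as np

# Hypothetical RR intervals in milliseconds between successive heartbeats
rr = np.array([812.0, 798.0, 805.0, 821.0, 790.0, 808.0, 815.0])

sdnn = np.std(rr, ddof=1)                   # SDNN: overall variability
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # RMSSD: beat-to-beat variability
print(sdnn, rmssd)

# Frequency-domain features (e.g. LF/HF power) are typically computed from a
# spectral estimate of the interpolated RR series.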

An activity score is derived from the three-axis accelerometer data.
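
One common way to turn raw three-axis samples into an activity score (not necessarily McLaren's method) is ENMO, the Euclidean norm of the acceleration vector minus one gravity unit:

import numpy as np

# Hypothetical accelerometer samples in units of g: columns are x, y, z
acc = np.array([[0.01, -0.02, 1.01],
                [0.40,  0.10, 1.20],
                [0.03,  0.05, 0.98]])

# ENMO: vector magnitude minus 1 g, with negative values clipped to zero
enmo = np.maximum(np.linalg.norm(acc, axis=1) - 1.0, 0.0)
print(enmo.mean())  # a simple activity score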

The second project was a predictive-failure POC to help determine the condition of haul trucks and predict when a failure might happen. The cost of having an excavator go down in the field is $5 million a day, while the cost of losing a haul truck is $1.8 million per day. If you can prevent equipment from going down in the field, that makes a huge difference.

How To Find The Lag That Results In Maximum Cross-Correlation [R]

I have two time series and I want to find the lag that results in maximum correlation between the two time series. The basic problem we’re considering is the description and modeling of the relationship between these two time series.

In signal processing, cross-correlation is a measure of similarity of two series as a function of the lag of one relative to the other. This is also known as a sliding dot product or sliding inner-product.

For discrete functions, the cross-correlation is defined as:

(f ⋆ g)[n] = Σ_m f*[m] · g[m + n]

where the sum runs over all m and f* denotes the complex conjugate (for real-valued series, f* = f).

In the relationship between two time series, y_t and x_t, the series y_t may be related to past lags of the x-series. The sample cross-correlation function (CCF) is helpful for identifying lags of the x-variable that might be useful predictors of y_t.

In R, the sample CCF is defined as the set of sample correlations between x_{t+h} and y_t for h = 0, ±1, ±2, ±3, and so on.

A negative value for h is a correlation between the x-variable at a time before t and the y-variable at time t. For instance, consider h = −2. The CCF value would give the correlation between x_{t−2} and y_t.

For example, let’s start with the first series, y1:

x <- seq(0,2*pi,pi/100)
length(x)
# [1] 201

y1 <- sin(x)
plot(x,y1,type="l", col = "green")

Adding series y2, with a shift of pi/2:

y2 <- sin(x+pi/2)
lines(x,y2,type="l",col="red")

Applying the cross-correlation function (ccf):

cv <- ccf(x = y1, y = y2, lag.max = 100, type = c("correlation"),plot = TRUE)

The maximal correlation is calculated at a positive shift of the y1 series:

cor = cv$acf[,,1]
lag = cv$lag[,,1]
res = data.frame(cor,lag)
res_max = res[which.max(res$cor),]$lag
res_max
# [1] 44

This means that the maximal correlation between series y1 and series y2 is obtained between y1_{t+44} and y2_t.
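
As a quick sanity check, the lagged correlation can be computed directly; the value differs slightly from the ccf output, since ccf normalises with the full-series mean and variance:

# Correlation between y1 shifted forward by h samples and y2,
# matching ccf()'s convention of cor(y1[t+h], y2[t]) for h >= 0
lag_cor <- function(a, b, h) {
  n <- length(a)
  cor(a[(1 + h):n], b[1:(n - h)])
}
lag_cor(y1, y2, 44)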
