Deploying Hyperledger Composer Playground to Bluemix


Since I’ve been deploying my own variations of Composer Playground to Bluemix recently, I thought it was worth jotting down a few notes for anyone else who wants to do the same. For example, it can be useful to demo against a known level of the Playground, so that new functionality from the weekly releases doesn’t cause any surprises.

Assuming that you already have a working Composer development environment, a Bluemix account, and the Cloud Foundry CLI installed, here’s how…

First you need the main Composer repository if you don’t have it already

git clone https://github.com/hyperledger/composer.git


Next, check out the code you want to deploy. In most cases you’ll want a release that’s been through one of our weekly release parties; I’ve picked the v0.14.2 release here

cd composer
git checkout -b v0.14.2-deploy v0.14.2


Get lerna to do its thing

lerna bootstrap


Now build the playground

cd packages/composer-playground
npm run build:prod


Create a manifest.yml file for your application with the following content

---
  command: node cli.js
  instances: 2
  memory: 128M
  env:
    COMPOSER_CONFIG: '{"webonly":true}'
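
The COMPOSER_CONFIG environment variable is just a JSON string, so a quick local sanity check before pushing can catch quoting mistakes early (a sketch, assuming python3 is on your path):

```shell
# Check that the value we'll put in manifest.yml parses as JSON --
# a malformed string here would otherwise only fail at runtime.
export COMPOSER_CONFIG='{"webonly":true}'
python3 -c 'import json, os; print(json.loads(os.environ["COMPOSER_CONFIG"])["webonly"])'
# prints: True
```

If the string is malformed, json.loads raises an error here rather than after the app has been deployed.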


Log in to Bluemix (you may need to use the --sso option)

cf login


Push the new app

cf push <APP_NAME>


Enjoy!


A little more conversation


More than a year seems to have vanished somewhere since I left MDM for new adventures with Watson. It’s even been a few months since the new Conversation service first appeared on Bluemix, along with the tools I’ve been helping to build.

If you’re interested in Watson Conversation, or just curious about what I’ve been up to for the last year, here are a few blog posts I’ve come across that explain everything better than I could:

This thing seems pretty popular, so there are videos too!


There are even a few GitHub repositories:

And of course, tweets

If that’s not enough, you can ask questions on Stack Overflow and dw Answers, or join the Watson Developer Community.

If you’re building something with Watson Conversation, I’d love to hear about it! And finally, if you have any tips or tricks that you could share, I’m trying to collect some for a conversation-starter project on GitHub.

Updated: lots more links!


Hadoop as a service


It’s been a fun year learning new stuff, and along the way Andy Piper helped out with a bite-sized architectural debate while I was experimenting with a Hadoop service on Bluemix. Having a short-lived/disposable memory, I thought it would be worth posting the discussion here for future reference…

@jtonline: Still pondering how a hadoop buildpack might compare to a hadoop service

@andypiper: @jtonline why would you want a buildpack for Hadoop – surely data store = service (broadly) not runtime. #cloudfoundry

@jtonline: @andypiper hmm, maybe, but you want to bring the processing to the data don’t you? Currently seems like services will hold big data in silos

@jtonline: @andypiper for example, I might want to use one of the address verification services from my map reduce job. I’m probably missing something.

@andypiper: @jtonline multiple services can be bound to multiple apps. And you can call jobs in those services from those apps.

@andypiper: @jtonline PivotalHD ships as a service in PivotalCF – obviously you may need data access libraries in the buildpack for the app.

@jtonline: @andypiper not convinced hadoop is just a data store. Do I need apps on runtime to kick off oozie jobs with details of other services?

@andypiper: @jtonline the runtime/service debate on CF has been a long one but I think fairly clean/clear. I’d see Hadoop as a shared resource.

@andypiper: @jtonline bear in mind buildpack -> droplet -> runnable containerised app instance.

@jtonline: @andypiper agreed. Maybe what I’m missing is an easy way to wire services together?

@andypiper: @jtonline yeah maybe – you end up with apps acting as service coordinators I guess.

@andypiper: @jtonline coupled with the fact that apps are intentionally short-lived and best stateless… interesting architectural debate :-)

@andypiper: @jtonline (for “short-lived” read “disposable” my bad)

It should be an interesting 2015.

BigInsights Quicker Start


I’ve been taking a break from Liberty and JAX-RS recently to start tinkering with IBM’s BigInsights Hadoop distribution. To make things easier/more interesting, my first attempts were using the Analytics for Hadoop service on Bluemix. In case it helps anyone, here’s what I ended up with before needing to install BigInsights myself:

And this is the script I used to upload data in the video (unfortunately I didn’t have any luck using the HttpFS API):

#!/bin/sh

BIUSER=biblumix
BIPASSWORD=password
BIURI=https://hostname:8443/data/controller/dfs

curl -iv --user ${BIUSER}:${BIPASSWORD} --insecure -X POST "${BIURI}/user/${BIUSER}/sample-data?isfile=false"

curl -iv --user ${BIUSER}:${BIPASSWORD} --insecure -X POST "${BIURI}/user/${BIUSER}/sample-data/orgdata.unl" --header "Content-Type:application/octet-stream" --header "Transfer-Encoding:chunked" -T "orgdata.unl"

curl -iv --user ${BIUSER}:${BIPASSWORD} --insecure -X POST "${BIURI}/user/${BIUSER}/sample-data/persondata.unl" --header "Content-Type:application/octet-stream" --header "Transfer-Encoding:chunked" -T "persondata.unl"

Notes:

  • since recording the demo, Bluemix has added a United Kingdom region; however, it looks like the Analytics for Hadoop service is currently only available in the US South region.
  • there is also now a BigInsights service on Bluemix which allows you to provision multi-node Hadoop clusters.