
Introducing HuevOS+RancherOS

[Image: HuevOS architecture diagram]

Today is an exciting day in Santa Barbara. We are very pleased to introduce our latest innovation to the world of DevOps. 

HuevOS – the Docker-based open-source operating system for tomorrow’s IT and DevOps professional. HuevOS 1.0 (SunnySide) is the open-source/free-range/gluten-free solution that forms the perfect complement to RancherOS. In addition, we’re delighted to begin development on our proprietary blend of Services and Language Software as a Service (SaLSaaS) which, when overlaid atop a HuevOS+RancherOS stack, provides a complete and delicious solution around which your whole day can be centered.

Try HuevOS+RancherOS today and let us know which SaLSaaS we should work on first to ensure your hunger for DevOps is quenched thoroughly.  

To get your first taste, visit the following repository, which includes our Chef recipe and a Vagrantfile to get you up and running with HuevOS in short order:

https://github.com/viglesiasce/huevos-cookbook.git
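
If you want to kick the tires locally, something along these lines should do it (this assumes the Vagrantfile mentioned above sits at the root of the repository and that you already have Vagrant installed):

# Clone the cookbook and bring up a HuevOS VM with the bundled Vagrantfile
git clone https://github.com/viglesiasce/huevos-cookbook.git
cd huevos-cookbook
vagrant up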

If you already have your RancherOS host up, it’s easy to add HuevOS to the mix via the Docker Registry:

docker pull viglesiasce/huevos; docker run viglesiasce/huevos

A huge thank you to all involved in getting us to this point and being able to ship a 1.0 version of the HuevOS+RancherOS platform.

Happy clucking!!



Deploying Cassandra and Consul with Chef Provisioning

[Image: Consul + Cassandra]

Introduction

Chef Provisioning (née Chef Metal) is an incredibly flexible way to deploy infrastructure. Its many plugins allow users to develop a single methodology for deploying an application that can then be repeated against many types of infrastructure (AWS, Euca, OpenStack, etc.). Chef Provisioning is especially useful when deploying clusters of machines that make up an application, as it allows for machines to be:

  • Staged before deployment
  • Batched for parallelism
  • Deployed in serial when necessary

This level of flexibility means that deploying interesting distributed systems like Cassandra and Consul is a breeze. By leveraging community cookbooks for Consul and Cassandra, we can largely ignore the details of package installation and service management and focus our time on orchestrating the stack in the correct order and configuring the necessary attributes such that our cluster converges properly. For this tutorial we will be deploying:

  • DataStax Cassandra 2.0.x
  • Consul
    • Service discovery via DNS
    • Health checks on a per node basis
  • Consul UI
    • Allows for service health visualization

Once complete we will be able to use Consul’s DNS service to load balance our Cassandra client requests across the cluster, as well as use Consul UI to keep tabs on our cluster’s health.

In the process of writing up this methodology, I went a step further and created a repository and toolchain for configuring and managing the lifecycle of clustered deployments. The chef-provisioning-recipes repository will allow you to configure your AWS/Euca cloud credentials and images and deploy any of the clustered applications available in the repository.

Steps to reproduce

Install prerequisites

  • Install ChefDK
  • Install package deps (for CentOS 6)
    yum install python-devel gcc git
  • Install python deps:
    easy_install fabric PyYaml
  • Clone the chef-provisioning-recipes repo:
    git clone https://github.com/viglesiasce/chef-provisioning-recipes

Edit config file

The configuration file (config.yml) contains information about how and where to deploy the cluster. There are two main sections in the file:

  1. Profiles
    1. Which credentials/cloud to use
    2. What image to use
    3. What instance type to use
    4. What username to use
  2. Credentials
    1. Cloud endpoints or region
    2. Cloud access and secret keys

Edit the config.yml file found in the repo such that the default profile points to a CentOS 6 image in your cloud and the default credentials point to the proper cloud.
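
As a rough orientation, an edited config.yml might end up looking something like the sketch below (the key names here are illustrative guesses; the sample config.yml shipped in the repository is the authoritative reference for the actual schema):

profiles:
  default:
    credentials: my-cloud          # which credentials block below to use
    image: emi-XXXXXXXX            # a CentOS 6 image available in your cloud
    instance_type: m1.large
    username: root
credentials:
  my-cloud:
    endpoint: http://compute.cloud:8773/services/compute
    access_key: XXXXXXXXXXXXXXX
    secret_key: YYYYYYYYYYYYYYYYYYYYYYYYYY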

Run the deployment

Once the deployer has been configured we simply need to run it and tell it which cluster we would like to deploy. In this case we’d like to deploy Cassandra so we will run the deployer as follows:

./deployer.py cassandra

This will now automate the following process:

  1. Create a chef repository
  2. Download all necessary cookbooks
  3. Create all necessary instances
  4. Deploy Cassandra and Consul

Once this is complete you should be able to see your instances running in your cloud, tagged as follows: cassandra-default-N. To access the Consul UI dashboard, go to http://instance-pub-ip:8500
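
If you prefer an API call to a dashboard, Consul’s HTTP interface reports the same information. Assuming the Cassandra service is registered under the name “cassandra” (as the DNS name below suggests), something like this should work:

curl http://<instance-pub-ip>:8500/v1/health/service/cassandra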

You should now also be able to query any of your Consul servers for the IPs of your Cassandra cluster:

nslookup cassandra.service.paas.home <instance-pub-ip>
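
If you have dig available, the equivalent query makes the load balancing easier to see, since Consul varies the order of the records it returns on each lookup:

dig @<instance-pub-ip> cassandra.service.paas.home +short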

In order to tear down the cluster simply run:

./deployer.py cassandra --op destroy

Chef Metal with Eucalyptus

Introduction

My pull request to chef-metal-fog was recently accepted and released in version 0.8.0, so a quick post on how to get up and running on your Eucalyptus cloud seemed appropriate.

Chef Metal is a new way to provision your infrastructure using Chef recipes. It allows you to use the same convergent design as normal Chef recipes: you can define your cloud or bare-metal deployment in a Chef recipe, then deploy, update, and destroy it with chef-client. This flexibility is incredibly useful both for developing new Chef cookbooks and for exploring various topologies of distributed systems.

Game time

First, install the Chef Development Kit. This will install chef-client and a few other tools to get you well on your way to Chef bliss.

Once you have installed the Chef DK on your workstation, install the chef-metal gem into the Chef Ruby environment:

chef gem install chef-metal

You will need to create your Chef repo. This repository will contain all the information about how and where your application gets deployed using Chef Metal. In this case we are naming our app “euca-metal”.

chef generate app euca-metal

You should now see a directory structure as follows:

├── README.md
└── cookbooks
    └── euca-metal
        ├── Berksfile
        ├── chefignore
        ├── metadata.rb
        └── recipes
            └── default.rb

Now that the skeleton of our application has been created, let’s edit cookbooks/euca-metal/recipes/default.rb to look like this:

require 'chef_metal_fog'

### Arbitrary name of our deployment
deployment_name = 'chef-metal-test'

### Use the AWS provider to provision the machines
### Here is where we set our endpoint URLs and keys for our Eucalyptus deployment
with_driver 'fog:AWS', :compute_options => { :aws_access_key_id => 'XXXXXXXXXXXXXXX',
                                             :aws_secret_access_key => 'YYYYYYYYYYYYYYYYYYYYYYYYYY',
                                             :ec2_endpoint => 'http://compute.cloud:8773/services/compute',
                                             :iam_endpoint => 'http://euare.cloud:8773/services/Euare'
}

### Create a keypair named after our deployment
fog_key_pair deployment_name do
  allow_overwrite true
end

### Use the key created above to login as root, all machines below
### will be run using these options
with_machine_options ssh_username: 'root', ssh_timeout: 60, :bootstrap_options => {
  :image_id => 'emi-A6EA57D5',
  :flavor_id => 't1.micro',
  :key_name => deployment_name
}

### Launch an instance and name it after our deployment
machine deployment_name do
  ### Install Java on the instance using the Java recipe
  recipe 'java'
end

Once we have defined our deployment we will need to create a local configuration file for chef-client:

mkdir -p .chef; echo 'local_mode true' > .chef/knife.rb

Now that we have defined the deployment and set up chef-client, let’s run the damn thing!

chef-client -z cookbooks/euca-metal/recipes/default.rb

You can now see Chef create your keypair, launch your instance, and then attempt to run the “java” recipe as we specified. Unfortunately, this fails: we never told our euca-metal cookbook that it depends on the Java cookbook, nor did we download that cookbook for it to use. Let’s fix that.

First, we will tell our euca-metal cookbook that it needs to pull in the ‘java’ cookbook in order to provision the node. We do this by adding a ‘depends’ line to the cookbook’s metadata.rb file, which can be found at cookbooks/euca-metal/metadata.rb:

name 'euca-metal'
maintainer ''
maintainer_email ''
license ''
description 'Installs/Configures euca-metal'
long_description 'Installs/Configures euca-metal'
version '0.1.0'
depends 'java'

Next we will need to actually download that Java cookbook that we now depend on. To do that we need to:

# Change to the euca-metal cookbook directory
cd cookbooks/euca-metal/
# Use berkshelf to download our cookbook dependencies
berks vendor
# Move the berks downloaded cookbooks to our main cookbook repository
# Note that it won't overwrite our euca-metal cookbook
mv berks-cookbooks/* ..
cd ../..
# Rerun our chef-client to deploy Java for realz
chef-client -z cookbooks/euca-metal/recipes/default.rb

You will notice that the machine is not reprovisioned (YAY convergence!). The Java recipe should now be running happily on your existing instance. You can find your ssh keys in the .chef/keys directory.
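
If you want to poke around on the instance yourself, you can reuse that generated key. Assuming the key file is named after our keypair (i.e., after the deployment name), something like this should get you in:

ssh -i .chef/keys/chef-metal-test root@<instance-public-ip>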

Happy AWS Compatible Private Cloud Cheffing!!!!

Many thanks to John Keiser for his great work on chef-metal.


Using Comcast CMB for SQS and SNS on Eucalyptus

Introduction

As part of a service-oriented infrastructure, there comes a need to coordinate work between services. AWS provides a couple of services that allow application components to communicate with each other and with their users/administrators in a decoupled fashion.

The Simple Queue Service (SQS) is a mechanism for an application’s producers to distribute work to their consumers in a scalable, reliable, and fault-tolerant way. The basic lifecycle in SQS is as follows:

  1. A queue is created
  2. Producers send arbitrary text messages into the queue
  3. Consumers constantly poll the queue for messages, and when one is available they “check out” the work by reading the message
  4. Once the message is read, a timer kicks in that makes the message invisible to other consumers for a certain period of time (called the visibility timeout)
  5. The consumer can then perform the task described by the message and delete the message from the queue
  6. If the consumer does not complete the task in time, or fails for some other reason, the message is made visible again in the queue and picked up by another consumer

One simple example of using this service would be for a web application front end to take image conversion orders from users and then throw the image conversion tasks into a queue that can be serviced by a fleet of worker nodes that do the actual image processing (i.e., the compute-heavy portion).

The Simple Notification Service (SNS) is a service that coordinates the delivery of messages to one or more subscribing endpoints. Users create a topic, and then other services and users can subscribe to the topic in order to receive notifications about its goings-on. In this model the sender of a message does not have to know where messages are actually being delivered, only that all subscribers (i.e., the people/apps who need the message) will receive it in the form they have requested. Subscriptions to topics can be made through various transport mechanisms:

  1. HTTP
  2. HTTPS
  3. SMS
  4. Email
  5. Email-json
  6. SQS

By publishing a message to a topic with multiple subscribers you can ensure that both applications and the people managing them are all on the same page.

Eucalyptus currently does not implement SQS or SNS, but the folks over at Comcast have created an incredibly useful open-source project that mirrors both APIs with impressive fidelity. Not only did they ensure that their API coverage is accurate and useful, but they also built the application stack on top of Cassandra and Redis, making it horizontally scalable and extremely performant to boot. For more information: Comcast CMB.

Running CMB in your Eucalyptus cloud

In order to simplify the process of installing and bootstrapping CMB, I have created an image that you can install on your cloud with all the requisite services in place. All instructions here should be performed from your Eucalyptus CLC with your admin credentials sourced.

  1. Download the image and decompress it
    1. curl http://eucalyptus-images.s3.amazonaws.com/public/cmb.raw.xz > cmb.raw.xz
    2. xz -d cmb.raw.xz
  2. Install the image
    1. euca-install-image --virt hvm -i cmb.raw -r x86_64 -b CMB -n CMB
  3. Launch the image
    1. euca-run-instances -k <my-keypair> <emi-from-step-2>
  4. Once the image is launched, log in to the admin portal to create your first user and get your credentials
    1. Go to http://<instance-public-ip>:6059/webui
    2. Log in with: cns_internal/cns_internal
    3. Create a new user
    4. Take note of the Access and Secret keys for your new user
  5. Start using your new services with your favorite SDK

Example: Interacting with SQS using Boto

In the example below, change the following variables to fit your environment:

  • cmb_host – Hostname or IP of your CMB server
  • access_key – Taken from step 4D above
  • secret_key – Taken from step 4D above
#!/usr/bin/python
from boto.sqs.regioninfo import SQSRegionInfo
from boto.sqs.connection import SQSConnection

cmb_host = 'instance-ip'
access_key = 'your-access-key-from-step-4D'
secret_key = 'your-secret-key-from-step-4D'
cmb_sqs_port = 6059

sqs_region = SQSRegionInfo(endpoint=cmb_host, name='home')
cmb_sqs = SQSConnection(aws_access_key_id=access_key, aws_secret_access_key=secret_key,
                        region=sqs_region, is_secure=False,
                        port=cmb_sqs_port)

queue = cmb_sqs.create_queue('test')
msg = queue.new_message('Hello World')
queue.write(msg)

all_queues = cmb_sqs.get_all_queues()
print 'Current queues: '  + str(all_queues)
for queue in all_queues:
    print 'Messages in queue: ' + str([msg.get_body() for msg in queue.get_messages()])

Installing Apache Hadoop on Eucalyptus using Hortonworks Data Platform stack and Apache Ambari

viglesiasce:

Get your #hortonworks #hadoop running on #eucalyptus

Originally posted on shaon's blog:

This post demonstrates a Hadoop deployment on Eucalyptus using Apache Ambari and the Hortonworks Data Platform (HDP) stack. Bits and pieces:

  1. Eucalyptus 4.0.0
  2. Hortonworks Data Platform (HDP) stack
  3. Apache Ambari
  4. 4 instance-store/EBS-backed instances
    1. 2 vcpus, 1024MB memory, 20GB disk space
    2. CentOS 6.5 base image

For this demo we tried to use minimal resources. Our Eucalyptus deployment topology looks like this: 1x (Cloud Controller + Walrus + Storage Controller + Cluster Controller) and 1x (Node Controller + Object Storage Gateway). To meet our instance requirements, we changed the instance type according to our needs.

Preparation

Run an instance of m1.xlarge or any other instance type that meets the above requirements. When the instance is running, copy the keypair that was used to run this instance to .ssh/id_rsa; we will be using this same keypair for all the instances.

Run three more instances of the same type with the same keypair and image, and copy their private IPs for later use. Add security…



Install Eucalyptus 4.0 Using Motherbrain and Chef

 

Introduction

Installing distributed systems can be a tedious and time-consuming process. Luckily, there are many solutions for distributed configuration management available to the open-source community. Over the past few months, I have been working on the Eucalyptus cookbook, which allows for standardized deployments of Eucalyptus using Chef. This functionality has already been implemented in MicroQA using individual calls to Knife (the Chef command-line interface) for each machine in the deployment. Orchestration of the deployment is rather static, and thus only three topologies have been implemented as part of the deployment tab.

Last month, Riot Games released Motherbrain, their orchestration framework that allows flexible, repeatable, and scalable deployment of multi-tiered applications. Their approach to the deployment rollout problem is simple and understandable: you configure manifests that define how your application components are split up, then define the order in which they should be deployed.

For example, in the case of Eucalyptus we have cluster, node, and frontend components. Each component is a set of recipes from the Eucalyptus cookbook. Once we have recipes mapped to components, we need to define the order in which these components should be rolled out in the “stack order” section of our Motherbrain manifest:

stack_order do
  bootstrap 'cloud::full'
  bootstrap 'cloud::default'
  bootstrap 'cloud::frontend'
  bootstrap 'cluster::default'
  bootstrap 'cluster::cluster-controller'
  bootstrap 'cluster::storage-controller'
  bootstrap 'cloud::user-facing'
  bootstrap 'cloud::walrus'
  bootstrap 'cloud::user-console'
  bootstrap 'node::default'
  bootstrap 'cloud::configure'
  bootstrap 'nuke::default'
end

Once we have the components split up and ordered, we need to define our topology. This can be done with another JSON-formatted manifest, like so:

{
  "nodes": [
    {
      "groups": ["cloud::frontend", "cloud::configure"],
      "hosts": ["10.0.1.185"]
    },
    {
      "groups": ["cluster::default"],
      "hosts": ["10.0.1.186"]
    },
    {
      "groups": ["node::default"],
      "hosts": ["10.0.1.187", "10.0.1.181"]
    }
  ]
}

With this information, Motherbrain allows you to create arbitrary topologies of your distributed system with repeatability and scalability taken care of. Repeatability comes from using Chef recipes, and the scalability is derived from the nodes in each tier being deployed in parallel. In Eucalyptus terms, this means that no matter how many Node Controllers you’d like to deploy to your cluster, the system will come up in almost constant time. In order to tweak the configuration, you can deploy your stack into a properly parameterized Chef environment.

Now that the concept has been laid out, let’s get down to business and build our cluster from the 4.0 nightlies.

Installing prerequisites

I have created a script to install and configure Motherbrain and Chef that should work on Enterprise Linux or Mac OS X:

sh <(curl -s https://gist.githubusercontent.com/viglesiasce/9734682/raw/install_motherbrain.sh)

If you’d like to do the steps manually, you can follow the list below (a rough shell sketch of the same steps appears after the list):

  1. Install ruby 2.0.0
  2. Install gems
    1. chef
    2. motherbrain
    3. chef-zero
  3. Get cookbooks and dependencies
    1. eucalyptus – https://github.com/eucalyptus/eucalyptus-cookbook
    2. ntp – https://github.com/opscode-cookbooks/ntp.git
    3. selinux – https://github.com/opscode-cookbooks/selinux.git
    4. yum – https://github.com/opscode-cookbooks/yum.git
  4. Upload all cookbooks to your Chef server
  5. Configure Motherbrain
    1. mb configure
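
For reference, the manual route boils down to something like the following shell sketch (the clone destinations and knife flags here are illustrative assumptions; adjust them to match your own Chef repo and server setup):

# Assumes Ruby 2.0.0 is already installed (step 1)
gem install chef motherbrain chef-zero
# Fetch the cookbooks and their dependencies
mkdir -p ~/chef-repo/cookbooks && cd ~/chef-repo/cookbooks
git clone https://github.com/eucalyptus/eucalyptus-cookbook eucalyptus
git clone https://github.com/opscode-cookbooks/ntp.git
git clone https://github.com/opscode-cookbooks/selinux.git
git clone https://github.com/opscode-cookbooks/yum.git
# Upload everything to your Chef server
knife cookbook upload --all --cookbook-path ~/chef-repo/cookbooks
# Point Motherbrain at the same Chef server
mb configure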

Customizing your deployment

  1. Go into the Eucalyptus cookbook directory (~/chef-repo/cookbooks/eucalyptus)
  2. Edit the bootstrap.json file to match your deployment topology
    1. Ensure at least 1 IP/Machine for each component
    2. Same IP can be used for all machines (Cloud-in-a-box)
  3. Edit the environment file in ~/chef-repo/cookbooks/eucalyptus/environments/edge-nightly.json
    1. Change the topology configuration to match what you have defined in the bootstrap.json file
    2. Change the network config to match your Eucalyptus deployment
  4. Upload your environment to the Chef server
    1. knife environment from file environments/edge-nightly.json

Deploying your Eucalyptus Cloud

Now that we have defined our topology and network configuration we can deploy the cookbook using the Motherbrain command line interface by telling the tool:

  1. Which bootstrap configuration to use
  2. Which environment to deploy to

For example:

mb eucalyptus bootstrap bootstrap.json -e edge-nightly -v


Pursuit of Quality at Eucalyptus

viglesiasce:

Overview of the extensive Eucalyptus quality toolchain.

Originally posted on shaon's blog:

At Eucalyptus, if there is one thing we care about, it is the quality of the product. Our goal is to deliver AWS-compatible software that just works. To ensure the highest level of quality, we try to follow the best strategy possible in both Development and QA.

We adopted the Agile software development model a while back and we love it. As part of our strategy, there are several types of projects we open in Jira. After getting the green signal from Product Management, we get an Epic, and then, after architectural discussion, Story tickets are created with Sub-tasks for developers to work on. We also have Jira tickets for New Features, which are basically similar to Stories but are comparatively smaller tasks and generally have no Sub-tasks. At this point one scrum master from each team works closely with the developers to keep track of the…

