Couchbase Cluster with PetSet and Portworx

This setup runs a Couchbase cluster as a Kubernetes PetSet, with Portworx providing the storage layer. Portworx brings the following capabilities:

  1. Container granular volumes - Portworx can take multiple EBS volumes per host, aggregate their capacity, and carve out container granular virtual (soft) volumes for each container.

  2. Cross Availability Zone HA - Portworx protects the data at the block level across multiple compute instances and availability zones. As replication controllers restart pods on different nodes, the data remains highly available on those nodes.

  3. Support for enterprise data operations - Portworx implements container granular snapshots, class of service, and tiering on top of the available physical volumes.

  4. Ease of deployment and provisioning - Portworx itself is deployed as a container and integrated with the orchestration tools. DevOps can programmatically provision container granular storage with properties such as size, class of service, and encryption key.
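
    For example, a volume could be provisioned with a requested size and replication factor in one command (a sketch; myvol is an arbitrary name and exact flag names can vary across Portworx releases):

    sudo /opt/pwx/bin/pxctl volume create --size 10 --repl 2 myvol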

Portworx on Kubernetes

  1. Start the Kubernetes cluster:

    export KUBERNETES_PROVIDER=aws
    KUBE_OS_DISTRIBUTION=wily NODE_SIZE=m3.medium MASTER_SIZE=m3.medium ./cluster/kube-up.sh
    1. The default distribution for the Kubernetes cluster is Debian. There were issues starting the Portworx container on it (TODO: Add bug link).

    2. The default NODE_SIZE is t2.medium, which is not sufficient for running the Couchbase and PWX containers.
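
    Once kube-up.sh finishes, the worker nodes can be listed as an optional check:

    ./cluster/kubectl.sh get nodes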

  2. Make sure the AWS CLI is configured for the us-west-2 region; run aws configure if needed.
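
    The region can also be set directly, assuming credentials are already in place:

    aws configure set region us-west-2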

  3. Create 4 EBS volumes, one for each node:

    aws ec2 create-volume --region us-west-2 --availability-zone us-west-2a --volume-type io1 --iops 1000 --size 128
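
    Since the same command is needed once per node, a simple loop can create all four volumes (a sketch, reusing the region and availability zone above):

    for i in 1 2 3 4; do
      aws ec2 create-volume --region us-west-2 --availability-zone us-west-2a --volume-type io1 --iops 1000 --size 128
    done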
  4. Get the volume IDs:

    aws ec2 describe-volumes --filters "Name=size, Values=128" --query 'Volumes[*].{ID:VolumeId}'

    This will show the result:

    [
        {
            "ID": "vol-ce734146"
        },
        {
            "ID": "vol-cf734147"
        },
        {
            "ID": "vol-df734157"
        },
        {
            "ID": "vol-ec734164"
        }
    ]
  5. Get the instance IDs for all minions:

    aws ec2 describe-instances --filters "Name=tag:Role, Values=kubernetes-minion" --query 'Reservations[*].Instances[*].InstanceId'

    This will show output as:

    [
        [
            "i-b57ee620",
            "i-b47ee621",
            "i-b37ee626",
            "i-b27ee627"
        ]
    ]
  6. Attach a volume to each instance:

    aws ec2 attach-volume --volume-id <volume-id> --instance-id <instance-id> --device /dev/sdk

    For example:

    aws ec2 attach-volume --volume-id vol-df734157 --instance-id i-b27ee627 --device /dev/sdk
    aws ec2 attach-volume --volume-id vol-ce734146 --instance-id i-b37ee626 --device /dev/sdk
    aws ec2 attach-volume --volume-id vol-ec734164 --instance-id i-b47ee621 --device /dev/sdk
    aws ec2 attach-volume --volume-id vol-cf734147 --instance-id i-b57ee620 --device /dev/sdk
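
    Each volume should now report the in-use state, which can be confirmed by re-running describe-volumes (an optional check, reusing the size filter from above):

    aws ec2 describe-volumes --filters "Name=size, Values=128" --query 'Volumes[*].{ID:VolumeId,State:State}'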
  7. Kubernetes 1.4.5 uses Docker 1.11.2. Since Portworx itself runs as a container, it requires the MountFlags=shared setting (or MountFlags=slave to not be set) in order to export volumes to other containers. More details at moby/moby#17034.

    This works out of the box with Docker 1.12. For a seamless experience, we will have to wait for Kubernetes to pick up that release.
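
    If a node needs this adjusted manually, and assuming systemd manages Docker there, a drop-in such as the following can be used (the file name portworx.conf is arbitrary):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nMountFlags=shared\n' | sudo tee /etc/systemd/system/docker.service.d/portworx.conf
    sudo systemctl daemon-reload
    sudo systemctl restart docker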

    1. Get the public DNS names of all minions:

      aws ec2 describe-instances --filters "Name=tag:Role, Values=kubernetes-minion" --query 'Reservations[*].Instances[*].PublicDnsName'
    2. For each of these hosts, run:

      portworx.sh <public-ip> <cluster-name>
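
      This can also be scripted across all minions in one pass (a sketch; <cluster-name> is whatever cluster name you chose):

      for host in $(aws ec2 describe-instances --filters "Name=tag:Role, Values=kubernetes-minion" --query 'Reservations[*].Instances[*].PublicDnsName' --output text); do
        portworx.sh "$host" <cluster-name>
      done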
    3. Check the status of the provisioned storage:

      ssh -i ~/.ssh/kube_aws_rsa -o StrictHostKeyChecking=no ubuntu@<public-ip> "sudo /opt/pwx/bin/pxctl status"
      Status: PX is operational
      Node ID: aa8b5ce9-dc23-4b98-b5c4-491b9edb619e
      	IP: 172.20.0.229
       	Local Storage Pool: 1 pool
      	Pool	Cos		Size	Used	Status	Zone	Region
      	0	COS_TYPE_LOW	128 GiB	2.0 GiB	Online	a	us-west-2
      	Local Storage Devices: 1 device
      	Device	Path		Media Type		Size		Last-Scan
      	0:1	/dev/xvdk	STORAGE_MEDIUM_SSD	128 GiB		07 Dec 16 00:30 UTC
      	total			-			128 GiB
      Cluster Summary
      	Cluster ID: couchbase_portworx2
      	Node IP: 172.20.0.228 - Capacity: 2.0 GiB/128 GiB Online
      	Node IP: 172.20.0.231 - Capacity: 2.0 GiB/128 GiB Online
      	Node IP: 172.20.0.230 - Capacity: 2.0 GiB/128 GiB Online
      	Node IP: 172.20.0.229 - Capacity: 2.0 GiB/128 GiB Online (This node)
      Global Storage Pool
      	Total Used    	:  8.1 GiB
      	Total Capacity	:  512 GiB

Couchbase with Portworx on Kubernetes

  1. Create 2 PWX volumes (on any Kubernetes worker host); the volumes are visible cluster-wide:

    sudo /opt/pwx/bin/pxctl volume create couchbase1
    sudo /opt/pwx/bin/pxctl volume create couchbase2
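
    The new volumes can be listed from any node as an optional check:

    sudo /opt/pwx/bin/pxctl volume list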
  2. Create 2 PVs:

    ./cluster/kubectl.sh create -f /Users/arungupta/workspaces/couchbase-kubernetes/cluster-petset-portworx/pv.yaml
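
    Both persistent volumes should now show up (an optional check):

    ./cluster/kubectl.sh get pv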
  3. Create the Couchbase PetSet:

    ./cluster/kubectl.sh create -f /Users/arungupta/workspaces/couchbase-kubernetes/cluster-petset-portworx/couchbase-petset.yaml
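
    The pets are created one at a time; their progress can be watched as an optional check:

    ./cluster/kubectl.sh get petset
    ./cluster/kubectl.sh get pods -w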
  4. Describe the service using kubectl.sh describe service/couchbase-ui. The Couchbase UI should be accessible at the LoadBalancer Ingress value.
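
    The ingress address can be pulled out directly:

    ./cluster/kubectl.sh describe service/couchbase-ui | grep "LoadBalancer Ingress"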

Misc

  1. Optional verification

    1. Log in to the master: ssh -i ~/.ssh/kube_aws_rsa admin@<master-ip-public>

    2. Verify etcd: curl -fs -X PUT "http://<master-ip-internal>:2379/v2/keys/_test"
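
      The key can then be read back to confirm the write:

      curl -fs "http://<master-ip-internal>:2379/v2/keys/_test"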