This tutorial assumes you are going to run commands in 5 shell sessions:
- ssh session to HCF vagrant box (in ~/hcf)
- ssh session to HCP master (in ~/hcp-developer)
- ssh session to HCP node (in ~/hcp-developer)
- ssh tunnel to HCP node (in ~/hcp-developer)
- local shell
Each heading below includes the shell id (#x) at the end.
You need a running HCF vagrant box with all images already compiled:
cd hcf
vagrant up --provider vmware_fusion
vagrant ssh
# Then, inside the VM
cd hcf
make vagrant-prep
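If you want to check that the compile finished, listing the docker images inside the box is a quick sanity check (the exact image names depend on the HCF build, so treat this only as a rough confirmation):
# Inside the HCF vagrant box: the compiled role images should show up here
docker images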
It takes a while to start up HCP, so you want to start this early. You need to provide the address of the docker registry for HCF images; here we use a local registry inside the HCF vagrant box.
Find out the correct URLs for the latest dev harness and hcp client from the HCP Release Notes.
wget https://s3-us-west-2.amazonaws.com/hcp-concourse/hcp-developer-1.xxx.tar.gz
tar xfz hcp-developer-1.xxx.tar.gz
cd hcp-developer
wget https://s3-us-west-2.amazonaws.com/hcp-cli-release/hcp-1.xxx-darwin-amd64.tar.gz
tar xfz hcp-1.xxx-darwin-amd64.tar.gz
INSECURE_REGISTRY=192.168.77.77:5000 ./start.sh
Then push all the HCF images to it:
make registry
make tag IMAGE_REGISTRY=localhost:5000
make publish IMAGE_REGISTRY=localhost:5000
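You can verify the push by querying the registry catalog (assuming the local registry exposes the standard Docker Registry v2 API):
# List the repositories now stored in the local registry
curl -s http://localhost:5000/v2/_catalog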
Once the vagrant up command is complete, log into the master and verify that all pods are running:
vagrant ssh master
kubectl get pods --namespace=hcp
You will see output like this:
vagrant@k8-master:~$ kubectl get pods --namespace=hcp
NAME              READY     STATUS    RESTARTS   AGE
ident-api-p6svw   1/1       Running   0          27m
ipmgr-ivexp       1/1       Running   0          27m
rpmgr-76ave       1/1       Running   0          27m
Take note of the ipmgr instance name, in this case ipmgr-ivexp, and then view the logs like this:
kubectl logs -f ipmgr-ivexp --namespace=hcp
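If a pod is stuck in a state other than Running, describing it will show its recent events (substitute the pod name from your own output):
# Show scheduling, image pull, and restart events for a pod
kubectl describe pod ipmgr-ivexp --namespace=hcp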
Use your host IP address for the DOMAIN settings in the instance definition file you're using. The examples have a "parameters" section that sets this value.
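On the OS X host you can look up that IP address the same way the port forwarding step below does (this assumes en0 is your active network interface):
# Host IP address on OS X
ipconfig getifaddr en0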
Back inside the HCF vagrant box run:
make hcp IMAGE_REGISTRY=192.168.77.77:5000
This generates the hcf-hcp.json file containing the HCP service definition for the current set of roles.
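If you want a quick sanity check of the generated file before registering it, jq can pull out the identifying fields (assuming the service definition carries the same name, version, and vendor fields as the instance definition below):
# Validate the JSON and show the fields that must match the instance definition
jq '{name, version, vendor}' hcf-hcp.json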
PORT=$(curl -Ss http://192.168.200.2:8080/api/v1/namespaces/hcp/services/ipmgr | jq -r '.spec.ports[0].nodePort')
./hcp api https://192.168.200.3:$PORT
./hcp login admin -p cnapadmin
TOKEN=$(cat $HOME/.hcp | jq -r .AccessToken)
curl -k -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" -XPOST -d @/home/vagrant/hcf/hcf-hcp.json https://192.168.200.3:$PORT/v1/services
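To confirm the service definition was accepted, you can list the registered services; this assumes the same endpoint also answers GET requests:
# List service definitions known to HCP
curl -k -H "Authorization: Bearer $TOKEN" https://192.168.200.3:$PORT/v1/services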
make hcp-instance IMAGE_REGISTRY=192.168.77.77:5000
# or
make hcp-instance-ha IMAGE_REGISTRY=192.168.77.77:5000
Or, instead of running make hcp-instance, you can use the ~/hcf/hcp/hcf-hcp-instance.json sample configuration to create an instance of the newly registered service:
{
"name": "hcf",
"version": "0.0.0",
"vendor": "HPE",
"labels": ["hcf"],
"instance_id": "hcf",
"description": "HCF test cluster"
}
NOTE: Ensure that the name, version, and vendor fields in the instance definition match the same fields in the service definition.
Remember the instance_id, here hcf, which is the name to use when talking to HCP about it.
To instantiate the service, post the instance definition to HCP:
curl -k -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" -XPOST -d @/home/vagrant/hcf/hcf-hcp-instance.json https://192.168.200.3:$PORT/v1/instances
where $PORT is set above.
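The registration can be double-checked the same way as the service definition, again assuming the endpoint answers GET requests:
# List instances known to HCP
curl -k -H "Authorization: Bearer $TOKEN" https://192.168.200.3:$PORT/v1/instances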
Once the instance definition has been posted, there should be plenty of activity in the ipmgr log.
Alternatively, for just a list of events for this new instance, you can run:
kubectl get events --namespace=hcf --watch
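Watching the pods come up in the new namespace is another option:
# Follow the HCF pods as they are created and become ready
kubectl get pods --namespace=hcf --watch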
To install all the files necessary to run hcf-status, you need to follow these steps:
cd hcp-developer
~/hcf/bin/install-hcf-status-on-hcp.sh
vagrant ssh node
# Then, inside the VM
sudo su
/home/vagrant/hcf/opt/hcf/bin/hcf-status
It takes a long time to start HCF on HCP in vagrant (up to 30 minutes).
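Rather than re-running the status command by hand, you can poll it, for example:
# Inside the node VM: re-run hcf-status every 30 seconds
watch -n 30 /home/vagrant/hcf/opt/hcf/bin/hcf-status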
Use docker ps --filter label=role=XXX to find HCF containers to interact with, e.g.
docker exec -it $(docker ps -a -q --filter label=role=api) bash
Here are bash helper functions to find a container by role, enter it, and display its full Monit status:
# Return the container id for the HCF role given as $1
get-container-id() { docker ps -a -q --filter=name="k8s_${1}\\..*hcf" ; }
# Open an interactive shell inside the container for a role
enter() { docker exec -t -i $(get-container-id "$1") /bin/bash ; }
# Show the full Monit status for a role's container
m() { docker exec -t $(get-container-id "$1") curl -u monit_user:monit_password http://localhost:2822/_status ; }
m api
The setup_ports.sh script will set up ssh port forwarding from the host to the HCF instance. The script does not return to the shell; press ^C to terminate it when you are done.
cd hcp-developer
sudo ./setup_ports.sh hcf `ipconfig getifaddr en0`
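Once the ports are forwarded and HCF is up, you can confirm the API is reachable from the host; the unauthenticated Cloud Foundry info endpoint is a convenient target (this assumes HCF exposes the standard /v2/info route):
# Quick reachability check of the forwarded CF API
curl -k https://api.`ipconfig getifaddr en0`.nip.io/v2/info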
Check hcf-status (shell #3) to make sure HCF is all up and running, and then target it from the host:
cd node-env
cf api --skip-ssl-validation https://api.`ipconfig getifaddr en0`.nip.io
cf auth admin changeme
cf create-org hpe
cf target -o hpe
cf create-space myspace
cf target -o hpe -s myspace
cf push node-env
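After the push finishes, cf can confirm the app is running and show the route it was assigned:
# Show app state, instances, and routes
cf app node-env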