
Benchmarking a Serverless System with vHive


vHive hands-on session

Background notes on usage of screen and tmux

The screen and tmux commands let you create detachable terminal sessions, which makes running and managing background processes easier. They also simplify reconnecting to remote nodes.

screen

The screen command is used throughout the tutorial to launch commands that should keep running in the background (containerd, vHive, etc.).

Here's a short cheatsheet of useful commands:

screen -S <screen_name>             # creates and attaches to a terminal named <screen_name>; use "Ctrl-A + d" to detach from it without closing
screen -dmS <screen_name> <command> # creates a new terminal named <screen_name> and runs <command> in the background (without attaching to it)
screen -r <screen_name>             # attaches to the terminal named <screen_name>
screen -ls                          # lists all created terminals

# ctrl+a d to detach from a screen session

Note: in this tutorial, screen is often run with sudo. In that case, any interaction with those terminals also needs a preceding sudo (e.g., use sudo screen -ls to list the terminal sessions created by sudo screen -dmS ...).

tmux

tmux is similar to screen in terms of what it can do, but is more user-friendly.

A short cheatsheet of important commands:

tmux new-session -s <session_name>              # creates and attaches to a session named <session_name>; use "Ctrl-B + d" to detach from it without closing
tmux new-session -s <session_name> -d <command> # creates a new session named <session_name> and runs <command> in the background (without attaching to it)
tmux attach -t <session_name>                   # attaches to the session named <session_name>
tmux list-sessions                              # lists all created sessions

# ctrl+b d to detach from a tmux session

Log into the node

ssh -p <port> vhive@<node_IP>

The password will be given during the session; the node IP and port can be retrieved from this site: https://vhive-serverless.github.io/sosp-tutorial-infra-setup/.

It is recommended to use tmux for the remote session so that your work survives if you need assistance or the connection drops.
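For example, a typical login sequence might look like this (the -L flag forwards Zipkin's port, which is only needed for the tracing part later; the session name is just an example):

ssh -p <port> vhive@<node_IP> -L 9411:127.0.0.1:9411 # forward Zipkin's UI port for later
tmux new-session -s vhive                            # any session name works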

Start the vHive server

Setup the environment on the node

cd ~/vhive/
./setup_tool setup_node stock-only

Note: this step might produce device-mapper errors, but it is safe to ignore them.

Start containerd

Containerd will be used for infrastructure containers.

sudo screen -dmS containerd containerd; sleep 5;
sudo screen -r containerd # check the daemon's logs

The following error is ok: failed to load cni during init, please check CRI plugin status
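Optionally, you can verify that the daemon responds. A quick sanity check with ctr, the CLI that ships with containerd:

sudo ctr version # prints client and server versions if containerd is up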

Create a Knative cluster

Create a single-node Kubernetes cluster and install Knative on top of it.

./setup_tool create_one_node_cluster stock-only

Check that the cluster is running

kubectl get pods -A

After some time, the cluster should have created all pods. The correct final state consists of the following pods per namespace (an optional command to wait for readiness is sketched after the list):

  • istio-system: 3 running
  • knative-eventing: 7 running
  • knative-serving: 8 running, 1 completed
  • kube-system: 9 running
  • metallb-system: 2 running
  • registry: 2 running
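Instead of polling kubectl get pods by hand, you can block until the cluster's deployments report availability. A minimal sketch using kubectl wait (the 300s timeout is an arbitrary choice; the completed pod is a Job, so it is not covered by this check):

kubectl wait --for=condition=Available deployment --all -A --timeout=300s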

Run the examples

Single hello world function

This command will deploy a single function (hello world).

~/vSwarm/tools/deployer/deployer -jsonFile ~/singleFunction.json -endpointsFile ~/singleEndpoint.json -funcPath ./configs/knative_workloads
kn service list # see the deployed service

The resulting list should contain a single service named helloworld-0. Invoke this function for 10 seconds at 1 request per second:

~/vSwarm/tools/invoker/invoker -rps 1 -time 10 -endpointsFile ~/singleEndpoint.json -latf singleFunction

The number of completed requests may be lower than the number of issued requests; this is caused by timeouts during cold starts. In that case, rerun the invoker.
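The -latf flag sets a prefix for the invoker's latency output file. Assuming the samples are written one latency per line under that prefix (the file name below is hypothetical; check the working directory for the actual one), a rough percentile summary could be computed like this:

# singleFunction.csv is a hypothetical name derived from the -latf prefix
sort -n singleFunction.csv | awk '{a[NR]=$1} END {print "p50:", a[int(NR*0.5)]; print "p99:", a[int(NR*0.99)]}'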

Zipkin tracing example

Setup the container-based vHive cluster

./setup_tool setup_node stock-only
sudo screen -dmS containerd containerd; sleep 5;
./setup_tool create_one_node_cluster stock-only
./setup_tool setup_zipkin; sleep 5;
screen -dmS zipkin istioctl dashboard zipkin

Wait for pods to get ready:

kubectl get pods -A

Note: the default-domain pods might fail several times, but in the end, there should be one that has completed.

After that, the Zipkin UI will be available in your browser at localhost:9411.
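If you are working on a remote node, the UI is only reachable through an SSH tunnel, so make sure you logged in with port forwarding (the same command is shown again in the tracing section below):

ssh -p <port> vhive@<node> -L 9411:127.0.0.1:9411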

Deploy video analytics example

Setup the credentials:

export AWS_ACCESS_KEY_ID=<YOUR_KEY>
export AWS_ACCESS_KEY=$AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=<YOUR_SECRET>
export AWS_SECRET_KEY=$AWS_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION=us-west-1
export ENABLE_TRACING="true"

Create a unique S3 bucket name (replace FirstName and LastName with your name):

export BUCKET_NAME=`echo "FirstName LastName $(date)" | md5sum | awk '{print $1}'`
echo BUCKET_NAME=$BUCKET_NAME

Create an S3 bucket for intermediate storage of frames. If the command fails due to a name conflict, try generating the name again.

aws s3api create-bucket --bucket $BUCKET_NAME --create-bucket-configuration LocationConstraint=us-west-1
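Optionally, confirm that the bucket was created; head-bucket exits with a non-zero code if the bucket does not exist or is not accessible:

aws s3api head-bucket --bucket $BUCKET_NAME && echo "bucket ready"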

This example consists of stream, decode, and recognition functions. The stream function reads a video file from the S3 bucket and sends it to the decode function. The decode function decodes the video and sends a single frame to the recognition function. The recognition function recognizes objects in the frame and returns the result to its caller (the decoder, and eventually the stream function). The stream function then returns to the client.
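The call chain, sketched (results flow back along the same path):

client -> stream -> decode -> recognition
client <- stream <- decode <- recognition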

Deploy the functions:

~/vSwarm/tools/kn_deploy.sh ~/vSwarm/benchmarks/video-analytics/yamls/knative/s3_single/*

Check that the services are running:

kn service list

Invoke the video analytics example

~/vSwarm/tools/test-client/test-client -addr <HOSTNAME>:80

Here, HOSTNAME is the URL of the streaming service from the previous command's output, without the http:// prefix.

Look at Zipkin trace of the execution

During the invocation, Zipkin recorded a trace of all calls in the application. You can view it in the Zipkin UI.

The Zipkin UI is available in the browser at localhost:9411. In case it is unavailable, make sure that your ssh login command looked like this: ssh -p <port> vhive@<node> -L 9411:127.0.0.1:9411.

In the Zipkin UI, construct the query: click the red button with the plus sign, choose serviceName, and search for activator-service. Run the query. To see the visualization, click SHOW on the right side of the resulting entry. This shows the call to every component (activator, streaming, decoder, and recognition).
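The same trace data can also be fetched through Zipkin's HTTP API, which is handy if the UI is unreachable; a sketch using the standard v2 endpoint:

curl "http://localhost:9411/api/v2/traces?serviceName=activator-service&limit=1"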

Clean up the video analytics example

kn service delete --all
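Verify that everything was removed:

kn service list # should now show no services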

Deploy the second video analytics example (optional)

The workflow is almost the same as the previous one, but the decoder sends six frames to the recognition function instead of one.

~/vSwarm/tools/kn_deploy.sh ~/vSwarm/benchmarks/video-analytics/yamls/knative/s3/*

Check that the services are running:

kn service list

Invoke the second video analytics example (optional)

~/vSwarm/tools/test-client/test-client -addr <HOSTNAME>:80