Set MTU size for Weave networking to support "jumbo frames" #2600

Closed
jordanjennings opened this issue May 19, 2017 · 6 comments
Comments

@jordanjennings
Contributor

In some preliminary testing I have seen significant throughput increases for pod-to-pod traffic when using jumbo frames with Weave. This can be configured manually after standing up a cluster, but it would be nice to either 1) make it the default configuration, or 2) provide a way to configure it at cluster creation.

I propose we make it the default configuration when using --networking weave with cluster creation.

AWS docs on MTU settings supported by VPC networking:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html

Weave docs on configuring MTU with WEAVE_MTU environment variable:
https://www.weave.works/docs/net/latest/using-weave/fastdp/

From the Weave docs:

The underlying network must be able to deliver packets of the size specified plus overheads of around 84-87 bytes (the final MTU should be divisible by four)

Based on this I feel we should set the MTU size to 8912: 9001 − 87 = 8914, and rounding down to a multiple of four gives 8912, the largest value that is divisible by four and still leaves 87 bytes of overhead within the 9001-byte VPC MTU. I would be happy to submit this change in a pull request if others agree it's a good default. If not, then we could continue discussion on #1171 to make the Weave settings configurable.
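
For reference, the manual change today is roughly the following; this is only a sketch, assuming the stock weave-net DaemonSet in kube-system (the container name and structure would need to be checked against the actual manifest):

    # WEAVE_MTU added by hand to the weave container of the weave-net
    # DaemonSet after the cluster is up.
    # (9001 - 87 = 8914, rounded down to a multiple of four gives 8912.)
    spec:
      template:
        spec:
          containers:
            - name: weave
              env:
                - name: WEAVE_MTU
                  value: "8912"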

@chrislovecnm
Contributor

So first off you need to set up Go and kops. The biggest thing is to have $GOPATH set up and to have kops in $GOPATH/src/k8s.io/kops. We only support OS X and Linux for development.

Next, read https://github.com/kubernetes/kops/blob/master/docs/development/api_updates.md

The MTU can be given a default value (you probably know best what that should be), and the user should be able to override it, but only via the API, not with flag / CLI options. If you do not think we should set an MTU value by default, that's your call.

You are going to make an API update and add a new value to the API. The value needs to be a pointer and added to https://github.com/kubernetes/kops/blob/master/pkg/apis/kops/networking.go#L56

Regenerate the API.

Next, you need to add the values to the template. https://github.com/kubernetes/kops/blob/master/upup/models/cloudup/resources/addons/networking.weave/k8s-1.6.yaml and the other file need to have another suffix added to their filenames: ".template". That will turn the files into Go templates. Within the template you have full access to the clusterspec API values. The Calico files and this PR are good references: https://github.com/kubernetes/kops/pull/2091/files#diff-f1acfc687488ff4c2f2753f76e4b93b7R53

For example

{{ .Networking.Weave.MTU }}
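
In the templated manifest that could be wired in roughly like this (a sketch only; the exact layout of the weave manifest is assumed, and if the field is optional you would likely guard the env entry with an {{ if }} so a nil MTU falls back to Weave's own default):

    # Sketch of the relevant fragment of k8s-1.6.yaml.template
          containers:
            - name: weave
              env:
                - name: WEAVE_MTU
                  value: "{{ .Networking.Weave.MTU }}"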

https://github.com/kubernetes/kops/blob/master/tests/integration/privateweave/in-v1alpha2.yaml#L25 is a good visual representation of our cluster spec in YAML. You will need to test using the kops create -f mycluster.yaml command. Let me know if you have not used the clusterspec YAML before.
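
On the user side, the cluster spec fragment would then look something like this (the mtu field name is hypothetical until the API update lands):

    # Fragment of mycluster.yaml passed to kops create -f mycluster.yaml
    spec:
      networking:
        weave:
          mtu: 8912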

So

  1. get kops building locally
  2. update the API
  3. move the weave manifests to .template files (this needs to be done for both the pre-1.6 and post-1.6 files)
  4. access the API values in the template to set up the MTU
  5. do a build and test.

@chrislovecnm
Contributor

@bboreham do you know what default settings we should use on GCE and vSphere?

@bboreham
Contributor

GCE was 1376 last time I checked (1460 available on the native network; no jumbo packets).
No idea about vSphere, sorry.

@chrislovecnm
Contributor

@luomiao

  1. Is a CNI provider like Weave supported on k8s running on vSphere?
  2. If so, are jumbo packets with an MTU setting supported?

@chrislovecnm
Contributor

Fixed and closing

@jordanjennings
Contributor Author

@chrislovecnm Looks like you meant to close this already; can you close it now? Thanks!
