
Decrease max batch size in the stress tests #13953

Merged · 1 commit · Feb 2, 2023

Conversation

@andre4i (Contributor) commented Feb 2, 2023

Description

Seeing some errors of the form:

```
Nack (BadRequestError): Message size too large. Boxcar message count: 1, size: 933394, max message size: 921600.
```

This is because our max batch size limit (972800 bytes when unspecified), which triggers chunking only when exceeded, is higher than the server-side limit for a message boxcar (921600 bytes).

What happens is that the batch gets compressed and the payload ends up at 933394 bytes. Since 933394 < 972800, chunking is not activated; the payload is sent as-is and the server rejects it with the message above.
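
To make the failure mode concrete, here is a minimal TypeScript sketch of the two size checks involved. The constant and function names and the exact control flow are illustrative assumptions, not the actual Fluid Framework runtime or server code:

```typescript
// Illustrative sketch only; names and control flow are assumptions,
// not the real Fluid Framework implementation.
const maxBatchSizeInBytes = 972800;  // client default that triggers chunking when exceeded
const serverMaxMessageSize = 921600; // server-side boxcar limit (from the Nack above)

function submitCompressedBatch(compressedSizeInBytes: number): string {
    if (compressedSizeInBytes > maxBatchSizeInBytes) {
        // Chunking splits the payload into smaller messages the server can accept.
        return "chunked";
    }
    // Below the client limit, the payload is sent as a single message,
    // but the server still enforces its own, smaller limit.
    if (compressedSizeInBytes > serverMaxMessageSize) {
        return "Nack (BadRequestError): Message size too large";
    }
    return "accepted";
}

// 933394 < 972800, so the client skips chunking; 933394 > 921600, so the
// server rejects the message.
console.log(submitCompressedBatch(933394));
```

Lowering the batch size limit used by the stress tests below the server's boxcar limit lets the chunking branch fire before the server-side rejection can.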

The default limit will also be made permanent in the runtime

@andre4i requested a review from a team as a code owner February 2, 2023 17:23
@github-actions bot added the area: tests (Tests to add, test infrastructure improvements, etc.) label Feb 2, 2023
@github-actions bot added the base: main (PRs targeted against main branch) label Feb 2, 2023
@agarwal-navin (Contributor) commented:

> The default limit will also be made permanent in the runtime

Do you mean it will be made permanent in the real code or in the tests?

@agarwal-navin (Contributor) commented:

Couldn't this be a problem in real scenarios as well (not just tests)?

@andre4i (Contributor, Author) commented Feb 2, 2023

> The default limit will also be made permanent in the runtime
>
> Do you mean it will be made permanent in the real code or in the tests?

In the real code, in the runtime.

@andre4i (Contributor, Author) commented Feb 2, 2023

> Couldn't this be a problem in real scenarios as well (not just tests)?

It's possible, but unlikely. Our scenario is already unsupported for large payloads; this issue would just make the container fail with a different error. I'm still looking at the telemetry to figure out whether it shows up anywhere else.

@andre4i (Contributor, Author) commented Feb 2, 2023

> Couldn't this be a problem in real scenarios as well (not just tests)?

No hits outside the stress tests. But yes, it is possible, just highly unlikely (the stress tests deliberately use poorly compressible payloads to force chunking, which is why it shows up there more often).

@andre4i merged commit 085616c into microsoft:main Feb 2, 2023
daesun-park pushed a commit to daesun-park/FluidFramework that referenced this pull request Feb 8, 2023