
Error handling in custom benchmark #22245

Merged: 26 commits merged into microsoft:main on Sep 4, 2024

Conversation

@jikim-msft (Contributor) commented Aug 16, 2024

Description

#AB12925

Found a bug where the benchmarkCustom() function was not throwing an error.

This PR makes the following changes:

  • Adds try-catch logic to the benchmarkCustom() function (see the sketch after this list).
  • Adds an error event that benchmarkCustom() emits to the MochaReporter.
  • Enables --verbose on the test:customBenchmarks script for better logging.
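
To make that concrete, here is a minimal, hypothetical sketch of the try-catch and error event described above. Everything except the "benchmark end" event name (which comes up later in this conversation) is an assumption, not the actual customOutputRunner.ts code:

```ts
// Hypothetical sketch only; the real implementation lives in tools/benchmark/src/mocha/customOutputRunner.ts.
import { EventEmitter } from "node:events";

interface CustomBenchmarkArguments {
	title: string;
	run: (reporter: unknown) => Promise<void>;
}

const benchmarkEvents = new EventEmitter();

async function runCustomBenchmark(args: CustomBenchmarkArguments): Promise<void> {
	try {
		await args.run({} /* placeholder reporter */);
	} catch (error) {
		// Surface the failure to the reporter instead of swallowing it silently.
		benchmarkEvents.emit("benchmark end", { title: args.title, error });
		// Rethrow so the test is marked as failed; the --perfMode case is refined later in this PR.
		throw error;
	}
}
```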

Result

With the changes above, the following code now throws an error to the console:

			benchmarkCustom({
				only: true,
				type: BenchmarkType.Measurement,
				title: `SharedTree`,
				run: async (reporter) => {
					throw new Error("This error is INTENTIONAL.");
				},
			});
[Screenshot: console output showing the intentional error, 2024-08-22]

@github-actions bot added the "area: examples", "dependencies", and "base: main" labels on Aug 16, 2024
@github-actions bot added the "area: dds" and "area: dds: tree" labels on Aug 23, 2024
@github-actions bot removed the "dependencies" label on Aug 23, 2024
@msfluid-bot (Collaborator) commented Aug 23, 2024

@fluid-example/bundle-size-tests: +245 Bytes
| Metric Name | Baseline Size | Compare Size | Size Diff |
| --- | --- | --- | --- |
| aqueduct.js | 461.15 KB | 461.18 KB | +35 Bytes |
| azureClient.js | 559.19 KB | 559.24 KB | +49 Bytes |
| connectionState.js | 680 Bytes | 680 Bytes | No change |
| containerRuntime.js | 261.99 KB | 262 KB | +14 Bytes |
| fluidFramework.js | 399.76 KB | 399.77 KB | +14 Bytes |
| loader.js | 134.26 KB | 134.28 KB | +14 Bytes |
| map.js | 42.39 KB | 42.39 KB | +7 Bytes |
| matrix.js | 146.56 KB | 146.56 KB | +7 Bytes |
| odspClient.js | 526.47 KB | 526.52 KB | +49 Bytes |
| odspDriver.js | 97.72 KB | 97.74 KB | +21 Bytes |
| odspPrefetchSnapshot.js | 42.78 KB | 42.79 KB | +14 Bytes |
| sharedString.js | 163.26 KB | 163.26 KB | +7 Bytes |
| sharedTree.js | 390.27 KB | 390.28 KB | +7 Bytes |
| Total Size | 3.3 MB | 3.3 MB | +245 Bytes |

Baseline commit: b7e4e1e

Generated by 🚫 dangerJS against 3f2a5c1

@jikim-msft changed the title from "WIP; error handling in custom benchmark" to "Error handling in custom benchmark" on Aug 23, 2024
@alexvy86 (Contributor) left a comment

A couple of things but it looks pretty good to me. We should make sure that with these changes, using benchmarkCustom() behaves as expected in tree or tablebench, both in success and failure cases, and both with --perfMode and without. The usual pnpm-linking / override method to test that should still work.
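
For reference, one hedged way to do that local linking, assuming the benchmark package lives under tools/benchmark and the consumer is, say, packages/dds/tree (the paths and the test script below are assumptions; adjust for your checkout):

```sh
# Hypothetical commands for manual verification; paths and script names are assumptions.
cd packages/dds/tree                  # a consumer of the benchmark tooling
pnpm link ../../../tools/benchmark    # use the locally built benchmark package instead of the published one
pnpm test                             # rerun the consumer's tests against the linked package
```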

Review threads on tools/benchmark/src/mocha/customOutputRunner.ts and tools/benchmark/src/test/CustomOptionBenchmark.test.ts (outdated, resolved)
jikim-msft and others added 2 commits August 28, 2024 10:44
Co-authored-by: Alex Villarreal <716334+alexvy86@users.noreply.github.com>
@jikim-msft (Contributor, Author) commented Aug 29, 2024

/*!
 * Copyright (c) Microsoft Corporation and contributors. All rights reserved.
 * Licensed under the MIT License.
 */

"use strict";

const getFluidTestMochaConfig = require("@fluid-internal/mocha-test-setup/mocharc-common");

const packageDir = __dirname;
const config = getFluidTestMochaConfig(packageDir);

// TODO: Revert this
const spec =
	config["spec"] !== undefined
		? Array.isArray(config["spec"])
			? config["spec"]
			: [config["spec"]] // If string, wrap in an array so entries can be pushed onto it
		: []; // If undefined, use an empty array

spec.push("--perfMode");

module.exports = {
	...config,
	spec: spec,
};
  1. Add --perfMode to .mocharc.cjs's spec and run pnpm run test:mocha
     [Screenshot: test output, 2024-08-29 14:29]
  2. Add the --perfMode flag to the script and run pnpm run test:mocha -- --perfMode
     [Screenshot: test output, 2024-08-29 14:31]

I added logging to ensure that both the script flag and the .mocharc approach work in --perfMode:

if (isInPerformanceTestingMode) {
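    // Temporary logging to verify that the error event is received under --perfMode.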
    console.log(`PERFMODE Benchmark error: ${benchmarkError.error}`);
    return;
}
  3. Run pnpm run test:mocha:verbose -- --perfMode
     [Screenshot: test output, 2024-08-29 15:12]
  4. "test:mocha": "cross-env FLUID_TEST_VERBOSE=1 mocha \"dist/test/**/*.js\""
     [Screenshot: test output, 2024-08-29 15:13]

@jikim-msft (Contributor, Author)
With the change in Reporter.ts, errors are correctly logged in the table.

[Screenshot: benchmark results table showing error entries, 2024-09-03]
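
As an illustration of that idea (not the actual Reporter.ts implementation; the result shape and column names below are assumptions), a results table can keep one row per benchmark and fall back to an error column when no custom data was produced:

```ts
// Hypothetical sketch; the real table rendering lives in tools/benchmark/src/Reporter.ts.
interface BenchmarkResult {
	title: string;
	customData?: Record<string, number>;
	error?: string;
}

function printResults(results: BenchmarkResult[]): void {
	// Failed benchmarks keep their row but report the error instead of custom data,
	// so successful rows still render their custom-data columns unaffected.
	console.table(
		results.map((result) => ({
			Benchmark: result.title,
			...(result.error === undefined ? result.customData ?? {} : { Error: result.error }),
		})),
	);
}

// Example with a mix of one passing and one failing benchmark (the data values are made up).
printResults([
	{ title: "SharedTree memory", customData: { "Heap Used (MB)": 12.3 } },
	{ title: "SharedTree", error: "This error is INTENTIONAL." },
]);
```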

Review threads (2) on tools/benchmark/src/Reporter.ts (outdated, resolved)
@alexvy86 (Contributor) commented Sep 4, 2024

With the change in Reporter.ts, errors are correctly logged in the table.

Awesome! Can you confirm that things look right if there's a mix of successful and failing tests in a given suite? The successful test should still get the custom data printed, and I'd like to see that the failed tests that won't report those columns still look correct and don't affect the rendering of the successful tests.

@jikim-msft (Contributor, Author)
[Screenshot: results table with a mix of passing and failing benchmarks, 2024-09-04]

benchmarkCustom() is now able to report all the test results without crashing.

#22245 (comment)

Also, regarding the comment above, I wanted to make sure the error messages from --perfMode and non-perfMode are expected to be different, since the events get emitted at different times:

--perfMode: return;
non-perfMode: throw the error

Both paths emit the benchmark end event before checking isInPerformanceTestingMode.
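
A hedged sketch of that branch, where everything except isInPerformanceTestingMode and the "benchmark end" event is an illustrative assumption:

```ts
// Illustrative only; not the actual runner code.
const isInPerformanceTestingMode: boolean = process.argv.includes("--perfMode"); // stand-in for the package's real flag

function handleBenchmarkFailure(emitBenchmarkEnd: (error: Error) => void, error: Error): void {
	// The "benchmark end" event is emitted in both modes, before the mode check.
	emitBenchmarkEnd(error);
	if (isInPerformanceTestingMode) {
		// --perfMode: return and let the reporter surface the error in the results table.
		return;
	}
	// non-perfMode: rethrow so Mocha marks the test as failed.
	throw error;
}
```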

@jikim-msft marked this pull request as ready for review September 4, 2024 17:52
@jikim-msft (Contributor, Author) commented Sep 4, 2024

pnpm run test

[Screenshot: test output]

pnpm run test:customBenchmarks

[Screenshot: test output]

pnpm run test:customBenchmarks -- --perfMode

[Screenshot: test output]

@github-actions bot added the "public api change" label on Sep 4, 2024
@github-actions bot commented Sep 4, 2024

🔗 No broken links found! ✅

Your attention to detail is admirable.

linkcheck output


> fluidframework-docs@0.25.0 ci:linkcheck /home/runner/work/FluidFramework/FluidFramework/docs
> start-server-and-test ci:start 1313 linkcheck:full

1: starting server using command "npm run ci:start"
and when url "[ 'http://127.0.0.1:1313' ]" is responding with HTTP status code 200
running tests using command "npm run linkcheck:full"


> fluidframework-docs@0.25.0 ci:start
> http-server ./public --port 1313 --silent


> fluidframework-docs@0.25.0 linkcheck:full
> npm run linkcheck:fast -- --external


> fluidframework-docs@0.25.0 linkcheck:fast
> linkcheck http://localhost:1313 --skip-file skipped-urls.txt --external

Crawling...

Stats:
  378023 links
    2980 destination URLs
       2 URLs ignored
       0 warnings
       0 errors


@jikim-msft (Contributor, Author)
[Screenshots: error output in non-perfMode and in --perfMode]

Verified that the errors are logged in both non-perfMode and perfMode.

@jikim-msft enabled auto-merge (squash) September 4, 2024 23:41
@jikim-msft merged commit 05d2213 into microsoft:main Sep 4, 2024
38 checks passed
@jikim-msft deleted the benchmark/add-error branch September 5, 2024 20:46
Labels: area: dds: tree, area: dds, area: examples, base: main, public api change

3 participants