
Allow marks and measures to be shown in developer tooling without adding them to the performance timeline #68

Open
anniesullie opened this issue Nov 18, 2019 · 19 comments

Comments

@anniesullie
Collaborator

Some developers I've talked to want to make heavy use of performance.mark and performance.measure for local debugging, but not spam their analytics with local debugging information. It might be useful to have an option to only show the marks/measures in developer tooling, and not add them to the performance timeline.

@npm1
Contributor
npm1 commented Nov 18, 2019

I don't understand the use case here. These are marks and measures, so it would be fairly straightforward for the developer to reserve a prefix in the name (like 'debug') and ignore those in analytics code.
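A minimal sketch of that convention-based approach, assuming a reserved 'debug:' prefix and a `sendToAnalytics` helper that are purely illustrative:

```js
// Local-debugging marks use a reserved prefix so analytics code can skip them.
const DEBUG_PREFIX = 'debug:';

function debugMark(name) {
  performance.mark(DEBUG_PREFIX + name);
}

// Analytics code observes user timing entries but drops the debug-prefixed ones.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name.startsWith(DEBUG_PREFIX)) continue;
    // sendToAnalytics is a placeholder for whatever beacon/reporting code is used.
    sendToAnalytics(entry.name, entry.startTime, entry.duration);
  }
});
observer.observe({ entryTypes: ['mark', 'measure'] });
```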

@tdresser
Contributor

@bgirard

@bgirard
bgirard commented Nov 18, 2019

Here's an example of React doing this directly. They add and clear at the same time: https://github.com/facebook/react/blob/50addf4c0e411e351de7290c8c60ec775c25c8c4/packages/react-reconciler/src/ReactDebugFiberPerf.js#L88-L101

Internally we do the same in Facebook's code base.

We only pull browser events from the performance entries; we don't use them as a buffer for our own tracing. We use performance.mark/measure only to get our data into the DevTools 'Performance' tab.

This was worse before UserTimingsV3 because we could only clear a mark after we were sure we wouldn't use it as a start again.
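The pattern boils down to something like the following sketch (a simplification of the linked React code, not a verbatim copy):

```js
// Simplified sketch of the add-and-clear-immediately pattern: the entries
// only need to exist long enough for DevTools to record trace events.
function traceMeasure(label, startMark) {
  const endMark = label + '-end';
  performance.mark(endMark);
  performance.measure(label, startMark, endMark);
  // Clear right away so the entries don't accumulate in the performance
  // entry buffer (getEntries() stays clean for analytics code).
  performance.clearMarks(startMark);
  performance.clearMarks(endMark);
  performance.clearMeasures(label);
}
```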

@npm1
Contributor
npm1 commented Nov 18, 2019

Ah, interesting. I imagine it's desirable to free this memory asap to avoid unduly affecting any memory measurements done in this debug mode. And that code of course looks silly, but all it does is create the trace event that makes it to the developer tooling. I think this is a good candidate for presenting to the WebPerf WG to see what others think... I don't think this requires a lot of spec work, but it would be great to have support from other browser vendors and ideally an interested user other than Facebook.

There seem to be two ways to implement this, not sure which is more desirable:

  • Create the trace event whenever a PerformanceMark is constructed, even if not via performance.mark (and add a similar PerformanceMeasure constructor with this property). This way we don't need to change performance.mark(), but instead require that every PerformanceMark object is logged in DevTools.
  • Augment the Performance{Mark/Measure}Options dictionary with a 'debug' field (a rough sketch of this option follows below).
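If the second option were adopted, usage might look roughly like this; the 'debug' field is hypothetical and is not in any spec today:

```js
// Hypothetical 'debug' field: the entry would surface in DevTools but
// never be added to the performance timeline / entry buffer.
performance.mark('hydrate-start', { debug: true });
// ... work ...
performance.measure('hydrate', { start: 'hydrate-start', debug: true });
```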

@bgirard
bgirard commented Nov 18, 2019

I chatted with @bvaughn. I believe the most important requirement for us here is reducing the overhead of these calls. I haven't profiled it, but Brian on the React team has seen some high overhead for mark/measure calls in the past. Avoiding the clear call is otherwise just developer ergonomics.

The other feature request I have in this space is to allow DevTools to categorize the measure timings. For instance, we have several mark providers: React rendering, Metric measures, Loading measures. It would be nice to have an API to provide a category. If we add something to the Options field, it would be nice to consider adding a nestable category field.
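Purely as an illustration (no such field exists in User Timing today), a category option on the measure options dictionary might look like:

```js
// Hypothetical 'category' field so tooling could group measures by provider
// (e.g. React rendering vs. metric measures vs. loading measures).
performance.measure('HeroImageRender', {
  start: 'hero-start',
  end: 'hero-end',
  category: 'Metrics', // hypothetical, not part of User Timing L3
});
```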

@bgirard
bgirard commented Nov 19, 2019

Here's an example of the React debug render timings. These are only reported in DEV mode because of performance issues:
[Screenshot: React debug render timings in the DevTools Performance panel]

Here's an example of the performance measures used by our metrics:
[Screenshot: performance measures emitted by our metrics in the DevTools Performance panel]

This shows when placeholders are shown, why, and how long hero elements take to render. These measures line up perfectly with network requests, screenshots, and JS sampling.

@rniwa
rniwa commented Nov 19, 2019

Isn't this what console.time and console.timeEnd are for?

@anniesullie
Collaborator Author

Thanks, @rniwa.

@bgirard: For the performance issue, does this bug in Chromium track it correctly? https://bugs.chromium.org/p/chromium/issues/detail?id=943732
If not, let's work offline on a better test case.

Anyone opposed to closing this in favor of fixing the performance issues in Chromium and using console.time and console.timeEnd for this use case?

@tdresser
Contributor

Perhaps we should be updating console.time etc to have parity with new user timing features?

@bvaughn
bvaughn commented Nov 19, 2019

> Isn't this what console.time and console.timeEnd are for?

I don't think this would be a great solution, since it would (I believe) spam the console with every mark/measure we made.

@bgirard
bgirard commented Nov 19, 2019

@anniesullie Yes, I think that issue covers what we need from Chrome, which isn't a standards issue. I believe the only standards question to be resolved here is whether we want to provide an ergonomic way to avoid having to add and clear.

> Isn't this what console.time and console.timeEnd are for?

To add to Brian's response: that API is also significantly more constrained than User Timing L3's measure options. It doesn't appear to be a practical replacement. In particular, in addition to spamming the console, we would have to instrument all call sites with console.time()/console.timeEnd() while already knowing the final label. PerformanceMeasureOptions allows us to provide a start/end/label at any point.
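To make the contrast concrete, here is a rough sketch; `doHydrate` is just a placeholder for whatever work is being timed:

```js
// console.time/timeEnd must bracket the code and know the final label up front:
console.time('hydrate');
doHydrate();
console.timeEnd('hydrate');

// With User Timing L3 measure options, we can capture timestamps now and
// create (and label) the measure later, from anywhere:
const start = performance.now();
doHydrate();
const end = performance.now();
// ... later, once the final label is known:
performance.measure('hydrate (route: /home)', { start, end });
```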

@andydavies

The commercial RUM products that support User Timing (SpeedCurve, mPulse) avoid the 'spamming problem' by requiring customers to configure which marks and measures they want to collect (which also means timings from third-party tags can be excluded or included as desired).

When I look at @bgirard's DevTools screenshot, it reminds me of an OpenTelemetry-style trace.

@yoavweiss
Contributor

@mathiasbynens - thoughts on this one?
I'm thinking that maybe a UserTiming-equivalent API that's only targeted at the Performance DevTools tab can achieve what folks want here, with lower overhead when performance tracing is not enabled.

@mathiasbynens

@paullewis is our DevTools performance timeline expert. Paul, do you have any thoughts here?

@yoavweiss
Contributor

This seems related to #86 and the "namespace" idea for solving it. Having namespaces would enable developer tools to filter out noisy or irrelevant-at-the-moment entries (but keep them in e.g. deep-dive profiles).

@yoavweiss
Contributor

@mathiasbynens - Thoughts on this in the context of user timing "namespaces"?

@yoavweiss
Contributor

also @jackfranklin

@jackfranklin

How do namespaces solve this? Is the suggestion here that we could provide options in DevTools to include/exclude marks within a certain namespace? Or are you picturing that we'd have a namespace like "DEVTOOLS" which is the only one that would ever make it onto e.g. the perf panel?

@yoavweiss
Contributor

Apologies for not providing more context. This was discussed at a WG meeting a couple of months back.
The basic idea is that namespaces would enable DevTools and DevTools extensions to highlight some user timings or to hide others (e.g. "show me all the React timestamps" or "hide everything unrelated to my app's logic").
