vibe optimization #547
Conversation
Hey @quantizor, thanks for optimizing the runtime! I just took a brief look at the diff, and I think it would be good to split this into multiple PRs so I can verify some changes in isolation. If you're up for it, I'll review your changes in more detail tomorrow, probably leave a bunch of comments, and let you know how best to split it up.

Sure thing!
Cool stuff! I added some comments on a few details. At the scale of tailwind-merge, it makes sense to pay attention to every small thing so we don't introduce potential regressions.
```ts
export const createClassGroupUtils = (config: AnyConfig) => {
    const classMap = createClassMap(config)
    const { conflictingClassGroups, conflictingClassGroupModifiers } = config

    // Use the existing LRU cache implementation
```
You can remove this comment; it's clear from the code that you're using the existing implementation.
```ts
export const createClassGroupUtils = (config: AnyConfig) => {
    const classMap = createClassMap(config)
    const { conflictingClassGroups, conflictingClassGroupModifiers } = config

    // Use the existing LRU cache implementation
    const classGroupCache = createLruCache<string, AnyClassGroupIds | undefined>(config.cacheSize)
```
Can you move the addition of the cache to a separate PR? I'm not sure about it and would want to test it first.

Some things I'm thinking of:

- I'm not sure how the size of this cache correlates with the main cache size. Could it be that this cache needs to be much smaller or bigger than the main one?
- How much speed do we gain with this cache alone (this is why I want to isolate it into its own PR)? We're paying for it with increased memory consumption, and I don't yet know whether that would be worth it.
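The trade-off under discussion can be made concrete with a minimal sketch. This is a hypothetical stand-alone LRU cache built on `Map`'s insertion order, not tailwind-merge's actual `createLruCache` implementation:

```typescript
// Minimal LRU cache sketch using Map's insertion order as recency order.
// Hypothetical illustration only, not tailwind-merge's real implementation.
const createSimpleLruCache = <K, V>(maxSize: number) => {
    const cache = new Map<K, V>()

    return {
        get(key: K): V | undefined {
            if (!cache.has(key)) return undefined
            const value = cache.get(key)!
            // Re-insert to mark the entry as most recently used
            cache.delete(key)
            cache.set(key, value)
            return value
        },
        set(key: K, value: V) {
            if (cache.has(key)) {
                cache.delete(key)
            } else if (cache.size >= maxSize) {
                // Evict the least recently used entry (first in insertion order)
                cache.delete(cache.keys().next().value as K)
            }
            cache.set(key, value)
        },
    }
}
```

Each entry here is a short string key plus a group id, so the per-entry memory cost is small; the right `maxSize` relative to the main cache is exactly what an isolated PR could benchmark.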
```ts
const getClassGroupId = (className: string) => {
    const classParts = className.split(CLASS_PART_SEPARATOR)
    // Check cache first for efficiency
```
You can remove this comment; the code is self-explanatory.
```ts
// Classes like `-inset-1` produce an empty string as first classPart. We assume that classes for negative values are used correctly and remove it from classParts.
```
Please make sure that comments like this one stay in the codebase after your changes. Same with other instances.
```ts
    }

    return getGroupRecursive(classParts, classMap) || getGroupIdForArbitraryProperty(className)
    // Cache result to avoid repeated computation
```
This comment can be removed, too.
```diff
@@ -68,7 +68,7 @@ export const mergeClassList = (classList: string, configUtils: ConfigUtils) => {

         const classId = modifierId + classGroupId

-        if (classGroupsInConflict.includes(classId)) {
+        if (classGroupsInConflict.indexOf(classId) > -1) {
```
Why this change? Is `includes()` slower than `indexOf()`? 🤔
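For string elements the two calls are interchangeable (they only diverge for `NaN`, which `includes()` finds and `indexOf()` does not), so any difference here would be a pure engine-level micro-optimization that needs a benchmark to justify. A quick sanity check, using a made-up conflict list:

```typescript
// Hypothetical conflict list for illustration; the real entries come
// from the config at runtime.
const classGroupsInConflict = ['0:inset', '0:margin']

// Both forms answer the same membership question for string elements.
const viaIncludes = classGroupsInConflict.includes('0:inset')
const viaIndexOf = classGroupsInConflict.indexOf('0:inset') > -1
```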
```ts
/**
 * In Tailwind CSS v3 the important modifier was at the start of the base class name. This is still supported for legacy reasons.
 * @see https://github.com/dcastil/tailwind-merge/issues/513#issuecomment-2614029864
 */
```
Can you add back this comment? The context it provides will be important in the future.
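For context on the legacy position that comment documents: in Tailwind CSS v3 the important marker was a `!` prefix on the base class (`!font-bold`), while v4 moved it to a suffix (`font-bold!`). A hedged sketch of handling both, using a hypothetical helper name rather than tailwind-merge's actual parser:

```typescript
// Hypothetical helper for illustration; tailwind-merge's real parser
// handles more cases than this.
const stripImportantModifier = (baseClassName: string) => {
    if (baseClassName.endsWith('!')) {
        // Tailwind CSS v4 position: suffix
        return { hasImportant: true, base: baseClassName.slice(0, -1) }
    }
    if (baseClassName.startsWith('!')) {
        // Legacy Tailwind CSS v3 position: prefix
        return { hasImportant: true, base: baseClassName.slice(1) }
    }
    return { hasImportant: false, base: baseClassName }
}
```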
```ts
if (modifiers.length <= 1) {
    return modifiers

// Binary search insertion point
function findInsertionIndex(arr: string[], val: string, start: number, end: number): number {
```
Can you make all the functions you create arrow functions instead? tailwind-merge uses them wherever possible because they compress better, see #449.
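For illustration, here is `findInsertionIndex` rewritten in the requested arrow-function form. The binary-search body is an assumption for the sketch, since the PR's full implementation isn't quoted in this thread:

```typescript
// Arrow-function form of the helper. The body is a standard lower-bound
// binary search, assumed here for illustration, not the PR's exact code.
const findInsertionIndex = (arr: string[], val: string, start: number, end: number): number => {
    while (start < end) {
        const mid = (start + end) >> 1
        if (arr[mid] < val) {
            start = mid + 1
        } else {
            end = mid
        }
    }
    return start
}
```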
```ts
const pos = findInsertionIndex(
    currentSegment,
    modifier,
    0,
    currentSegment.length,
)
currentSegment.splice(pos, 0, modifier)
```
I wonder whether this is really faster than sorting once at the end. If we insert elements anywhere other than the end, the JS engine has to move all subsequent array elements, which can be quite inefficient. 🤔
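The two strategies being compared can be sketched as follows. Both function names are hypothetical, and which one wins for typical modifier-list sizes is a question for a benchmark:

```typescript
// Strategy 1: keep the array sorted by splicing each element into place.
// Each splice shifts every later element, so the loop can cost O(n^2)
// element moves in the worst case.
const sortViaInsertion = (items: string[]): string[] => {
    const result: string[] = []
    for (const item of items) {
        let pos = 0
        while (pos < result.length && result[pos] < item) pos++
        result.splice(pos, 0, item)
    }
    return result
}

// Strategy 2: append everything, then sort once at the end.
const sortAtEnd = (items: string[]): string[] => [...items].sort()
```

For the short modifier lists typical of a single class, the single sort is simpler and avoids repeated element shifting; the splice approach only pays off if elements mostly land at the end.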
Let's move this one to a separate PR, too.
Claude and I jammed on some potential improvements...

The rationale for increasing the cache size is that the partials it stores are quite small, and a medium-complexity website may have thousands of them. They probably won't take up much space even in volume.

Roughly a 15% improvement in the cached example... it adds up!