Lexer, helpers and comments cleanup #5063
Conversation
Commits:
- … clearer that it only happens once
- …ass through the parser into the node classes
- …ent them in the nodes class
@GeoffreyBooth lgtm
src/lexer.coffee
```diff
@@ -946,19 +947,19 @@ exports.Lexer = class Lexer
   # Same as `token`, except this just returns the token without adding it
   # to the results.
-  makeToken: (tag, value, offsetInChunk = 0, length = value.length) ->
+  makeToken: (tag, value, offsetInChunk = 0, length = value.length, origin) ->
```
@GeoffreyBooth ah I see, ya. Once this is merged I can merge it into `preserve-string-literal` and convert `origin` to an option arg that gets passed along by `token()`.
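For context, a minimal standalone sketch of that option-arg idea, with illustrative names throughout (this is not the lexer's actual implementation): optional token fields travel in one destructured options object that `token` forwards to `makeToken`, so adding a field like `origin` no longer extends the positional parameter list.

```coffee
# Sketch only: standalone stand-ins for the lexer's `makeToken`/`token` pair.
makeToken = (tag, value, {offsetInChunk = 0, length = value.length, origin} = {}) ->
  tok = [tag, value, {offsetInChunk, length}]
  tok.origin = origin if origin?  # attach the originating token when given
  tok

tokens = []
token = (tag, value, opts) ->
  tokens.push makeToken tag, value, opts

token 'STRING', '"abc"'                             # defaults apply
token 'STRING', '"abc"', origin: ['STRING', 'abc']  # origin passed along
console.log tokens.length  # 2
```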
src/helpers.coffee (outdated)
```coffee
    # and therefore matching `tokenHash`es, merge the comments from both/all
    # tokens together into one array, even if there are duplicate comments;
    # they will get sorted out later.
    if tokenData[tokenHash].comments?
```
@GeoffreyBooth I know this is just basically moving existing code, but a way I like to write this pattern is:

```coffee
(tokenData[tokenHash].comments ?= []).push token.comments...
```
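For anyone reading along, a small standalone sketch of what that one-liner does (the data is illustrative, not helpers.coffee itself): `?=` initializes the `comments` array only when it is missing, and the splat pushes every element of `token.comments` in one call, replacing the usual existence check plus manual push.

```coffee
# Illustrative data only; `tokenData` and `token` stand in for the real ones.
tokenData =
  abc123: {}
token =
  comments: [{content: 'first'}, {content: 'second'}]
tokenHash = 'abc123'

# `?=` creates the array on first use; `...` splats the pushed elements.
(tokenData[tokenHash].comments ?= []).push token.comments...

# A later token with the same hash merges its comments into the same array.
(tokenData[tokenHash].comments ?= []).push {content: 'third'}

console.log (c.content for c in tokenData[tokenHash].comments)
# ['first', 'second', 'third']
```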
Thanks @helixbass, I made the change you recommended.
#5045 had various improvements to comments and the lexer that were unrelated to that PR’s goal of passing extra token data through the parser. This PR wraps up those unrelated improvements.
@helixbass @zdenko