
remove libdparse from token dump functionality #13

Open · wants to merge 13 commits into base: replace_libdparse

Conversation

@lucica28 (Collaborator) commented Mar 1, 2022

No description provided.

@@ -52,3 +52,155 @@ ulong printLineCount(Tokens)(File output, string fileName, ref Tokens tokens)
output.writefln("%s:\t%d", fileName, count);
return count;
}

void printTokenDump(string fileName)

Please allow the user to supply the output file, as printTokenCount and printLineCount do. This will also simplify testing.
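A sketch of what that signature might look like (the parameter order and names here are assumptions, mirroring printTokenCount and printLineCount):

```d
import std.stdio : File;

// Hypothetical signature: the caller supplies the output stream,
// as printTokenCount and printLineCount already do.
void printTokenDump(File output, string fileName)
{
    // ... lex fileName and write the dump to `output` rather than stdout ...
}
```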

file.close();

auto deleteme2 = "test2.txt";
auto fp = freopen("test2.txt", "w", stdout);

freopen and co. won't be necessary, as the user will be able to supply the output stream.

<< }>> 1 124 8 1 6
";

scope(exit)

Please move scope (exit) to where the files are defined.
The purpose of scope (exit) is to keep the initialization and cleanup code close to each other.
Adding it here provides no benefit, as we are already at the end (exit) of the function.
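A sketch of the suggested placement (the file name and imports are illustrative):

```d
import std.file : exists, remove;
import std.stdio : File;

auto deleteme = "test.txt";
auto file = File(deleteme, "w");
// Cleanup sits right next to the initialization it undoes,
// instead of at the end of the function.
scope (exit)
{
    assert(exists(deleteme));
    remove(deleteme);
}
```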

writeln("text \tblank\tindex\tline\tcolumn\ttype\tcomment\ttrailingComment");
do
{
import std.stdio : writefln;

Please move the import to the beginning of the function.

@@ -289,6 +276,7 @@ else
LexerConfig config;
config.stringBehavior = StringBehavior.source;
auto tokens = byToken(readFile(f), config, &cache);


This change is unrelated. Please undo.

case TOK.string_:
aux = t.ustring[0 .. t.len].dup;
aux = "\"" ~ aux ~ "\"";
writef("<<%20s>>", aux);

There is no need for aux here. Just do: writef("<<%20s>>", "\"" ~ t.ustring[0 .. t.len] ~ "\""). Also, the dup does not seem necessary.

Same goes for subsequent uses of aux.
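Applied to the snippet above, the case would collapse to something like:

```d
case TOK.string_:
    // No temporary and no dup needed; build the quoted string inline.
    writef("<<%20s>>", "\"" ~ t.ustring[0 .. t.len] ~ "\"");
    break;
```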

import dmd.lexer;
import std.file;

auto input = readText(fileName);

Why is this function reading the file again? It seems that we have already read the bytes here: https://github.com/Dlang-UPB/D-scanner/pull/13/files#diff-d6826361035218d94dfa90eb567fe01c5b904a35a8fdfe45278569b3617c3717R196


The printTokenDump function should receive a string and simply lex it. That would also make the unittest a lot easier.
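A rough sketch of that shape (the Lexer constructor arguments below are assumptions; see dmd.lexer for the exact signature):

```d
import std.stdio : File, stdout;

void printTokenDump(string sourceCode, File output)
{
    import dmd.lexer : Lexer;
    import dmd.tokens : TOK;

    // Lex the string directly; no file I/O inside the function.
    scope lexer = new Lexer(null, sourceCode.ptr, 0, sourceCode.length, false, true);
    // ... iterate the tokens and write one formatted line per token to `output` ...
}

unittest
{
    // The unittest can now pass a source snippet directly,
    // with no temporary files involved.
    printTokenDump(q{int a = 1;}, stdout);
}
```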

break;
case TOK.int32Literal:
writef("<<%20d>>", t.intvalue);
empty = false;

It seems that empty should just be set to true if the default case is taken. You would have fewer lines of code that way.
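In other words, something along these lines (the token kinds come from the surrounding diff; the rest is illustrative):

```d
bool empty = false;
switch (t.value)
{
case TOK.int32Literal:
    writef("<<%20d>>", t.intvalue);
    break;
// ... other literal cases, none of which touch `empty` ...
default:
    // Only the default case marks the token as having no printable value.
    empty = true;
    break;
}
```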

@RazvanN7 (Collaborator) left a comment

Simplify the test case to not use files.

@@ -276,7 +276,6 @@ else
LexerConfig config;
config.stringBehavior = StringBehavior.source;
auto tokens = byToken(readFile(f), config, &cache);


Unrelated change.

{
assert(exists(deleteme));
remove(deleteme);
}

file.write(

Why are we creating a file, putting the text into it, and then reading that text back to pass it to printTokenDump? We want to test only the functionality we've added, which is the token dump. We can trust that the library implementations of read and write work correctly.

{
import dmd.tokens;
import dmd.lexer;
import std.file;

These imports need to be selective to pass the linter test.
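For example (the exact symbols depend on what the unittest body actually uses):

```d
import dmd.tokens : TOK;
import dmd.lexer : Lexer;
import std.file : exists, remove, write;
```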
