Improve memory efficiency for test data preparation
Data preparation for the huge_patch test could be very slow because Python
strings are immutable: each concatenation creates a new string object and
discards the old one.

The new approach builds the large string from smaller parts with the
''.join() method. This reduces memory usage and improves performance
because it minimizes the number of intermediate string objects that are
created.
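
For illustration only (this snippet is not part of the commit), a minimal sketch
comparing the two ways of building the large test string. The names N,
concat_seconds, and join_seconds are hypothetical, and absolute timings will
vary: CPython can sometimes optimize in-place str concatenation, but the
join() approach avoids the quadratic worst case entirely.

    import time

    N = 1_000_000  # number of generated patch lines, as in the test

    # Approach replaced by the commit: repeated concatenation.
    # Each += may build a brand-new string, copying everything so far.
    start = time.time()
    text = ""
    for n in range(N):
        text += "+" + hex(n) + "\n"
    concat_seconds = time.time() - start

    # Approach introduced by the commit: collect the parts, join them once.
    start = time.time()
    parts = ["+" + hex(n) + "\n" for n in range(N)]
    joined = "".join(parts)
    join_seconds = time.time() - start

    assert text == joined
    print(f"concatenation: {concat_seconds:.3f}s  join: {join_seconds:.3f}s")

The same idea appears in the diff below: the test collects the patch header and
the generated lines in text_parts and joins them once at the end.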
arkamar committed Jun 20, 2024
1 parent 1cfdf62 commit 247a510
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions tests/test_patch.py
@@ -1426,7 +1426,7 @@ def test_svn_mixed_line_ends(self):
         self.assertEqual(results[0].header, expected_header)
 
     def test_huge_patch(self):
-        text = """diff --git a/huge.file b/huge.file
+        text_parts = ["""diff --git a/huge.file b/huge.file
 index 0000000..1111111 100644
 --- a/huge.file
 +++ a/huge.file
@@ -1438,9 +1438,9 @@ def test_huge_patch(self):
 -44444444
 +55555555
 +66666666
-"""
-        for n in range(0, 1000000):
-            text += "+" + hex(n) + "\n"
+"""]
+        text_parts.extend("+" + hex(n) + "\n" for n in range(0, 1000000))
+        text = ''.join(text_parts)
         start_time = time.time()
         result = list(wtp.patch.parse_patch(text))
         self.assertEqual(1, len(result))