Commit 5619626

Improve huge_patch test (#60)
* Measure just the performance of the parser. We don't want to measure the time spent on test data preparation, because it can be slow in some Python interpreters depending on how string concatenation is implemented.

* Improve the memory efficiency of test data preparation. Preparing the data for the huge_patch test could be very slow: since strings are immutable, each concatenation creates a new string and discards the old one. The new approach builds the large string from smaller parts with the ''.join() method, which reduces memory usage and improves performance by minimizing the number of new string objects created.
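The trade-off the commit message describes can be sketched as follows. This is a minimal illustration of the technique, not code from the repository; the function names are hypothetical:

```python
# Hypothetical sketch: building a large string from many small pieces.

def build_with_concat(n):
    # Repeated "+=" may copy the whole string on every iteration,
    # giving quadratic behavior on interpreters that lack CPython's
    # in-place concatenation optimization.
    text = ""
    for i in range(n):
        text += "+" + hex(i) + "\n"
    return text

def build_with_join(n):
    # Collect the small pieces first, then join them once:
    # linear total work and far fewer intermediate string objects.
    parts = ["+" + hex(i) + "\n" for i in range(n)]
    return "".join(parts)

assert build_with_concat(1000) == build_with_join(1000)
```

Both functions produce identical output; only the allocation pattern differs, which is exactly why the test's data preparation switches to the join approach.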
1 parent 7691687 commit 5619626

File tree

1 file changed, +5 -5 lines changed


tests/test_patch.py

@@ -1426,8 +1426,7 @@ def test_svn_mixed_line_ends(self):
         self.assertEqual(results[0].header, expected_header)
 
     def test_huge_patch(self):
-        start_time = time.time()
-        text = """diff --git a/huge.file b/huge.file
+        text_parts = ["""diff --git a/huge.file b/huge.file
 index 0000000..1111111 100644
 --- a/huge.file
 +++ a/huge.file
@@ -1439,9 +1438,10 @@ def test_huge_patch(self):
 -44444444
 +55555555
 +66666666
-"""
-        for n in range(0, 1000000):
-            text += "+" + hex(n) + "\n"
+"""]
+        text_parts.extend("+" + hex(n) + "\n" for n in range(0, 1000000))
+        text = ''.join(text_parts)
+        start_time = time.time()
         result = list(wtp.patch.parse_patch(text))
         self.assertEqual(1, len(result))
         self.assertEqual(1000007, len(result[0].changes))

0 commit comments
