Commit eebef2c

Exclude input from output (#388)
* Enable CodeQL for pull requests (#374)

  This reverts commit 1a04923.

* Update

Co-authored-by: bandish-shah <[email protected]>
1 parent: 70d9151

File tree: 2 files changed (+10, -4 lines)


.github/workflows/codeql-analysis.yml

Lines changed: 3 additions & 0 deletions
@@ -14,6 +14,9 @@ name: "CodeQL"
 on:
   push:
     branches: [ main ]
+  pull_request:
+    # The branches below must be a subset of the branches above
+    branches: [ main ]
   schedule:
     - cron: '0 9 * * 1' # Every Monday at 09:00 (9:00 AM)

examples/inference-deployments/mpt/mpt_7b_ft_handler.py

Lines changed: 7 additions & 4 deletions
@@ -307,11 +307,14 @@ def predict(self, model_requests: List[Dict]) -> List[str]:
         start_lengths = torch.IntTensor(start_lengths)
         tokens_batch = self.model(start_ids, start_lengths, **generate_kwargs)
         outputs = []
-        for tokens in tokens_batch:
+        for i, tokens in enumerate(tokens_batch):
             for beam_id in range(generate_kwargs['beam_width']):
-                # Do not exclude context input from the output
-                # token = tokens[beam_id][start_lengths[i]:]
-                token = tokens[beam_id]
+                # Exclude context input from the output
+                token = tokens[beam_id][start_lengths[i]:]
+
+                # Do this to not exclude context input from the output
+                # token = tokens[beam_id]
+
                 # stop at end_id; This is the same as eos_token_id
                 token = token[token != self.end_id]
                 output = self.tokenizer.decode(token)
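The handler change above slices each beam at the request's prompt length so the echoed context tokens are dropped before decoding, then filters out `end_id` tokens. A minimal sketch of that logic, using plain Python lists in place of the handler's torch tensors; `strip_context` and the sample token IDs below are hypothetical illustrations, not part of the handler:

```python
def strip_context(tokens_batch, start_lengths, end_id):
    """Drop each request's prompt tokens, then filter out end_id tokens."""
    outputs = []
    for i, tokens in enumerate(tokens_batch):
        for beam in tokens:
            # Exclude the context input from the output (the commit's new behavior)
            generated = beam[start_lengths[i]:]
            # Drop end_id tokens (the handler's eos_token_id equivalent)
            generated = [t for t in generated if t != end_id]
            outputs.append(generated)
    return outputs

# One request whose prompt is 3 tokens long, with a single beam
tokens_batch = [[[101, 102, 103, 7, 8, 9, 0]]]
print(strip_context(tokens_batch, start_lengths=[3], end_id=0))
# [[7, 8, 9]]
```

Before this commit, the prompt token IDs (101, 102, 103 in this sketch) would have been decoded back into the response alongside the generated tokens.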
