Results

[Panel: summary metrics Acc_lousy and Accuracy; the Acc_lousy value shown is 0.09677.]
[Tables: two views of runs.summary["eval_predictions"] — a grouped view (1 row) and the full table (52 of 62 rows shown) — with columns prompt, user, answer, generation, and answer == generation.]
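For context, a table and exact-match metric like these can be produced with a short logging loop. This is a minimal sketch, not the report's actual code: the records list, the strip-based string comparison, and the loop itself are assumptions, and Acc_lousy is left out because its definition does not survive in this export.

```python
import wandb

# Hypothetical eval records; in the real run these come from generating
# an answer for each of the 62 eval prompts. Contents are placeholders.
records = [
    {"prompt": "detailed instructions...", "user": "What is 2+2?",
     "answer": "4", "generation": "4"},
    # ... one dict per eval example
]

run = wandb.init(project="otto", entity="capecape")

# Same columns as the eval_predictions panel above.
table = wandb.Table(
    columns=["prompt", "user", "answer", "generation", "answer == generation"]
)
matches = 0
for r in records:
    exact = r["answer"].strip() == r["generation"].strip()  # assumed comparison
    matches += exact
    table.add_data(r["prompt"], r["user"], r["answer"], r["generation"], exact)

# Exact-match accuracy over the eval set; Acc_lousy is omitted because the
# report does not show how it is computed.
run.log({"eval_predictions": table, "Accuracy": matches / len(records)})
run.finish()
```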
[Run config: the full transformers GenerationConfig logged for the run — length controls (max_length, max_new_tokens, min_length, min_new_tokens, length_penalty, exponential_decay_length_penalty), sampling controls (do_sample, temperature, top_k, top_p, typical_p, epsilon_cutoff, eta_cutoff, penalty_alpha), beam search (num_beams, num_beam_groups, diversity_penalty, early_stopping), repetition controls (repetition_penalty, encoder_repetition_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size), token constraints (bad_words_ids, force_words_ids, constraints, suppress_tokens, begin_suppress_tokens, forced_bos_token_id, forced_eos_token_id, forced_decoder_ids, sequence_bias), special token ids (pad_token_id, bos_token_id, eos_token_id, decoder_start_token_id), and output flags (num_return_sequences, output_attentions, output_hidden_states, output_scores, return_dict_in_generate), plus use_cache, guidance_scale, low_memory, remove_invalid_values, renormalize_logits, max_time, generation_kwargs, _from_model_config, and transformers_version. The exported page lists only the keys, not their values.]
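As a rough illustration of where those keys come from, here is a sketch of building a transformers GenerationConfig. The specific values are assumptions, since only the key names survive in this export.

```python
from transformers import GenerationConfig

# Illustrative values only; the run's actual values are not shown above.
gen_config = GenerationConfig(
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.9,
    repetition_penalty=1.1,
    pad_token_id=0,  # assumed; LLaMA-2 checkpoints commonly pad with token 0
    bos_token_id=1,
    eos_token_id=2,
)

# Passing the config to generate() applies every field, including the ones
# left at their defaults:
#   model.generate(**inputs, generation_config=gen_config)
# and gen_config.to_dict() yields the flat key/value mapping that W&B
# renders as the config table above.
print(gen_config.to_dict())
```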
[Config and predictions above are from run macabre-witch-95.]
https://wandb.ai/capecape/otto/reports/LLama2-7b-detailed-prompt--Vmlldzo1ODM1OTc0