
LLama2-7b - baseline

Created on November 1|Last edited on November 1

Results


[Results table: one row per evaluation example, with columns prompt.count, prompt, user, answer, generation, and answer == generation (exact-match flag); the view appears filtered to rows where answer == generation is False. Each row also logs the Hugging Face generation-config fields: max_length, max_new_tokens, min_length, min_new_tokens, early_stopping, max_time, do_sample, num_beams, num_beam_groups, penalty_alpha, use_cache, temperature, top_k, top_p, typical_p, epsilon_cutoff, eta_cutoff, diversity_penalty, repetition_penalty, encoder_repetition_penalty, length_penalty, no_repeat_ngram_size, bad_words_ids, force_words_ids, renormalize_logits, constraints, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, exponential_decay_length_penalty, suppress_tokens, begin_suppress_tokens, forced_decoder_ids, sequence_bias, guidance_scale, low_memory, num_return_sequences, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, pad_token_id, bos_token_id, eos_token_id, encoder_no_repeat_ngram_size, decoder_start_token_id, generation_kwargs, _from_model_config, and transformers_version.]
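The long run of columns mirrors the fields of a Hugging Face `GenerationConfig`. A minimal sketch of a decoding setup touching a few of the logged fields (the values here are illustrative defaults, not the actual settings recorded for this run):

```python
# Illustrative generation settings; field names match the logged columns,
# values are placeholders, not this run's configuration.
generation_kwargs = {
    "max_new_tokens": 256,       # cap on newly generated tokens
    "do_sample": False,          # greedy decoding when False
    "num_beams": 1,              # 1 = no beam search
    "temperature": 1.0,          # softmax temperature (only matters when sampling)
    "top_k": 50,                 # top-k filtering cutoff
    "top_p": 1.0,                # nucleus-sampling mass
    "repetition_penalty": 1.0,   # 1.0 = no penalty
}

# With transformers installed, these could be passed through as
#   model.generate(**inputs, **generation_kwargs)
print(generation_kwargs["max_new_tokens"])
```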
Run: macabre-witch-95
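The `answer == generation` column is a strict exact-match flag. A minimal sketch of how such a flag and an overall accuracy could be computed (hypothetical helper and sample rows for illustration, not data from this run):

```python
def exact_match(answer: str, generation: str) -> bool:
    # Strict string equality after trimming surrounding whitespace.
    return answer.strip() == generation.strip()

# Hypothetical rows standing in for the logged table.
rows = [
    {"answer": "Paris", "generation": "Paris"},
    {"answer": "4", "generation": "5"},
]

flags = [exact_match(r["answer"], r["generation"]) for r in rows]
accuracy = sum(flags) / len(flags)
print(flags, accuracy)  # → [True, False] 0.5
```

Note that exact match is unforgiving: trailing punctuation or a paraphrased answer scores False even when the generation is semantically correct, which is worth keeping in mind when reading the filtered rows.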