Bug: Failed to load quantized DeepSeek-V2-Lite-Chat model #8254

Closed
@starP-W

Description

What happened?

I am trying to load a quantized (Q4_K_M) model of DeepSeek-V2-Lite-Chat and interact with it via NextChat or llama-cli, but it fails to generate a reasonable reply.
[screenshot of the model's output omitted]
I'm wondering whether my parameter settings are wrong or something else is the problem.
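For reference, a minimal request against the server's OpenAI-compatible `/v1/chat/completions` endpoint (the one the log below shows NextChat hitting) can be built like this. This is only a sketch: the payload keys follow the standard chat-completions schema, and the `temperature` and `max_tokens` values are illustrative, not the settings actually used. Capping `max_tokens` bounds the reply even if the model never emits its EOS token.

```python
import json

# Build a chat-completions payload; max_tokens limits generation so a
# model that fails to stop cannot run until the context window is full.
payload = {
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,   # illustrative sampling setting
    "max_tokens": 256,    # illustrative generation cap
}
body = json.dumps(payload)
# POST `body` to http://127.0.0.1:8080/v1/chat/completions
# (e.g. with urllib.request or curl) to query the running llama-server.
```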

Name and Version

./llama-server --version
version: 3281 (023b880)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu

What operating system are you seeing the problem on?

Linux, Windows

Relevant log output

./llama-server -m /root/autodl-fs/deepseek-v2-lite-chat-q4_k_m.gguf -ngl 999 -c 2048
INFO [                    main] build info | tid="139753438261248" timestamp=1719914866 build=3281 commit="023b8807"
INFO [                    main] system info | tid="139753438261248" timestamp=1719914866 n_threads=56 n_threads_batch=-1 total_threads=112 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | "
llama_model_loader: loaded meta data with 38 key-value pairs and 377 tensors from /root/autodl-fs/deepseek-v2-lite-chat-q4_k_m.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.name str              = DeepSeek-V2-Lite-Chat
llama_model_loader: - kv   2:                      deepseek2.block_count u32              = 27
llama_model_loader: - kv   3:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv   4:                 deepseek2.embedding_length u32              = 2048
llama_model_loader: - kv   5:              deepseek2.feed_forward_length u32              = 10944
llama_model_loader: - kv   6:             deepseek2.attention.head_count u32              = 16
llama_model_loader: - kv   7:          deepseek2.attention.head_count_kv u32              = 16
llama_model_loader: - kv   8:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv   9: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                deepseek2.expert_used_count u32              = 6
llama_model_loader: - kv  11:                          general.file_type u32              = 15
llama_model_loader: - kv  12:        deepseek2.leading_dense_block_count u32              = 1
llama_model_loader: - kv  13:                       deepseek2.vocab_size u32              = 102400
llama_model_loader: - kv  14:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  15:             deepseek2.attention.key_length u32              = 192
llama_model_loader: - kv  16:           deepseek2.attention.value_length u32              = 128
llama_model_loader: - kv  17:       deepseek2.expert_feed_forward_length u32              = 1408
llama_model_loader: - kv  18:                     deepseek2.expert_count u32              = 64
llama_model_loader: - kv  19:              deepseek2.expert_shared_count u32              = 2
llama_model_loader: - kv  20:             deepseek2.expert_weights_scale f32              = 1.000000
llama_model_loader: - kv  21:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  22:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  23:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  24: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  25: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.070700
llama_model_loader: - kv  26:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  27:                         tokenizer.ggml.pre str              = deepseek-llm
llama_model_loader: - kv  28:                      tokenizer.ggml.tokens arr[str,102400]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  29:                  tokenizer.ggml.token_type arr[i32,102400]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  30:                      tokenizer.ggml.merges arr[str,99757]   = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
llama_model_loader: - kv  31:                tokenizer.ggml.bos_token_id u32              = 100000
llama_model_loader: - kv  32:                tokenizer.ggml.eos_token_id u32              = 100001
llama_model_loader: - kv  33:            tokenizer.ggml.padding_token_id u32              = 100001
llama_model_loader: - kv  34:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  35:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  36:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  37:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  108 tensors
llama_model_loader: - type q5_0:   14 tensors
llama_model_loader: - type q8_0:   13 tensors
llama_model_loader: - type q4_K:  229 tensors
llama_model_loader: - type q6_K:   13 tensors
llm_load_vocab: special tokens cache size = 2400
llm_load_vocab: token to piece cache size = 0.6659 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = deepseek2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 102400
llm_load_print_meta: n_merges         = 99757
llm_load_print_meta: n_ctx_train      = 163840
llm_load_print_meta: n_embd           = 2048
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 16
llm_load_print_meta: n_layer          = 27
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 192
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 3072
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 10944
llm_load_print_meta: n_expert         = 64
llm_load_print_meta: n_expert_used    = 6
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = yarn
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 0.025
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 16B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 15.71 B
llm_load_print_meta: model size       = 9.65 GiB (5.28 BPW) 
llm_load_print_meta: general.name     = DeepSeek-V2-Lite-Chat
llm_load_print_meta: BOS token        = 100000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 100001 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token        = 100001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token         = 126 'Ä'
llm_load_print_meta: max token length = 256
llm_load_print_meta: n_layer_dense_lead   = 1
llm_load_print_meta: n_lora_q             = 0
llm_load_print_meta: n_lora_kv            = 512
llm_load_print_meta: n_ff_exp             = 1408
llm_load_print_meta: n_expert_shared      = 2
llm_load_print_meta: expert_weights_scale = 1.0
llm_load_print_meta: rope_yarn_log_mul    = 0.0707
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.32 MiB
llm_load_tensors: offloading 27 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 28/28 layers to GPU
llm_load_tensors:        CPU buffer size =   112.50 MiB
llm_load_tensors:      CUDA0 buffer size =  9767.98 MiB
.....................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 0.025
llama_kv_cache_init:      CUDA0 KV buffer size =   540.00 MiB
llama_new_context_with_model: KV self size  =  540.00 MiB, K (f16):  324.00 MiB, V (f16):  216.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.78 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   212.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =     8.01 MiB
llama_new_context_with_model: graph nodes  = 1924
llama_new_context_with_model: graph splits = 2
INFO [                    init] initializing slots | tid="139753438261248" timestamp=1719914869 n_slots=1
INFO [                    init] new slot | tid="139753438261248" timestamp=1719914869 id_slot=0 n_ctx_slot=2048
INFO [                    main] model loaded | tid="139753438261248" timestamp=1719914869
INFO [                    main] chat template | tid="139753438261248" timestamp=1719914869 chat_example="You are a helpful assistant\n\nUser: Hello\n\nAssistant: Hi there<|end▁of▁sentence|>User: How are you?\n\nAssistant:" built_in=true
INFO [                    main] HTTP server listening | tid="139753438261248" timestamp=1719914869 n_threads_http="111" port="8080" hostname="127.0.0.1"
INFO [            update_slots] all slots are idle | tid="139753438261248" timestamp=1719914869
INFO [      log_server_request] request | tid="139751737454592" timestamp=1719914882 remote_addr="127.0.0.1" remote_port=36850 status=200 method="OPTIONS" path="/v1/chat/completions" params={}
INFO [   launch_slot_with_task] slot is processing task | tid="139753438261248" timestamp=1719914882 id_slot=0 id_task=0
INFO [            update_slots] kv cache rm [p0, end) | tid="139753438261248" timestamp=1719914882 id_slot=0 id_task=0 p0=0
INFO [      log_server_request] request | tid="139751737454592" timestamp=1719914883 remote_addr="127.0.0.1" remote_port=36850 status=200 method="POST" path="/v1/chat/completions" params={}
INFO [            update_slots] slot released | tid="139753438261248" timestamp=1719914883 id_slot=0 id_task=0 n_ctx=2048 n_past=209 n_system_tokens=0 n_cache_tokens=0 truncated=false
INFO [            update_slots] all slots are idle | tid="139753438261248" timestamp=1719914883
INFO [      log_server_request] request | tid="139750853832704" timestamp=1719914952 remote_addr="127.0.0.1" remote_port=36852 status=200 method="OPTIONS" path="/v1/chat/completions" params={}
INFO [   launch_slot_with_task] slot is processing task | tid="139753438261248" timestamp=1719914952 id_slot=0 id_task=113
INFO [            update_slots] kv cache rm [p0, end) | tid="139753438261248" timestamp=1719914952 id_slot=0 id_task=113 p0=0
INFO [      log_server_request] request | tid="139750853832704" timestamp=1719914957 remote_addr="127.0.0.1" remote_port=36852 status=200 method="POST" path="/v1/chat/completions" params={}
INFO [            update_slots] slot released | tid="139753438261248" timestamp=1719914957 id_slot=0 id_task=113 n_ctx=2048 n_past=604 n_system_tokens=0 n_cache_tokens=0 truncated=false
INFO [            update_slots] all slots are idle | tid="139753438261248" timestamp=1719914957
INFO [      log_server_request] request | tid="139750845440000" timestamp=1719914957 remote_addr="127.0.0.1" remote_port=36854 status=200 method="OPTIONS" path="/v1/chat/completions" params={}
INFO [   launch_slot_with_task] slot is processing task | tid="139753438261248" timestamp=1719914957 id_slot=0 id_task=621
INFO [            update_slots] kv cache rm [p0, end) | tid="139753438261248" timestamp=1719914957 id_slot=0 id_task=621 p0=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719914976 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719914987 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719914999 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719915010 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719915021 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719915033 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719915044 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719915055 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719915067 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719915078 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719915089 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719915100 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719915112 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719915123 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
INFO [            update_slots] slot context shift | tid="139753438261248" timestamp=1719915134 id_slot=0 id_task=621 n_keep=1 n_left=2046 n_discard=1023 n_ctx=2048 n_past=2047 n_system_tokens=0 n_cache_tokens=0
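The repeated `slot context shift` lines above suggest the model kept generating without ever stopping, so the server discarded half the context over and over. The logged fields are consistent with the following arithmetic (a reconstruction from the log values, not the actual llama.cpp implementation):

```python
# Context-shift arithmetic inferred from the logged fields.
n_ctx, n_keep = 2048, 1
n_past = n_ctx - 1          # slot is full: 2047 tokens in the cache
n_left = n_past - n_keep    # 2046 tokens eligible to be discarded
n_discard = n_left // 2     # half are dropped per shift: 1023
```

Each shift frees 1023 token positions, generation fills them again, and the cycle repeats roughly every 11 seconds in the log.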
