
Starting out support for lmql with conda on wsl #355

Open
@Gayanukaa


Following the guidance in the documentation, I took the steps below to try out LMQL and ran into several issues:

  1. I started in a blank folder in a Linux environment and ran conda env create -f requirements.yml -n lmql-dev with the requirements file
  2. After activating the conda environment, I ran the activation script with source activate-dev.sh
  3. I then installed LMQL into the environment with GPU support using pip install lmql[hf]
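A common cause of the ModuleNotFoundError described below is pip installing into a different interpreter than the one running the scripts (easy to hit when mixing conda environments). A short diagnostic like the following can confirm which interpreter is active and whether a module resolves in it; the helper is my own sketch, not part of LMQL:

```python
import importlib.util
import sys

def diagnose(module: str) -> str:
    """Report whether `module` resolves in the current interpreter."""
    spec = importlib.util.find_spec(module)
    if spec is None:
        return f"{module} NOT found for interpreter {sys.executable}"
    return f"{module} found at {spec.origin}"

# Inside the activated lmql-dev environment, the path printed here
# should point inside the conda prefix; if not, pip installed elsewhere.
print(diagnose("lmql"))
```

If the module is reported as not found, running python -m pip install lmql[hf] (with the same python the scripts use) usually fixes the mismatch.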

To test a local model (Llama-2-7B):

  1. I added llama.cpp support using CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python, as per the llama-cpp-python instructions
  2. To load the .gguf file of Llama-2-7B-GGUF, I started the inference server with lmql serve-model llama.cpp:/home/gayanukaa/llm-test/lmql-test/llama-2-7b.Q4_K_M.gguf --cuda --port 9999 --trust_remote_code True
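Before pointing a client at the server, it can help to confirm that something is actually listening on port 9999; this small probe (names are mine, not LMQL's) distinguishes "server never started" from "query is just slow":

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP listener accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If this prints False while serve-model appears to be running, the
# server likely failed to start (check its console for a traceback).
print(port_open("localhost", 9999))
```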

From this point onward I faced several issues:

  • Despite installing lmql, I got a ModuleNotFoundError when running Python files.
    [screenshot]
  • After loading the inference server (the serve-model step above), I tried running Python scripts via one of the three documented methods, and lmql run for .lmql files. There was no output; I was not sure whether loading simply takes a long time, but it stays stuck at this point.
    [screenshot]
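For reference, this is the kind of minimal query I was trying against the served model. The endpoint keyword and query shape follow my reading of the LMQL docs, so treat this as a sketch rather than confirmed API:

```lmql
argmax
    "Q: What is the capital of France?\n"
    "A: [ANSWER]"
from
    lmql.model("llama.cpp:/home/gayanukaa/llm-test/lmql-test/llama-2-7b.Q4_K_M.gguf",
               endpoint="localhost:9999")
where
    STOPS_AT(ANSWER, "\n")
```

Saved as test.lmql, this should run with lmql run test.lmql, but for me it hangs with no output as described above.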

I would appreciate any help or guidance to resolve these issues so I can run and learn LMQL.
