
Compile llama on local PC #204

Open

Description

@bowangnio

Hi,
Currently qai_hub_models can only compile models in the cloud. Is there a way to compile them on a local PC?

The QNN SDK version used in the cloud keeps changing, and it has bugs. Sometimes I build the model successfully, and sometimes it fails because of cloud-side issues. If I could compile locally, I could keep the SDK version fixed and get a successful compile every time.
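For context, this is roughly how a compile is submitted today with the qai_hub Python client; everything runs in AI Hub's cloud, which picks the QNN SDK version. This is only a minimal sketch: the model file, device name, and options string are illustrative, not the exact values from my setup.

```python
import qai_hub as hub

# Submitting a compile job sends the model to the AI Hub cloud service;
# there is no flag to run the compilation on the local machine.
compile_job = hub.submit_compile_job(
    model="llama_part.onnx",                        # hypothetical exported model file
    device=hub.Device("Samsung Galaxy S24 (Family)"),  # illustrative target device
    options="--target_runtime qnn_context_binary",  # illustrative target runtime
)

# The compiled artifact comes back from the cloud once the job finishes.
target_model = compile_job.get_target_model()
```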

Metadata

Labels

question: Please ask any questions on Slack. This issue will be closed once responded to.
