
Tips for training from scratch? #8

Closed
@luchris429

Description


Hello,

I've been playing with this architecture on nanoGPT. While I can get other architectures to play nicely there (e.g. RMT), I'm really struggling to get GLA to perform well.
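For context, here's roughly the kind of layer I'm slotting into the nanoGPT block in place of causal self-attention. This is just a naive recurrent sketch with a simplified sigmoid gate, not this repo's chunked/fused kernels, so the module name and gate parameterization below are my own assumptions:

```python
# Naive recurrent sketch of gated linear attention (GLA) in plain PyTorch.
# Simplified gate (full-rank sigmoid projection) just to show the setup;
# not the optimized implementation from this repo.
import torch
import torch.nn as nn


class NaiveGLA(nn.Module):
    def __init__(self, d_model: int, n_head: int):
        super().__init__()
        assert d_model % n_head == 0
        self.n_head = n_head
        self.d_head = d_model // n_head
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.k_proj = nn.Linear(d_model, d_model, bias=False)
        self.v_proj = nn.Linear(d_model, d_model, bias=False)
        # Data-dependent forget gate per key dimension (simplified).
        self.g_proj = nn.Linear(d_model, d_model, bias=True)
        self.o_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, C = x.shape
        H, D = self.n_head, self.d_head
        q = self.q_proj(x).view(B, T, H, D)
        k = self.k_proj(x).view(B, T, H, D)
        v = self.v_proj(x).view(B, T, H, D)
        # Gate in (0, 1), raised to 1/16 so the decay stays close to 1
        # (my guess at a reasonable temperature).
        g = torch.sigmoid(self.g_proj(x)).view(B, T, H, D) ** (1.0 / 16)

        S = x.new_zeros(B, H, D, D)  # recurrent state per head
        outs = []
        for t in range(T):  # O(T) recurrence, no softmax
            # S_t = diag(g_t) @ S_{t-1} + k_t v_t^T
            S = g[:, t].unsqueeze(-1) * S \
                + k[:, t].unsqueeze(-1) * v[:, t].unsqueeze(-2)
            # o_t = q_t^T S_t
            outs.append(torch.einsum('bhd,bhde->bhe', q[:, t], S))
        o = torch.stack(outs, dim=1).reshape(B, T, C)
        return self.o_proj(o)
```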

Do you have any tips or code for training? For example, is there a training repository you'd recommend, or are there key hyperparameter differences from standard transformers?

Thanks!
