
Welcome to fastai (a Korean translation of the fast.ai tutorial)

fastai simplifies training fast and accurate neural networks using modern best practices.


Installing

You can use fastai without any installation by using Google Colab. In fact, every page of this documentation is also available as an interactive notebook; click "Open in colab" at the top of any page to open it (be sure to change the Colab runtime to "GPU" so it runs quickly!). See the fast.ai documentation on Using Colab for more information.

You can install fastai on your own machine with conda (recommended), as long as you're running Linux or Windows (NB: Mac is not supported). For Windows, please see the "Windows Support" section below for important notes.

If you're using miniconda (recommended), then run the command below (note that if you use mamba instead of conda, the install process will be much faster and more reliable):

conda install -c fastchan fastai

...or if you're using Anaconda, then run:

conda install -c fastchan fastai anaconda

To install with pip, use: pip install fastai. If you install with pip, you should install PyTorch first by following the PyTorch installation instructions.
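
After installing by any of these methods, a quick sanity check (a suggestion, not part of the official docs) is to import the library and confirm whether PyTorch can see your GPU:

import fastai, torch
print(fastai.__version__)          # installed fastai version
print(torch.cuda.is_available())   # True if PyTorch can see a CUDA GPU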

If you plan to develop fastai yourself, or want to be on the cutting edge, you can use an editable install (if you do this, you should also use an editable install of fastcore to go with it.) First install PyTorch, and then:

git clone https://github.com/fastai/fastai
pip install -e "fastai[dev]"

Learning fastai

The best way to get started with fastai (and deep learning) is to read the book, and complete the free course.

To see what's possible with fastai, take a look at the Quick Start, which shows how to use around 5 lines of code to build an image classifier, an image segmentation model, a text sentiment model, a recommendation system, and a tabular model. For each of the applications, the code is much the same.
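
As a flavor of what that looks like, here is a rough sketch of the Quick Start's image-classifier example (check the Quick Start itself for the exact, up-to-date code; the dataset and helper names below follow the current fastai vision API):

from fastai.vision.all import *

# Download the Oxford-IIIT Pets dataset; in this dataset, filenames that
# start with an uppercase letter are cats
path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)   # transfer learning: fit the new head, then unfreeze and fine-tune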

Read through the Tutorials to learn how to train your own models on your own datasets. Use the navigation sidebar to look through the fastai documentation. Every class, function, and method is documented here.

To learn about the design and motivation of the library, read the peer-reviewed paper.

About fastai

fastai is a deep learning library which provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches. It aims to do both things without substantial compromises in ease of use, flexibility, or performance. This is possible thanks to a carefully layered architecture, which expresses common underlying patterns of many deep learning and data processing techniques in terms of decoupled abstractions. These abstractions can be expressed concisely and clearly by leveraging the dynamism of the underlying Python language and the flexibility of the PyTorch library. fastai includes:

  • A new type dispatch system for Python along with a semantic type hierarchy for tensors (see the sketch after this list)
  • A GPU-optimized computer vision library which can be extended in pure Python
  • An optimizer which refactors out the common functionality of modern optimizers into two basic pieces, allowing optimization algorithms to be implemented in 4–5 lines of code
  • A novel 2-way callback system that can access any part of the data, model, or optimizer and change it at any point during training
  • A new data block API
  • And much more...
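
As a small illustration of the first item, here is a sketch of type dispatch using typedispatch from fastcore (the library fastai builds on); the function name describe is made up for the example:

from fastcore.dispatch import typedispatch

# Register one implementation per annotated argument type; each call picks
# the implementation matching the runtime type of the argument.
@typedispatch
def describe(x: int): return f"an int: {x}"

@typedispatch
def describe(x: str): return f"a str: {x}"

print(describe(3))     # an int: 3
print(describe("hi"))  # a str: hi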

fastai is organized around two main design goals: to be approachable and rapidly productive, while also being deeply hackable and configurable. It is built on top of a hierarchy of lower-level APIs which provide composable building blocks. This way, a user wanting to rewrite part of the high-level API or add particular behavior to suit their needs does not have to learn how to use the lowest level.

Layered API

Migrating from other libraries

It's very easy to migrate from plain PyTorch, Ignite, or any other PyTorch-based library, or even to use fastai in conjunction with other libraries. Generally, you'll be able to keep all of your existing data processing code, while reducing the amount of code you need for training and more easily taking advantage of modern best practices. The fastai documentation includes migration guides from some popular libraries to help you on your way.
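
For example, the sketch below shows the general pattern when coming from plain PyTorch (the synthetic data and model here are stand-ins for your existing code): keep your torch Datasets and nn.Module as they are, wrap them with fastai's DataLoaders, and let a Learner drive training with modern defaults such as one-cycle scheduling.

import torch
from torch import nn
from torch.utils.data import TensorDataset
from fastai.basics import *             # Learner, DataLoaders, accuracy, ...
from fastai.callback.schedule import *  # adds Learner.fit_one_cycle, lr_find, ...

# Stand-ins for an existing PyTorch dataset and model
x = torch.randn(256, 10)
y = (x.sum(dim=1) > 0).long()
train_ds = TensorDataset(x[:200], y[:200])
valid_ds = TensorDataset(x[200:], y[200:])
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

# Wrap the existing datasets; fastai supplies the training loop, scheduling,
# metrics, callbacks, and so on
dls = DataLoaders.from_dsets(train_ds, valid_ds, bs=32)
learn = Learner(dls, model, loss_func=nn.CrossEntropyLoss(), metrics=accuracy)
learn.fit_one_cycle(1)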

Windows Support

When installing with mamba or conda, replace -c fastchan in the install command with -c pytorch -c nvidia -c fastai, since fastchan is not currently supported on Windows.

Due to Python multiprocessing issues on Jupyter and Windows, num_workers for DataLoader is reset to 0 automatically to avoid Jupyter hanging. This makes tasks such as computer vision in Jupyter on Windows many times slower than on Linux. This limitation doesn't exist if you use fastai from a script.
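
If you do need worker processes on Windows, one option (a minimal sketch; the dataset choice and script layout here are only illustrative) is to put the training code in a script behind the standard __main__ guard:

# train.py -- run with `python train.py` rather than inside Jupyter
from fastai.vision.all import *

def main():
    path = untar_data(URLs.MNIST_SAMPLE)
    # num_workers > 0 works from a script; Jupyter on Windows forces it to 0
    dls = ImageDataLoaders.from_folder(path, num_workers=4)
    learn = vision_learner(dls, resnet18, metrics=accuracy)
    learn.fine_tune(1)

if __name__ == '__main__':   # required for multiprocessing on Windows
    main()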

See this example to fully leverage the fastai API on Windows.

Tests

To run the tests in parallel, launch:

nbdev_test_nbs or make test

For all the tests to pass, you'll need to install the following optional dependencies:

pip install "sentencepiece<0.1.90" wandb tensorboard albumentations pydicom opencv-python scikit-image pyarrow kornia \
    catalyst captum neptune-cli

Tests are written using nbdev; see, for example, the documentation for test_eq.
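
For instance, test_eq and test_ne come from fastcore (which the nbdev-based test suite relies on) and are simple assertion helpers:

from fastcore.test import test_eq, test_ne

test_eq(2 + 2, 4)         # passes silently
test_ne([1, 2], [1, 3])   # passes silently
test_eq('a', 'b')         # raises AssertionError with a readable message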

Contributing

After you clone this repository, please run nbdev_install_git_hooks in your terminal. This sets up git hooks that clean the notebooks, removing the extraneous metadata they store (e.g. which cells you ran), which otherwise causes unnecessary merge conflicts.

Before submitting a PR, check that the local library and notebooks match. The script nbdev_diff_nbs can let you know if there is a difference between the local library and the notebooks.

  • If you made a change to the notebooks in one of the exported cells, you can export it to the library with nbdev_build_lib or make fastai.
  • If you made a change to the library, you can export it back to the notebooks with nbdev_update_lib.

Docker Containers

For those interested in official docker containers for this project, they can be found here.
