Roadmap
=======

The goal of this roadmap is to align the efforts of the core EvaDB team and community contributors by describing the biggest focus areas for the next 6 months.

.. note::
    Please ping us on our `Slack <https://evadb.ai/slack>`_ if you have any questions or feedback on these focus areas.

LLM-based Data Wrangling
~~~~~~~~~~~~~~~~~~~~~~~~

* Prompt Engineering: More flexibility in constructing prompts, and a better developer experience and feedback loop for tuning them.
* LLM Cache: Reuse the results of LLM calls based on the model, prompt, and input columns.
* LLM Batching: Intelligently group multiple LLM calls into a single call to reduce cost and latency.
* LLM Cost Calculation and Estimation: Show the estimated cost metrics (i.e., time, token usage, and dollars) of the query at optimization time and the actual cost metrics after query execution.
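
To make these concrete, LLM calls in EvaDB appear as function invocations inside queries; here is a sketch (the ``ChatGPT`` function ships with EvaDB, while the table and column names are purely illustrative):

.. code-block:: sql

    -- One LLM call per row today. The planned cache would reuse results
    -- keyed on (model, prompt, input columns), and batching would group
    -- these per-row calls into fewer requests.
    SELECT ChatGPT('Summarize this review in one sentence.', review_text)
    FROM product_reviews;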

Classical AI Tasks
~~~~~~~~~~~~~~~~~~

* Accuracy: Show the accuracy of the training loop.
* Configuration Guidance: Provide guidance on how to configure the AutoML framework (e.g., which frequency to use for forecasting).
* Task Cost Calculation and Estimation: Show the estimated cost metrics (i.e., time) of the query at optimization time and the actual cost metrics after execution.
* Path to Scalability: Improve the efficiency of the query processing pipeline for large datasets.
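
As an example of the configuration guidance above, choosing the right frequency is a common stumbling block when training a forecasting function. A sketch based on EvaDB's forecasting syntax (the table, columns, and the exact ``FREQUENCY`` spelling should be treated as assumptions and checked against the forecasting docs):

.. code-block:: sql

    -- Train a time-series model on monthly data; picking 'M' (monthly)
    -- here is exactly the kind of choice guidance should help with.
    CREATE FUNCTION IF NOT EXISTS SalesForecast FROM
        (SELECT ds, y FROM monthly_sales)
    TYPE Forecasting
    PREDICT 'y'
    FREQUENCY 'M';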

We are looking forward to expanding our integrations, including data sources and AI functions, so that they can be used with the rest of the EvaDB ecosystem.

More Application Data Sources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

`GitHub <https://github.com/georgia-tech-db/evadb/tree/staging/evadb/third_party/databases/github>`_ is an **application data source** already available in EvaDB. Such data sources allow the developer to quickly build AI applications without focusing on extracting, loading, and transforming data from the application.
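
For reference, here is a sketch of how such a data source is wired up and queried, modeled on the GitHub integration (treat the parameter and table names as illustrative and check them against the data source's documentation page):

.. code-block:: sql

    -- Connect the application as a database, then query it like a table.
    CREATE DATABASE github_data WITH ENGINE = 'github', PARAMETERS = {
        "owner": "georgia-tech-db",
        "repo": "evadb"
    };

    SELECT * FROM github_data.stargazers;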

Data sources that are not available in EvaDB yet, but would be super relevant for emerging AI applications, include (but are not limited to) the following applications:

* YouTube
* Google Search
* Reddit
* arXiv
* Hacker News

When adding a data source to EvaDB, please add a documentation page in your PR explaining the usage. Here is an `illustrative documentation page <https://evadb.readthedocs.io/en/stable/source/reference/databases/github.html>`_ for the GitHub data source in EvaDB.

More AI Functions
~~~~~~~~~~~~~~~~~

Adding more AI functions to EvaDB will give app developers more choices while building AI applications.

`Stable Diffusion <https://github.com/georgia-tech-db/evadb/blob/staging/evadb/functions/stable_diffusion.py>`_ is an illustrative AI function in EvaDB that generates an image given a text prompt.
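
A contributed function like this is registered from its implementation file and then applied per row. A sketch (the ``CREATE FUNCTION ... IMPL`` form is EvaDB's registration syntax, while the table and column names here are made up):

.. code-block:: sql

    CREATE FUNCTION IF NOT EXISTS StableDiffusion
    IMPL 'evadb/functions/stable_diffusion.py';

    -- Generate one image per prompt row.
    SELECT StableDiffusion(prompt) FROM image_prompts;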

AI functions that are not available in EvaDB yet, but would be super relevant for emerging AI applications, include (but are not limited to) the following:

* Sklearn (beyond linear regression)
* OCR (PyTesseract)
* AWS Rekognition service

When adding an AI function to EvaDB, please add a documentation page in your PR explaining the usage. Here is an `illustrative documentation page <https://evadb.readthedocs.io/en/latest/source/reference/ai/stablediffusion.html>`_ for Stable Diffusion.

Notebooks are also super helpful to showcase use cases! Here is an illustrative `notebook <https://colab.research.google.com/github/georgia-tech-db/eva/blob/master/tutorials/18-stable-diffusion.ipynb>`_ on using Stable Diffusion in EvaDB queries.