* SimpleTable is a class representing a table in a SimpleDB. It can handle tabular and geospatial data. To create one, it's best to instantiate a SimpleDB first.
* Generates embeddings for a specified column and stores the results in a new column.
*
* This method currently supports Google Gemini, Vertex AI, and local models running with Ollama. It retrieves credentials and the model from environment variables (`AI_KEY`, `AI_PROJECT`, `AI_LOCATION`, `AI_EMBEDDINGS_MODEL`) or accepts them as options. Options take precedence over environment variables.
*
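* For example, the key and model can be passed directly as options rather than read from environment variables (a minimal sketch; the column names, key, and model below are placeholders, not values from this documentation):
* ```ts
* // Hypothetical credentials and model name, shown for illustration only.
* await table.aiEmbeddings("food", "embeddings", {
*   apiKey: "your-api-key",
*   model: "text-embedding-004",
* });
* ```
*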
* To run local models with Ollama, set the `OLLAMA` environment variable to `true` and start Ollama on your machine. Make sure to install the model you want and set the `AI_EMBEDDINGS_MODEL` environment variable to the model name.
*
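* For example, with Ollama running locally (a sketch; `nomic-embed-text` is just a placeholder for whichever embeddings model you pulled):
* ```ts
* // Assumes OLLAMA=true and AI_EMBEDDINGS_MODEL=nomic-embed-text are set
* // in the environment and that Ollama is running on your machine.
* await table.aiEmbeddings("food", "embeddings", {
*   cache: true,
* });
* ```
*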
* To avoid exceeding rate limits, use the `rateLimitPerMinute` option; the method will automatically wait between requests to stay under the limit.
*
* If you have a business or professional account with high rate limits, set the `concurrent` option to send multiple requests at once and speed up processing.
*
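* For example (a sketch; adjust the number of concurrent requests to what your account and provider actually allow):
* ```ts
* // Send up to 4 embedding requests at the same time.
* await table.aiEmbeddings("food", "embeddings", {
*   concurrent: 4,
* });
* ```
*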
* The `cache` option allows you to cache the results of each request locally, saving resources and time. The data is cached in the local hidden folder `.journalism-cache` (because this method uses the `getEmbedding` function from the [journalism library](https://github.com/nshiab/journalism)). Don't forget to add `.journalism-cache` to your `.gitignore` file!
*
* This method won't work if your table contains geometries.
*
* @example
* Basic usage with cache, rate limit, and verbose logging
* ```ts
* // New table with column "food".
* await table.loadArray([
*   { food: "pizza" },
*   { food: "sushi" },
*   { food: "burger" },
*   { food: "pasta" },
*   { food: "salad" },
*   { food: "tacos" }
* ]);
*
* // Ask the AI to generate embeddings in a new column "embeddings".
* await table.aiEmbeddings("food", "embeddings", {
*   // Cache the results locally
*   cache: true,
*   // Avoid exceeding a rate limit by waiting between requests
*   rateLimitPerMinute: 15,
*   // Log details
*   verbose: true,
* });
* ```
*
* @param column - The column to be used as input for the embeddings.
* @param newColumn - The name of the new column where the embeddings will be stored.
* @param options - Configuration options for the AI request.
* @param options.concurrent - The number of concurrent requests to send. Defaults to 1.
* @param options.cache - If true, the results will be cached locally. Defaults to false.
* @param options.rateLimitPerMinute - The rate limit for the AI requests in requests per minute. If necessary, the method will wait between requests. Defaults to no limit.
* @param options.model - The model to use. Defaults to the `AI_EMBEDDINGS_MODEL` environment variable.
* @param options.apiKey - The API key. Defaults to the `AI_KEY` environment variable.
* @param options.vertex - Whether to use Vertex AI. Defaults to `false`. If `AI_PROJECT` and `AI_LOCATION` are set in the environment, it will automatically switch to `true`.
* @param options.project - The Google Cloud project ID. Defaults to the `AI_PROJECT` environment variable.
* @param options.location - The Google Cloud location. Defaults to the `AI_LOCATION` environment variable.
* @param options.ollama - Whether to use Ollama. Defaults to the `OLLAMA` environment variable.
* @param options.verbose - Whether to log additional information. Defaults to `false`.
* Generates and executes a SQL query based on a prompt. Additional instructions are automatically added before and after your prompt, such as the column types. To see the full prompt, set the `verbose` option to true.