Model configuration (preview)
This feature is available from version 2.1.0. It is currently in a preview state and subject to change.
Model configuration enables seamless integration with, and switching between, any LLM, giving you flexibility across the range of AI applications available in AI Accelerator.
Installing with Maven
Basic integration is included in the Magnolia AI Accelerator core installation. To access specific AI model providers, add the corresponding dependencies to your Maven project.
OpenAI models
This enables integration with OpenAI models such as GPT-4 and Dall-E.
<dependency>
  <groupId>info.magnolia.ai</groupId>
  <artifactId>magnolia-ai-accelerator-openai</artifactId>
  <version>2.2.0-beta1</version>
</dependency>
AWS Bedrock hosted models
<dependency>
  <groupId>info.magnolia.ai</groupId>
  <artifactId>magnolia-ai-accelerator-aws</artifactId>
  <version>2.2.0-beta1</version>
</dependency>
Configuration
AI models can be added to your light module by creating an aiModels folder and adding a <your-model>.yaml file with the following configuration:
modelName: <your-model-name> (1)
modelVersion: <your-ai-model> (2)
$type: <ai-model-type> (3)
scriptLocation: <image-to-image-model-script> (4)
modelParameters: (5)
inputParameters: (6)
  <model-parameter-name>:
    $type: <PARAMETER-TYPE> (7)
    required: <true/false>
    description: <DESCRIPTION>
    defaultValue: <DEFAULT-VALUE>
1 | The display name of the model in the UI (e.g. OpenAI GPT-4o, Claude 3.5 Sonnet, Dall-E·3). |
2 | The specific version of the model to use (e.g. gpt-4o, anthropic.claude-3-5-sonnet-20240620-v1:0, gemini-2.0-flash, dall-e-3). |
3 | The type of model to use, depending on your configuration (e.g. openAiModel, googleGemini, amazonBedrockModel). |
4 | Optional script location for browser-based execution. |
5 | modelParameters is deprecated as of 2.2 in favor of inputParameters. |
6 | Parameters listed under inputParameters are rendered in the UI and should represent the configurable parameters of the model. |
7 | See Parameter Configuration for detailed information about the available parameter types and their configuration. |
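For illustration, here is a minimal sketch of a complete model definition, assuming the openAiModel type and gpt-4o version mentioned in the callouts above; the temperature parameter and its required, description, and defaultValue values are illustrative, not prescribed:
modelName: OpenAI GPT-4o
modelVersion: gpt-4o
$type: openAiModel
inputParameters:
  temperature:
    $type: number
    required: false
    description: Controls the randomness of the generated output.
    defaultValue: 1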
Image IO
Image IO is available from version 2.1.0.
Magnolia AI Accelerator Image IO gives developers a flexible way to add image models such as Dall-E·2/3 and Flux for AI-powered image generation.
Installing with Maven
<dependency>
  <groupId>info.magnolia.ai</groupId>
  <artifactId>magnolia-ai-accelerator-image-io</artifactId>
  <version>2.2.0-beta1</version>
</dependency>
Configuration
Image models can be added to your light module by creating an aiTextToImageModels folder and adding a <your-image-to-image-model>.yaml file using the configuration provided here.
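For orientation, a minimal skeleton of an image model definition, following the shape of the sample configurations further below; the placeholder names are illustrative only:
modelName: <display-name>
modelId: <model-id>
appId: <app-id>
modelParameters:
  <parameter-name>:
    type: <PARAMETER-TYPE>
    description: <DESCRIPTION>
scriptLocation: <path-to-model-handler-script>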
Sample text-to-image model script
Example:
export default class {
  /**
   * @param appState contains the application state (provided by AI Accelerator)
   * @param parameters contains all configured model parameters (for example prompt and size)
   * @returns {{b64_json: String, width: Number, height: Number, prompt: String}[]}
   */
  async generateImage(appState, parameters) {
    // callImageModelApi is a placeholder for your own call to the image model's API.
    const images = callImageModelApi(parameters);
    return images;
  }
}
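The callImageModelApi call above is only a placeholder. The sketch below shows one way such a helper could look when targeting the OpenAI Images API; the endpoint and request fields follow the public OpenAI API, while the apiKey argument and the hard-coded model are assumptions and not part of the AI Accelerator contract:
// Hypothetical helper, not provided by AI Accelerator: calls the OpenAI Images API
// and maps the response to the shape expected by generateImage.
async function callImageModelApi(parameters, apiKey) {
  const size = parameters.size || '1024x1024';
  const response = await fetch('https://api.openai.com/v1/images/generations', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // How credentials are supplied to the script is an assumption.
      'Authorization': `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: 'dall-e-3',
      prompt: parameters.prompt,
      n: parameters.n || 1,
      size: size,
      response_format: 'b64_json',
    }),
  });
  const result = await response.json();
  const [width, height] = size.split('x').map(Number);
  // Return [{ b64_json, width, height, prompt }] as documented above.
  return result.data.map((image) => ({
    b64_json: image.b64_json,
    width,
    height,
    prompt: parameters.prompt,
  }));
}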
Sample configuration
AI Models
To add general purpose AI models to your light module, create a aiModels
folder and add a <your-ai-model>.yaml
file.
GPT-4o
modelName: OpenAI GPT-4o
modelVersion: gpt-4o
$type: openAiModel
inputParameters:
  temperature:
    $type: number
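Claude 3.5 Sonnet (AWS Bedrock)
A hedged sketch of a configuration for an AWS Bedrock hosted model, combining the amazonBedrockModel type and the Claude model version listed in the configuration callouts above; the temperature parameter is illustrative:
modelName: Claude 3.5 Sonnet
modelVersion: anthropic.claude-3-5-sonnet-20240620-v1:0
$type: amazonBedrockModel
inputParameters:
  temperature:
    $type: number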
Image-to-Image
To add text-to-image models to your light module, create an aiImageToImageModels folder and add a <your-image-to-image-model>.yaml file.
OpenAI Dall-E·2
Sample model configuration for generating images with Dall-E·2 (https://platform.openai.com/docs/api-reference/images).
modelName: Dall-E·2
modelId: dall-e-2
appId: dall-e-2
modelParameters:
  prompt:
    type: PROMPT
    required: true
    description: The prompt to generate an image from.
  n:
    type: NUMBER
    description: |
      The number of images to generate. Must be between 1 and 10.
    defaultValue: 1
  size:
    type: ENUM
    description: |
      The size of the generated images. Defaults to 1024x1024
      Must be one of 1024x1024, 1792x1024, or 1024x1792
    enumValues: [1024x1024, 1792x1024, 1024x1792]
    defaultValue: 1024x1024
scriptLocation: /ai-accelerator-openai/webresources/DallEImageModelHandler.js
OpenAI Dall-E·3
Sample model configuration for generating images with Dall-E·3 (https://platform.openai.com/docs/api-reference/images).
modelName: OpenAI Dall-E·3
modelId: dall-e-3
appId: dall-e-3
modelParameters:
  prompt:
    type: PROMPT
    required: true
    description: The prompt to generate an image from.
  size:
    type: ENUM
    description: |
      The size of the generated images. Defaults to 1024x1024
      Must be one of 1024x1024, 1792x1024, or 1024x1792
    enumValues: [1024x1024, 1792x1024, 1024x1792]
    defaultValue: 1024x1024
  style:
    type: ENUM
    description: |
      The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images.
    enumValues: [vivid, natural]
    defaultValue: vivid
  quality:
    type: ENUM
    description: |
      The quality of the image that will be generated. hd creates images with finer details and greater consistency across the image
    enumValues: [standard, hd]
    defaultValue: standard
scriptLocation: /ai-accelerator-openai/webresources/DallEImageModelHandler.js
FLUX.1 [dev]
Sample model configuration for the FLUX.1 [dev] model (https://fal.ai/models/fal-ai/flux/dev).
modelName: FLUX.1 [dev]
modelId: dev
appId: fal-ai/flux
modelParameters:
  prompt:
    type: PROMPT
    required: true
    description: The prompt to generate an image from.
  image_size:
    type: ENUM
    description: |
      The size of the generated image. Default value: landscape_4_3
      Possible enum values: square_hd, square, portrait_4_3, portrait_16_9, landscape_4_3, landscape_16_9
    enumValues: [square_hd, square, portrait_4_3, portrait_16_9, landscape_4_3, landscape_16_9]
    defaultValue: landscape_4_3
  num_outputs:
    type: NUMBER
    description: "The number of images to generate. Default value: 1"
    defaultValue: 1
  guidance_scale:
    type: NUMBER
    description: "The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you. Default value: 3.5"
    defaultValue: 3.5
  num_inference_steps:
    type: NUMBER
    description: "The number of inference steps to perform. Default value: 28"
    defaultValue: 28
scriptLocation: /ai-accelerator-fal-ai/webresources/FluxImageModelHandler.js
FLUX.1.1 [pro]
Sample model configuration for the FLUX.1.1 [pro] model (https://fal.ai/models/fal-ai/flux-pro/v1.1/api).
modelName: FLUX.1.1 [pro]
modelId: v1.1
appId: fal-ai/flux-pro
modelParameters:
  prompt:
    type: PROMPT
    required: true
    description: The prompt to generate an image from.
  image_size:
    type: ENUM
    description: |
      The size of the generated image. Default value: landscape_4_3
      Possible enum values: square_hd, square, portrait_4_3, portrait_16_9, landscape_4_3, landscape_16_9
    enumValues: [square_hd, square, portrait_4_3, portrait_16_9, landscape_4_3, landscape_16_9]
    defaultValue: landscape_4_3
  num_outputs:
    type: NUMBER
    description: "The number of images to generate. Default value: 1"
    defaultValue: 1
  guidance_scale:
    type: NUMBER
    description: "The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you. Default value: 3.5"
    defaultValue: 3.5
  num_inference_steps:
    type: NUMBER
    description: "The number of inference steps to perform. Default value: 28"
    defaultValue: 28
scriptLocation: /ai-accelerator-fal-ai/webresources/FluxImageModelHandler.js