Model configuration (preview)
This feature is available from version 2.1.0. It is currently in a preview state and subject to change.
The model configuration schema changed with version 2.2.0-beta4. Make sure to update your configuration files accordingly. The most important change is the renaming of inputParameters to input and outputParameters to output.
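The rename can be sketched as follows (temperature is only an illustrative parameter; the rest of the definition stays unchanged):

```yaml
# Before 2.2.0-beta4
inputParameters:
  temperature:
    $type: number

# From 2.2.0-beta4 on, the same section is named "input"
input:
  temperature:
    $type: number
```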
Model configuration enables seamless integration and switching between any LLM, optimizing performance and flexibility for the diverse range of AI applications available in AI Accelerator.
Concepts
AI Model
An AI Model is a collection of parameters that defines the behavior and capabilities of a specific AI model. It includes information such as the model name, parameters, and input and output types.
AI Models are defined in the aiModel registry.
AI Task
An AI Task is used to execute simple or complex logic involving AI models. A task may contain business logic or call models directly to generate or modify content.
AI Tasks are defined in the aiTask registry.
AI Task Type
A task type is a grouping of tasks that share a common purpose or functionality.
At the same time, it defines input and output parameters and therefore acts as the programming interface definition of its tasks.
AI Task Types are used to automatically fill task input parameters and automatically read task output parameters.
An AI Task may optionally have a taskType property that defines the type of the task.
AI Task Types are defined in the aiTaskType registry.
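As a purely hypothetical sketch (the exact schema of a task type is not shown in this section, and all names below are illustrative), a task type definition could mirror the input/output structure used for model configuration:

```yaml
# Hypothetical sketch of an entry in the aiTaskType registry.
# "text" and "summary" and their types are illustrative assumptions,
# mirroring the input/output shape used for models below.
input:
  text:
    required: true
    description: The text the task operates on.
output:
  summary:
    description: The content produced by the task.
```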
Installing with Maven
Basic integration comes with Magnolia AI Accelerator core installation. For access to various AI models, you will need to add the corresponding dependencies to your Maven project.
OpenAI models
This enables integration with OpenAI models such as GPT-4 and Dall-E.
<dependency>
  <groupId>info.magnolia.ai</groupId>
  <artifactId>magnolia-ai-accelerator-openai</artifactId>
  <version>2.2.0-beta4</version>
</dependency>
AWS Bedrock hosted models
<dependency>
  <groupId>info.magnolia.ai</groupId>
  <artifactId>magnolia-ai-accelerator-aws</artifactId>
  <version>2.2.0-beta4</version>
</dependency>
Configuration
AI models can be added to your light module by creating an aiModels folder and adding a <your-model>.yaml file with the following configuration:
modelName: <your-model-name> (1)
modelVersion: <your-model-version> (2)
$type: <ai-model-type> (3)
scriptLocation: <model-script> (4)
modelParameters: (5)
  deprecated: true
input: (6)
  <model-input-parameter-name>:
    $type: <PARAMETER-TYPE> (7)
    required: <true/false>
    description: <DESCRIPTION>
    defaultValue: <DEFAULT-VALUE>
output: (8)
  <model-output-parameter-name>:
    $type: <PARAMETER-TYPE> (9)
1. The displayed name of the model in the UI (e.g. OpenAI GPT-4o, Claude 3.5 Sonnet, Dall-E·3).
2. The specific version of the model that will be used (e.g. gpt-4o, anthropic.claude-3-5-sonnet-20240620-v1:0, gemini-2.0-flash, dall-e-3).
3. The type of model that will be used, depending on your configuration (e.g. openAiModel, googleGemini, amazonBedrockModel).
4. Optional script location for browser-based execution.
5. modelParameters was deprecated in 2.2 in favor of input.
6. Parameters present in input have a UI representation and should represent the configurable parameters of the model.
7. See Parameter Configuration for detailed information about available parameter types and their configurations.
8. The output of the model. The same rules as for input apply.
9. See Parameter Configuration for detailed information about available parameter types and their configurations.
Sample configuration
AI Models
To add general purpose AI models to your light module, create an aiModels folder and add a <your-ai-model>.yaml file.
GPT-4o
modelName: OpenAI GPT-4o
modelVersion: gpt-4o
$type: openAiModel
input:
  temperature:
    $type: number
AWS Bedrock - Claude 3.5 Sonnet
modelName: Claude 3.5 Sonnet
modelVersion: anthropic.claude-3-5-sonnet-20240620-v1:0
$type: amazonBedrockModel
input:
  max_tokens_to_sample:
    $type: number
    description: "Number of tokens to generate"
    defaultValue: 1024
    required: true
FalAi Flux Dev
modelName: FLUX.1 [dev]
modelVersion: dev
appId: fal-ai/flux
$type: falAiModel
input:
  prompt:
    $type: prompt
    required: true
    description: The prompt to generate an image from.
  image_size:
    $type: switchable
    description: "Either use a preset or a custom value"
    options:
      preset_image_size:
        $type: enum
        description: |
          The size of the generated image. Default value: landscape_4_3
          Possible enum values: square_hd, square, portrait_4_3, portrait_16_9, landscape_4_3, landscape_16_9
        enumValues: [square_hd, square, portrait_4_3, portrait_16_9, landscape_4_3, landscape_16_9]
        defaultValue: landscape_4_3
      custom_image_size:
        $type: jsonObject
        jsonSchema: |
          {
            "$schema": "https://json-schema.org/draft/2020-12/schema",
            "type": "object",
            "properties": {
              "width": { "type": "integer" },
              "height": { "type": "integer" }
            },
            "required": ["width", "height"]
          }
        description: |
          The size of the generated image.
        defaultValue: |
          {
            "width": 1280,
            "height": 720
          }
  num_images:
    $type: number
    description: "The number of images to generate. Default value: 1"
    defaultValue: 1
  guidance_scale:
    $type: number
    description: "The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you. Default value: 3.5"
    defaultValue: 3.5
  num_inference_steps:
    $type: number
    description: "The number of inference steps to perform. Default value: 28"
    defaultValue: 28
output:
  images:
    $type: list
    itemType: imageUrl
Image IO
Image IO is available from version 2.1.0.
Magnolia AI Accelerator Image IO offers developers a flexible way to add image models such as Dall-E·2/3 and Flux for AI-powered image generation.
Image IO uses a different registry. Please be sure not to confuse it with the general model configuration mentioned above. The configuration of Image IO will be migrated to the general model configuration in a future release.
Installing with Maven
<dependency>
<groupId>info.magnolia.ai</groupId>
<artifactId>magnolia-ai-accelerator-image-io</artifactId>
<version>2.2.0-beta4</version>
</dependency>
Configuration
Image models can be added to your light module by creating an aiTextToImageModels folder and adding a <your-image-to-image-model>.yaml file using the configuration provided here.
Since version 2.2.0, modelParameter is deprecated in order to align with the new general purpose model registry. Please use input instead.
Sample Text to Image model script
Example:
export default class {
  /**
   * @param appState contains the application state (provided by AI Accelerator)
   * @param parameters contains all configured model parameters
   * @returns {Promise<{b64_json: String, width: Number, height: Number, prompt: String}[]>}
   */
  async generateImage(appState, parameters) {
    // callImageModelApi stands for your actual call to the image model API.
    const images = await callImageModelApi(parameters);
    return images;
  }
}
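As a concrete, purely illustrative sketch, the handler logic can be split into the API call itself and a mapping step that converts a provider-specific response into the return shape documented above. The helper name toAcceleratorImages and the provider payload shape (an images array with data, width, and height fields) are assumptions for this example, not part of the AI Accelerator API:

```javascript
// Hypothetical helper: map a provider-specific response payload into the
// {b64_json, width, height, prompt} objects that generateImage must return.
// Keeping this mapping separate from the network call makes it easy to test.
function toAcceleratorImages(payload, prompt) {
  return payload.images.map((img) => ({
    b64_json: img.data, // assumed field name in the provider response
    width: img.width,
    height: img.height,
    prompt,
  }));
}
```

Inside generateImage, the handler would then fetch the provider response and return toAcceleratorImages(response, parameters.prompt).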
Image-to-Image
This model configuration applies to Image-to-Image models for the AI Accelerator Image IO only.
To add image-to-image models to your light module, create an aiImageToImageModels folder and add a <your-image-to-image-model>.yaml file.
OpenAI Dall-E·2
Sample model configuration for generating images with Dall-E·2 (https://platform.openai.com/docs/api-reference/images).
modelName: Dall-E·2
modelId: dall-e-2
appId: dall-e-2
input:
  prompt:
    type: PROMPT
    required: true
    description: The prompt to generate an image from.
  n:
    type: NUMBER
    description: |
      The number of images to generate. Must be between 1 and 10.
    defaultValue: 1
  size:
    type: ENUM
    description: |
      The size of the generated images. Defaults to 1024x1024
      Must be one of 256x256, 512x512, or 1024x1024
    enumValues: [256x256, 512x512, 1024x1024]
    defaultValue: 1024x1024
scriptLocation: /ai-accelerator-openai/webresources/DallEImageModelHandler.js
OpenAI Dall-E·3
Sample model configuration for generating images with Dall-E·3 (https://platform.openai.com/docs/api-reference/images).
modelName: OpenAI Dall-E·3
modelId: dall-e-3
appId: dall-e-3
input:
  prompt:
    type: PROMPT
    required: true
    description: The prompt to generate an image from.
  size:
    type: ENUM
    description: |
      The size of the generated images. Defaults to 1024x1024
      Must be one of 1024x1024, 1792x1024, or 1024x1792
    enumValues: [1024x1024, 1792x1024, 1024x1792]
    defaultValue: 1024x1024
  style:
    type: ENUM
    description: |
      The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images.
    enumValues: [vivid, natural]
    defaultValue: vivid
  quality:
    type: ENUM
    description: |
      The quality of the image that will be generated. hd creates images with finer details and greater consistency across the image.
    enumValues: [standard, hd]
    defaultValue: standard
scriptLocation: /ai-accelerator-openai/webresources/DallEImageModelHandler.js
FLUX.1 [dev]
Sample model configuration for the FLUX.1 [dev] model (https://fal.ai/models/fal-ai/flux/dev).
modelName: FLUX.1 [dev]
modelId: dev
appId: fal-ai/flux
input:
  prompt:
    type: PROMPT
    required: true
    description: The prompt to generate an image from.
  image_size:
    type: ENUM
    description: |
      The size of the generated image. Default value: landscape_4_3
      Possible enum values: square_hd, square, portrait_4_3, portrait_16_9, landscape_4_3, landscape_16_9
    enumValues: [square_hd, square, portrait_4_3, portrait_16_9, landscape_4_3, landscape_16_9]
    defaultValue: landscape_4_3
  num_outputs:
    type: NUMBER
    description: "The number of images to generate. Default value: 1"
    defaultValue: 1
  guidance_scale:
    type: NUMBER
    description: "The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you. Default value: 3.5"
    defaultValue: 3.5
  num_inference_steps:
    type: NUMBER
    description: "The number of inference steps to perform. Default value: 28"
    defaultValue: 28
scriptLocation: /ai-accelerator-fal-ai/webresources/FluxImageModelHandler.js
FLUX.1.1 [pro]
Sample model configuration for the FLUX.1.1 [pro] model (https://fal.ai/models/fal-ai/flux-pro/v1.1/api).
modelName: FLUX.1.1 [pro]
modelId: v1.1
appId: fal-ai/flux-pro
input:
  prompt:
    type: PROMPT
    required: true
    description: The prompt to generate an image from.
  image_size:
    type: ENUM
    description: |
      The size of the generated image. Default value: landscape_4_3
      Possible enum values: square_hd, square, portrait_4_3, portrait_16_9, landscape_4_3, landscape_16_9
    enumValues: [square_hd, square, portrait_4_3, portrait_16_9, landscape_4_3, landscape_16_9]
    defaultValue: landscape_4_3
  num_outputs:
    type: NUMBER
    description: "The number of images to generate. Default value: 1"
    defaultValue: 1
  guidance_scale:
    type: NUMBER
    description: "The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you. Default value: 3.5"
    defaultValue: 3.5
  num_inference_steps:
    type: NUMBER
    description: "The number of inference steps to perform. Default value: 28"
    defaultValue: 28
scriptLocation: /ai-accelerator-fal-ai/webresources/FluxImageModelHandler.js