Configure asset metadata generation
Available from version 2.2.6
Generate asset metadata with a configured AI model and task parameters. Configure a model as described in Model configuration. As with all tasks, the default model applies unless you override it for the generateAssetMetadata task by decoration.
Features
The generateAssetMetadata task provides:
* A configurable action using jcrCommandAction
* An observation listener that can auto-generate metadata when an asset changes
* A Groovy script to bulk-generate metadata for assets in DAM
Model configuration
To use the gemini-2-5-flash model:
1. Configure the model `gemini-2-5-flash`.
2. Decorate the task definition:

<light-module-folder>/<light-module-name>/decorations/ai-accelerator-image-io/aiTasks/generateAssetMetadata.yaml
modelId: gemini-2-5-flash
You can control which languages the task generates. By default, the task uses the Magnolia default language. To add or change languages, decorate the input as shown.
The example adds English (en), German (de), Simplified Chinese (zh-CN), and Swiss Standard German (de-CH).
input:
  locales:
    defaultValue:
      english: en
      german: de
      chinese: zh-CN
      swiss: de-CH
Task parameters
Set defaults by decorating the input section of aiTasks/generateAssetMetadata.yaml. You can also pass parameters when invoking the action by:
* Setting the action definition’s params.taskParameters
* Setting the Groovy script’s taskParameters
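As a minimal sketch, an action definition could pass task parameters like this (the action name and parameter values are illustrative; the full decoration appears under Generate metadata by action):

```yaml
# Minimal sketch of passing taskParameters from an action definition.
# Action name and parameter values are illustrative.
generateImageCaption:
  $type: jcrCommandAction
  command: execute-task
  catalog: ai-accelerator
  params:
    taskId: ai-accelerator-image-io:generateAssetMetadata
    taskParameters:
      fieldName: caption   # property to generate
      overwrite: true      # replace an existing value
      maxLength: 100       # limit caption length
```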
| Parameter | Description |
|---|---|
| `path` | Required. Path to the DAM asset used to generate metadata. Use a Magnolia JCR DAM asset path. |
| `fieldName` | Required; default: `description`. The asset property to generate. Allowed values: `caption`, `description`, `title`, `subject`. |
| `overwrite` | Required; default: `false`. Whether to overwrite an existing property value. |
| `locales` | Optional; default: Magnolia default language. Languages to generate. Provide a map of label-to-locale code, for example `english: en`. |
| `maxLength` | Optional. Maximum length, in characters, of the generated text (typically used for captions). |
| `maxWords` | Optional. Maximum number of words in the generated text (typically used for descriptions). |
| `prompt` | Optional. Custom prompt sent to the model. |
| `modelId` | Optional. Overrides the configured model for this task. |
Generate metadata by action
See Model configuration for model setup.
Ensure your module.yaml declares the dependency:
dependencies:
dam-app-jcr:
version: 4.0.0/*
The decoration below:

* Adds two actions to the DAM browser subApp to write a caption or description for a single asset, overwriting existing content
* Enables i18n for the caption and description fields so you can view generated values in multiple languages
<light-module-folder>/<light-module-name>/decorations/dam-assets-app/apps/dam.yaml
subApps:
  jcrBrowser:
    actions:
      generateImageCaption:
        label: Generate Image Caption
        icon: icon-import
        $type: jcrCommandAction
        command: execute-task
        catalog: ai-accelerator
        params:
          taskId: ai-accelerator-image-io:generateAssetMetadata
          taskParameters:
            fieldName: caption
            overwrite: true
            maxLength: 100
            locales:
              english: en
              german: de
        availability:
          writePermissionRequired: true
          nodeTypes:
            asset: mgnl:asset
          rules:
            notDeleted:
              $type: jcrIsDeletedRule
              negate: true
      generateImageDescription:
        label: Generate Image Description
        icon: icon-import
        $type: jcrCommandAction
        command: execute-task
        catalog: ai-accelerator
        params:
          taskId: ai-accelerator-image-io:generateAssetMetadata
          taskParameters:
            fieldName: description
            overwrite: true
            maxWords: 150
            locales:
              english: en
              german: de
        availability:
          writePermissionRequired: true
          nodeTypes:
            asset: mgnl:asset
          rules:
            notDeleted:
              $type: jcrIsDeletedRule
              negate: true
    # Add the defined actions to the action bar.
    actionbar:
      sections:
        asset:
          groups:
            generate:
              items:
                - name: generateImageDescription
                - name: generateImageCaption
  # Enable i18n for caption and description.
  jcrDetail:
    form:
      properties:
        caption:
          i18n: true
        description:
          i18n: true
Generate metadata by observation
The module provides an observation listener that auto-generates metadata when an asset is added in the Assets app. The listener is disabled by default.
Note: As of version 2.2.8, the …
When enabled, the ai-accelerator-image-io:generateAssetMetadata task first uses values defined under its input properties. If none are set, it falls back to the defaultValue entries. You can view these in the Definitions app.
observationMetadataGeneration:
  enabled: false
  observationDelay: 1000
  fields:
    caption:
      enabled: false
      taskId: ai-accelerator-image-io:generateAssetMetadata
      taskParameters:
        fieldName: caption
        maxLength: 100
    description:
      enabled: true
      taskId: ai-accelerator-image-io:generateAssetMetadata
      taskParameters:
        fieldName: description
        maxWords: 150
Set observationMetadataGeneration.enabled: true to enable the listener and use the defaults above. By default, caption is disabled and description is enabled. You can add fields entries for title and subject.
To configure observation, decorate <light-module-folder>/<light-module-name>/decorations/ai-accelerator-image-io/config.yaml.
observationMetadataGeneration:
  enabled: true
  observationDelay: 5000
  fields:
    caption:
      enabled: true
      taskParameters:
        maxLength: 100
        locales: &locales
          english: en
          german: de
          chinese: zh-CN
          swiss: de-CH
    description:
      enabled: true
      taskParameters:
        maxWords: 150
        locales: *locales
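As noted above, you can also add `fields` entries for `title` and `subject`. A minimal sketch for `title`, assuming it follows the same structure as the `caption` and `description` entries (the `maxWords` value is illustrative):

```yaml
# Sketch: enable observation-driven generation for the asset title.
# Mirrors the caption/description structure; values are illustrative.
observationMetadataGeneration:
  fields:
    title:
      enabled: true
      taskParameters:
        fieldName: title
        maxWords: 10
```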
| Property | Description |
|---|---|
| `enabled` | Enables or disables the observation listener. Default: `false`. |
| `observationDelay` | Debounce delay in milliseconds before running the task after an event. Default: `1000`. |
| `fields` | Map of field configurations keyed by field name (`caption`, `description`, `title`, `subject`). |
| Property | Description |
|---|---|
| `enabled` | Whether this field is generated by observation. Defaults: `false` for `caption`, `true` for `description`. |
| `taskId` | Task to execute. Default: `ai-accelerator-image-io:generateAssetMetadata`. |
| `taskParameters` | Parameters forwarded to the task. See Task parameters. Common examples include `fieldName`, `maxLength`, `maxWords`, and `locales`. |
Generate metadata by Groovy script
The ai-accelerator-image-io module includes a Groovy script to generate description properties for all images in the DAM workspace.
Open the Groovy app at ${webapp-context-path}/.magnolia/admincentral#app:groovy:browser;/ai-accelerator/CreateDescriptionForAllAssets.
import info.magnolia.ai.task.AiTaskExecutor
import info.magnolia.context.MgnlContext
import info.magnolia.jcr.util.NodeUtil
import info.magnolia.objectfactory.Components
import com.google.common.collect.ImmutableMap

workspace = 'dam'
nodeType = 'mgnl:asset'
folderType = 'mgnl:folder'
folderPath = '/' // Narrow the scope if needed.
excludedFolders = [] // Add full folder paths to skip AI lookups.
// e.g. excludedFolders = ["/travel-demo/social-icons", "/svg-icons", "/destinations"]

def generateDescription(assetPath) {
    AiTaskExecutor taskExecutor = Components.getComponent(AiTaskExecutor.class)
    String prompt = "Create a great alt text for SEO purposes"
    Map<String, Object> taskParameters = ImmutableMap.<String, Object> builder()
            .put("prompt", prompt)
            .put("path", assetPath)
            .put("overwrite", false) // Set true to overwrite existing data.
            // .put("fieldName", "caption") // Default is "description"; allowed: "caption", "title", "subject"
            .build()
    Map<String, Object> taskContext = ImmutableMap.<String, Object> builder()
            .put("taskParameters", taskParameters)
            .build()
    try {
        Map<String, Object> execute = taskExecutor.execute("ai-accelerator-image-io:generateAssetMetadata", taskContext)
        Map<String, Object> response = ImmutableMap.<String, Object> builder().putAll(execute).put("prompt", prompt).build()
        if (response.get("error") == true) {
            println response.toString()
        } else {
            LinkedHashMap responseData = ((LinkedHashMap) response.get("data"))
            println "Retrieved Description properties for: " + assetPath
            println "\tEnglish: " + responseData.get("text_en").toString()
            println "\tGerman: " + responseData.get("text_de").toString()
        }
    } catch (Exception e) {
        println "Error generating description for: " + assetPath + " " + e.toString()
    }
}

def assessFolder(targetFolder) {
    targetNode = session.getNode(targetFolder)
    def currentPath = targetNode.getPath()
    println "Analyzing folder: " + currentPath
    // Skip excluded folders.
    if (excludedFolders.any { currentPath.startsWith(it) }) {
        println "Skipping excluded folder: " + currentPath
        return
    }
    targetNodeAssets = NodeUtil.asList(NodeUtil.getNodes(targetNode, nodeType))
    targetNodeFolders = NodeUtil.asList(NodeUtil.getNodes(targetNode, folderType))
    for (assetNode in targetNodeAssets) {
        generateDescription(assetNode.getPath())
    }
    for (folderNode in targetNodeFolders) {
        assessFolder(folderNode.getPath())
    }
}

ctx = MgnlContext.getInstance()
session = ctx.getJCRSession(workspace)
assessFolder(folderPath)