danielrosehill committed
Commit fb719ac · 1 Parent(s): e25b8f6
.gitignore ADDED
@@ -0,0 +1,49 @@
1
+ # Python
2
+ __pycache__/
3
+ *.py[cod]
4
+ *$py.class
5
+ *.so
6
+ .Python
7
+ build/
8
+ develop-eggs/
9
+ dist/
10
+ downloads/
11
+ eggs/
12
+ .eggs/
13
+ lib/
14
+ lib64/
15
+ parts/
16
+ sdist/
17
+ var/
18
+ wheels/
19
+ *.egg-info/
20
+ .installed.cfg
21
+ *.egg
22
+
23
+ # Virtual environments
24
+ venv/
25
+ ENV/
26
+ env/
27
+
28
+ # IDE
29
+ .vscode/
30
+ .idea/
31
+ *.swp
32
+ *.swo
33
+ *~
34
+
35
+ # OS
36
+ .DS_Store
37
+ Thumbs.db
38
+
39
+ # Dataset cache
40
+ .cache/
41
+ cached/
42
+
43
+ # Testing
44
+ .pytest_cache/
45
+ .coverage
46
+ htmlcov/
47
+
48
+ # Hugging Face
49
+ dataset_infos.json.lock
README.md ADDED
@@ -0,0 +1,273 @@
1
+ ---
2
+ language:
3
+ - en
4
+ license: cc0-1.0
5
+ size_categories:
6
+ - n<1K
7
+ task_categories:
8
+ - other
9
+ pretty_name: Multimodal AI Taxonomy
10
+ tags:
11
+ - multimodal
12
+ - taxonomy
13
+ - ai-models
14
+ - modality-mapping
15
+ - computer-vision
16
+ - audio
17
+ - video-generation
18
+ - image-generation
19
+ ---
20
+
21
+ # Multimodal AI Taxonomy
22
+
23
+ A comprehensive, structured taxonomy for mapping multimodal AI model capabilities across input and output modalities.
24
+
25
+ ## Dataset Description
26
+
27
+ This dataset provides a systematic categorization of multimodal AI capabilities, enabling users to:
28
+ - Navigate the complex landscape of multimodal AI models
29
+ - Filter models by specific input/output modality combinations
30
+ - Understand the nuanced differences between similar models (e.g., image-to-video with/without audio, with/without lip sync)
31
+ - Discover models that match specific use case requirements
32
+
33
+ ### Dataset Summary
34
+
35
+ The taxonomy organizes multimodal AI capabilities by:
36
+ - **Output modality** (video, audio, image, text, 3D models)
37
+ - **Operation type** (creation vs. editing)
38
+ - **Detailed characteristics** (lip sync, audio generation method, motion type, etc.)
39
+ - **Maturity level** (experimental, emerging, mature)
40
+ - **Platform availability** and example models
41
+
42
+ ### Supported Tasks
43
+
44
+ This is a reference taxonomy dataset for:
45
+ - Model discovery and filtering
46
+ - Understanding multimodal AI capabilities
47
+ - Research into multimodal AI landscape
48
+ - Building model selection tools
49
+
50
+ ## Dataset Structure
51
+
52
+ The taxonomy is organized as a hierarchical folder structure:
53
+
54
+ ```
55
+ taxonomy/
56
+ ├── schema.json # Common schema definition
57
+ ├── README.md # Taxonomy documentation
58
+ ├── video-generation/
59
+ │ ├── creation/modalities.json
60
+ │ └── editing/modalities.json
61
+ ├── audio-generation/
62
+ │ ├── creation/modalities.json
63
+ │ └── editing/modalities.json
64
+ ├── image-generation/
65
+ │ ├── creation/modalities.json
66
+ │ └── editing/modalities.json
67
+ ├── text-generation/
68
+ │ ├── creation/modalities.json
69
+ │ └── editing/modalities.json
70
+ └── 3d-generation/
71
+ ├── creation/modalities.json
72
+ └── editing/modalities.json
73
+ ```
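+
+ As a quick orientation, the sketch below walks this folder layout and counts the entries in each `modalities.json` file. It is a minimal example and assumes a local checkout of the repository, with paths relative to the repo root.
+
+ ```python
+ import json
+ from pathlib import Path
+
+ # Enumerate every modalities.json file in the taxonomy tree
+ for path in sorted(Path("taxonomy").glob("*/*/modalities.json")):
+     with path.open(encoding="utf-8") as f:
+         data = json.load(f)
+     print(f"{data['outputModality']}/{data['operationType']}: {len(data['modalities'])} modalities ({path})")
+ ```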
74
+
75
+ ### Data Instances
76
+
77
+ Each modality entry contains:
78
+
79
+ ```json
80
+ {
81
+ "id": "img-to-vid-lipsync-text",
82
+ "name": "Image to Video (Lip Sync from Text)",
83
+ "input": {
84
+ "primary": "image",
85
+ "secondary": ["text"]
86
+ },
87
+ "output": {
88
+ "primary": "video",
89
+ "audio": true,
90
+ "audioType": "speech"
91
+ },
92
+ "characteristics": {
93
+ "processType": "synthesis",
94
+ "audioGeneration": "text-to-speech",
95
+ "audioPrompting": "text-based",
96
+ "lipSync": true,
97
+ "lipSyncMethod": "generated-from-text",
98
+ "motionType": "facial"
99
+ },
100
+ "metadata": {
101
+ "maturityLevel": "mature",
102
+ "commonUseCases": [
103
+ "Avatar creation",
104
+ "Character animation from portrait",
105
+ "Marketing personalization"
106
+ ],
107
+ "platforms": ["Replicate", "FAL AI", "HeyGen"],
108
+ "exampleModels": ["Wav2Lip", "SadTalker", "DreamTalk"]
109
+ }
110
+ }
111
+ ```
112
+
113
+ ### Data Fields
114
+
115
+ **Top-level file fields:**
116
+ - `fileType`: Always "multimodal-ai-taxonomy"
117
+ - `outputModality`: The primary output type (video, audio, image, text, 3d-model)
118
+ - `operationType`: Either "creation" or "editing"
119
+ - `description`: Human-readable description of the file contents
120
+ - `modalities`: Array of modality objects
121
+
122
+ **Modality object fields:**
123
+ - `id` (string): Unique identifier in kebab-case
124
+ - `name` (string): Human-readable name
125
+ - `input` (object):
126
+ - `primary` (string): Main input modality
127
+ - `secondary` (array): Additional optional inputs
128
+ - `output` (object):
129
+ - `primary` (string): Main output modality
130
+ - `audio` (boolean): Whether audio is included (for video outputs)
131
+ - `audioType` (string): Type of audio (speech, music, ambient, etc.)
132
+ - `characteristics` (object): Modality-specific features (varies by type)
133
+ - `metadata` (object):
134
+ - `maturityLevel` (string): experimental, emerging, or mature
135
+ - `commonUseCases` (array): Typical use cases
136
+ - `platforms` (array): Platforms supporting this modality
137
+ - `exampleModels` (array): Example model implementations
138
+ - `relationships` (object, optional): Links to related modalities
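+
+ To make the field layout concrete, here is a small sketch that reads one raw modality file and prints the fields described above for each entry. It assumes a local checkout; the field names follow the schema, while individual `characteristics` keys vary by modality type.
+
+ ```python
+ import json
+
+ with open("taxonomy/video-generation/creation/modalities.json", encoding="utf-8") as f:
+     data = json.load(f)
+
+ # Top-level file fields
+ print(data["fileType"], data["outputModality"], data["operationType"])
+
+ # Per-modality fields
+ for m in data["modalities"]:
+     inputs = [m["input"]["primary"], *m["input"].get("secondary", [])]
+     meta = m["metadata"]
+     print(f"{m['id']}: {' + '.join(inputs)} -> {m['output']['primary']}")
+     print(f"  maturity={meta['maturityLevel']}, platforms={', '.join(meta.get('platforms', []))}")
+ ```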
139
+
140
+ ### Data Splits
141
+
142
+ This dataset is provided as a complete reference taxonomy without splits.
143
+
144
+ ## Dataset Creation
145
+
146
+ ### Curation Rationale
147
+
148
+ The rapid development of multimodal AI has created a complex landscape with hundreds of model variants. Platforms like Replicate and FAL AI offer numerous models that differ not just in parameters or resolution, but in fundamental modality support. For example, among 20+ image-to-video models, some generate silent video, others add ambient audio, and some include lip-synced speech, but these differences aren't easily filterable.
149
+
150
+ This taxonomy addresses the need for:
151
+ 1. **Systematic categorization** of multimodal capabilities
152
+ 2. **Fine-grained filtering** beyond basic input/output types
153
+ 3. **Discovery** of models matching specific use cases
154
+ 4. **Understanding** of the multimodal AI landscape
155
+
156
+ ### Source Data
157
+
158
+ The taxonomy is curated from:
159
+ - Public AI model platforms (Replicate, FAL AI, HuggingFace, RunwayML, etc.)
160
+ - Research papers and model documentation
161
+ - Community knowledge and testing
162
+ - Direct platform API exploration
163
+
164
+ ### Annotations
165
+
166
+ All entries are manually curated and categorized based on model documentation, testing, and platform specifications.
167
+
168
+ ## Considerations for Using the Data
169
+
170
+ ### Social Impact
171
+
172
+ This dataset is designed to:
173
+ - Democratize access to understanding multimodal AI capabilities
174
+ - Enable better model selection for specific use cases
175
+ - Support research into multimodal AI trends and capabilities
176
+
177
+ ### Discussion of Biases
178
+
179
+ The taxonomy reflects:
180
+ - Current state of publicly accessible multimodal AI (as of 2025)
181
+ - Platform availability bias toward commercial services
182
+ - Maturity level assessments based on community adoption and stability
183
+
184
+ ### Other Known Limitations
185
+
186
+ - The field is rapidly evolving; new modalities emerge regularly
187
+ - Platform and model availability changes over time
188
+ - Some experimental modalities may have limited real-world implementations
189
+ - Coverage may be incomplete for niche or newly emerging modalities
190
+
191
+ ## Additional Information
192
+
193
+ ### Dataset Curators
194
+
195
+ Created and maintained as an open-source project for the multimodal AI community.
196
+
197
+ ### Licensing Information
198
+
199
+ Creative Commons Zero v1.0 Universal (CC0 1.0) Public Domain Dedication
200
+
201
+ ### Citation Information
202
+
203
+ If you use this taxonomy in your research or projects, please cite:
204
+
205
+ ```bibtex
206
+ @dataset{multimodal_ai_taxonomy,
207
+ title={Multimodal AI Taxonomy},
208
+ author={Community Contributors},
209
+ year={2025},
210
+ publisher={Hugging Face},
211
+ howpublished={\url{https://huggingface.co/datasets/YOUR_USERNAME/multimodal-ai-taxonomy}}
212
+ }
213
+ ```
214
+
215
+ ### Contributions
216
+
217
+ This is an open-source taxonomy that welcomes community contributions. To add new modalities or update existing entries:
218
+
219
+ 1. Follow the schema defined in `taxonomy/schema.json`
220
+ 2. Add entries to the appropriate modality file based on output type and operation
221
+ 3. Submit a pull request with clear documentation
222
+
223
+ For detailed contribution guidelines, see `taxonomy/README.md`.
224
+
225
+ ## Usage Examples
226
+
227
+ ### Loading the Dataset
228
+
229
+ ```python
230
+ from datasets import load_dataset
231
+
232
+ # Load the entire taxonomy
233
+ dataset = load_dataset("YOUR_USERNAME/multimodal-ai-taxonomy")
234
+
235
+ # Load a specific modality configuration by passing its config name
236
+ video_creation = load_dataset("YOUR_USERNAME/multimodal-ai-taxonomy", "video_generation_creation")
237
+ audio_editing = load_dataset("YOUR_USERNAME/multimodal-ai-taxonomy", "audio_generation_editing")
238
+ ```
239
+
240
+ ### Filtering by Characteristics
241
+
242
+ ```python
243
+ import json
244
+
245
+ # Find all video generation modalities with lip sync
246
+ with open("taxonomy/video-generation/creation/modalities.json") as f:
247
+     data = json.load(f)
248
+
249
+ lipsync_modalities = [
250
+     m for m in data["modalities"]
251
+     if m.get("characteristics", {}).get("lipSync")
252
+ ]
253
+
254
+ for modality in lipsync_modalities:
255
+     print(f"{modality['name']}: {modality['id']}")
256
+ ```
257
+
258
+ ### Finding Models by Use Case
259
+
260
+ ```python
261
+ # Find mature image generation methods
262
+ with open("taxonomy/image-generation/creation/modalities.json") as f:
263
+     data = json.load(f)
264
+
265
+ mature_methods = [
266
+     m for m in data["modalities"]
267
+     if m["metadata"]["maturityLevel"] == "mature"
268
+ ]
269
+ ```
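+
+ ### Filtering the Loaded Dataset
+
+ When the taxonomy is loaded through `load_dataset` (see above), the `characteristics` and `relationships` columns are stored as JSON strings for flexibility, so parse them before filtering. A minimal sketch, using the placeholder repository name from the examples above:
+
+ ```python
+ import json
+ from datasets import load_dataset
+
+ ds = load_dataset("YOUR_USERNAME/multimodal-ai-taxonomy", "video_generation_creation", split="train")
+
+ # Keep rows whose characteristics JSON declares lip sync support
+ lipsync = ds.filter(lambda row: json.loads(row["characteristics"]).get("lipSync") is True)
+
+ for row in lipsync:
+     print(row["id"], row["metadata_example_models"])
+ ```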
270
+
271
+ ## Contact
272
+
273
+ For questions, suggestions, or contributions, please open an issue in the dataset repository.
dataset_infos.json ADDED
@@ -0,0 +1,96 @@
1
+ {
2
+ "all": {
3
+ "description": "Complete taxonomy with all modalities across all output types and operations.",
4
+ "citation": "@dataset{multimodal_ai_taxonomy,\n title={Multimodal AI Taxonomy},\n author={Community Contributors},\n year={2025},\n publisher={Hugging Face},\n}",
5
+ "homepage": "https://huggingface.co/datasets/YOUR_USERNAME/multimodal-ai-taxonomy",
6
+ "license": "cc0-1.0",
7
+ "features": {
8
+ "id": {
9
+ "dtype": "string",
10
+ "_type": "Value"
11
+ },
12
+ "name": {
13
+ "dtype": "string",
14
+ "_type": "Value"
15
+ },
16
+ "input_primary": {
17
+ "dtype": "string",
18
+ "_type": "Value"
19
+ },
20
+ "input_secondary": {
21
+ "feature": {
22
+ "dtype": "string",
23
+ "_type": "Value"
24
+ },
25
+ "_type": "Sequence"
26
+ },
27
+ "output_primary": {
28
+ "dtype": "string",
29
+ "_type": "Value"
30
+ },
31
+ "output_audio": {
32
+ "dtype": "bool",
33
+ "_type": "Value"
34
+ },
35
+ "output_audio_type": {
36
+ "dtype": "string",
37
+ "_type": "Value"
38
+ },
39
+ "characteristics": {
40
+ "dtype": "string",
41
+ "_type": "Value"
42
+ },
43
+ "metadata_maturity_level": {
44
+ "dtype": "string",
45
+ "_type": "Value"
46
+ },
47
+ "metadata_common_use_cases": {
48
+ "feature": {
49
+ "dtype": "string",
50
+ "_type": "Value"
51
+ },
52
+ "_type": "Sequence"
53
+ },
54
+ "metadata_platforms": {
55
+ "feature": {
56
+ "dtype": "string",
57
+ "_type": "Value"
58
+ },
59
+ "_type": "Sequence"
60
+ },
61
+ "metadata_example_models": {
62
+ "feature": {
63
+ "dtype": "string",
64
+ "_type": "Value"
65
+ },
66
+ "_type": "Sequence"
67
+ },
68
+ "relationships": {
69
+ "dtype": "string",
70
+ "_type": "Value"
71
+ },
72
+ "file_output_modality": {
73
+ "dtype": "string",
74
+ "_type": "Value"
75
+ },
76
+ "file_operation_type": {
77
+ "dtype": "string",
78
+ "_type": "Value"
79
+ }
80
+ },
81
+ "supervised_keys": null,
82
+ "builder_name": "multimodal_ai_taxonomy",
83
+ "config_name": "all",
84
+ "version": "1.0.0",
85
+ "splits": {
86
+ "train": {
87
+ "name": "train",
88
+ "num_bytes": 0,
89
+ "num_examples": 0,
90
+ "dataset_name": "multimodal_ai_taxonomy"
91
+ }
92
+ },
93
+ "download_size": 0,
94
+ "dataset_size": 0
95
+ }
96
+ }
multimodal-ai-taxonomy.py ADDED
@@ -0,0 +1,217 @@
1
+ """Multimodal AI Taxonomy dataset loading script."""
2
+
3
+ import json
4
+ import os
5
+ from pathlib import Path
6
+ from typing import Dict, List
7
+
8
+ import datasets
9
+
10
+
11
+ _CITATION = """\
12
+ @dataset{multimodal_ai_taxonomy,
13
+ title={Multimodal AI Taxonomy},
14
+ author={Community Contributors},
15
+ year={2025},
16
+ publisher={Hugging Face},
17
+ }
18
+ """
19
+
20
+ _DESCRIPTION = """\
21
+ A comprehensive, structured taxonomy for mapping multimodal AI model capabilities across input and output modalities.
22
+ This dataset provides a systematic categorization of multimodal AI capabilities, enabling users to navigate the complex
23
+ landscape of multimodal AI models, filter by specific input/output modality combinations, and discover models that match
24
+ specific use case requirements.
25
+ """
26
+
27
+ _HOMEPAGE = "https://huggingface.co/datasets/YOUR_USERNAME/multimodal-ai-taxonomy"
28
+
29
+ _LICENSE = "cc0-1.0"
30
+
31
+ _URLS = {
32
+ "schema": "taxonomy/schema.json",
33
+ "video_generation_creation": "taxonomy/video-generation/creation/modalities.json",
34
+ "video_generation_editing": "taxonomy/video-generation/editing/modalities.json",
35
+ "audio_generation_creation": "taxonomy/audio-generation/creation/modalities.json",
36
+ "audio_generation_editing": "taxonomy/audio-generation/editing/modalities.json",
37
+ "image_generation_creation": "taxonomy/image-generation/creation/modalities.json",
38
+ "image_generation_editing": "taxonomy/image-generation/editing/modalities.json",
39
+ "text_generation_creation": "taxonomy/text-generation/creation/modalities.json",
40
+ "text_generation_editing": "taxonomy/text-generation/editing/modalities.json",
41
+ "3d_generation_creation": "taxonomy/3d-generation/creation/modalities.json",
42
+ "3d_generation_editing": "taxonomy/3d-generation/editing/modalities.json",
43
+ }
44
+
45
+
46
+ class MultimodalAITaxonomy(datasets.GeneratorBasedBuilder):
47
+ """Multimodal AI Taxonomy dataset."""
48
+
49
+ VERSION = datasets.Version("1.0.0")
50
+
51
+ BUILDER_CONFIGS = [
52
+ datasets.BuilderConfig(
53
+ name="all",
54
+ version=VERSION,
55
+ description="Complete taxonomy with all modalities",
56
+ ),
57
+ datasets.BuilderConfig(
58
+ name="video_generation_creation",
59
+ version=VERSION,
60
+ description="Video generation (creation) modalities",
61
+ ),
62
+ datasets.BuilderConfig(
63
+ name="video_generation_editing",
64
+ version=VERSION,
65
+ description="Video generation (editing) modalities",
66
+ ),
67
+ datasets.BuilderConfig(
68
+ name="audio_generation_creation",
69
+ version=VERSION,
70
+ description="Audio generation (creation) modalities",
71
+ ),
72
+ datasets.BuilderConfig(
73
+ name="audio_generation_editing",
74
+ version=VERSION,
75
+ description="Audio generation (editing) modalities",
76
+ ),
77
+ datasets.BuilderConfig(
78
+ name="image_generation_creation",
79
+ version=VERSION,
80
+ description="Image generation (creation) modalities",
81
+ ),
82
+ datasets.BuilderConfig(
83
+ name="image_generation_editing",
84
+ version=VERSION,
85
+ description="Image generation (editing) modalities",
86
+ ),
87
+ datasets.BuilderConfig(
88
+ name="text_generation_creation",
89
+ version=VERSION,
90
+ description="Text generation (creation) modalities",
91
+ ),
92
+ datasets.BuilderConfig(
93
+ name="text_generation_editing",
94
+ version=VERSION,
95
+ description="Text generation (editing) modalities",
96
+ ),
97
+ datasets.BuilderConfig(
98
+ name="3d_generation_creation",
99
+ version=VERSION,
100
+ description="3D generation (creation) modalities",
101
+ ),
102
+ datasets.BuilderConfig(
103
+ name="3d_generation_editing",
104
+ version=VERSION,
105
+ description="3D generation (editing) modalities",
106
+ ),
107
+ ]
108
+
109
+ DEFAULT_CONFIG_NAME = "all"
110
+
111
+ def _info(self):
112
+ features = datasets.Features(
113
+ {
114
+ "id": datasets.Value("string"),
115
+ "name": datasets.Value("string"),
116
+ "input_primary": datasets.Value("string"),
117
+ "input_secondary": datasets.Sequence(datasets.Value("string")),
118
+ "output_primary": datasets.Value("string"),
119
+ "output_audio": datasets.Value("bool"),
120
+ "output_audio_type": datasets.Value("string"),
121
+ "characteristics": datasets.Value("string"), # JSON string for flexibility
122
+ "metadata_maturity_level": datasets.Value("string"),
123
+ "metadata_common_use_cases": datasets.Sequence(datasets.Value("string")),
124
+ "metadata_platforms": datasets.Sequence(datasets.Value("string")),
125
+ "metadata_example_models": datasets.Sequence(datasets.Value("string")),
126
+ "relationships": datasets.Value("string"), # JSON string for flexibility
127
+ "file_output_modality": datasets.Value("string"),
128
+ "file_operation_type": datasets.Value("string"),
129
+ }
130
+ )
131
+
132
+ return datasets.DatasetInfo(
133
+ description=_DESCRIPTION,
134
+ features=features,
135
+ homepage=_HOMEPAGE,
136
+ license=_LICENSE,
137
+ citation=_CITATION,
138
+ )
139
+
140
+ def _split_generators(self, dl_manager):
141
+ """Returns SplitGenerators."""
142
+
143
+ # Download/locate all files
144
+ if self.config.name == "all":
145
+ # Load all modality files
146
+ config_names = [k for k in _URLS.keys() if k != "schema"]
147
+ else:
148
+ # Load only the specified config
149
+ config_names = [self.config.name]
150
+
151
+ return [
152
+ datasets.SplitGenerator(
153
+ name=datasets.Split.TRAIN,
154
+ gen_kwargs={
155
+ "config_names": config_names,
156
+ "dl_manager": dl_manager,
157
+ },
158
+ ),
159
+ ]
160
+
161
+ def _generate_examples(self, config_names, dl_manager):
162
+ """Yields examples from the taxonomy."""
163
+
164
+ idx = 0
165
+ for config_name in config_names:
166
+ filepath = _URLS[config_name]
167
+
168
+ # Read the JSON file
169
+ with open(dl_manager.download(filepath), encoding="utf-8") as f:
170
+ data = json.load(f)
171
+
172
+ output_modality = data.get("outputModality", "")
173
+ operation_type = data.get("operationType", "")
174
+
175
+ # Process each modality in the file
176
+ for modality in data.get("modalities", []):
177
+ # Extract input information
178
+ input_data = modality.get("input", {})
179
+ input_primary = input_data.get("primary", "")
180
+ input_secondary = input_data.get("secondary", [])
181
+
182
+ # Extract output information
183
+ output_data = modality.get("output", {})
184
+ output_primary = output_data.get("primary", "")
185
+ output_audio = output_data.get("audio", False)
186
+ output_audio_type = output_data.get("audioType", "")
187
+
188
+ # Extract metadata
189
+ metadata = modality.get("metadata", {})
190
+ maturity_level = metadata.get("maturityLevel", "")
191
+ common_use_cases = metadata.get("commonUseCases", [])
192
+ platforms = metadata.get("platforms", [])
193
+ example_models = metadata.get("exampleModels", [])
194
+
195
+ # Keep characteristics and relationships as JSON strings for flexibility
196
+ characteristics = json.dumps(modality.get("characteristics", {}))
197
+ relationships = json.dumps(modality.get("relationships", {}))
198
+
199
+ yield idx, {
200
+ "id": modality.get("id", ""),
201
+ "name": modality.get("name", ""),
202
+ "input_primary": input_primary,
203
+ "input_secondary": input_secondary,
204
+ "output_primary": output_primary,
205
+ "output_audio": output_audio,
206
+ "output_audio_type": output_audio_type,
207
+ "characteristics": characteristics,
208
+ "metadata_maturity_level": maturity_level,
209
+ "metadata_common_use_cases": common_use_cases,
210
+ "metadata_platforms": platforms,
211
+ "metadata_example_models": example_models,
212
+ "relationships": relationships,
213
+ "file_output_modality": output_modality,
214
+ "file_operation_type": operation_type,
215
+ }
216
+
217
+ idx += 1
taxonomy/3d-generation/creation/modalities.json ADDED
@@ -0,0 +1,60 @@
1
+ {
2
+ "fileType": "multimodal-ai-taxonomy",
3
+ "outputModality": "3d-model",
4
+ "operationType": "creation",
5
+ "description": "Modalities for creating 3D model content from text, images, or other inputs",
6
+ "modalities": [
7
+ {
8
+ "id": "text-to-3d",
9
+ "name": "Text to 3D Model",
10
+ "input": {
11
+ "primary": "text",
12
+ "secondary": []
13
+ },
14
+ "output": {
15
+ "primary": "3d-model",
16
+ "audio": false
17
+ },
18
+ "characteristics": {
19
+ "processType": "synthesis",
20
+ "generationType": "3d-synthesis"
21
+ },
22
+ "metadata": {
23
+ "maturityLevel": "emerging",
24
+ "commonUseCases": [
25
+ "3D asset generation",
26
+ "Rapid prototyping",
27
+ "Game asset creation"
28
+ ],
29
+ "platforms": ["Replicate", "Meshy", "3DFY"],
30
+ "exampleModels": ["Point-E", "Shap-E", "DreamFusion"]
31
+ }
32
+ },
33
+ {
34
+ "id": "img-to-3d",
35
+ "name": "Image to 3D Model",
36
+ "input": {
37
+ "primary": "image",
38
+ "secondary": []
39
+ },
40
+ "output": {
41
+ "primary": "3d-model",
42
+ "audio": false
43
+ },
44
+ "characteristics": {
45
+ "processType": "synthesis",
46
+ "generationType": "3d-reconstruction"
47
+ },
48
+ "metadata": {
49
+ "maturityLevel": "emerging",
50
+ "commonUseCases": [
51
+ "3D reconstruction",
52
+ "Object digitization",
53
+ "Asset creation from photos"
54
+ ],
55
+ "platforms": ["Replicate", "Meshy", "Luma AI"],
56
+ "exampleModels": ["Zero-1-to-3", "Wonder3D"]
57
+ }
58
+ }
59
+ ]
60
+ }
taxonomy/3d-generation/editing/modalities.json ADDED
@@ -0,0 +1,7 @@
1
+ {
2
+ "fileType": "multimodal-ai-taxonomy",
3
+ "outputModality": "3d-model",
4
+ "operationType": "editing",
5
+ "description": "Modalities for editing and transforming existing 3D model content (placeholder for future expansion)",
6
+ "modalities": []
7
+ }
taxonomy/README.md ADDED
@@ -0,0 +1,257 @@
1
+ # Multimodal AI Taxonomy
2
+
3
+ A comprehensive, folder-based taxonomy for mapping multimodal AI model capabilities across input and output modalities.
4
+
5
+ ## Structure
6
+
7
+ The taxonomy is organized by **output modality** with subfolders for **creation** vs **editing** operations:
8
+
9
+ ```
10
+ taxonomy/
11
+ ├── schema.json # Common schema definition for all modality files
12
+ ├── video-generation/
13
+ │ ├── creation/
14
+ │ │ └── modalities.json # Creating video from scratch
15
+ │ └── editing/
16
+ │ └── modalities.json # Transforming existing video
17
+ ├── audio-generation/
18
+ │ ├── creation/
19
+ │ │ └── modalities.json # Creating audio from scratch
20
+ │ └── editing/
21
+ │ └── modalities.json # Transforming existing audio
22
+ ├── image-generation/
23
+ │ ├── creation/
24
+ │ │ └── modalities.json # Creating images from scratch
25
+ │ └── editing/
26
+ │ └── modalities.json # Transforming existing images
27
+ ├── text-generation/
28
+ │ ├── creation/
29
+ │ │ └── modalities.json # Creating text from scratch (future)
30
+ │ └── editing/
31
+ │ └── modalities.json # Transforming existing text (future)
32
+ └── 3d-generation/
33
+ ├── creation/
34
+ │ └── modalities.json # Creating 3D models
35
+ └── editing/
36
+ └── modalities.json # Transforming 3D models (future)
37
+ ```
38
+
39
+ ## Organizational Principles
40
+
41
+ ### 1. Output Modality Organization
42
+ - Folders are organized by **what is being generated/produced**
43
+ - Example: "Image to Video" lives in `video-generation/` because video is the output
44
+
45
+ ### 2. Creation vs Editing
46
+ - **Creation**: Generating new content from scratch (text-to-video, image-to-video, etc.)
47
+ - **Editing**: Modifying existing content (video-to-video, audio-to-audio, image-to-image)
48
+
49
+ ### 3. Multimodal Inputs
50
+ - When multiple inputs are used (e.g., image + audio → video), file placement is determined by the **output** modality
51
+ - Example: "Image + Audio to Video" goes in `video-generation/creation/`
52
+
53
+ ## Schema Definition
54
+
55
+ All modality JSON files follow a common schema defined in `schema.json`. Each file contains:
56
+
57
+ ```json
58
+ {
59
+ "fileType": "multimodal-ai-taxonomy",
60
+ "outputModality": "video|audio|image|text|3d-model",
61
+ "operationType": "creation|editing",
62
+ "description": "Human-readable description",
63
+ "modalities": [
64
+ // Array of modality objects
65
+ ]
66
+ }
67
+ ```
68
+
69
+ ### Modality Object Structure
70
+
71
+ Each modality in the array follows this pattern:
72
+
73
+ ```json
74
+ {
75
+ "id": "unique-kebab-case-id",
76
+ "name": "Human Readable Name",
77
+ "input": {
78
+ "primary": "main-input-type",
79
+ "secondary": ["additional", "input", "types"]
80
+ },
81
+ "output": {
82
+ "primary": "output-type",
83
+ "audio": true|false,
84
+ "audioType": "speech|music|ambient|etc"
85
+ },
86
+ "characteristics": {
87
+ // Flexible object with modality-specific characteristics
88
+ // See schema.json for available fields
89
+ },
90
+ "metadata": {
91
+ "maturityLevel": "experimental|emerging|mature",
92
+ "commonUseCases": ["use case 1", "use case 2"],
93
+ "platforms": ["Platform 1", "Platform 2"],
94
+ "exampleModels": ["Model 1", "Model 2"]
95
+ },
96
+ "relationships": {
97
+ // Optional: parent/child/related modality IDs
98
+ }
99
+ }
100
+ ```
101
+
102
+ ## For AI Agents
103
+
104
+ When working with this taxonomy:
105
+
106
+ ### Adding New Modalities
107
+
108
+ 1. Determine the **output modality** (video, audio, image, text, 3d-model)
109
+ 2. Determine if it's **creation** (generating new) or **editing** (transforming existing)
110
+ 3. Navigate to the appropriate folder (e.g., `video-generation/creation/`)
111
+ 4. Add the new modality object to the `modalities` array in `modalities.json`
112
+ 5. Follow the schema defined in `schema.json`
113
+
114
+ ### Example: Adding "Text to Audio with Emotion Control"
115
+
116
+ This generates audio (output) from scratch (creation):
117
+
118
+ ```bash
119
+ # Edit: audio-generation/creation/modalities.json
120
+ # Add to the modalities array:
121
+ {
122
+ "id": "text-to-audio-emotion-control",
123
+ "name": "Text to Audio (Emotion Control)",
124
+ "input": {
125
+ "primary": "text",
126
+ "secondary": []
127
+ },
128
+ "output": {
129
+ "primary": "audio",
130
+ "audioType": "speech"
131
+ },
132
+ "characteristics": {
133
+ "processType": "synthesis",
134
+ "audioType": "speech",
135
+ "voiceCloning": false,
136
+ "emotionControl": true
137
+ },
138
+ "metadata": {
139
+ "maturityLevel": "emerging",
140
+ "commonUseCases": [
141
+ "Emotional narration",
142
+ "Character voice acting",
143
+ "Interactive storytelling"
144
+ ],
145
+ "platforms": ["ElevenLabs", "Experimental"],
146
+ "exampleModels": []
147
+ }
148
+ }
149
+ ```
150
+
151
+ ### Creating New Modality Categories
152
+
153
+ If you need to add a new output modality type (e.g., `haptic-generation`):
154
+
155
+ 1. Create the folder structure:
156
+ ```bash
157
+ mkdir -p taxonomy/haptic-generation/{creation,editing}
158
+ ```
159
+
160
+ 2. Create `modalities.json` files in both subfolders:
161
+ ```json
162
+ {
163
+ "fileType": "multimodal-ai-taxonomy",
164
+ "outputModality": "haptic",
165
+ "operationType": "creation",
166
+ "description": "Modalities for creating haptic feedback from various inputs",
167
+ "modalities": []
168
+ }
169
+ ```
170
+
171
+ 3. Update `schema.json` to include "haptic" in the output modality enum
172
+
173
+ ### Querying the Taxonomy
174
+
175
+ To find modalities matching specific criteria:
176
+
177
+ 1. **By output type**: Navigate to the appropriate folder (e.g., `video-generation/`)
178
+ 2. **By operation**: Look in `creation/` or `editing/` subfolder
179
+ 3. **By characteristics**: Parse JSON and filter by characteristics fields
180
+ - Lip sync: `characteristics.lipSync === true`
181
+ - Audio generation: `output.audio === true`
182
+ - Maturity: `metadata.maturityLevel === "mature"`
183
+
184
+ ### Filtering Examples
185
+
186
+ **Use Case 1**: Find all ways to generate video with lip sync from text
187
+ ```
188
+ Folder: video-generation/creation/
189
+ Filter: characteristics.lipSync === true
190
+ AND characteristics.audioPrompting === "text-based"
191
+ Results: img-to-vid-lipsync-text, img-to-vid-lipsync-lora
192
+ ```
193
+
194
+ **Use Case 2**: Find mature image generation methods
195
+ ```
196
+ Folder: image-generation/creation/
197
+ Filter: metadata.maturityLevel === "mature"
198
+ Results: text-to-img
199
+ ```
200
+
201
+ **Use Case 3**: Find all audio editing capabilities
202
+ ```
203
+ Folder: audio-generation/editing/
204
+ Filter: All modalities in this folder
205
+ Results: audio-to-audio-inpainting, music-to-music-inpainting
206
+ ```
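+
+ The same filters can be applied programmatically. A minimal Python sketch for Use Case 1, run from the repository root against a local checkout:
+
+ ```python
+ import json
+
+ # Use Case 1: video creation modalities with lip sync prompted from text
+ with open("taxonomy/video-generation/creation/modalities.json", encoding="utf-8") as f:
+     data = json.load(f)
+
+ matches = [
+     m["id"]
+     for m in data["modalities"]
+     if m.get("characteristics", {}).get("lipSync") is True
+     and m.get("characteristics", {}).get("audioPrompting") == "text-based"
+ ]
+ print(matches)  # expected: ['img-to-vid-lipsync-text', 'img-to-vid-lipsync-lora']
+ ```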
207
+
208
+ ## Key Characteristics Fields
209
+
210
+ Different modality types use different characteristics fields. Common ones include:
211
+
212
+ - **processType**: synthesis, transformation, inpainting, editing, enhancement, rendering
213
+ - **audioGeneration**: none, synthesized, text-to-speech, reference-based
214
+ - **lipSync**: true/false
215
+ - **motionType**: general, facial, audio-driven, audio-reactive, guided, camera-path
216
+ - **transformationTypes**: array of transformation types for editing operations
217
+ - **maturityLevel**: experimental, emerging, mature
218
+
219
+ See `schema.json` for the complete list of available fields and their allowed values.
220
+
221
+ ## Extending the Schema
222
+
223
+ As new modality types emerge:
224
+
225
+ 1. Add new enum values to appropriate fields in `schema.json`
226
+ 2. Add new characteristics properties as needed
227
+ 3. Document the new fields in the schema's property descriptions
228
+ 4. Update this README with examples
229
+
230
+ ## Validation
231
+
232
+ To validate that a modality file follows the schema:
233
+
234
+ ```bash
235
+ # Using a JSON schema validator
236
+ npm install -g ajv-cli
237
+ ajv validate -s taxonomy/schema.json -d taxonomy/video-generation/creation/modalities.json
238
+ ```
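+
+ Because the constraints in `schema.json` are nested under the `modalitySchema` and `fileStructure` keys rather than sitting at the document root, a validator pointed at the file as-is may not enforce them. An alternative sketch in Python that checks each entry directly against the nested `modalitySchema` (using the third-party `jsonschema` package):
+
+ ```python
+ import json
+ from jsonschema import validate  # pip install jsonschema
+
+ with open("taxonomy/schema.json", encoding="utf-8") as f:
+     modality_schema = json.load(f)["modalitySchema"]
+
+ with open("taxonomy/video-generation/creation/modalities.json", encoding="utf-8") as f:
+     modalities = json.load(f)["modalities"]
+
+ # Raises jsonschema.ValidationError on the first non-conforming entry
+ for m in modalities:
+     validate(instance=m, schema=modality_schema)
+ print(f"{len(modalities)} modalities validated")
+ ```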
239
+
240
+ ## Migration from Original Structure
241
+
242
+ The original `taxonomy.json` file has been migrated to this folder-based structure:
243
+
244
+ - Single flat array → Organized by output modality and operation type
245
+ - Categories metadata → Implicit in folder structure
246
+ - All modality objects → Distributed across appropriate modality files
247
+ - Schema enforcement → Explicit schema.json for validation
248
+
249
+ ## Benefits of This Structure
250
+
251
+ 1. **Scalability**: Easy to add new modalities without file bloat
252
+ 2. **Organization**: Clear categorization by output and operation type
253
+ 3. **Discoverability**: Navigate folder structure to find relevant modalities
254
+ 4. **Maintainability**: Each file focuses on a specific domain
255
+ 5. **Agent-friendly**: Clear patterns for AI agents to follow when adding/querying
256
+ 6. **Extensible**: Easy to add new modality types as folders
257
+ 7. **Schema-enforced**: Common pattern ensures consistency
taxonomy/audio-generation/creation/modalities.json ADDED
@@ -0,0 +1,89 @@
1
+ {
2
+ "fileType": "multimodal-ai-taxonomy",
3
+ "outputModality": "audio",
4
+ "operationType": "creation",
5
+ "description": "Modalities for creating audio content from text or other inputs",
6
+ "modalities": [
7
+ {
8
+ "id": "text-to-audio",
9
+ "name": "Text to Audio",
10
+ "input": {
11
+ "primary": "text",
12
+ "secondary": []
13
+ },
14
+ "output": {
15
+ "primary": "audio",
16
+ "audioType": "general"
17
+ },
18
+ "characteristics": {
19
+ "processType": "synthesis",
20
+ "audioType": "general",
21
+ "audioCategories": ["speech", "sound-effects", "music", "ambient"]
22
+ },
23
+ "metadata": {
24
+ "maturityLevel": "mature",
25
+ "commonUseCases": [
26
+ "Sound effect generation",
27
+ "Voiceover creation",
28
+ "Audio asset production"
29
+ ],
30
+ "platforms": ["Replicate", "ElevenLabs", "AudioCraft"],
31
+ "exampleModels": ["AudioGen", "MusicGen"]
32
+ }
33
+ },
34
+ {
35
+ "id": "text-to-speech",
36
+ "name": "Text to Speech",
37
+ "input": {
38
+ "primary": "text",
39
+ "secondary": []
40
+ },
41
+ "output": {
42
+ "primary": "audio",
43
+ "audioType": "speech"
44
+ },
45
+ "characteristics": {
46
+ "processType": "synthesis",
47
+ "audioType": "speech",
48
+ "voiceCloning": false
49
+ },
50
+ "metadata": {
51
+ "maturityLevel": "mature",
52
+ "commonUseCases": [
53
+ "Narration",
54
+ "Accessibility",
55
+ "Voice assistants"
56
+ ],
57
+ "platforms": ["ElevenLabs", "Google Cloud", "Azure", "AWS"],
58
+ "exampleModels": ["ElevenLabs", "Google WaveNet", "Azure Neural TTS"]
59
+ }
60
+ },
61
+ {
62
+ "id": "text-to-music",
63
+ "name": "Text to Music",
64
+ "input": {
65
+ "primary": "text",
66
+ "secondary": []
67
+ },
68
+ "output": {
69
+ "primary": "audio",
70
+ "audioType": "music"
71
+ },
72
+ "characteristics": {
73
+ "processType": "synthesis",
74
+ "audioType": "music",
75
+ "melodic": true
76
+ },
77
+ "metadata": {
78
+ "maturityLevel": "emerging",
79
+ "commonUseCases": [
80
+ "Background music generation",
81
+ "Musical composition",
82
+ "Soundtrack creation"
83
+ ],
84
+ "platforms": ["Replicate", "Stability AI"],
85
+ "exampleModels": ["MusicGen", "Stable Audio"]
86
+ }
87
+ }
88
+ ]
89
+ }
taxonomy/audio-generation/editing/modalities.json ADDED
@@ -0,0 +1,66 @@
1
+ {
2
+ "fileType": "multimodal-ai-taxonomy",
3
+ "outputModality": "audio",
4
+ "operationType": "editing",
5
+ "description": "Modalities for editing and transforming existing audio content",
6
+ "modalities": [
7
+ {
8
+ "id": "audio-to-audio-inpainting",
9
+ "name": "Audio to Audio (Inpainting)",
10
+ "input": {
11
+ "primary": "audio",
12
+ "secondary": ["text"]
13
+ },
14
+ "output": {
15
+ "primary": "audio",
16
+ "audioType": "general"
17
+ },
18
+ "characteristics": {
19
+ "processType": "inpainting",
20
+ "modification": "selective-editing"
21
+ },
22
+ "metadata": {
23
+ "maturityLevel": "emerging",
24
+ "commonUseCases": [
25
+ "Audio editing",
26
+ "Sound design",
27
+ "Audio restoration"
28
+ ],
29
+ "platforms": ["Experimental"],
30
+ "exampleModels": []
31
+ }
32
+ },
33
+ {
34
+ "id": "music-to-music-inpainting",
35
+ "name": "Music to Music (Inpainting)",
36
+ "input": {
37
+ "primary": "audio",
38
+ "secondary": ["text"]
39
+ },
40
+ "output": {
41
+ "primary": "audio",
42
+ "audioType": "music"
43
+ },
44
+ "characteristics": {
45
+ "processType": "inpainting",
46
+ "modification": "selective-editing",
47
+ "melodic": true,
48
+ "audioSubtype": "music"
49
+ },
50
+ "metadata": {
51
+ "maturityLevel": "experimental",
52
+ "commonUseCases": [
53
+ "Music editing",
54
+ "Compositional modifications",
55
+ "Arrangement changes"
56
+ ],
57
+ "platforms": ["Experimental"],
58
+ "exampleModels": []
59
+ },
60
+ "relationships": {
61
+ "parent": "audio-to-audio-inpainting",
62
+ "note": "Music inpainting is a specialized subset of audio inpainting"
63
+ }
64
+ }
65
+ ]
66
+ }
taxonomy/image-generation/creation/modalities.json ADDED
@@ -0,0 +1,34 @@
1
+ {
2
+ "fileType": "multimodal-ai-taxonomy",
3
+ "outputModality": "image",
4
+ "operationType": "creation",
5
+ "description": "Modalities for creating image content from text or other inputs",
6
+ "modalities": [
7
+ {
8
+ "id": "text-to-img",
9
+ "name": "Text to Image",
10
+ "input": {
11
+ "primary": "text",
12
+ "secondary": []
13
+ },
14
+ "output": {
15
+ "primary": "image",
16
+ "audio": false
17
+ },
18
+ "characteristics": {
19
+ "processType": "synthesis",
20
+ "generationType": "synthesis"
21
+ },
22
+ "metadata": {
23
+ "maturityLevel": "mature",
24
+ "commonUseCases": [
25
+ "Concept art generation",
26
+ "Product mockups",
27
+ "Marketing assets"
28
+ ],
29
+ "platforms": ["Replicate", "Stability AI", "Midjourney", "DALL-E"],
30
+ "exampleModels": ["Stable Diffusion", "DALL-E 3", "Midjourney"]
31
+ }
32
+ }
33
+ ]
34
+ }
taxonomy/image-generation/editing/modalities.json ADDED
@@ -0,0 +1,35 @@
1
+ {
2
+ "fileType": "multimodal-ai-taxonomy",
3
+ "outputModality": "image",
4
+ "operationType": "editing",
5
+ "description": "Modalities for editing and transforming existing image content",
6
+ "modalities": [
7
+ {
8
+ "id": "img-to-img",
9
+ "name": "Image to Image",
10
+ "input": {
11
+ "primary": "image",
12
+ "secondary": ["text"]
13
+ },
14
+ "output": {
15
+ "primary": "image",
16
+ "audio": false
17
+ },
18
+ "characteristics": {
19
+ "processType": "transformation",
20
+ "transformationTypes": ["style-transfer", "enhancement", "editing", "inpainting"]
21
+ },
22
+ "metadata": {
23
+ "maturityLevel": "mature",
24
+ "commonUseCases": [
25
+ "Image editing",
26
+ "Style transfer",
27
+ "Image enhancement",
28
+ "Object removal/addition"
29
+ ],
30
+ "platforms": ["Replicate", "Stability AI", "Midjourney"],
31
+ "exampleModels": ["Stable Diffusion img2img", "ControlNet"]
32
+ }
33
+ }
34
+ ]
35
+ }
taxonomy/schema.json ADDED
@@ -0,0 +1,291 @@
1
+ {
2
+ "$schema": "http://json-schema.org/draft-07/schema#",
3
+ "title": "Multimodal AI Taxonomy Schema",
4
+ "version": "1.0.0",
5
+ "description": "Common schema for all modality JSON arrays in the taxonomy. Each modality file should contain an array of modality objects following this schema.",
6
+
7
+ "modalitySchema": {
8
+ "type": "object",
9
+ "required": ["id", "name", "input", "output", "characteristics", "metadata"],
10
+ "properties": {
11
+ "id": {
12
+ "type": "string",
13
+ "description": "Unique identifier for this modality (kebab-case)",
14
+ "pattern": "^[a-z0-9-]+$"
15
+ },
16
+ "name": {
17
+ "type": "string",
18
+ "description": "Human-readable name for this modality"
19
+ },
20
+ "input": {
21
+ "type": "object",
22
+ "required": ["primary"],
23
+ "properties": {
24
+ "primary": {
25
+ "type": "string",
26
+ "description": "Primary input modality",
27
+ "enum": ["text", "image", "video", "audio", "music", "3d-model", "lora-model"]
28
+ },
29
+ "secondary": {
30
+ "type": "array",
31
+ "description": "Additional input modalities that can be provided",
32
+ "items": {
33
+ "type": "string",
34
+ "enum": ["text", "image", "video", "audio", "music", "3d-model", "lora-model"]
35
+ }
36
+ }
37
+ }
38
+ },
39
+ "output": {
40
+ "type": "object",
41
+ "required": ["primary"],
42
+ "properties": {
43
+ "primary": {
44
+ "type": "string",
45
+ "description": "Primary output modality",
46
+ "enum": ["text", "image", "video", "audio", "music", "3d-model"]
47
+ },
48
+ "audio": {
49
+ "type": "boolean",
50
+ "description": "Whether audio is included in the output (for video outputs)"
51
+ },
52
+ "audioType": {
53
+ "type": "string",
54
+ "description": "Type of audio in the output",
55
+ "enum": ["speech", "music", "ambient", "synchronized", "original", "general"]
56
+ }
57
+ }
58
+ },
59
+ "characteristics": {
60
+ "type": "object",
61
+ "description": "Specific characteristics and capabilities of this modality. Fields vary based on modality type.",
62
+ "properties": {
63
+ "processType": {
64
+ "type": "string",
65
+ "description": "Type of processing being performed",
66
+ "enum": ["synthesis", "transformation", "inpainting", "editing", "enhancement", "rendering"]
67
+ },
68
+ "audioGeneration": {
69
+ "type": "string",
70
+ "description": "How audio is generated",
71
+ "enum": ["none", "synthesized", "text-to-speech", "reference-based"]
72
+ },
73
+ "audioPrompting": {
74
+ "type": "string",
75
+ "description": "How audio generation is prompted",
76
+ "enum": ["text-based", "audio-reference", "none"]
77
+ },
78
+ "lipSync": {
79
+ "type": "boolean",
80
+ "description": "Whether lip sync is supported/generated"
81
+ },
82
+ "lipSyncMethod": {
83
+ "type": "string",
84
+ "description": "Method used for lip sync",
85
+ "enum": ["generated-from-text", "audio-driven"]
86
+ },
87
+ "motionType": {
88
+ "type": "string",
89
+ "description": "Type of motion in video output",
90
+ "enum": ["general", "facial", "audio-driven", "audio-reactive", "guided", "camera-path"]
91
+ },
92
+ "audioCharacteristics": {
93
+ "type": "array",
94
+ "description": "Specific characteristics of generated audio",
95
+ "items": {
96
+ "type": "string",
97
+ "enum": ["background", "environmental", "atmospheric", "melodic", "rhythmic"]
98
+ }
99
+ },
100
+ "transformationTypes": {
101
+ "type": "array",
102
+ "description": "Types of transformations supported",
103
+ "items": {
104
+ "type": "string",
105
+ "enum": ["style-transfer", "enhancement", "editing", "inpainting", "motion-modification", "object-editing"]
106
+ }
107
+ },
108
+ "preserveAudio": {
109
+ "type": "boolean",
110
+ "description": "Whether original audio is preserved in video transformations"
111
+ },
112
+ "audioHandling": {
113
+ "type": "string",
114
+ "description": "How audio is handled during processing",
115
+ "enum": ["passthrough", "removed", "generated", "modified"]
116
+ },
117
+ "characterReference": {
118
+ "type": "string",
119
+ "description": "Method for character reference/consistency",
120
+ "enum": ["lora", "image", "video"]
121
+ },
122
+ "audioVideoSync": {
123
+ "type": "boolean",
124
+ "description": "Whether audio and video are synchronized/coherent"
125
+ },
126
+ "audioVisualization": {
127
+ "type": "boolean",
128
+ "description": "Whether visuals are generated from audio"
129
+ },
130
+ "audioType": {
131
+ "type": "string",
132
+ "description": "Type of audio output",
133
+ "enum": ["speech", "music", "ambient", "general", "sound-effects"]
134
+ },
135
+ "audioCategories": {
136
+ "type": "array",
137
+ "description": "Categories of audio that can be generated",
138
+ "items": {
139
+ "type": "string",
140
+ "enum": ["speech", "sound-effects", "music", "ambient"]
141
+ }
142
+ },
143
+ "voiceCloning": {
144
+ "type": "boolean",
145
+ "description": "Whether voice cloning is supported"
146
+ },
147
+ "melodic": {
148
+ "type": "boolean",
149
+ "description": "Whether output is melodic (for music/audio)"
150
+ },
151
+ "audioSubtype": {
152
+ "type": "string",
153
+ "description": "Specific subtype of audio",
154
+ "enum": ["music", "speech", "effects", "ambient"]
155
+ },
156
+ "modification": {
157
+ "type": "string",
158
+ "description": "Type of modification being performed",
159
+ "enum": ["selective-editing", "enhancement", "restoration", "transformation"]
160
+ },
161
+ "generationType": {
162
+ "type": "string",
163
+ "description": "Type of generation process",
164
+ "enum": ["synthesis", "3d-synthesis", "3d-reconstruction"]
165
+ },
166
+ "renderType": {
167
+ "type": "string",
168
+ "description": "Type of rendering process",
169
+ "enum": ["3d-rendering", "2d-rendering"]
170
+ },
171
+ "guidanceType": {
172
+ "type": "string",
173
+ "description": "Type of guidance provided to the model",
174
+ "enum": ["text-only", "text-and-visual", "audio-driven", "multimodal"]
175
+ }
176
+ }
177
+ },
178
+ "metadata": {
179
+ "type": "object",
180
+ "required": ["maturityLevel", "commonUseCases"],
181
+ "properties": {
182
+ "maturityLevel": {
183
+ "type": "string",
184
+ "description": "How established/stable this modality is",
185
+ "enum": ["experimental", "emerging", "mature"]
186
+ },
187
+ "commonUseCases": {
188
+ "type": "array",
189
+ "description": "Common use cases for this modality",
190
+ "items": {
191
+ "type": "string"
192
+ },
193
+ "minItems": 1
194
+ },
195
+ "platforms": {
196
+ "type": "array",
197
+ "description": "Platforms/services that support this modality",
198
+ "items": {
199
+ "type": "string"
200
+ }
201
+ },
202
+ "exampleModels": {
203
+ "type": "array",
204
+ "description": "Example models that implement this modality",
205
+ "items": {
206
+ "type": "string"
207
+ }
208
+ }
209
+ }
210
+ },
211
+ "relationships": {
212
+ "type": "object",
213
+ "description": "Relationships to other modalities (optional)",
214
+ "properties": {
215
+ "parent": {
216
+ "type": "string",
217
+ "description": "Parent modality ID if this is a specialization"
218
+ },
219
+ "children": {
220
+ "type": "array",
221
+ "description": "Child modality IDs that specialize this modality",
222
+ "items": {
223
+ "type": "string"
224
+ }
225
+ },
226
+ "related": {
227
+ "type": "array",
228
+ "description": "Related modality IDs",
229
+ "items": {
230
+ "type": "string"
231
+ }
232
+ },
233
+ "note": {
234
+ "type": "string",
235
+ "description": "Additional notes about the relationship"
236
+ }
237
+ }
238
+ }
239
+ }
240
+ },
241
+
242
+ "fileStructure": {
243
+ "description": "Each modality JSON file should be an object with metadata and an array of modalities",
244
+ "type": "object",
245
+ "required": ["fileType", "outputModality", "operationType", "modalities"],
246
+ "properties": {
247
+ "fileType": {
248
+ "type": "string",
249
+ "const": "multimodal-ai-taxonomy",
250
+ "description": "Identifier for this file type"
251
+ },
252
+ "outputModality": {
253
+ "type": "string",
254
+ "description": "The primary output modality this file describes",
255
+ "enum": ["video", "audio", "image", "text", "3d-model"]
256
+ },
257
+ "operationType": {
258
+ "type": "string",
259
+ "description": "Whether this file describes creation or editing operations",
260
+ "enum": ["creation", "editing"]
261
+ },
262
+ "description": {
263
+ "type": "string",
264
+ "description": "Human-readable description of what this file contains"
265
+ },
266
+ "modalities": {
267
+ "type": "array",
268
+ "description": "Array of modality objects following the modalitySchema",
269
+ "items": {
270
+ "$ref": "#/modalitySchema"
271
+ }
272
+ }
273
+ }
274
+ },
275
+
276
+ "usageInstructions": {
277
+ "description": "Instructions for AI agents working with this taxonomy",
278
+ "guidelines": [
279
+ "Each JSON file in the taxonomy represents a collection of related modalities",
280
+ "All modality objects must follow the modalitySchema defined above",
281
+ "Files are organized by output modality (video-generation, audio-generation, etc.)",
282
+ "Within each modality folder, files are separated into creation/ and editing/ subfolders",
283
+ "Creation: Generating content from scratch (text-to-video, image-to-video, etc.)",
284
+ "Editing: Modifying existing content (video-to-video, audio-to-audio, image-to-image, etc.)",
285
+ "For multimodal inputs, file placement is determined by OUTPUT modality",
286
+ "The characteristics object is flexible - use appropriate fields for each modality type",
287
+ "Maintain consistent IDs using kebab-case format",
288
+ "Add new enum values to the schema as new modality types emerge"
289
+ ]
290
+ }
291
+ }
taxonomy/text-generation/creation/modalities.json ADDED
@@ -0,0 +1,7 @@
1
+ {
2
+ "fileType": "multimodal-ai-taxonomy",
3
+ "outputModality": "text",
4
+ "operationType": "creation",
5
+ "description": "Modalities for creating text content from various inputs (placeholder for future expansion)",
6
+ "modalities": []
7
+ }
taxonomy/text-generation/editing/modalities.json ADDED
@@ -0,0 +1,7 @@
1
+ {
2
+ "fileType": "multimodal-ai-taxonomy",
3
+ "outputModality": "text",
4
+ "operationType": "editing",
5
+ "description": "Modalities for editing and transforming existing text content (placeholder for future expansion)",
6
+ "modalities": []
7
+ }
taxonomy/video-generation/creation/modalities.json ADDED
@@ -0,0 +1,328 @@
1
+ {
2
+ "fileType": "multimodal-ai-taxonomy",
3
+ "outputModality": "video",
4
+ "operationType": "creation",
5
+ "description": "Modalities for creating video content from various input types (text, images, audio, 3D models)",
6
+ "modalities": [
7
+ {
8
+ "id": "img-to-vid-no-audio",
9
+ "name": "Image to Video (No Audio)",
10
+ "input": {
11
+ "primary": "image",
12
+ "secondary": []
13
+ },
14
+ "output": {
15
+ "primary": "video",
16
+ "audio": false
17
+ },
18
+ "characteristics": {
19
+ "processType": "synthesis",
20
+ "audioGeneration": "none",
21
+ "lipSync": false,
22
+ "motionType": "general"
23
+ },
24
+ "metadata": {
25
+ "maturityLevel": "mature",
26
+ "commonUseCases": [
27
+ "Static image animation",
28
+ "Product visualization",
29
+ "Concept previsualization"
30
+ ],
31
+ "platforms": ["Replicate", "FAL AI", "Stability AI"],
32
+ "exampleModels": ["Stable Video Diffusion", "AnimateDiff"]
33
+ }
34
+ },
35
+ {
36
+ "id": "img-to-vid-ambient-audio",
37
+ "name": "Image to Video (Ambient Audio)",
38
+ "input": {
39
+ "primary": "image",
40
+ "secondary": ["text"]
41
+ },
42
+ "output": {
43
+ "primary": "video",
44
+ "audio": true,
45
+ "audioType": "ambient"
46
+ },
47
+ "characteristics": {
48
+ "processType": "synthesis",
49
+ "audioGeneration": "synthesized",
50
+ "audioPrompting": "text-based",
51
+ "lipSync": false,
52
+ "motionType": "general",
53
+ "audioCharacteristics": ["background", "environmental", "atmospheric"]
54
+ },
55
+ "metadata": {
56
+ "maturityLevel": "emerging",
57
+ "commonUseCases": [
58
+ "Scene ambiance creation",
59
+ "Marketplace atmosphere",
60
+ "Environmental storytelling"
61
+ ],
62
+ "platforms": ["FAL AI", "Experimental"],
63
+ "exampleModels": []
64
+ }
65
+ },
66
+ {
67
+ "id": "img-to-vid-lipsync-text",
68
+ "name": "Image to Video (Lip Sync from Text)",
69
+ "input": {
70
+ "primary": "image",
71
+ "secondary": ["text"]
72
+ },
73
+ "output": {
74
+ "primary": "video",
75
+ "audio": true,
76
+ "audioType": "speech"
77
+ },
78
+ "characteristics": {
79
+ "processType": "synthesis",
80
+ "audioGeneration": "text-to-speech",
81
+ "audioPrompting": "text-based",
82
+ "lipSync": true,
83
+ "lipSyncMethod": "generated-from-text",
84
+ "motionType": "facial"
85
+ },
86
+ "metadata": {
87
+ "maturityLevel": "mature",
88
+ "commonUseCases": [
89
+ "Avatar creation",
90
+ "Character animation from portrait",
91
+ "Marketing personalization"
92
+ ],
93
+ "platforms": ["Replicate", "FAL AI", "HeyGen"],
94
+ "exampleModels": ["Wav2Lip", "SadTalker", "DreamTalk"]
95
+ }
96
+ },
97
+ {
98
+ "id": "img-to-vid-lipsync-audio",
99
+ "name": "Image to Video (Lip Sync from Audio)",
100
+ "input": {
101
+ "primary": "image",
102
+ "secondary": ["audio"]
103
+ },
104
+ "output": {
105
+ "primary": "video",
106
+ "audio": true,
107
+ "audioType": "speech"
108
+ },
109
+ "characteristics": {
110
+ "processType": "synthesis",
111
+ "audioGeneration": "reference-based",
112
+ "audioPrompting": "audio-reference",
113
+ "lipSync": true,
114
+ "lipSyncMethod": "audio-driven",
115
+ "motionType": "facial"
116
+ },
117
+ "metadata": {
118
+ "maturityLevel": "mature",
119
+ "commonUseCases": [
120
+ "Voice cloning with video",
121
+ "Dubbing and localization",
122
+ "Podcast video generation"
123
+ ],
124
+ "platforms": ["Replicate", "FAL AI"],
125
+ "exampleModels": ["Wav2Lip", "SadTalker"]
126
+ }
127
+ },
128
+ {
129
+ "id": "img-to-vid-lipsync-lora",
130
+ "name": "Image to Video (Lip Sync with LoRA Character)",
131
+ "input": {
132
+ "primary": "image",
133
+ "secondary": ["text", "lora-model"]
134
+ },
135
+ "output": {
136
+ "primary": "video",
137
+ "audio": true,
138
+ "audioType": "speech"
139
+ },
140
+ "characteristics": {
141
+ "processType": "synthesis",
142
+ "audioGeneration": "text-to-speech",
143
+ "audioPrompting": "text-based",
144
+ "lipSync": true,
145
+ "lipSyncMethod": "generated-from-text",
146
+ "characterReference": "lora",
147
+ "motionType": "facial"
148
+ },
149
+ "metadata": {
150
+ "maturityLevel": "experimental",
151
+ "commonUseCases": [
152
+ "Consistent character animation",
153
+ "Brand mascot videos",
154
+ "Personalized avatars"
155
+ ],
156
+ "platforms": ["Specialized services"],
157
+ "exampleModels": []
158
+ }
159
+ },
160
+ {
161
+ "id": "text-to-vid-no-audio",
162
+ "name": "Text to Video (No Audio)",
163
+ "input": {
164
+ "primary": "text",
165
+ "secondary": []
166
+ },
167
+ "output": {
168
+ "primary": "video",
169
+ "audio": false
170
+ },
171
+ "characteristics": {
172
+ "processType": "synthesis",
173
+ "audioGeneration": "none",
174
+ "motionType": "general"
175
+ },
176
+ "metadata": {
177
+ "maturityLevel": "emerging",
178
+ "commonUseCases": [
179
+ "Concept visualization",
180
+ "Storyboarding",
181
+ "Creative exploration"
182
+ ],
183
+ "platforms": ["Replicate", "FAL AI", "RunwayML"],
184
+ "exampleModels": ["ModelScope", "ZeroScope"]
185
+ }
186
+ },
187
+ {
188
+ "id": "text-to-vid-with-audio",
189
+ "name": "Text to Video (With Audio)",
190
+ "input": {
191
+ "primary": "text",
192
+ "secondary": []
193
+ },
194
+ "output": {
195
+ "primary": "video",
196
+ "audio": true,
197
+ "audioType": "synchronized"
198
+ },
199
+ "characteristics": {
200
+ "processType": "synthesis",
201
+ "audioGeneration": "synthesized",
202
+ "audioPrompting": "text-based",
203
+ "audioVideoSync": true,
204
+ "motionType": "general"
205
+ },
206
+ "metadata": {
207
+ "maturityLevel": "experimental",
208
+ "commonUseCases": [
209
+ "Complete scene generation",
210
+ "Multimedia storytelling"
211
+ ],
212
+ "platforms": ["Experimental"],
213
+ "exampleModels": []
214
+ }
215
+ },
216
+ {
217
+ "id": "audio-to-vid",
218
+ "name": "Audio to Video",
219
+ "input": {
220
+ "primary": "audio",
221
+ "secondary": ["text"]
222
+ },
223
+ "output": {
224
+ "primary": "video",
225
+ "audio": true,
226
+ "audioType": "original"
227
+ },
228
+ "characteristics": {
229
+ "processType": "synthesis",
230
+ "audioVisualization": true,
231
+ "motionType": "audio-reactive"
232
+ },
233
+ "metadata": {
234
+ "maturityLevel": "experimental",
235
+ "commonUseCases": [
236
+ "Music visualization",
237
+ "Audio-reactive art",
238
+ "Podcast video generation"
239
+ ],
240
+ "platforms": ["Experimental"],
241
+ "exampleModels": []
242
+ }
243
+ },
244
+ {
245
+ "id": "multimodal-img-audio-to-vid",
246
+ "name": "Image + Audio to Video",
247
+ "input": {
248
+ "primary": "image",
249
+ "secondary": ["audio"]
250
+ },
251
+ "output": {
252
+ "primary": "video",
253
+ "audio": true,
254
+ "audioType": "original"
255
+ },
256
+ "characteristics": {
257
+ "processType": "synthesis",
258
+ "audioGeneration": "reference-based",
259
+ "motionType": "audio-driven",
260
+ "lipSync": false
261
+ },
262
+ "metadata": {
263
+ "maturityLevel": "experimental",
264
+ "commonUseCases": [
265
+ "Audio-driven animation",
266
+ "Dance video generation",
267
+ "Music-driven motion"
268
+ ],
269
+ "platforms": ["Experimental"],
270
+ "exampleModels": []
271
+ }
272
+ },
273
+ {
274
+ "id": "multimodal-text-img-to-vid",
275
+ "name": "Text + Image to Video",
276
+ "input": {
277
+ "primary": "text",
278
+ "secondary": ["image"]
279
+ },
280
+ "output": {
281
+ "primary": "video",
282
+ "audio": false
283
+ },
284
+ "characteristics": {
285
+ "processType": "synthesis",
286
+ "guidanceType": "text-and-visual",
287
+ "motionType": "guided"
288
+ },
289
+ "metadata": {
290
+ "maturityLevel": "emerging",
291
+ "commonUseCases": [
292
+ "Guided video generation",
293
+ "Controlled animation",
294
+ "Reference-based video creation"
295
+ ],
296
+ "platforms": ["Replicate", "FAL AI"],
297
+ "exampleModels": ["AnimateDiff with ControlNet"]
298
+ }
299
+ },
300
+ {
301
+ "id": "3d-to-vid",
302
+ "name": "3D Model to Video",
303
+ "input": {
304
+ "primary": "3d-model",
305
+ "secondary": []
306
+ },
307
+ "output": {
308
+ "primary": "video",
309
+ "audio": false
310
+ },
311
+ "characteristics": {
312
+ "processType": "rendering",
313
+ "renderType": "3d-rendering",
314
+ "motionType": "camera-path"
315
+ },
316
+ "metadata": {
317
+ "maturityLevel": "mature",
318
+ "commonUseCases": [
319
+ "3D visualization",
320
+ "Product rendering",
321
+ "Architectural visualization"
322
+ ],
323
+ "platforms": ["Blender", "Unreal Engine", "Unity"],
324
+ "exampleModels": []
325
+ }
326
+ }
327
+ ]
328
+ }
taxonomy/video-generation/editing/modalities.json ADDED
@@ -0,0 +1,63 @@
1
+ {
2
+ "fileType": "multimodal-ai-taxonomy",
3
+ "outputModality": "video",
4
+ "operationType": "editing",
5
+ "description": "Modalities for editing and transforming existing video content",
6
+ "modalities": [
7
+ {
8
+ "id": "vid-to-vid-no-audio",
9
+ "name": "Video to Video (No Audio)",
10
+ "input": {
11
+ "primary": "video",
12
+ "secondary": ["text"]
13
+ },
14
+ "output": {
15
+ "primary": "video",
16
+ "audio": false
17
+ },
18
+ "characteristics": {
19
+ "processType": "transformation",
20
+ "transformationTypes": ["style-transfer", "motion-modification", "object-editing"],
21
+ "preserveAudio": false
22
+ },
23
+ "metadata": {
24
+ "maturityLevel": "emerging",
25
+ "commonUseCases": [
26
+ "Video style transfer",
27
+ "Video editing",
28
+ "Motion manipulation"
29
+ ],
30
+ "platforms": ["Replicate", "RunwayML"],
31
+ "exampleModels": ["Gen-2", "Video ControlNet"]
32
+ }
33
+ },
34
+ {
35
+ "id": "vid-to-vid-preserve-audio",
36
+ "name": "Video to Video (Preserve Audio)",
37
+ "input": {
38
+ "primary": "video",
39
+ "secondary": ["text"]
40
+ },
41
+ "output": {
42
+ "primary": "video",
43
+ "audio": true,
44
+ "audioType": "original"
45
+ },
46
+ "characteristics": {
47
+ "processType": "transformation",
48
+ "transformationTypes": ["style-transfer", "motion-modification", "object-editing"],
49
+ "preserveAudio": true,
50
+ "audioHandling": "passthrough"
51
+ },
52
+ "metadata": {
53
+ "maturityLevel": "emerging",
54
+ "commonUseCases": [
55
+ "Video style transfer with audio",
56
+ "Content transformation maintaining soundtrack"
57
+ ],
58
+ "platforms": ["Replicate", "RunwayML"],
59
+ "exampleModels": []
60
+ }
61
+ }
62
+ ]
63
+ }