The Flyworks MCP is a Model Context Protocol (MCP) server that provides a convenient interface for interacting with the Flyworks API. It facilitates fast and free lipsync video creation for a wide range of digital avatars, including realistic and cartoon styles.
Input avatar video (footage):
avatar.mp4
Audio clip with TTS saying: 我是一个飞影数字人 ("I am a Flyworks digital human.") Welcome to Flyworks MCP server demo. This tool enables fast and free lipsync video creation for a wide range of digital avatars, including realistic and cartoon styles.
audio.mp4
Generated lipsync video:
generated.2.mp4
- Create lipsynced videos using digital avatar video and audio as inputs
- Create lipsynced videos by text (with text-to-speech)
- Create digital human avatars from images or videos
- Support for both async and sync modes of operation
- More features coming soon...
- Python 3.8+
- Dependencies: httpx, mcp[cli]
Go to Claude > Settings > Developer > Edit Config > claude_desktop_config.json to include the following:
{
  "mcpServers": {
    "flyworks": {
      "command": "uvx",
      "args": [
        "flyworks-mcp",
        "-y"
      ],
      "env": {
        "FLYWORKS_API_TOKEN": "your_api_token_here",
        "FLYWORKS_API_BASE_URL": "https://hfw-api.hifly.cc/api/v2/hifly",
        "FLYWORKS_MCP_BASE_PATH": "/path/to/your/output/directory"
      }
    }
  }
}
Go to Cursor -> Preferences -> Cursor Settings -> MCP -> Add new global MCP Server to add the above config.
Make sure to replace your_api_token_here with your actual API token, and update FLYWORKS_MCP_BASE_PATH to a valid directory on your system where output files will be saved.
Note: We offer free trial access to our tool with the token 2aeda3bcefac46a3. However, please be aware that the daily quota for this free access is limited. Additionally, the generated videos will be watermarked and limited to 45 seconds in duration. For full access, please contact us at bd@flyworks.ai to acquire your token.
To install flyworks-mcp for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @Flyworks-AI/flyworks-mcp --client claude
- Clone this repository:
git clone https://github.com/yourusername/flyworks-mcp.git
cd flyworks-mcp
- Install dependencies:
pip install httpx "mcp[cli]>=1.6.0"
Or using uv:
uv pip install httpx "mcp[cli]>=1.6.0"
To avoid timeout issues during server startup, we recommend pre-installing all dependencies:
pip install pygments pydantic-core httpx "mcp[cli]>=1.6.0"
- Configuration
Set your Flyworks API token as an environment variable:
# Linux/macOS
export FLYWORKS_API_TOKEN="your_token_here"
# Windows (Command Prompt)
set FLYWORKS_API_TOKEN=your_token_here
# Windows (PowerShell)
$env:FLYWORKS_API_TOKEN="your_token_here"
Alternatively, you can create a .env file.
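For reference, a .env file in the project directory might look like the following; the variable names mirror the env block in the MCP config above (whether the server reads all three of them from .env is an assumption to verify):

```
FLYWORKS_API_TOKEN=your_token_here
FLYWORKS_API_BASE_URL=https://hfw-api.hifly.cc/api/v2/hifly
FLYWORKS_MCP_BASE_PATH=/path/to/your/output/directory
```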
- Running the Server
Run the server.py file directly:
python server.py
If your MCP client cannot find the uvx command, confirm its absolute path by running this command in your terminal:
which uvx
Once you obtain the absolute path (e.g., /usr/local/bin/uvx), update your configuration to use that path (e.g., "command": "/usr/local/bin/uvx").
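For example, assuming which uvx prints /usr/local/bin/uvx, the relevant part of claude_desktop_config.json would become (the env block stays the same as above):

```json
{
  "mcpServers": {
    "flyworks": {
      "command": "/usr/local/bin/uvx",
      "args": ["flyworks-mcp", "-y"]
    }
  }
}
```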
Create a lipsync video with audio input. Animates a digital human avatar to speak in sync with the provided audio.
Parameters:
- avatar: Digital human avatar ID. Either this or avatar creation parameters must be provided.
- avatar_video_url: URL of a video to create the avatar from.
- avatar_image_url: URL of an image to create the avatar from.
- avatar_video_file: Local path to a video file to create the avatar from.
- avatar_image_file: Local path to an image file to create the avatar from.
- audio_url: Remote URL of the audio file. One of audio_url or audio_file must be provided.
- audio_file: Local path to the audio file. One of audio_url or audio_file must be provided.
- title: Optional title for the created video.
- async_mode: If true, returns task_id immediately. If false, waits for completion and downloads the video. Default is true.
- output_path: Where to save the downloaded video if async_mode is false. Default is "output.mp4".
Notes:
- For avatar creation, provide exactly ONE of avatar_video_url, avatar_image_url, avatar_video_file, or avatar_image_file.
- If avatar ID is directly provided, these parameters will be ignored.
Returns:
- If async_mode is true: task_id for checking status later and created_avatar (if a new avatar was created)
- If async_mode is false: downloaded video path, task result, and created_avatar (if applicable)
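As a rough sketch of how an MCP client could invoke this tool programmatically (the tool name create_lipsync_video_by_audio and the example URLs are assumptions for illustration; adjust them to the names your server actually exposes):

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the Flyworks MCP server over stdio, mirroring the config above.
    server = StdioServerParameters(
        command="uvx",
        args=["flyworks-mcp", "-y"],
        env={"FLYWORKS_API_TOKEN": "your_api_token_here"},
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Tool name and argument values are assumptions for illustration.
            result = await session.call_tool(
                "create_lipsync_video_by_audio",
                {
                    "avatar_image_url": "https://example.com/avatar.png",
                    "audio_url": "https://example.com/speech.mp3",
                    "async_mode": True,
                },
            )
            print(result)  # task_id (and created_avatar) when async_mode is true

asyncio.run(main())
```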
Create a lipsync video with text input. Generates audio from the text and animates a digital human avatar to speak it.
Parameters:
- avatar: Digital human avatar ID. Either this or avatar creation parameters must be provided.
- avatar_video_url: URL of a video to create the avatar from.
- avatar_image_url: URL of an image to create the avatar from.
- avatar_video_file: Local path to a video file to create the avatar from.
- avatar_image_file: Local path to an image file to create the avatar from.
- text: Text content to be spoken by the avatar. Required.
- voice: Voice ID to use for text-to-speech. If not provided, a random voice will be selected automatically.
- title: Optional title for the created video.
- async_mode: If true, returns task_id immediately. If false, waits for completion and downloads the video. Default is true.
- output_path: Where to save the downloaded video if async_mode is false. Default is "output.mp4".
Notes:
- For avatar creation, provide exactly ONE of avatar_video_url, avatar_image_url, avatar_video_file, or avatar_image_file.
- If avatar ID is directly provided, these parameters will be ignored.
Returns:
- If async_mode is true: task_id for checking status later, selected voice ID, and created_avatar (if applicable)
- If async_mode is false: downloaded video path, task result, selected voice ID, and created_avatar (if applicable)
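A similar sketch for the text-driven tool, this time in sync mode so the call waits for completion and downloads the video (the tool name create_lipsync_video_by_text and the avatar ID are assumptions for illustration):

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    server = StdioServerParameters(
        command="uvx",
        args=["flyworks-mcp", "-y"],
        env={"FLYWORKS_API_TOKEN": "your_api_token_here"},
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "create_lipsync_video_by_text",        # assumed tool name
                {
                    "avatar": "your_avatar_id",        # reuse an existing avatar ID
                    "text": "Welcome to Flyworks MCP server demo.",
                    "async_mode": False,               # wait for the task to finish
                    "output_path": "demo_output.mp4",  # where the video is saved
                },
            )
            print(result)  # downloaded video path, task result, selected voice ID

asyncio.run(main())
```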
For tasks run in async mode, you can check their status using the Flyworks API's /creation/task endpoint with the task_id returned by the tool.
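A minimal polling sketch, assuming the endpoint is a GET on {FLYWORKS_API_BASE_URL}/creation/task with a task_id query parameter and bearer-token authentication; confirm the exact parameter and header names against the Flyworks API documentation:

```python
import os
import httpx

base_url = os.environ.get("FLYWORKS_API_BASE_URL", "https://hfw-api.hifly.cc/api/v2/hifly")
token = os.environ["FLYWORKS_API_TOKEN"]

# Query the status of an async task; the query-parameter and auth-header names are assumptions.
resp = httpx.get(
    f"{base_url}/creation/task",
    params={"task_id": "your_task_id_here"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print(resp.json())  # task status and, once finished, the result video URL
```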
- Job processing may take some time; please be patient
- Video file URLs are temporary; please download and save them promptly
- When using local files, the server will automatically upload them to Flyworks servers
- In sync mode, the tool will wait for the task to complete and automatically download the video
- Maximum allowed wait time for sync mode is 10 minutes (600 seconds)
- Avatar creation through videos usually provides better quality but takes longer
- For quick testing, avatar creation through images is faster but may have lower quality