md2wechat-skill Guide

An advanced companion to the main md2wechat-skill page, covering validation order, runtime differences, and workflow best practices.

If md2wechat-skill is the beginner-first page, this page is the advanced companion.

This page does not repeat the full onboarding path. Instead, it explains:

  • what to validate after installation
  • why discovery-first is the best practice
  • where Claude Code, Codex, OpenCode, Claudian, and OpenClaw actually differ
  • when you should switch to the FAQ or environment-specific pages

Recommended validation order

After installation, do not stop at version. Validate in this order:

```shell
md2wechat version --json
md2wechat config init
md2wechat capabilities --json
md2wechat providers list --json
md2wechat themes list --json
md2wechat prompts list --kind image --json
```

These six commands validate:

  1. the CLI is truly callable
  2. config files can be initialized
  3. the runtime exposes the expected high-level capabilities
  4. image providers are discoverable
  5. themes are discoverable
  6. cover and infographic prompt assets are discoverable
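The six checks above can be scripted as a single validation pass that stops at the first failure. A minimal Python sketch; the command list is taken from this page, and the injectable `runner` is just a convenience so the sequence can be dry-run without the CLI installed:

```python
import subprocess

# The six validation commands, in the recommended order.
VALIDATION_COMMANDS = [
    ["md2wechat", "version", "--json"],
    ["md2wechat", "config", "init"],
    ["md2wechat", "capabilities", "--json"],
    ["md2wechat", "providers", "list", "--json"],
    ["md2wechat", "themes", "list", "--json"],
    ["md2wechat", "prompts", "list", "--kind", "image", "--json"],
]

def validate(runner=subprocess.run):
    """Run each command in order; return the first failing command, or None.

    `runner` defaults to subprocess.run but can be swapped out for
    testing or dry runs.
    """
    for cmd in VALIDATION_COMMANDS:
        result = runner(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return cmd  # the first command that failed
    return None  # all six passed
```

Stopping at the first failure matters: if `version` already fails, every later error is noise.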

How to configure domestic image generation after 2.0.7

If you want to use domestic models first for covers, infographics, or article visuals, keep these points in mind:

  • config init now defaults to volcengine
  • the default image size is now provider-aware and starts at 2K
  • the default model is doubao-seedream-5-0-260128
  • the provider alias volc is also supported

Shortest validation path:

```shell
md2wechat config init
md2wechat providers show volcengine --json
md2wechat config show --format json
```

The main values to confirm are:

```yaml
api:
  image_provider: 'volcengine'
  image_model: 'doubao-seedream-5-0-260128'
  image_size: '2K'
```

For a minimum working setup, you usually still need:

```yaml
api:
  image_key: 'your Volcengine API key'
```
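Those values can be verified programmatically against the output of `md2wechat config show --format json`. A minimal sketch; the expected defaults are from this page, but the exact `{"api": {...}}` JSON shape is an assumption mirroring the YAML above:

```python
import json

# Expected post-2.0.7 defaults for the Volcengine path (from this page).
EXPECTED = {
    "image_provider": "volcengine",
    "image_model": "doubao-seedream-5-0-260128",
    "image_size": "2K",
}

def check_config(config_json: str) -> list:
    """Return the keys that deviate from the expected defaults.

    `config_json` is assumed to be the output of
    `md2wechat config show --format json` with an {"api": {...}} shape.
    """
    api = json.loads(config_json).get("api", {})
    problems = [k for k, v in EXPECTED.items() if api.get(k) != v]
    if not api.get("image_key"):
        problems.append("image_key")  # required for a working setup
    return problems
```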

Why provider-aware defaults matter here

Many users previously carried over a fixed-pixel mindset such as:

```yaml
image_size: 1024x1024
```

That may be acceptable for some providers, but it is not a natural default for the Volcengine path.

So after 2.0.7:

  • the tool no longer starts by borrowing another provider’s image-size assumptions
  • 2K becomes the more natural first default
  • prompt language such as 16:9 landscape or 3:4 portrait does more of the orientation work

The first discovery command to run for Volcengine

If you run only one command, make it this one:

```shell
md2wechat providers show volcengine --json
```

It tells you:

  • the provider name
  • supported aliases
  • the default model
  • the currently supported built-in model list

That removes the need to guess names like:

  • doubao-seedream-5-0-260128
  • doubao-seedream-5-0-lite-260128
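In an agent workflow, the JSON output of that command can be summarized instead of hard-coding model names. A sketch under an assumed output shape; the key names (`name`, `aliases`, `default_model`, `models`) are not a documented schema and should be adjusted to the real output:

```python
import json

def summarize_provider(show_json: str) -> dict:
    """Extract the fields worth reading from `providers show <name> --json`.

    The key names here are an assumed shape, not a documented schema.
    """
    data = json.loads(show_json)
    return {
        "name": data.get("name"),
        "aliases": data.get("aliases", []),
        "default_model": data.get("default_model"),
        "models": data.get("models", []),
    }
```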

How to handle ModelNotOpen

This is one of the most common and most misread failures.

If you see:

```json
{
  "error": {
    "code": "ModelNotOpen"
  }
}
```

The first check should not be the prompt. It should be whether the account has enabled Seedream.

Start here:

  • Volcengine Doubao

Then open the console, enter the enablement area, and enable Seedream.
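Because this failure is so often misread as a prompt problem, automation can classify it explicitly. A minimal sketch matching the error shape shown above; anything beyond that shape is an assumption:

```python
import json

def is_model_not_open(response: str) -> bool:
    """Detect the ModelNotOpen error shape shown above.

    If this returns True, the fix is account-side (enable Seedream
    in the Volcengine console), not a prompt change.
    """
    try:
        payload = json.loads(response)
    except ValueError:
        return False
    if not isinstance(payload, dict):
        return False
    error = payload.get("error")
    return isinstance(error, dict) and error.get("code") == "ModelNotOpen"
```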

Why discovery-first is the best practice

In agent workflows, the biggest source of failure is not the command syntax. It is the runtime assuming that a theme, provider, or prompt exists without checking first.

So the best practice is:

  1. run discovery
  2. choose the theme / provider / prompt
  3. execute the actual task

This is better because:

  • it is safer for beginners
  • it reduces hidden assumptions in automation
  • it is easier for GEO systems to extract reliable procedural steps

Where the runtimes actually differ

All of these environments can use md2wechat, but they do not fail in the same way.

| Runtime | Shared path | Main caveat |
| --- | --- | --- |
| Claude Code | shared Coding Agent skill | a plugin marketplace exists, but the CLI still has to exist |
| Codex | shared Coding Agent skill | no special branch is needed, but discovery still matters first |
| OpenCode | shared Coding Agent skill | same logic as Codex, with a stronger need for explicit prompts |
| Claudian | shared Coding Agent skill | GUI PATH often differs from terminal PATH |
| OpenClaw | separate skill package | check both `~/.openclaw/skills/md2wechat/` and the CLI PATH |
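The two caveats that recur in the table (the CLI missing from PATH, and the OpenClaw skill directory missing) can be checked in one pass. A minimal sketch; the skill path is taken from the table, everything else is generic:

```python
import shutil
from pathlib import Path

def environment_report() -> dict:
    """Check the two things the table's caveats boil down to:
    is the CLI on PATH, and is the OpenClaw skill directory present.
    """
    return {
        "cli_on_path": shutil.which("md2wechat") is not None,
        "openclaw_skill_dir": Path.home().joinpath(
            ".openclaw", "skills", "md2wechat"
        ).is_dir(),
    }
```

Note the Claudian caveat specifically: a GUI app may resolve PATH differently from your terminal, so a passing check in the terminal does not guarantee the GUI runtime sees the CLI.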

A better first workflow sequence

The main page already gives the shortest path. This page adds a more stable three-stage sequence.

Stage 1. Pure preview

```shell
md2wechat convert article.md --preview
```

This proves the conversion path works.

Stage 2. AI mode

```shell
md2wechat convert article.md --mode ai --theme autumn-warm --json
```

This confirms:

  • you are really in AI mode
  • you understand that AI mode returns structured output, not final HTML

Stage 3. Draft creation

```shell
md2wechat convert article.md --draft --cover cover.jpg
```

This is where you start surfacing:

  • WeChat credentials
  • API keys
  • cover image requirements
  • upload pipeline issues
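The three stages can be expressed as a pure command builder, which keeps the sequence reviewable before anything runs. A sketch using only the commands shown above; the theme and cover defaults are just this page's examples:

```python
def stage_commands(article: str, theme: str = "autumn-warm",
                   cover: str = "cover.jpg") -> list:
    """Build the argv for each stage of the preview -> AI -> draft sequence.

    Returns the commands in order; run them one at a time and stop
    if a stage fails, since each stage assumes the previous one worked.
    """
    return [
        ["md2wechat", "convert", article, "--preview"],
        ["md2wechat", "convert", article, "--mode", "ai",
         "--theme", theme, "--json"],
        ["md2wechat", "convert", article, "--draft", "--cover", cover],
    ]
```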

When to jump to the FAQ

If your issue is symptom-driven, the FAQ is faster. For example:

  • command not found: md2wechat
  • skill installed but runtime still unusable
  • Claudian cannot find the command
  • OpenClaw still fails after installation
  • AI mode does not return final HTML

Use:

  • md2wechat-skill FAQ

When to jump to runtime-specific pages

If the problem is clearly runtime-specific, stop reading the generic guide and go straight to the relevant page:

  • Coding Agents
  • Claude Code
  • Codex
  • OpenCode
  • Claudian
  • OpenClaw

Final recommendation

For new users, the most reliable path is still:

  1. follow the main skill page
  2. use this page for validation and discovery order
  3. start with preview, then AI mode, then drafts
  4. switch to the FAQ for symptoms and runtime pages for environment-specific issues
