
Midjourney training: what it is, what it costs, and how to get results in 2026

Midjourney has shipped a new model, a video engine, and a full web editor since most tutorials were written. Half the guides still tell you to open Discord and type /imagine — advice that now belongs in a footnote, not a starting point. This Midjourney training guide covers what matters in 2026, from picking the right subscription tier to generating your first production-ready image. For a deeper skill-building path, AITutoro's Midjourney modules go further than any single article can.

Updated March 2026

What is Midjourney?

Midjourney is an AI image generation platform founded in 2021 by David Holz, co-founder of Leap Motion. The company is self-funded — no venture capital — and turned profitable within two months of its public beta. It has grown to an estimated 20 million-plus registered users while spending nothing on traditional advertising.

The platform started as a Discord bot, and that origin story still confuses newcomers. The primary interface is now a web app at midjourney.com, which also works as a mobile progressive web app (PWA). Discord remains available but is no longer the default. Midjourney removed its free trial in late March 2023, and no broadly available public API exists.

What sets Midjourney apart from tools like DALL-E or Stable Diffusion is its opinionated aesthetic. The models lean toward high visual quality and painterly realism out of the box, which makes them a strong fit for marketing assets, concept art, and editorial illustration — even before you learn a single parameter. Midjourney is one of several platforms covered in our AI tools overview.

Midjourney subscription tiers compared

Midjourney runs on a subscription model with four tiers. The core trade-off is Fast GPU hours (immediate generation) versus Relax Mode (unlimited but slower, queued generation). Here is the full breakdown, based on the official Midjourney plan comparison:

| Feature | Basic ($10/mo) | Standard ($30/mo) | Pro ($60/mo) | Mega ($120/mo) |
| --- | --- | --- | --- | --- |
| Fast GPU hours/month | ~3.3 hrs (~200 images) | ~15 hrs (~900 images) | ~30 hrs (~1,800 images) | ~60 hrs (~3,600 images) |
| Relax Mode | No | Yes | Yes | Yes |
| Stealth Mode | No | No | Yes | Yes |
| Concurrent Fast jobs | 3 | 3 | 12 | 12 |
| Annual discount | 20% | 20% | 20% | 20% |
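The hours-to-images ratios in the table work out to roughly one Fast GPU-minute per image. A quick sketch of that conversion; note that the `minutes_per_image` figure is an inference from the table, not an official number, and actual cost varies with mode and settings:

```python
def fast_hours_to_images(fast_gpu_hours, minutes_per_image=1.0):
    """Rough image budget for a tier's monthly Fast GPU allowance.

    minutes_per_image is inferred from the official table
    (~3.3 hrs -> ~200 images implies about 1 GPU-minute per image).
    """
    return round(fast_gpu_hours * 60 / minutes_per_image)

tiers = {"Basic": 3.3, "Standard": 15, "Pro": 30, "Mega": 60}
for name, hours in tiers.items():
    print(f"{name}: ~{fast_hours_to_images(hours)} images/month")
```

The same ratio explains why the image estimates double with each tier: the allowance is GPU time, not an image count.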

All paid tiers include commercial use rights, but a revenue threshold applies: companies earning more than $1 million in gross annual revenue must subscribe to Pro or Mega. Basic and Standard cover individuals and smaller businesses.

Who should pick what:

  • Hobbyists and explorers — Basic gives you around 200 images per month. Enough to experiment, not enough for production work.
  • Regular creators — Standard unlocks Relax Mode, which means unlimited generation at slower speeds. This is where the platform becomes practical for ongoing creative work.
  • Professionals and agencies — Pro adds Stealth Mode (your images stay private) and 12 concurrent jobs. If you produce client-facing assets, start here.

Annual billing saves 20% across every tier. For a side-by-side with DALL-E's pricing model, see how Midjourney compares to DALL-E.

What Midjourney V7 changes (and why it matters)

V7 launched in April 2025 and became the default model in June 2025. This release is not an incremental update — it is a complete architectural rebuild.

The improvements are substantial: markedly better anatomical accuracy (hands, faces, body coherence) and much stronger prompt comprehension. If you tried Midjourney a year ago and got frustrated by melted fingers or prompts the model seemed to ignore, V7 delivers a different experience.

Three new features stand out:

  • Draft Mode renders images at 10x speed and half the GPU cost. Think of it as a sketch pad — iterate on composition and concept fast, then switch to full quality for the final render.
  • Model Personalization ships on by default. Rate roughly 200 images in the Midjourney gallery, and the model builds a taste profile. Every generation after that subtly shifts toward your aesthetic preferences — like training a creative assistant on your visual taste without writing a single prompt modifier.
  • Voice Prompting lets you speak your prompts directly through the web app. Describe what you want out loud, and Midjourney translates it into a generation. Useful for brainstorming when typing feels like a bottleneck.

For users returning from V6 or earlier: the upgrade is automatic. V7 is the default. Your old prompts still work, but expect noticeably different (and broadly better) results.

Core capabilities beyond text-to-image

Video generation (V1 Video Model)

Midjourney's V1 Video Model launched in June 2025 — but it works differently than you might expect. It converts existing images into video, not text into video. You start from a generated image (or any uploaded image) and produce a five-second clip.

You can extend those clips in four-second increments, up to 21 seconds total. Default output runs at 480p and 24fps, with a 720p HD option available for Standard, Pro, and Mega plans at higher GPU cost. Two motion styles ship with the model: Low Motion for atmospheric, slow-moving shots, and High Motion for faster, more dramatic motion.

The GPU cost runs roughly 8x that of image generation. On the Basic plan, that eats through your hours fast. Pro and Mega subscribers can generate video through Relax Mode, making experimentation practical without worrying about running out of time.
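The arithmetic above is worth making concrete. A minimal sketch, assuming the ~8x multiplier applies per job (an approximation, not an official rate card):

```python
# Possible clip lengths: a 5s start, extended in 4s steps up to 21s
durations = list(range(5, 22, 4))  # [5, 9, 13, 17, 21]

def video_jobs_per_month(image_budget, video_cost_multiplier=8):
    """How many video jobs a tier's Fast image budget covers,
    assuming each video job costs ~8x an image job."""
    return image_budget // video_cost_multiplier

print("Clip lengths:", durations)
print("Basic plan video jobs:", video_jobs_per_month(200))  # ~25
```

Twenty-five or so video jobs a month on Basic disappears quickly once you iterate, which is why Relax Mode access on Pro and Mega matters for video work.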

Web editor

The web editor, significantly expanded during the V7 era, brings image manipulation tools directly into Midjourney. Inpainting (called Vary Region) lets you select and regenerate specific parts of an image. Outpainting expands the canvas beyond the original borders. Retexture regenerates an entire image in a new style while preserving the underlying composition.

The editor also supports layers, smart selection with inclusion/exclusion masking, and background erasure. For many workflows, this eliminates the round-trip to Photoshop — you can refine an image inside the same tool that generated it.

Style Reference and Character Reference

Two parameters solve the biggest frustration in AI image generation: inconsistency.

Style Reference (--sref) copies an artistic style — color palette, texture, mood — from a reference image or a numeric style code. Apply the same --sref code across ten prompts, and all ten outputs share a unified visual language. For brand-consistent marketing assets, this is transformative.

Character Reference (--cref) maintains a consistent character appearance across generations. Control it with Character Weight (--cw): set to 0 for face-only consistency, or 100 for face, hair, and clothing. Run a product mascot through a dozen scenes without it morphing into someone else each time.

Niji 7 (anime and illustration)

Niji 7 launched in January 2026 as a specialized model for anime and illustration work. It produces cleaner linework, flatter aesthetics, and more literal prompt interpretation than the main V7 model. If you generate anime-style characters, manga panels, or illustration-style assets, Niji 7 handles the medium's conventions natively. Activate it with --niji 7.

Midjourney tutorial: key parameters every user should know

Midjourney's parameter system is where casual users become capable ones. You do not need to memorize the full list — these eight cover most creative needs:

  • --ar (aspect ratio) — Controls image dimensions. --ar 16:9 for widescreen, --ar 9:16 for vertical stories. Default is square (1:1).
  • --chaos (0-100) — Adds variation between the four generated images. Low values produce similar results; high values introduce wild variety. Start at 20-30 for useful exploration.
  • --stylize (0-1000) — Balances artistic interpretation against prompt accuracy. Low values follow your prompt literally; high values let Midjourney add its own flair. Default is 100.
  • --no — Negative prompting. --no text, watermark removes unwanted elements. Essential for clean output.
  • --seed — A number that influences initial generation for approximate consistency. Use the same seed and prompt within the same session and settings for similar results. Useful for systematic experimentation, though seeds do not guarantee identical outputs across sessions.
  • --sref — Apply a style from a reference image or numeric style code. Covered in the Style Reference section above.
  • --cref — Maintain character consistency across images. Covered in the Character Reference section above.
  • --v — Select a specific model version (--v 7, --v 6.1). Default is V7.
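Since these flags are just text appended to the prompt, they are easy to assemble programmatically. A hedged sketch: there is no public Midjourney API, so this helper only builds the string you would paste into the Imagine bar:

```python
def build_prompt(subject, ar=None, chaos=None, stylize=None, no=None,
                 seed=None, sref=None, cref=None, cw=None, v=None):
    """Assemble a Midjourney prompt string from the flags above.

    A convenience sketch for keeping prompts consistent across a batch;
    Midjourney simply parses these flags out of the typed text.
    """
    parts = [subject]
    flags = {"--ar": ar, "--chaos": chaos, "--stylize": stylize,
             "--no": no, "--seed": seed, "--sref": sref,
             "--cref": cref, "--cw": cw, "--v": v}
    for flag, value in flags.items():
        if value is not None:
            parts.append(f"{flag} {value}")
    return " ".join(parts)

print(build_prompt("a ceramic coffee mug on a wooden table",
                   ar="16:9", chaos=25, no="text, watermark"))
```

Generating ten prompts that share the same `sref` and `cw` values from one helper like this is how batch work stays visually consistent.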

For the full parameter list, consult Midjourney's official parameter documentation.

Commercial use and licensing

Paid subscribers own their generated images with full commercial rights — "to the fullest extent possible under current law," per Midjourney Terms of Service. You can use outputs for marketing materials, book covers, game assets, merchandise, and print-on-demand products.

Three details matter:

  • All images are public by default. Everything you generate appears in Midjourney's community gallery unless you subscribe to Pro or Mega with Stealth Mode enabled. If client confidentiality matters, factor this into your tier decision.
  • Revenue threshold. Companies earning above $1 million in gross annual revenue must use Pro or Mega. Basic and Standard plans do not cover commercial use at that scale.
  • U.S. copyright uncertainty. Purely AI-generated images may not qualify for copyright protection under current U.S. law, which requires human authorship. The U.S. Copyright Office guidance on AI-generated works provides the latest framework. You can use the images commercially under Midjourney's license, but enforcing exclusive ownership through copyright registration remains a separate and unresolved legal question.

Restricted uses include deepfakes of real people and trademark imitation. Midjourney's terms explicitly prohibit both.

Getting started — first image in five minutes

  1. Sign up. Go to midjourney.com and create an account. Pick a plan — Standard is the sweet spot for most users, since Relax Mode removes the anxiety of burning through GPU hours.

  2. Open the Imagine bar. Type a prompt describing what you want. Be specific: "a ceramic coffee mug on a wooden table, morning light, shallow depth of field" outperforms "a mug" by a wide margin.

  3. Review the grid. Midjourney generates four variations. Hover over any image to upscale it (higher resolution), create subtle variations, or open it in the editor.

  4. Download or edit. Save the final image, or open the web editor to refine with inpainting, outpainting, or retexturing.

Practical tip: Start every session in Draft Mode. It runs 10x faster at half the GPU cost, so you can iterate on composition and concept without watching your hours drain. Switch to Fast mode for the final, polished render.
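The savings from that workflow compound over a session. A rough sketch, assuming Draft Mode costs half a full render as described above:

```python
def session_cost(iterations, draft_cost=0.5, full_cost=1.0,
                 draft_first=True):
    """GPU cost of an iteration session, in full-render units.

    Assumes Draft Mode costs half a full render (per the tip above):
    iterate in drafts, then pay for one polished final render.
    """
    if draft_first:
        return iterations * draft_cost + full_cost
    return iterations * full_cost  # every iteration at full quality

print(session_cost(10))                     # drafts + final
print(session_cost(10, draft_first=False))  # all full renders
```

Ten iterations cost 6 render-units with drafts versus 10 at full quality, and the gap widens the more you explore before committing.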

For structured prompt templates and creative starting points, explore prompt resources for content creation.

AITutoro's Midjourney training modules walk you through prompts, parameters, and style consistency step by step — a structured alternative to piecing together scattered tutorials.

Learn Midjourney with structured training

Knowing the interface is the starting line. The distance between "I can generate an image" and "I can produce brand-consistent visual assets on demand" comes down to prompt engineering, style control, and workflow integration — skills that compound with practice.

AITutoro's adaptive Midjourney training adjusts to what you already know. If you have mastered the basics, it skips ahead. If parameters like --sref and --cref are new territory, it walks you through them with hands-on exercises. The training path for creators covers everything from first prompt to production workflow.

Start your free Midjourney training on AITutoro — two modules free, no credit card required.
