
Run Stable Diffusion on Apple Silicon

Prerequisites

  • A Mac with an M1 or M2 chip
  • 16 GB of RAM or more (8 GB works, but it's extremely slow)
  • macOS 12.3 or higher

Set up Python

You need Python 3.10 to run Stable Diffusion. Run python3 -V to see what Python version you have installed:

$ python3 -V                                                              
Python 3.10.6

If it's 3.10 or above, like here, you're good to go.
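
If your version is older, one way to get Python 3.10 is with Homebrew (this assumes Homebrew is already installed; any other way of installing Python 3.10 works just as well):

brew install python@3.10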


Clone the repository and install the dependencies

Run this to clone the fork of Stable Diffusion:

git clone -b apple-silicon-mps-support https://github.com/bfirsh/stable-diffusion.git
cd stable-diffusion
mkdir -p models/ldm/stable-diffusion-v1/
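
To double-check that the Apple Silicon branch is what got checked out, you can run:

git branch --show-current

It should print apple-silicon-mps-support.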

Then, set up a virtualenv to install the dependencies:

python3 -m pip install virtualenv
python3 -m virtualenv venv

Activate the virtualenv:

source venv/bin/activate

(You'll need to run this command again any time you want to run Stable Diffusion.)
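
If you're not sure whether the virtualenv is active, which python should point inside the venv directory (run deactivate whenever you want to leave it):

which python    # should end in stable-diffusion/venv/bin/python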

Then, install the dependencies:

pip install -r requirements.txt
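
As an optional sanity check, you can confirm that the PyTorch build that was just installed can see the Apple GPU through MPS (this assumes the pinned requirements install PyTorch 1.12 or newer, which is when the MPS backend was added):

python -c "import torch; print(torch.backends.mps.is_available())"

If it prints True, PyTorch can run on the GPU.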

Download the weights

Go to the Hugging Face repository, read and understand the license, then click "Access repository".

Download sd-v1-4.ckpt (~4 GB) on that page and save it as models/ldm/stable-diffusion-v1/model.ckpt in the directory you created above.
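
Before moving on, it's worth checking that the checkpoint landed in the right place and is roughly the expected ~4 GB:

ls -lh models/ldm/stable-diffusion-v1/model.ckpt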


Run it

Enter your prompt and run the script:

python scripts/txt2img.py \
  --prompt "an electronic device with wires and wires connected to it, inspired by Mike "Beeple" Winkelmann, trending on zbrush central, oscilloscope, dan mcpharlin : : ornate, retro vintage screens, replica model, russian lab experiment, studio orange" \
  --n_samples 1 --n_iter 1 --plms

Your output's in outputs/txt2img-samples/. That's it.
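
On macOS you can open the output folder straight from the terminal:

open outputs/txt2img-samples/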

It took about 6 minutes to run on an M1 MacBook Pro with 16 GB of RAM.


If you're struggling to get this set up, feel free to DM me.