StableDiffusion on Apple Silicon Macs: Native Deployment for Zero Fan Spin and Maximum Performance

StableDiffusion

Step-by-step guide to deploy StableDiffusion on Apple Silicon Macs unleashing the full potential of the Apple Silicon chip and its Neural Engine

Many services that use AI models to generate images are emerging. Most of them offer credits-based pricing, where you buy credits to spend on the service, and each service has its own credits-per-image tiers. Another option is to deploy the text-to-image model directly on your own machine, which avoids any usage costs beyond the cost of operating the machine. Text-to-image models are, in fact, known to be heavy on energy consumption and often require high-spec hardware to run the generative algorithms efficiently. In particular, a machine with at least 16 GB of RAM is usually recommended, unless you dedicate the session solely to image generation, which frees up RAM for the process.

Generated by AI — Credits to DreamStudio

StableDiffusion

StableDiffusion is one of the most accessible models for generating images, as it can easily be installed on a personal machine, even a portable one, without requiring much expertise. Until recently, however, the Mac ecosystem could not take full advantage of this kind of software: the transition to the ARM architecture in the latest Macs called for macOS-specific optimisations that StableDiffusion maintainers had not yet provided.

However, this has changed in recent weeks, as Apple has released an official repository on GitHub containing an optimized version of the StableDiffusion models, as well as instructions on how to create custom models and/or convert them to a Mac-optimized version and use them efficiently with Apple Silicon CPUs, GPUs, and Neural Engine resources. Initial tests with this optimized version have shown dramatic increases in performance compared to the non-optimized versions that were available until now. What stands out the most, however, is the efficiency of the image generation process. In all of my tests, the fan never spun and it was possible to use the machine for other intensive tasks at the same time.


Apple has also released instructions on how to deploy the model on iOS devices such as iPhones and iPads, through careful, optimized use of memory resources on these portable devices, since they usually do not have more than 8 GB of RAM.

In this step-by-step guide, I will show you how to correctly deploy StableDiffusion models on an Apple Silicon machine, with optimisations for the specific architecture, provided by Apple. I will also provide instructions on how to troubleshoot common issues that may arise while deploying and using the model, and how to address them. After following these steps, you will be able to generate images from a text prompt using a command-line application from the Mac Terminal app.

Disclaimer: this guide will show you how to use StableDiffusion through a command-line interface, that is, through the Terminal. At this time, this is the recommended method, since there are currently no GUI wrappers for the official Apple implementation, or at least no trusted and stable ones that I know of yet.

Deploy StableDiffusion model on Apple Silicon Mac via CLI

Before we begin, let’s make sure you have the necessary requirements. First, ensure that your Mac meets the system requirements below. If you have an Apple Silicon machine, you should not have any issues updating your system to the latest available macOS release.

Python 3.8

macOS Ventura 13.1

Xcode 14.2

RAM capacity: 16 GB suggested. 8 GB could suffice, but you would have to dedicate the machine to the model alone, closing all other apps.

In addition to meeting the machine requirements, also ensure that you have a decent internet connection, as the models you will download are quite large (several gigabytes each). And, of course, make sure you have enough free space on your machine’s drive to accommodate the models you want to download (the amount of space required will depend on which models you choose to download).
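If you want a quick sanity check of these requirements, the short Python snippet below prints the relevant versions and the installed RAM. This is only a convenience sketch of my own (the file name check_requirements.py is arbitrary); it shells out to the standard xcodebuild and sysctl tools, and exact output formats vary between macOS and Xcode releases.

# check_requirements.py -- quick, optional sanity check for the requirements above.
import platform
import subprocess
import sys

def run(cmd):
    # Return the trimmed stdout of a command, or "not found" if it fails.
    try:
        return subprocess.check_output(cmd, text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        return "not found"

print("Python :", sys.version.split()[0])    # needs 3.8 or newer
print("macOS  :", platform.mac_ver()[0])     # needs 13.1 or newer

xcode = run(["xcodebuild", "-version"]).splitlines()
print("Xcode  :", xcode[0] if xcode else "not found")   # needs 14.2 or newer

mem = run(["sysctl", "-n", "hw.memsize"])    # physical RAM in bytes
if mem.isdigit():
    print("RAM    :", int(mem) // 2**30, "GB")           # 16 GB suggested

Save it anywhere and run it with python3 check_requirements.py.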

1 — Prepare project and install dependencies

Run this script in the location where you want the project to be stored. It will create a project folder named stable_diffusion_apple, set up a Python virtual environment (.venv) inside it, and install all the required dependencies there. It will also create a models folder inside the project folder, which will hold all the downloaded models.

# create the project folder and a folder for the downloaded models
mkdir stable_diffusion_apple
cd stable_diffusion_apple
mkdir models

# create and activate a Python virtual environment
python3 -m venv .venv
source .venv/bin/activate

# install the Hugging Face downloader library and Apple's Python package
pip install huggingface_hub
pip install git+https://github.com/apple/ml-stable-diffusion

# clone the repository containing the Swift CLI
git clone https://github.com/apple/ml-stable-diffusion

2 — Python script to download CoreML models

Once you have completed the previous preparatory steps, you are ready to download the models prepared by Apple, which have already been converted to the custom CoreML format used on macOS for leveraging Apple Silicon specific optimizations. The script makes it easy to download models from the Hugging Face-hosted Apple repositories.

To download a specific model, first go to the stable_diffusion_apple folder. The script will put each model you download in the models folder you created earlier. Simply copy the script from the code block below into a new file in the stable_diffusion_apple folder, and modify the strings repo_id and variant to match the model you want to download. For example, in the version provided, the script will automatically download the StableDiffusion 1.5 model in the split_einsum variant.

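The embedded script did not survive the export, so what follows is a minimal sketch of the download step rather than the exact original script. It is modeled on the approach in the Hugging Face Core ML guide, assumes a recent version of huggingface_hub, and assumes the weights live in the apple/coreml-stable-diffusion-v1-5 repository; adjust repo_id and variant to your needs.

# downloader.py -- minimal sketch of the download step (not the original embedded script).
from pathlib import Path

from huggingface_hub import snapshot_download

repo_id = "apple/coreml-stable-diffusion-v1-5"   # change this to pick another model
variant = "split_einsum/compiled"                # or "original/compiled" for CPU+GPU only

# e.g. models/coreml-stable-diffusion-v1-5_split_einsum_compiled
model_path = Path("models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))

snapshot_download(
    repo_id,
    allow_patterns=f"{variant}/*",   # only fetch the files of the chosen variant
    local_dir=model_path,
)
print(f"Model downloaded to {model_path}")

Note that, depending on your huggingface_hub version, the compiled resources may end up in a split_einsum/compiled subfolder inside that path; when running the Swift CLI later, point --resource-path at the folder that directly contains the compiled .mlmodelc bundles.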

Once you’ve copied the script and named it, for example, downloader.py, you can execute it with the command python downloader.py. The download will begin and may take a few minutes depending on your internet connection speed. Once it is complete, you can check inside the models folder to confirm that the model files have been downloaded.

You’re finished! Now, use the cd ml-stable-diffusion command to navigate to the ml-stable-diffusion folder, and then use the following command to generate an image from a text prompt:

swift run StableDiffusionSample --disable-safety --resource-path ../models/coreml-stable-diffusion-v1-5_split_einsum_compiled --compute-units all "a photo of an astronaut riding a horse on mars"

Notice that while split_einsum models can leverage all the compute units (CPU, GPU and Apple's Neural Engine), original models are only compatible with the CPU and GPU.

Troubleshooting

As far as troubleshooting goes, here I’ll list some of the things I found out while deploying the model on my Mac. Hopefully, they will be helpful to someone else.

Error: no such module ‘PackageDescription’

This error shows up as soon as you run the swift run StableDiffusionSample command.

On my Mac this issue appeared after updating macOS to the latest point release, and it may happen to others as well. If you experience it, make sure that the path to the Xcode tools is set correctly. You can check the current path with the command xcode-select --print-path, which prints the path to your Xcode installation, usually /Applications/Xcode.app/Contents/Developer. If the path is incorrect, that is, it doesn’t match the real location of Xcode, use sudo xcode-select --switch /path/to/xcode to fix it. So if you have Xcode installed in the default location shown above, execute this command:

sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer

Once you have done this, the error should not occur again.

For reference, here is a thread discussing this issue: https://forums.kodeco.com/t/server-error-no-such-module-packagedescription/177438

Error: calling plan_submit in batch processing

This error occurs when you use the --compute-units all option with original models. These models only support the cpuOnly and cpuAndGPU values, since they cannot run on the Apple Neural Engine. This is a common mistake, as the CLI defaults to all compute units, so remember to set the correct value when running models that do not support the Neural Engine.

Tips

If you encounter any other errors when running the image generation, it is likely that there is not enough free RAM available to execute the text-to-image job. You can try freeing up some memory by closing other running apps or restarting your Mac. Another option is to use split_einsum models instead of original ones, as they are optimized for low-memory devices like iPhones and iPads. Additionally, make sure to include the --disable-safety flag when using the StableDiffusion command, as shown in the example above. This frees up the resources that would otherwise be used to check generated images against NSFW guidelines during the computation.

As a general rule of thumb, keep in mind that newer Apple Silicon devices (with M2 chips) have a more powerful Neural Engine compared to the previous generation, including the M1 Pro. Using split_einsum models, which can also make use of the Neural Engine, will generally result in faster execution times and better energy efficiency.

To quickly measure how long a job takes, simply prepend the swift run StableDiffusionSample command with the time keyword (for example, time swift run StableDiffusionSample followed by the same arguments as above). The elapsed time will be printed in the console once the job is finished.

Since the Apple implementation was released only a few weeks ago, it may still be missing certain features you are used to from other StableDiffusion implementations. In that case, do not worry: the repository is actively maintained by many contributors, so it may be worth checking back occasionally to see if any new features have been added in the meantime. For example, I recently did this to get the latest release, which finally added support for negative prompts.

Now, if you want to get the latest version, you can simply download the updated repository without re-downloading the models. To do this, go to the stable_diffusion_apple folder and delete the ml-stable-diffusion folder you created earlier. Then, clone the updated version from GitHub using the command git clone https://github.com/apple/ml-stable-diffusion, as you did when you first set up the StableDiffusion deployment.

Conclusion

For reference, here is the official repository for the Apple-maintained version of the StableDiffusion models:

GitHub – apple/ml-stable-diffusion: Stable Diffusion with Core ML on Apple Silicon

In this repository, you can also find instructions and details about performance, among other things. This article was sourced from this repository, along with the following guide by Hugging Face:

Using Stable Diffusion with Core ML on Apple Silicon

If you encounter any new or unusual issues while deploying models, feel free to reach out. Your experience may help troubleshoot issues for other users and improve the framework released on GitHub. Thank you for reading.

Further Reading

If you liked 👏 this article you may enjoy reading through some of my other articles. Oh, and don’t forget to subscribe! 🫵
