That AI image generators are in fashion is indisputable. Many professionals and amateurs have started generating images from elaborate text descriptions, with truly impressive results.
One of these generators is Stable Diffusion, which has the particularity of being able to be installed and run locally on a Mac. This has not gone unnoticed at Apple, and the company's developers have decided to optimize iOS and macOS to run Stable Diffusion more efficiently.
Apple Silicon, Core ML and great potential
Apple officially made the announcement in its Machine Learning section, championing local execution of Stable Diffusion as a more private way to use the generator, and one that saves us the cost of renting a server.
The optimization will arrive with iOS 16.2 and macOS 13.1, thanks to modifications that Apple's developers have applied to the Core ML engine. This should reduce the image-generation wait time, which is usually quite long (I tried generating images on my base MacBook Air M2 and the wait was around 6-8 minutes).
To see those improved times, we'll have to wait for the updates themselves, which should be ready in time for Christmas. So, two weeks at most. It will be interesting to see how much the times improve: generating images with artificial intelligence is a very compute-intensive process, and one that stands to benefit greatly from Apple Silicon.