Introduction
————————
Recently, you may have come across beautiful paintings like these in many places. For example, cyberpunk
prompt: Cyberpunk, 8k resolution, castle, the rose sea, dream
Ink painting style
prompt: a watercolor ink painting of a fallen angel with a broken halo wielding a jagged broken blade standing on top of a skyscraper in the style of anti - art trending on artstation deviantart pinterest detailed realistic hd 8 k high resolution
Oil painting
prompt: portrait of bob barker playing twister with scarlett johansson, an oil painting by ross tran and thomas kincade
watercolor
prompt: a girl with lavender hair and black skirt, fairy tale style background, a beautiful half body illustration, top lighting, perfect shadow, soft painting, reduce saturation, leaning towards watercolor, art by hidari and krenz cushart and wenjun lin and akihiko yoshida
And they are everywhere across platforms; the screenshots below are from Xiaohongshu, Xianyu, and Twitter.
These images look like they were drawn by a trained artist, but they were not made by human artists; they were made by AI. Just as AlphaGo entered the Go world by defeating Lee Sedol in 2016, AI has begun to enter the art world.
Let's take a look at some key milestones in the development of AI painting:
- Disco Diffusion is a tool for AI digital-art creation released on the Google Colab platform. It is open source under the MIT license and can run directly from Google Drive or be deployed locally.
Disco Diffusion has one drawback: it is very slow, taking half an hour or more per image.
- Midjourney is an AI art research lab that Somnai, the original author of Disco Diffusion, later joined.
Midjourney improved on Disco Diffusion and produces a picture in about one minute on average.
- OpenAI introduced DALL·E 2, which achieves higher resolution and lower latency and adds new capabilities such as editing existing images.
At the time of writing, there is still no open access to try DALL·E 2.
- stability.ai launched Stable Diffusion and open-sourced it.
It was embraced by netizens as soon as it launched: it is easy to operate and fast, producing a picture in 10-20 seconds on average.
Stable Diffusion is free and quick, and every render is like opening a blind box that rewards constant tweaking and polishing, so everyone started playing with it like crazy; even Andrej Karpathy, Tesla's former director of AI and Autopilot vision, is obsessed with it.
stability.ai itself is a young British team.
Their motto is "AI by the people, for the people": people build AI, and AI serves people. Besides Stable Diffusion, they have also taken part in numerous other AI projects.
Today I will mainly introduce how to play with Stable Diffusion. Its official platform is dreamstudio.ai, and the name alone sounds great to me, a studio for dreams (my own take, no flames please), because the generated pictures really are dreamy. You can also run it yourself on Colab; both methods are introduced in detail below.
How to use
1. Register an account on the official website
Open https://beta.dreamstudio.ai/ and choose a registration method. I logged in with a Google account (a tutorial on registering a Google account is at the bottom of this post); you can pick whichever method you prefer.
After registration, you can enter this interface.
You can type your prompt directly in the box below, or open the settings button on the right for more detailed configuration.
After entering your keywords, click the Dream button and wait about 10 seconds for the picture to be generated.
Of course, this way of generating is very convenient, but the number of generations is limited.
You can see your credits in the upper right corner. A newly registered account comes with 200 credits by default, and each generation with the default settings consumes one credit. To generate more, you have to pay: £10 for 1,000 credits.
If you want a more detailed picture, a single generation consumes more credits. Here is the official price list:
Also, with this method the copyright of the pictures you generate is automatically released under CC0 1.0: you may use them commercially or non-commercially, but by default they also become public-domain resources.
2. Use Colab (recommended)
This is the way I recommend, because you can use Stable Diffusion almost without limit, and because you are the one running the model, the copyright belongs to you.
What is Colab?
Colaboratory, referred to as "Colab", is a product developed by the Google Research team. In Colab, anyone can write and execute arbitrary Python code through a browser. It is especially suitable for machine learning, data analysis and educational purposes. Technically, Colab is a hosted Jupyter notebook service. Users can use it directly without setting up, and at the same time, they can also get free access to computing resources such as GPU. - https://research.google.com/colaboratory/faq.html?hl=en-US
Since Colab is a Google product, you must have a Google account before using it. If you don't know how to register, go to the Google account registration tutorial at the bottom.
By default we will use the open-source Colab notebook provided by Hugging Face.
Hugging Face is a New York-based AI startup that began as a chatbot service provider. Its applications are popular among teenagers, and a huge number of models are hosted on its hub; stability.ai's Stable Diffusion is open-sourced there as well.
Open the link: https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb
Once open, click Connect in the upper right corner.
Click OK
After connecting, we run the first cell, which shows the machine currently in use. Generally you are randomly assigned one of a K80, T4, P100, or V100.
I got a Tesla T4 GPU; which one you get comes down to luck. If you get a V100, be sure to show it off.
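The check itself is one line. The notebook uses the shell command `nvidia-smi`; a Python equivalent is sketched below, assuming torch is installed (as it is on Colab):

```python
def gpu_name():
    """Report which GPU (if any) the Colab session was assigned."""
    import torch  # imported inside the function so the sketch reads without torch installed

    if torch.cuda.is_available():
        return torch.cuda.get_device_name(0)  # e.g. "Tesla T4"
    return "no GPU assigned"
```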
Then continue running the following cells to install the necessary dependencies. As each installation completes, its run time and status are displayed.
When you reach this step, you will be asked to fill in a huggingface_hub token.
Go to https://huggingface.co/settings/tokens ; if you are not logged in, you will be redirected to the login page by default.
After registering an account, copy the token into the Colab page.
You will then be told that the login succeeded. If you get an error instead, you probably copied the token wrong; reveal the key and copy it again manually.
Then we start to pull the model
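The model-pulling cell boils down to a single `from_pretrained` call. A minimal sketch, assuming the torch and diffusers libraries the notebook installs (the fp16 revision keeps the model within the free-tier GPU's memory):

```python
def load_pipeline(model_id="CompVis/stable-diffusion-v1-4"):
    """Download the Stable Diffusion weights and move the pipeline to the GPU.

    Imports live inside the function so the sketch can be read without
    torch/diffusers installed.
    """
    import torch
    from diffusers import StableDiffusionPipeline

    # use_auth_token=True picks up the Hugging Face token you just logged in with
    pipe = StableDiffusionPipeline.from_pretrained(
        model_id,
        revision="fp16",
        torch_dtype=torch.float16,
        use_auth_token=True,
    )
    return pipe.to("cuda")
```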
Note that if you run this cell directly, you will get an error: a 403 is returned.
{"error":"Access to model CompVis/stable-diffusion-v1-4 is restricted and you are not in the authorized list. Visit https://huggingface.co/CompVis/stable-diffusion-v1-4 to ask for access."}
This is because you have not yet gone to Hugging Face to request access.
Open https://huggingface.co/CompVis/stable-diffusion-v1-4
Click to accept the access terms for this repository, then go back to Colab and the model will pull normally.
Finally, the exciting moment arrives: generating the image. Run the next two cells; the prompt is the description, and you can enter whatever words you like.
Running with the official default prompt generates a photo of an astronaut riding a horse (in about 20 seconds).
Nice, this is the picture I generated.
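The generation cell is just as short. A sketch, assuming `pipe` is a StableDiffusionPipeline already moved to the GPU; running the model under `autocast` in half precision is what makes the roughly 20-second generation time possible on the free tier:

```python
def dream(pipe, prompt="a photograph of an astronaut riding a horse"):
    """Generate one image from a text prompt; returns a PIL image.

    Assumes `pipe` is a loaded StableDiffusionPipeline on the GPU, so the
    torch import stays inside the function for readability.
    """
    from torch import autocast

    # Half-precision inference; without it the free-tier GPU can run out of memory
    with autocast("cuda"):
        image = pipe(prompt).images[0]
    return image
```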
That completes the basic tutorial. Richer parameters can be set next:
- Set a random seed (first quickly generate a low-quality image to preview the composition, then increase the quality)
- Adjust the number of inference steps
- Generate multiple pictures in a grid
- Set the width and height
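These settings map directly onto pipeline arguments. A sketch of how they fit together, assuming `pipe` is a loaded StableDiffusionPipeline; the `image_grid` helper follows the one in the Hugging Face notebook, and the parameter names follow the diffusers API:

```python
from PIL import Image


def image_grid(imgs, rows, cols):
    """Paste a list of equally sized PIL images into a rows x cols sheet."""
    w, h = imgs[0].size
    grid = Image.new("RGB", (cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, ((i % cols) * w, (i // cols) * h))
    return grid


def generate(pipe, prompt, seed=1024, steps=50, height=512, width=768, n=1):
    """Generate n images with a fixed seed, step count, and canvas size.

    Fewer steps gives a fast, rough draft; the same seed with more steps
    then reproduces the same composition at higher quality.
    """
    import torch

    generator = torch.Generator("cuda").manual_seed(seed)  # fixed seed -> reproducible result
    result = pipe(
        [prompt] * n,                # n copies of the prompt -> n images
        num_inference_steps=steps,   # the iteration count mentioned above
        height=height,
        width=width,
        generator=generator,
    )
    return image_grid(result.images, rows=1, cols=n)
```

For a quick preview, call it with something like `steps=15`, then rerun with the same `seed` and `steps=50` to get the polished version of the same picture.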
Overall, I personally prefer this method, because you can DIY everything yourself and use it almost without limit.
Finally, if you can't think of a prompt, check out https://lexica.art/ , which collects a huge number of prompts that others have already tried.
3. Run locally
If you have a high-end graphics card, you can also try running it on your own machine.
About copyright
Indeed, in general Stable Diffusion has no special restrictions, but your use of the images must follow these rules:
1. If you use a third-party platform, you must comply with that platform's terms. For example, the official dreamstudio.ai allows commercial and non-commercial use, but by default you must also follow the CC0 1.0 license.
2. If you deploy it locally yourself, the copyright belongs to you.
Google account registration
First of all, emmm, you will need a way around the network restrictions; you know what I mean.
Click Create account, then For personal use
Fill in your basic personal information
Fill in your phone number and your birth year and month
Your phone then receives a verification code; enter it and you are almost done
Then click Skip
Agree to the terms and you're done!
To wrap up, here is a batch of pictures I generated recently: pavilions in spring, summer, autumn, and winter.
If anything is unclear, or if you are a digital-painting enthusiast, you are welcome to get in touch. (Advertising and irrelevant content are strictly prohibited! If the code expires, add qiufengblue.)
Latest update: I have moved everything else into Notion:
https://qiufeng.notion.site/06fab45ec290447ba41c3fd0f6e78fac