[AI] SD Base Model: ProductDesign

Posted on 2024-3-21 11:58
https://civitai.com/models/23893 ... nimalism-eddiemauro
productDesign_eddiemauro20-4.png


Product Design (minimalism-eddiemauro)

Type: Checkpoint (trained)
Uploaded: May 31, 2023
Base Model: SD 1.5
Usage Tips: Clip Skip 2
Trigger Words: "3D product render", "3D product render style"


========== Example 1 ==========
productDesign_eddiemauro20.png

3D product render, futuristic kettle, finely detailed, purism, ue 5, a computer rendering, minimalism, octane render, 4k

Negative prompt:
EasyNegative, (worst quality:2), (low quality:2), (normal quality:2), lowres, ((monochrome)), ((grayscale)), cropped, text, jpeg artifacts, signature, watermark, username, sketch, cartoon, drawing, anime, duplicate, blurry, semi-realistic, out of frame, ugly, deformed, ((multiple objects))

Steps: 40, Size: 512x512, Seed: 1101613588, Model: eddiemauro2.0, Sampler: Euler a, CFG scale: 8, Clip skip: 2, Model hash: db58f07060

========== Example 2 ==========
productDesign_eddiemauro20-2.png

3D Product render style, futuristic lamp, finely detailed, purism, ue 5, a computer rendering, minimalism, octane render, 4k

Negative prompt: (worst quality:2), (low quality:2), (normal quality:2), lowres, ((monochrome)), ((grayscale)), cropped, text, jpeg artifacts, signature, watermark, username, sketch, cartoon, drawing, anime, duplicate, blurry, semi-realistic, out of frame, ugly, deformed, EasyNegative

Steps: 30, Size: 512x712, Seed: 2707648872, Model: eddiemauro2.0, Sampler: Euler a, CFG scale: 8, Clip skip: 2, Model hash: db58f07060

========== Example 3 ==========
productDesign_eddiemauro20-3.png

3D product render, futuristic vehicle, finely detailed, purism, ue 5, a computer rendering, minimalism, octane render, 4k

Negative prompt: EasyNegative, (worst quality:2), (low quality:2), (normal quality:2), lowres, ((monochrome)), ((grayscale)), cropped, text, jpeg artifacts, signature, watermark, username, sketch, cartoon, drawing, anime, duplicate, blurry, semi-realistic, out of frame, ugly, deformed, orange
Steps: 40, Size: 996x1176, Seed: 1360573815, Model: eddiemauro2.0, Sampler: Euler a, CFG scale: 8, Clip skip: 2, Mask blur: 4, Model hash: db58f07060, Denoising strength: 0.5


========== Example 4 ==========


3D Product render, futuristic ((ceramic)) bottle, finely detailed, purism, ue 5, a computer rendering, minimalism, octane render, 4k

Negative prompt: (worst quality:2), (low quality:2), (normal quality:2), lowres, ((monochrome)), ((grayscale)), cropped, text, jpeg artifacts, signature, watermark, username, sketch, cartoon, drawing, anime, duplicate, blurry, semi-realistic, out of frame, ugly, deformed, EasyNegative
Steps: 30, Size: 512x512, Seed: 527433073, Model: eddiemauro2.0, Sampler: Euler a, CFG scale: 8, Clip skip: 2, Model hash: db58f07060


========== Example 5 ==========

productDesign_eddiemauro20-5.png

3D product render, futuristic chair, finely detailed, purism, ue 5, a computer rendering, minimalism, octane render, 4k

Negative prompt: EasyNegative, (worst quality:2), (low quality:2), (normal quality:2), lowres, ((monochrome)), ((grayscale)), cropped, text, jpeg artifacts, signature, watermark, username, sketch, cartoon, drawing, anime, duplicate, blurry, semi-realistic, out of frame, ugly, deformed
Steps: 40, Size: 512x512, Seed: 1157398724, Model: eddiemauro2.0, Sampler: Euler a, CFG scale: 8, Clip skip: 2, Model hash: db58f07060


=======================

Before you use it

You should already know how Stable Diffusion works. I recommend using Automatic1111 as the interface for running the model.

This model is trained on the SD 1.5 base model, so keep in mind that it is not perfect. I had to do a lot of testing before arriving at stable generations. I will improve the model when a better base model arrives (such as the new SDXL).

This is a trained checkpoint model.

I recommend following me on my Instagram account, where I post about AI image generation: https://www.instagram.com/eddiemauro.design/

Intro

PRODUCT DESIGN (minimalism-eddiemauro) CHECKPOINT: Hi, I'm a product and car designer, and I'm excited to experiment with AI; I think it is a good tool for design. It is very useful in the design process (generating shapes and ideas), but beyond that, it helps a lot with aesthetic refinement. You can also turn sketches and 3D schemes into rendered images.

ORIGINAL MODEL: eddiemauro 1.5. Expresses the minimalism concept. Matte finish.

VARIANT MODEL: eddiemauro 1.5b (also known as eddiemauro 3.5). Prompts are more precise and it adapts better to objects, but the minimalism style is weaker. Object shapes are more realistic, so I consider the original model more "creative". It is also less matte.

VARIANT MODEL v.2: eddiemauro 2. Prompts are more precise and it adapts better to objects than the original and the first variant, but the minimalism style is weaker. Object shapes are more realistic, so I consider the original model more "creative". It becomes more matte when you apply hires.fix or img2img. It is more colorful and sometimes tends toward odd color combinations.

VARIANT MODEL v.2.5: eddiemauro 2.5. Prompts are more precise, it adapts better to objects than the previous version, and object shapes are very consistent. It strikes a good balance between consistency and creativity. It works better with lower Steps and CFG because it was trained at 768 resolution. It is also less colorful and less saturated than the previous models, which can be compensated by increasing CFG. If you want early access to this model, join my lv.1 membership on Ko-fi:



If you want to support my work and help me upload more (and better) models, you can donate here; I would greatly appreciate it: https://ko-fi.com/eddiemauro

Installation

I use Automatic1111, in my opinion the best UI for Stable Diffusion image generation, so I recommend installing it locally or using it online via Colab or another host. You can find instructions and videos online. If you install locally, you can watch a tutorial online; I recommend an NVIDIA graphics card with at least 6-8 GB of VRAM for a stable interface, and launching the UI in Microsoft Edge, since you may have problems in Google Chrome. Also consider enabling the "medvram" or "lowvram" options as well as "xformers" (search online for how).
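
If you prefer a scripted workflow instead of the WebUI, a rough equivalent of those memory-saving options with the Hugging Face diffusers library looks like this (a sketch only; the checkpoint filename is a placeholder for wherever you saved the downloaded file):

import torch
from diffusers import StableDiffusionPipeline

# Load the downloaded checkpoint file (placeholder filename).
pipe = StableDiffusionPipeline.from_single_file(
    "productDesign_eddiemauro20.safetensors", torch_dtype=torch.float16
)

# Rough counterparts of the WebUI's --xformers and --medvram/--lowvram flags:
pipe.enable_xformers_memory_efficient_attention()  # needs the xformers package installed
pipe.enable_model_cpu_offload()  # keeps idle sub-models on the CPU instead of pipe.to("cuda")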

You have to install the checkpoint model to use it.

Please follow all of my recommendations for image creation; otherwise it is very hard to get good image quality. Also keep in mind that AI image generation today is not fully consistent or perfect; you have to invest time and run plenty of tests.

Recommendations for image generation

Activation token/caption: The prompt must start with "3D product render", "product render" or "3D product render style" to activate the style. This is mandatory; without it, the model will not work properly.

Other recommended prompting: These words will improve image generation. In the positive prompt: "futuristic, finely detailed, purism, ue 5, a computer rendering, minimalism, octane render, 4k"; in the negative prompt: "(worst quality:2), (low quality:2), (normal quality:2), lowres, ((monochrome)), ((grayscale)), cropped, text, jpeg artifacts, signature, watermark, username, sketch, cartoon, drawing, anime, duplicate, blurry, semi-realistic, out of frame, ugly, deformed". You can also check the metadata of the example images here and reproduce the prompt.

Recommended textual inversion/embedding and LoRA tools: I consider "EasyNegative" one of the best textual inversions for the negative prompt, and you should use it. Download it here and install it by placing the file inside the "embeddings" folder. You can also use "Detail Tweaker" to reduce the amount of detail in the image: download it here, install it like a LoRA, and use it in the positive prompt with a weight of "-0.5". Use it when the output has too much detail, but not when the minimalist look is already there, as it can change the shape of objects considerably. You can use other LoRAs such as "Epi noiseoffset" or "Godard Style", but not any of my product-design-minimalism LoRAs.
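
For reference, in a diffusers script the same embedding and LoRA can be attached roughly like this (a sketch with placeholder filenames; the -0.5 value mirrors the <lora:...:-0.5> weight syntax used in Automatic1111):

# Register EasyNegative so the token can be used in the negative prompt.
pipe.load_textual_inversion("EasyNegative.safetensors", token="EasyNegative")

# Attach the Detail Tweaker LoRA; a negative scale reduces detail, as recommended above.
pipe.load_lora_weights("add_detail.safetensors")
image = pipe(
    "3D product render, futuristic chair, minimalism, octane render, 4k",
    negative_prompt="EasyNegative, lowres, blurry, deformed",
    cross_attention_kwargs={"scale": -0.5},  # LoRA weight of -0.5
).images[0]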

VAE: It is generally recommended to use the standard Stable Diffusion "vae-ft-mse-840000-ema-pruned" VAE.
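
In diffusers terms, the same VAE is published on the Hugging Face Hub as "stabilityai/sd-vae-ft-mse" and can be swapped into the pipeline from the installation sketch above:

import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe.vae = vae  # replace the VAE baked into the checkpoint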

Clip Skip: The model was trained with Clip Skip 2, so use 2.

Steps and CFG: Use Steps in the range 20-40 and a CFG scale of 6-9; the ideal is Steps 30, CFG 8. For future models these values may change.
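
Putting the recommendations together (trigger phrase, Euler a, Clip Skip 2, Steps 30, CFG 8, 512x512), a minimal txt2img sketch with diffusers could look like the following. Note that the "(worst quality:2)" weighting syntax is an Automatic1111 convention, so the sketch uses plain words, and that clip_skip in the call requires a recent diffusers version:

import torch
from diffusers import EulerAncestralDiscreteScheduler

# "Euler a" sampler, as used in the example images above.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

prompt = ("3D product render, futuristic kettle, finely detailed, purism, ue 5, "
          "a computer rendering, minimalism, octane render, 4k")
negative = ("EasyNegative, worst quality, low quality, lowres, monochrome, grayscale, "
            "cropped, text, watermark, sketch, cartoon, drawing, blurry, deformed")

image = pipe(
    prompt,
    negative_prompt=negative,
    num_inference_steps=30,   # Steps 30
    guidance_scale=8.0,       # CFG 8
    width=512, height=512,
    clip_skip=2,              # Clip Skip 2 (recent diffusers versions)
    generator=torch.Generator("cuda").manual_seed(1101613588),  # seed from example 1
).images[0]
image.save("kettle.png")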

Color adjustment: When an unwanted color appears in the image, add "weird colors", or the specific color you want to avoid, to the negative prompt. v.2 tends to be very colorful.

Sampler: I mostly use "Euler a", "DPM++ SDE Karras" or "DPM++ 2S a". Euler a tends to be simpler and more creative. Experiment with other samplers if you like.

Batch: In txt2img, set a value of 4 to generate more than one image and compare the results. If you have a good graphics card, use "Batch size", which creates 4 images at the same time (using more VRAM); if your computer cannot handle that, switch to "Batch count", which creates 4 images one after another, so the total generation time is longer.
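
In a diffusers script, "Batch size" roughly corresponds to num_images_per_prompt while "Batch count" is just a loop; a sketch, reusing the prompt and negative strings from the txt2img example above:

# "Batch size": 4 images in a single pass (faster overall, but needs more VRAM).
images = pipe(prompt, negative_prompt=negative, num_images_per_prompt=4).images

# "Batch count": 4 images generated one after another (lighter on VRAM, longer total time).
images = [pipe(prompt, negative_prompt=negative).images[0] for _ in range(4)]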

Image aspect: Try these dimensions: 512x512, 768x512 or 512x768, though you can experiment with others. Don't generate larger images directly, because the style may be lost; if you want a bigger image, use hires.fix in txt2img, the img2img upscale method, the Ultimate SD Upscale script extension + ControlNet, or simply upscale with a GAN model.

Create bigger images: There are four different methods to create large images in Stable Diffusion; you can check online how to use them. 1) txt2img hires.fix: I recommend the "4x-UltraSharp" upscale model; download only the ".pth" file here and place it in the "ESRGAN" folder. In hires.fix, set any "upscale by" value and a denoising strength of 0.5-0.7. 2) img2img: take the image generated in txt2img, send it to img2img, and increase the dimensions by at least 1.5x with a denoising strength of 0.3-0.5. 3) Use the same img2img configuration, but enable the "Tile" mode of the ControlNet extension together with the "Ultimate SD Upscale" script; I recommend watching a tutorial here. 4) Send the generated image from txt2img to "Extras", select a GAN model and upscale; you can also use "4x-UltraSharp" there.
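
As an illustration of the second method (img2img upscaling) under the same assumptions as the earlier sketches, with the 1.5x size increase and the 0.3-0.5 denoising strength recommended above:

from diffusers import StableDiffusionImg2ImgPipeline

# Reuse the already-loaded components for an img2img pass.
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)

upscaled = img2img(
    prompt,
    negative_prompt=negative,
    image=image.resize((768, 768)),  # roughly 1.5x the original 512x512 output
    strength=0.4,                    # "denoise strength" in the 0.3-0.5 range
    num_inference_steps=30,
    guidance_scale=8.0,
).images[0]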

Get more control over your creation: Use the "ControlNet" extension to control the shape of what you generate; you can even drive it with sketches. Use the "Scribble" or "Lineart" modes. Install the extension and learn how to use it; there are plenty of videos about it online.
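
A rough diffusers sketch of the Scribble workflow is below; pairing this checkpoint with the "lllyasviel/control_v11p_sd15_scribble" ControlNet and the input filename are my assumptions, not something specified by the author:

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_scribble", torch_dtype=torch.float16
)
cn_pipe = StableDiffusionControlNetPipeline.from_single_file(
    "productDesign_eddiemauro20.safetensors",  # placeholder checkpoint filename
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

scribble = load_image("my_product_sketch.png")  # hypothetical scribble/sketch input
image = cn_pipe(
    "3D product render, futuristic chair, minimalism, octane render, 4k",
    image=scribble,
    num_inference_steps=30,
    guidance_scale=8.0,
).images[0]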

Copy the prompt from image metadata: You can download my example images here and drop them into the "PNG Info" tab in Automatic1111.
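
If you would rather read that metadata in code, Automatic1111 writes the generation parameters into a "parameters" text chunk of the PNG, which Pillow exposes via the image's info dictionary (the filename is a placeholder):

from PIL import Image

info = Image.open("productDesign_eddiemauro20.png").info
print(info.get("parameters"))  # prompt, negative prompt, steps, sampler, CFG scale, seed, ...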

Example Prompting:

Positive prompt:

3D product render, futuristic armchair, finely detailed, purism, ue 5, a computer rendering, minimalism, octane render, 4k

Negative prompt:

EasyNegative, (worst quality:2), (low quality:2), (normal quality:2), lowres, ((monochrome)), ((grayscale)), cropped, text, jpeg artifacts, signature, watermark, username, sketch, cartoon, drawing, anime, duplicate, blurry, semi-realistic, out of frame, ugly, deformed

Steps: 20-40 (20 is enough for Euler a, DPM++ SDE Karras or DPM++ 2S a).

CFG scale: 6-9 (8 is ideal).

What comes next

I'm already working on improving the model. This version was trained at 512 resolution, so I will try 768 (larger), as well as other configurations (different captions, steps, epochs, etc.). If you would like a better version of this model, please keep supporting me on Ko-fi; the more people support me, the more time I can invest in training and improving models.



I launched my first private model for my Ko-fi lv.1 membership, called "eddiemauro scene", for minimalistic scenery creation for rendering. If you want access to private models, you can support me by subscribing to that membership. I will also start uploading more models centered on product and car design here.


License

See the Stable Diffusion license at the link here. For this specific model, you may use it for anything in terms of image generation, including commercially (selling the images you generate). The following are prohibited:

Uploading this model to any server or public online site without my permission.

Sharing this model online without my permission, publishing my exact model under a different name, or uploading it to run on services that generate images for money.

Merging it with another checkpoint or a LoRA and then publishing or sharing it online; talk to me first. In the future,

Selling this model or any merges that use it.


Supporting

You can follow me on my social networks, where I share my process as well as design tips and tools. You can also check out my webpage; if you need a design service, I work as a freelancer.

http://eddiemauro.design/

https://www.facebook.com/eddiemauro.design

https://www.instagram.com/eddiemauro.design

https://www.linkedin.com/in/eddiemauro

https://www.behance.net/eadesign1

OP | Posted on 2024-3-21 12:03
Link: https://pan.baidu.com/s/1XhB6KPQDi8j9S76FwjRRPA?pwd=1gaf
Extraction code: 1gaf
-- Shared from a Baidu Netdisk Super Member V7 account