diff --git a/.gitignore b/.gitignore index d4090ae..d3a7269 100644 --- a/.gitignore +++ b/.gitignore @@ -1,3 +1,4 @@ .token outputs happytrees.log +users.db diff --git a/README.md b/README.md index 5481c77..d83957b 100644 --- a/README.md +++ b/README.md @@ -1,13 +1,69 @@ # Happy Trees Discord Bot ## Description -This is a Discord bot for taking Stable Diffusion image requests and running them on a local GPU. +This is a Discord bot for taking Stable Diffusion image requests and running them on a local GPU. Use of this bot to generate images with Stable Diffusion is subject to the [Stable Diffusion License](https://huggingface.co/spaces/CompVis/stable-diffusion-license), including Attachment A. + +## TODO +* Figure out why the persistent views don't work after a server restart. Until we fix this, interaction buttons stop working when the bot is restarted. ## Setup Clone this repository to the system where you have setup [Stable Diffusion](https://github.com/CompVis/stable-diffusion), with the optimizations from [Basu Jindal's fork](https://github.com/basujindal/stable-diffusion). We're not covering how to run Stable Diffusion here, so you're on your own there. By default, we're assuming that you're installing both of these to your user's home directory on a linux system. +You'll also need [real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) for image upscaling. We're using the portable NCNN executables, so you can just download that, or you can grab the whole repo if you're feeling adventurous. + Create a new Discord application in the [Discord Developer Portal](https://discord.com/developers/applications), with a name, description, and icon of your choosing. Then head over to the Bot link on the lefthand panel. Here you'll want to create a bot (yes, you're sure) and enable the "Message Content Intent" under "Privileged Gateway Intents". Once you do that, you can grab the bot token via the Reset Token button and store it in a `.token` file within the cloned repository folder. Edit any paths in the `bot.py` file that you need for your specific system setup and then go ahead and run `python3 bot.py` from within the cloned directory to test it out. If everything worked and it can connect to Discord, it should print an invite link to your system console. You can use that link to invite your bot to servers where you have the "Manage Server" permission. (If you flip the "Public Bot" toggle in the Discord bot interface to "Off", from the default of "On", then *only* you will be able to use the invite link. If you leave it public, then anyone with the link can use the invite link. Unless you're feeling generous with your GPU cycles, you probably want to leave this private unless and only briefly toggle it off for specific periods of time when you know people should be adding the bot to other servers.) If that all worked and you want to make this a regular thing, you can update your username in the `happytrees.service` file and copy it to `/etc/systemd/system/`. After that, you'll just need to run `sudo systemctl daemon-reload` to let systemd see your new file, then `sudo systemctl enable happytrees.service` to start the service on boot, and (optionally) `sudo systemctl start happytrees.service` to run a copy from the service file right now. + +## Usage + +To use the Happy Trees bot, you need to be a member of a server to which it has been invited. 
At that point you can use one of the following options to submit requests to the bot: + +* Send a message in a channel that the bot is a part of and @mention the bot +* Send a message in a channel that the bot is a part of and prefix your message with `!happytree` or `!happytrees` +* Send the bot a DM (you can @mention or use a prefix here, but it is not necessary) + +A note about messages in channels: this bot does not adhere to your capitalist notions of private property. So if you make a request in a public channel, anyone can interact with the buttons on the response to get copies of the generated images or upscale those images. If you want to keep things private, talk to the bot in a DM. + +The general flow of interaction is that you provide the bot with a Stable Diffusion prompt and you will be placed in a queue to have your art generated on the local GPU. Once it is your turn, your art will be generated and provided to you in a reply to your original message. If you didn't change the defaults, then the bot will provide you with a grid composed of four 448px by 448px images, and below that grid will be a set of buttons. These buttons allow you to obtain the individual 448x448 samples that the algorithm created. That step is not bound by the queue and can be completed even when other work is pending. Once you have isolated an image (either through the buttons to select from a grid or by only asking for a single image in the first place), you will also be provided with a button labeled "Embiggen". This button will allow you to upscale the image to 4x the original size, 1792x1792. Beware: this is an AI art task and will be placed in the queue to await your turn. While we could continue to go larger, that's a reasonable size and is about the limit of Discord's default abilities. If you need something bigger than that, you'll need to get in touch with someone to do it by hand so that they can pass you the resulting file via another means. + +### img2img + +While just tossing text at the bot will get you interesting, if somewhat random, art, there is another way to exert a touch more control. You can also attach an image to your message, which the bot will use **in addition to** your text prompt to generate artwork. The image that you provide can be in any standard image format (no PDFs...) and of any size. It doesn't need to be huge, although if you want the bot to be able to use detail from your input image, and not just general shapes and colors, you will want it to be at least 1:1 with the 448x448 output size. + +### Options + +As alluded to above, there are several options that one can use to modify the default behavior of a request. These can be placed basically anywhere in your message, including in the middle of a prompt (as long as you have spaces to separate them), but generally I recommend picking the beginning or end so that it's a bit easier for you to keep track of what you're doing. + + +``` +--seed [0-1000000] will use a specific number instead of a random seed for the generation process. +--n_samples [1-4] will determine how many sample images I make for you. Default is 4. +--ddim_steps [0-80] will cause me to spend more or less compute time on the image. Default is 50. +--strength [0.00-1.00] will set how much liberty I should take in deviating from your input image, with 1 being to basically ignore the input image. Default is 0.75.
+``` + +#### Seed + +Because computers can't actually make random decisions, we settle for fancy math functions that seem to produce random output (pseudorandom number generators, or PRNGs) to simulate it. A seed tells the PRNG where to start in its sequence of fake randomness. Without this, it would always start at the same spot and you'd always get the same answers in the same order. Normally, the script pulls a seed from a combination of fairly unpredictable things in the operating system, so that you're unlikely to see the same image again. If you **want** to get the same answer as you did before, take note of the seed from a prior run (if you open the file and can see the file name, the name format is `seed_<seed>_#####.png` or `grid_<seed>_####.png`) and pass that same seed value to the `--seed` parameter. **Note:** when you generate multiple samples, the seed always increases by 1 for each sample. This is true even if you pass in a manual seed. So if you have a grid and want to do some refining based on the third sample, you'd need to use the grid's seed from the filename plus two. + +#### DDIM Steps + +The explanation in the help text is super simplified, but basically correct. You should think of this parameter as "How much work should the AI put into this?" As you can imagine, if you give the AI too little time, you won't really have any recognizable art. But on the other hand, like any good AI, if you let it spend too much time on something, the results can wind up being horrific. See the examples below by way of illustration: + +![An example of how different DDIM settings affect the output](/grid_136549.png) + +This shows the output with 5 steps (what even is that??), 10 and 15 steps (look at how they've mutilated that poor dog), 20 steps (that's pretty much the right shape), 50 steps (smoothed out more and decided to lose the tail), and 70 steps (the only one of the batch that had both reasonable features and an actual lower jaw). As you go higher, it starts editing more and taking things away (like crucial parts of the dog's head). In general, I wouldn't recommend going below 20 steps if you want to have some idea of what the prompt and seed combination actually would look like. The default, 50, is a good balance for most applications, so unless you're lowering the step count to speed things up, or tweaking it by a few steps in either direction to get rid of an annoying artifact, I'd leave it be. + + +#### Strength + +This **only** applies to `img2img` requests (requests where you attached an input image to your message). Unlike the other inputs, this one is a decimal between zero and one. The closer to one it gets, the less the AI pays attention to the input you gave it. The default gives the AI quite a bit of latitude. If you're only trying for some recoloring and minor touch-ups but have an image that you largely want to stick to, try something more conservative like `--strength 0.2`. + +## Weaknesses of the Stable Diffusion model + +While Stable Diffusion can do some really impressive things, including things that other AI models explicitly refuse to do, like generating images of specific people (e.g., if you ask for "Javy Baez" with no other context, it knows that you're talking about a baseball player and gives you decent attempts at his likeness), it still has several limitations. Regarding living things especially, it still doesn't really know what makes a person a person or a dog a dog.
It has some mathematical equations in it that sometimes produce remarkably realistic results, but those equations lack constraints to prevent them from making silly errors like dogs with one central ear or people in positions that humans cannot ever hope to achieve. If you want to make your images more reasonable, I suggest using the `img2img` functionality and constraining the model a bit so that it is at least starting with something in the right arrangement of limbs. Otherwise, your best bet is really just to keep fishing for a good seed value and hoping that one of the random iterations will come out right. + +On the subject of `img2img`, as noted above, quality isn't a big deal for the model, especially at higher strength values. One thing that is a known problem, though, is mixing different styles of artwork. Say you have a photo as a guide and you want some birds in the sky behind you, so you draw them in with MS Paint... that won't work well. You'd be better off either drawing the whole reference image in Paint, or getting the AI to make you some birds and then cutting and pasting them into the image in the spot you want with a bit of a blur to hide the obvious edges. diff --git a/bot.py b/bot.py index 34dc217..ae8ebdf 100644 --- a/bot.py +++ b/bot.py @@ -6,19 +6,93 @@ import sys import logging import asyncio import time -import shlex import re import random +import hashlib +import tempfile import discord +from discord.ext import commands +import sqlite3 +from contextlib import closing -class HappyTreesBot(discord.Client): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) +class EnlargeButton(discord.ui.Button): + def __init__(self, relfile): + super().__init__(style=discord.ButtonStyle.secondary, label='Embiggen', custom_id=relfile) + self.logger = logging.getLogger('discord.EnlargeButton') + + async def callback(self, interaction): + assert self.view is not None + view = self.view + t = time.perf_counter() + self.logger.info(f'Queueing request for {interaction.user} to embiggen {self.custom_id}.') + interaction.client.commissions.put_nowait((t, 'embiggen', interaction.message, self.custom_id, None, None, None, None)) + queuelen = interaction.client.commissions.qsize() + waittime = (queuelen * 6) - 4 + await interaction.response.send_message(f'{interaction.user} you are number {queuelen} in the Happy Trees processing queue. 
Estimated wait time is {waittime} minutes.') + +class EnlargeView(discord.ui.View): + def __init__(self, outfile=None, outpath=None): + super().__init__(timeout=None) + if not outfile: + return + self.logger = logging.getLogger('discord.EnlargeView') + relfile = os.path.relpath(outfile, outpath) + self.logger.info(f'Creating embiggen button for {relfile}.') + self.add_item(EnlargeButton(relfile)) + +class GridButton(discord.ui.Button): + def __init__(self, sample, samplefile, row): + super().__init__(style=discord.ButtonStyle.secondary, label=f'Image {sample + 1}', custom_id=samplefile, row=row) + self.logger = logging.getLogger('discord.GridButton') + + async def callback(self, interaction): + assert self.view is not None + self.logger.info(f'Received a request to return single file {self.custom_id}') + samplefile = os.path.join(interaction.client.outpath, self.custom_id) + try: + await interaction.response.send_message(file=discord.File(samplefile, description=f'Sample {self.custom_id}.'), view=EnlargeView(samplefile, interaction.client.outpath)) + except: + await interaction.response.send_message(f'An error occurred. Sample {self.custom_id} not found.') + + +class GridSelect(discord.ui.View): + def __init__(self, samples=None, n_rows=None, outfile=None, outpath=None): + super().__init__(timeout=None) + if not samples: + return + self.logger = logging.getLogger('discord.GridSelect') + relfile = os.path.relpath(outfile, outpath) + relpath, gridfile = os.path.split(relfile) + _, seed, base = os.path.splitext(gridfile)[0].split('_') + per_row = samples // n_rows + if (samples % n_rows) != 0: + per_row += 1 + self.logger.info(f'Creating buttons for GridView for {outfile}.') + for sample in range(samples): + sampleseed = int(seed)+sample + samplebase = int(base)+sample + samplefile = os.path.join(relpath, f'seed_{sampleseed}_{samplebase:05}.png') + row=sample//per_row + self.add_item(GridButton(sample, samplefile, row)) + +class HappyTreesBot(commands.Bot): + def __init__(self): + intents = discord.Intents.default() + intents.message_content = True + super().__init__(command_prefix=commands.when_mentioned_or('!happytree'), intents=intents) self.logger = logging.getLogger('discord.HappyTreesBot') self.commissions = asyncio.Queue() self.paintings = [] self.outpath = os.path.abspath(os.path.expanduser('~/happytreesbot/outputs')) self.sdpath = os.path.abspath(os.path.expanduser('~/stable-diffusion')) + self.ganpath = os.path.abspath(os.path.expanduser('~/real-ESRGAN')) + with closing(sqlite3.connect("users.db")) as conn: + with closing(conn.cursor()) as curs: + curs.execute("CREATE TABLE IF NOT EXISTS users(id integer PRIMARY KEY, first_use integer, last_use integer)") + + async def setup_hook(self) -> None: + self.add_view(GridSelect()) + self.add_view(EnlargeView()) async def on_ready(self): self.logger.info(f'Logged in to Discord as {self.user}!') @@ -33,39 +107,81 @@ class HappyTreesBot(discord.Client): elif message.content.startswith(f'<@{self.application_id}>'): self.logger.info(f'Mention message from {message.author} ({message.author.id}): {message.content}') await self.take_commission(message) + elif isinstance(message.channel, discord.channel.DMChannel): + self.logger.info(f'Direct message from {message.author} ({message.author.id}): {message.content}') + await self.take_commission(message) else: self.logger.debug(f'Non-command message from {message.author} ({message.author.id}): {message.content}') + async def usage(self, message): + await message.reply(f'Welcome to the Happy Trees Bot. 
I take your requests and draw them using an old GTX 980, the Stable Diffusion image generation algorithm, and the real-ESRGAN upscaling algorithm. You can commission an image by sending me a _DM_, sending a message that starts by _@mentioning_ me in a channel that I\'m in, or prefixing your message with _!happytree_ in a channel that I\'m in.\n\nI have two main modes of operation:\n **txt2img:** This is my default mode and here I\'ll interpret most text as part of a prompt and generate an image based on that.\n **img2img:** You can use this by attaching an image to your message and I\'ll try to use your image as a guide for my output.\n\nThere are also certain special options you can pass me to affect how I work. These are:\n *--seed [0-1000000]* will use a specific number instead of a random seed for the generation process.\n *--n_samples [1-4]* will determine how many sample images I make for you. **Default is 4.**\n *--ddim_steps [0-80]* will cause me to spend more or less compute time on the image. **Default is 50.**\n *--strength [0.00-1.00]* will set how much liberty I should take in deviating from your input image, with 1 being to basically ignore the input image. **Default is 0.75.**\n\nFor more details and examples, please check out the README file in the Happy Trees Bot Git repository: .') + return + + async def license(self, message): + await message.author.send(f'Hi there! This seems to be your first time using the Happy Trees Bot, so I just want to be sure that you know how to use the bot and are aware of the license restrictions that the bot operates under from the Stable Diffusion library.\n\n**NOTE:** All requests run one at a time because I only have one GPU. Please be respectful of others who might want to request some images and don\'t queue up too much art all at once. *Thank you*\n\n You can interact with the bot via DM, or via a message that starts with an @mention of the bot in a channel that the bot is in, or via a message that starts with !happytree in a channel that the bot is in. For full details on how to use the bot, use any of those methods with the command "help".\n\nAll use of the Happy Trees Bot is subject to the Stable Diffusion license: , including _Attachment A: Use Restrictions_. If you **do not agree** to the terms of that license, please **discontinue use** of the Happy Trees Bot. You should read the license for full details, but my short summary here is: don\'t be a jerk, don\'t be a creep, and don\'t break the law. 
Hopefully you can manage that.') + return + + async def take_commission(self, message): t = time.perf_counter() - parsed = shlex.split(message.content)[1:] + parsed = message.content.split() + tags = 0 + for tok in parsed: + if tok.startswith('!happytree') or tok.startswith(f'<@{self.application_id}>'): + tags += 1 + else: + break + parsed = parsed[tags:] + with closing(sqlite3.connect("users.db")) as conn: + with closing(conn.cursor()) as curs: + existing_user = curs.execute('SELECT * FROM users where id=?', (message.author.id,)).fetchall() + curs.execute('INSERT INTO users (id, first_use, last_use) VALUES (?, strftime(\'%s\', \'now\'), strftime(\'%s\', \'now\')) ON CONFLICT(id) DO UPDATE SET last_use=strftime(\'%s\', \'now\');', (message.author.id,)) + conn.commit() + self.logger.debug(f'First message token: {parsed[0]}.') + if parsed[0].lower() in ["help", "--help", "-help", "-h", "--h"]: + await self.usage(message) + return + elif parsed[0].lower() in ["license", "--license", "-license", "-l", "--l"]: + await self.license(message) + return + if not existing_user: + await self.license(message) prompt = "" currOpt = "" + customSeed = False seed = random.randint(0, 1000000) samples = 4 steps = 50 + strength = 0.75 for token in parsed: if not token.startswith('--'): - if currOpt and token.isdigit(): - token = int(token) + if currOpt and token.replace('.', '', 1).isdigit(): + token = float(token) if currOpt == "seed": - if token <= 1000000: - seed = token + if (token >= 1) and (token <= 1000000): + seed = int(token) + customSeed = True else: await message.reply(f'Invalid seed value. Seed must be <= 1000000.') return elif currOpt == "n_samples": - if token <= 4: - samples = token + if (token >= 1) and (token <= 4): + samples = int(token) else: await message.reply(f'Invalid n_samples value. This GPU is old and cannot do more than 4 at a time.') return elif currOpt == "ddim_steps": - if token <= 80: - steps = token + if (token >= 1) and (token <= 80): + steps = int(token) else: await message.reply(f'Invalid ddim_steps value. Limited to 80 for time considerations.') return + elif currOpt == "strength": + if (token > 0) and (token < 1): + strength = token + else: + await message.reply(f'Invalid strength value. Strength must be a decimal between 0 and 1.') + return currOpt = "" else: if not prompt: @@ -77,40 +193,41 @@ class HappyTreesBot(discord.Client): opt = token[2:] if "=" in opt: optName, _, val = opt.partition('=') - try: - val = int(val) - except: - await message.reply(f'Invalid option value. Please use a positive integer.') - return + try: + val = float(val) + except ValueError: + await message.reply(f'Invalid option value. Please use a number.') + return if optName == "seed": - if val <= 1000000: - seed = val + if (val >= 1) and (val <= 1000000): + seed = int(val) + customSeed = True else: await message.reply(f'Invalid seed value. Seed must be <= 1000000.') return elif optName == "n_samples": - if val <= 4: - samples = val + if (val >= 1) and (val <= 4): + samples = int(val) else: await message.reply(f'Invalid n_samples value. This GPU is old and cannot do more than 4 at a time.') return elif optName == "ddim_steps": - if val <= 80: - steps = val + if (val >= 1) and (val <= 80): + steps = int(val) else: await message.reply(f'Invalid ddim_steps value. Limited to 80 for time considerations.') return + elif optName == "strength": + if (val > 0) and (val < 1): + strength = val + else: + await message.reply(f'Invalid strength value. Strength must be a decimal between 0 and 1.') + return else: - await message.reply(f'Invalid option name: {optName}. Valid options are "--ddim_steps", "--n_samples", and "--seed".') + await message.reply(f'Invalid option name: {optName}. 
Valid options are "--ddim_steps", "--n_samples", "--seed", and "--strength".') return else: - if opt in ["ddim_steps", "n_samples", "seed"]: + if opt in ["ddim_steps", "n_samples", "seed", "strength"]: currOpt = opt else: - await message.reply(f'Invalid option name: {opt}. Valid options are "--ddim_steps", "--n_samples", and "--seed".') + await message.reply(f'Invalid option name: {opt}. Valid options are "--ddim_steps", "--n_samples", "--seed", and "--strength".') return - self.logger.info(f'Queueing request for {message.author}. Prompt: "{prompt}"; Samples: {samples}; Seed: {seed}; Steps: {steps}.') - self.commissions.put_nowait((t, message, prompt, samples, seed, steps)) + self.logger.info(f'Queueing request for {message.author}. Prompt: "{prompt}"; Samples: {samples}; Seed: {seed}; Steps: {steps}; Strength: {strength}.') + self.commissions.put_nowait((t, 'paint', message, prompt, samples, seed, steps, strength)) position=self.commissions.qsize() waittime=position * 6 await message.reply(f'{message.author} you are number {position} in the Happy Trees processing queue. Estimated wait time is {waittime} minutes.') @@ -118,32 +235,76 @@ class HappyTreesBot(discord.Client): async def painting(self): while True: self.logger.debug(f'Bob Ross is ready to paint!') - t, message, prompt, samples, seed, steps = await self.commissions.get() - self.logger.info(f'Bob Ross is painting "{prompt}" for {message.author}.') - # This section copies the filename generation code from - # optimized_txt2img.py from optimizedSD - os.makedirs(self.outpath, exist_ok=True) - sample_path = os.path.join(self.outpath, "_".join(re.split(":| ", prompt)))[:150] - os.makedirs(sample_path, exist_ok=True) - base_count = len(os.listdir(sample_path)) - outfile = os.path.join(sample_path, "seed_" + str(seed) + "_" + f"{base_count:05}.png") - self.logger.debug(f'Output file will be: {outfile}') - start = time.perf_counter() - self.logger.info(f'About to start the subprocess...') - proc = await asyncio.create_subprocess_exec('/usr/bin/python3','optimizedSD/optimized_txt2img.py', '--H', '448', '--W', '448', '--precision', 'full', '--outdir', self.outpath, '--n_iter', '1', '--n_samples', '1', '--ddim_steps', str(steps), '--seed', str(seed), '--prompt', str(prompt), stderr=asyncio.subprocess.STDOUT, stdout=asyncio.subprocess.PIPE, cwd=self.sdpath) - self.logger.info(f'Started SD subprocess, PID: {proc.pid}') - procout, procerr = await proc.communicate() - procoutput = procout.decode().strip() - self.logger.info(f'Process output: {procoutput}') - complete = time.perf_counter() - self.logger.info(f'Painting of "{prompt}" complete for {message.author}. Wait time: {start-t:0.5f}; Paint time: {complete-start:0.5f}; Total time: {complete-t:0.5f}.') - try: - await message.reply(file=discord.File(outfile, description=f'Prompt: "{prompt}"; Starting Seed: {seed}; Steps: {steps}')) - except: - await message.reply(f'An error occurred. 
Output file not found.') - raise - self.logger.debug(f'Reply sent') - self.commissions.task_done() + t, command, message, prompt, samples, seed, steps, strength = await self.commissions.get() + if command == "embiggen": + os.makedirs(self.outpath, exist_ok=True) + with tempfile.NamedTemporaryFile(suffix=".png", dir=self.outpath) as fp: + start = time.perf_counter() + proc = await asyncio.create_subprocess_exec(os.path.join(self.ganpath, 'realesrgan-ncnn-vulkan'), '-i', os.path.join(self.outpath, prompt), '-o', fp.name, '-n', 'realesrgan-x4plus', stderr=asyncio.subprocess.STDOUT, stdout=asyncio.subprocess.PIPE, cwd=self.ganpath) + self.logger.info(f'Started real-ESRGAN subprocess, PID: {proc.pid}') + procout, procerr = await proc.communicate() + procoutput = procout.decode().strip() + self.logger.info(f'Process output: {procoutput}') + complete = time.perf_counter() + self.logger.info(f'Embiggening of "{prompt}" complete. Output at {fp.name}. Wait time: {start-t:0.5f}; Paint time: {complete-start:0.5f}; Total time: {complete-t:0.5f}.') + try: + await message.reply(file=discord.File(fp.name, description=f'Embiggened version of {prompt}')) + except: + await message.reply(f'An error occurred. Output file not found.') + raise + self.logger.debug(f'Reply sent') + self.commissions.task_done() + elif command == "paint": + self.logger.info(f'Bob Ross is painting "{prompt}" for {message.author}.') + # This section copies the filename generation code from + # optimized_txt2img.py from optimizedSD + os.makedirs(self.outpath, exist_ok=True) + promptpath = "_".join(re.split(":| ", prompt))[:150] + sample_path = os.path.join(self.outpath, promptpath) + os.makedirs(sample_path, exist_ok=True) + hashdirpath = os.path.join(self.outpath, 'hashed') + os.makedirs(hashdirpath, exist_ok=True) + hash_path = os.path.join(hashdirpath, hashlib.sha256(promptpath.encode()).hexdigest()) + if not os.path.exists(hash_path): + os.symlink(sample_path, hash_path, target_is_directory=True) + base_count = len(os.listdir(sample_path)) + attachment="" + if message.attachments: + if message.attachments[0].content_type.startswith('image/'): + base_count += 1 + attachment = os.path.join(sample_path, "init_" + str(seed) + "_" + f"{base_count:05}_{message.attachments[0].filename}") + await message.attachments[0].save(attachment) + if samples > 1: + outfile = os.path.join(hash_path, "grid_" + str(seed) + "_" + f"{base_count:04}.png") + else: + outfile = os.path.join(hash_path, "seed_" + str(seed) + "_" + f"{base_count:05}.png") + self.logger.debug(f'Output file will be: {outfile}') + num_rows = int((samples**0.5)//1) + start = time.perf_counter() + self.logger.info(f'About to start the subprocess...') + if attachment: + proc = await asyncio.create_subprocess_exec('/usr/bin/python3','optimizedSD/optimized_img2img.py', '--H', '448', '--W', '448', '--precision', 'full', '--outdir', self.outpath, '--init-img', str(attachment), '--strength', str(strength), '--n_iter', '1', '--n_samples', str(samples), '--n_rows', str(num_rows), '--ddim_steps', str(steps), '--seed', str(seed), '--prompt', str(prompt), stderr=asyncio.subprocess.STDOUT, stdout=asyncio.subprocess.PIPE, cwd=self.sdpath) + else: + proc = await asyncio.create_subprocess_exec('/usr/bin/python3','optimizedSD/optimized_txt2img.py', '--H', '448', '--W', '448', '--precision', 'full', '--outdir', self.outpath, '--n_iter', '1', '--n_samples', str(samples), '--n_rows', str(num_rows), '--ddim_steps', str(steps), '--seed', str(seed), '--prompt', str(prompt), 
stderr=asyncio.subprocess.STDOUT, stdout=asyncio.subprocess.PIPE, cwd=self.sdpath) + self.logger.info(f'Started SD subprocess, PID: {proc.pid}') + procout, procerr = await proc.communicate() + procoutput = procout.decode().strip() + self.logger.info(f'Process output: {procoutput}') + complete = time.perf_counter() + self.logger.info(f'Painting of "{prompt}" complete for {message.author}. Output at {outfile}. Wait time: {start-t:0.5f}; Paint time: {complete-start:0.5f}; Total time: {complete-t:0.5f}.') + try: + if samples > 1: + await message.reply(file=discord.File(outfile, description=f'Prompt: "{prompt}"; Starting Seed: {seed}; Steps: {steps}'), view=GridSelect(samples, num_rows, outfile, self.outpath)) + else: + await message.reply(file=discord.File(outfile, description=f'Prompt: "{prompt}"; Starting Seed: {seed}; Steps: {steps}'), view=EnlargeView(outfile, self.outpath)) + except: + await message.reply(f'An error occurred. Output file not found.') + raise + self.logger.debug(f'Reply sent') + self.commissions.task_done() + else: + self.logger.warn(f'Unknown command in queue: {command}') + self.commissions.task_done() async def bobross(self): while True: @@ -157,9 +318,7 @@ class HappyTreesBot(discord.Client): self.paintings = [] async def main(): - intents = discord.Intents.default() - intents.message_content = True - bot = HappyTreesBot(intents=intents) + bot = HappyTreesBot() try: token = os.getenv("HAPPYTREES_TOKEN") if not token: diff --git a/grid_136549.png b/grid_136549.png new file mode 100644 index 0000000..9c95dcd Binary files /dev/null and b/grid_136549.png differ
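A possible direction for the persistent-views TODO noted in the README above: discord.py 2.4+ provides `discord.ui.DynamicItem`, which rebuilds a component from its `custom_id` when an interaction arrives after a restart, instead of relying on a live view object being re-registered. The sketch below is illustrative only and is not part of this patch; the `EmbiggenItem` name and the `embiggen:` custom_id prefix are assumptions, and Discord caps custom_ids at 100 characters, so a long relative path may need to be shortened or looked up from a small table rather than embedded directly.

```python
# Illustrative sketch (not part of this diff): persistent "Embiggen" buttons
# via discord.ui.DynamicItem, assuming discord.py >= 2.4.
# The class name EmbiggenItem and the "embiggen:" custom_id prefix are invented here.
import re
import time

import discord


class EmbiggenItem(discord.ui.DynamicItem[discord.ui.Button], template=r'embiggen:(?P<relfile>.+)'):
    def __init__(self, relfile: str) -> None:
        # Everything needed to rebuild the button after a restart lives in the
        # custom_id. Discord limits custom_ids to 100 characters, so long
        # relative paths would need to be shortened or hashed.
        super().__init__(discord.ui.Button(style=discord.ButtonStyle.secondary, label='Embiggen', custom_id=f'embiggen:{relfile}'))
        self.relfile = relfile

    @classmethod
    async def from_custom_id(cls, interaction: discord.Interaction, item: discord.ui.Button, match: re.Match):
        # Called by discord.py when a click on an old message matches the template.
        return cls(match['relfile'])

    async def callback(self, interaction: discord.Interaction) -> None:
        # Same queueing behaviour as EnlargeButton.callback in bot.py.
        t = time.perf_counter()
        interaction.client.commissions.put_nowait((t, 'embiggen', interaction.message, self.relfile, None, None, None, None))
        queuelen = interaction.client.commissions.qsize()
        await interaction.response.send_message(f'{interaction.user} you are number {queuelen} in the Happy Trees processing queue.')
```

Registering the class once in `setup_hook` with `self.add_dynamic_items(EmbiggenItem)` would then stand in for the blank `self.add_view(EnlargeView())` call, and the same pattern could cover the grid-selection buttons.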