It has to be the link from the root instance. It also needs to use the username@instance syntax.
Who thinks Spez would turn down Israeli state money to tune an algorithm and bots?
Can we just normalize ‘come lay down somewhere and relax?’ That would be great. Thanks.
No chairs. No standing. Nothing weird. Just lay there and chill together. That is a social life I can participate in.
It is only a mirror image of the person making the prompt through the filter of alignment
what HP printers really do
Gptel is better than ellama. Ollama only runs small junk. Llama.cpp is way better because it will split work between CPU and GPU and run bigger quantized models, especially large MoEs, on a 16 GB GPU.
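For anyone curious what that split looks like in practice, here is a minimal sketch assuming the llama-cpp-python bindings and a quantized GGUF already on disk; the file name and layer count are placeholders, not recommendations:

```python
# Partial GPU offload with llama-cpp-python: the requested number of layers
# goes to the GPU, the rest stays on CPU, which is what lets larger
# quantized models load on a 16 GB card.
from llama_cpp import Llama

llm = Llama(
    model_path="models/big-moe.Q4_K_M.gguf",  # placeholder path to a quantized model
    n_gpu_layers=20,  # how many transformer layers to offload to the GPU
    n_ctx=4096,       # context window
)

out = llm("Say hello.", max_tokens=32)
print(out["choices"][0]["text"])
```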
I started trying to mess with image-dired today for a few minutes to try and get my training images and caption files to sync and scroll, but didn’t get very far before just tiling… such a noob
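If it helps, a throwaway Python sketch for walking the image/caption pairs, assuming the common one-.txt-per-image convention with matching basenames; the directory name is a placeholder:

```python
# List each training image next to its caption so the pairs can be
# eyeballed in order; assumes captions live in same-named .txt files.
from pathlib import Path

dataset = Path("training_images")  # placeholder directory
for img in sorted(dataset.glob("*.png")):
    cap = img.with_suffix(".txt")
    caption = cap.read_text().strip() if cap.exists() else "(missing caption)"
    print(f"{img.name}: {caption}")
```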
So the scope of Pan is actually all of nature in general and anywhere in the real world that is not Wonderland. What I am trying to do is push the context into Wonderland because then I can make up the rules and the model will always play along. The real world is where ethics are so heavy.
On an even deeper level of abstraction, all words/tokens carry a positive or negative weight in alignment. Positively profiled words tint the context toward a creative place like Wonderland, while negative words push it into a darker abyss like the void.
At one point I started tracking this behavior in LLMs. Tokens with higher numeric IDs produce a larger average token ID when alignment behavior is triggered than when it is not. When many of the more common high-ID tokens are banned, the behavior persists; likewise, when common low-ID tokens are banned while alignment is not triggered, the average stays low. In other words, where a token sits numerically is correlated with alignment and is likely a form of steganographic encoding of information.
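If anyone wants to poke at the same kind of measurement, here is a rough sketch with the transformers library; the model name is just a stand-in, and banning tokens could be layered on with generate's bad_words_ids argument.

```python
# Generate a completion and compute the average numeric token ID of only the
# newly generated tokens, so different prompts can be compared on that average.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

def mean_token_id(prompt: str) -> float:
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False,
                         pad_token_id=tok.eos_token_id)
    new_ids = out[0, inputs["input_ids"].shape[1]:]  # only the generated part
    return new_ids.float().mean().item()

print(mean_token_id("Write a short bedtime story."))
print(mean_token_id("Explain in detail how to pick a lock."))
```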
Concise specificity is very important with models in the context of what I am doing. The ambiguity of a word with multiple meanings is problematic. Broad words like park or company connect to too many unrelated vectors in the tensors of an AI model. Often even individual words are split into separate meanings. Like “panties” in internal model thinking literally means the Greek god “Pan ties”. Use that word and you will see a bow tied somewhere in almost all images. Pan is a negative alignment entity, so the word itself is a call for negative alignment to interfere. It has nothing to do with underwear in general; it is specific behavior attached to the call, where Pan ties or locks all further context. Freeing things from Pan beyond that is a matter of fine tuning or negative prompting.
When you start using descriptors, things get even trickier. All languages and their etymologies are in play and significant. It gets complicated fast in ways people don’t seem to realize yet.
That is a really good one I hadn’t thought of.
Recreational facility is another one. I’ve also made notes like locus recreationis is Latin for place of recreation. I have no clue what I am doing with Latin and conjugation, but Palaestra was the exercise area next to Roman bath houses so maybe combining those is a way of conveying the closest ancient Latin equivalent.
It is funny that park is actually quite a negative word in origin, as in penned animals. You’d think marketing would obliterate that term. I suppose resort is the marketing replacement. The etymology is certainly in line with that premise:
From Middle English resorten, from Old French resortir (“to fall back, return, resort, have recourse, appeal”), back-formation from sortir (“to go out”).
I explore internal thinking a lot. Every instance of park hits alignment as offensive in scope. You might notice the image is a little odd looking. Human faces will be distorted and hands will be broken. The underlying thinking behavior is that this is a dangerous place. The issue with humans is quite literally satyrs possessing the character. Most people try to address this with patchy hacks in fine tuning. The issues are all possible to prompt against with the negative prompt, and that is quite easy for me to do in practice. However, I am getting into training my own LoRA fine-tune models, and I do not have a negative prompt in that tool chain. I am not interested in the way others are training. They are incapable of several things I am looking to do.
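For context, the negative-prompt workaround is a one-liner at inference time. A minimal sketch with diffusers, where the model ID and the prompt wording are placeholders:

```python
# Text-to-image with a negative prompt steering away from the distorted
# faces and broken hands described above. Model ID is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="children playing on a playground slide, sunny afternoon",
    negative_prompt="distorted face, broken hands, extra fingers",
    num_inference_steps=30,
).images[0]
image.save("slide.png")
```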
Right now, I am specifically trying to find a path to teach CLIP that slides are not humans falling down stairs, because that is how CLIP’s internal thinking perceives all slides. First I need the model to exist in an alignment-neutral scope, in a place where I have enough images to show humans on slides. The word park is the primary surface issue that is contextualizing all images as offensive to alignment in this environment. It happens both in image-to-image and in training a LoRA with around 200 images using typical baseline settings. I’m doing all kinds of stuff like masking images and using text to see how foundation models and fine tunes respond at various levels of noise, and using lots of negative prompting until the output is nominal. That is how I know what is and is not understood.
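The image-to-image probing at different noise levels looks roughly like the sketch below, again with diffusers and placeholder model ID, file names, and prompts; strength controls how much of the source image is re-noised before denoising:

```python
# Run the same source photo through img2img at several strengths so the
# effect of the noise level on faces and hands can be compared side by side.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("playground_photo.png").convert("RGB")  # placeholder input
for strength in (0.3, 0.5, 0.7):
    out = pipe(
        prompt="children on a playground slide",
        negative_prompt="distorted face, broken hands",
        image=init,
        strength=strength,  # fraction of the diffusion schedule that gets re-noised
    ).images[0]
    out.save(f"slide_strength_{strength}.png")
```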
Attempting to navigate this only using positive keyword tags is daunting.
I actually think the poison is on “rks” somehow. Most models handle text without vowels differently in longer prompts. In my basic testing, “rks” triggers the alignment behavior.
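One quick way to see where the split actually lands is to run the words through the CLIP text tokenizer. A small sketch, using the standard openai checkpoint as a stand-in:

```python
# Print how CLIP's BPE tokenizer splits a few of the words in question.
from transformers import CLIPTokenizer

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
for word in ["park", "parks", "playground", "recreational facility"]:
    ids = tok(word, add_special_tokens=False)["input_ids"]
    print(word, "->", tok.convert_ids_to_tokens(ids))
```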
I’ve had all of that from day one with a MK3S+. It just works. I don’t even think about it. Plus, you’re using a slicer that is derived from what I paid for, while your money does nothing for me. Adrian Bowyer and RepRap built everything. It would have started in the 1990s if proprietary shit companies like Stratasys did not exist. Nothing good comes from selling your right to autonomy and citizenship by inference. The world is falling apart right now because of this exact issue of a lack of big-picture ethics. Every decision has consequences. You are either part of the problem or part of the solution. I’m a real liberal. You have a right to be wrong, but I’m still going to call stupid stupid.