It looks like you may be getting a bit of stringing, a bit of elephant's foot, and possibly Arachne is changing the shape of the characters...
If you are using Arachne, try the classic perimeter generator.
I try to make safe suggestions. You should understand the context and be sure you are happy that they are safe before applying them; what you do is YOUR responsibility. Location: Halifax, UK
Here is what I get from your recommendations. That's better, but the white letters are not so... white. Do you think increasing the depth of the text modifier could improve the whiteness? I have attached the zipped project file. There was a short extrusion of black PLA after the filament change, above the letter 'L', just before the nozzle moved to the wipe tower. It's very difficult to catch this small amount of filament before it gets cold, not to mention the risk of removing the text by pulling off the excess of melted black filament. Should users perform such actions to get a perfect result? If yes, I wonder how one can rely on the MMU device...
Hi Georges.
Different white filaments have different opacity.
Prusa Signal White is a good strong white; many others are more sort of watery.
Additional layers behind light colours can help immensely, but that always means more colour changes, which can be a pain with a manual MMU.
Did you clean the nozzle before you started?
The MMU3 finishes a section of, say, blue lettering, then moves to the purge block,
wipes the extruder's nozzle, and reshapes the filament tip before withdrawing it.
It then loads the next filament and wipes the mixed filament on the purge block until it hopes the colours have changed, at which point the printer returns to the print.
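For a sense of how much filament that tool-change sequence eats, the purge step comes down to simple arithmetic: the purge volume divided by the filament's cross-sectional area gives the length of filament spent on the wipe tower. A minimal sketch; the 70 mm³ figure is a hypothetical example, not PrusaSlicer's actual purge setting:

```python
import math

FILAMENT_DIAMETER_MM = 1.75  # standard 1.75 mm filament
CROSS_SECTION_MM2 = math.pi * (FILAMENT_DIAMETER_MM / 2) ** 2

def purge_length_mm(purge_volume_mm3: float) -> float:
    """Length of filament consumed to purge a given volume on the wipe tower."""
    return purge_volume_mm3 / CROSS_SECTION_MM2

# A hypothetical 70 mm^3 black-to-white purge costs roughly 29 mm of filament:
print(round(purge_length_mm(70.0), 1))
```

Hard colour transitions (black to white, as here) typically need larger purge volumes than this, which is exactly why the wipe tower grows so quickly on multi-colour prints.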
Elephant's foot compensation makes the first-layer outline smaller than the others, to allow for the filament spreading under heated pressure, as I understand it. Anyway, the text might suffer if the adjustment makes the letters smaller.
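To see why lettering in particular suffers, note that the compensation shaves the configured amount off each side of every first-layer outline, so a thin text stroke loses twice the setting. A minimal sketch with hypothetical values, not slicer defaults:

```python
def compensated_width(nominal_mm: float, efc_mm: float) -> float:
    """First-layer feature width after elephant's-foot compensation trims each side."""
    return nominal_mm - 2 * efc_mm

# A hypothetical 0.8 mm text stroke with 0.2 mm of compensation keeps
# only 0.4 mm on the first layer -- half the stroke is gone.
print(compensated_width(0.8, 0.2))
```

The same 0.2 mm setting is negligible on a 50 mm base plate but devastating on embossed text, which is why reducing the compensation (or moving the text off the first layer) is the usual fix.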
We showcase an exceptionally splendid typeface known as Elephant Font. This font is a member of the sans-serif typeface family. The creator of this remarkable typeface is Vpcreativeshop. Its popularity and fame can be attributed to its bold, classic characters.
One of the best features of the Elephant typeface is that it suits all text, from small to large paragraphs. All letters of this tremendous typeface have bold strokes that make it perfect for many designs.
You can also find it through Google. Another similar font, known as FF DIN, is perfect for different designs. Its text-generator tool works well for all of your text designs. This family is most popular for image designs; just download it and enjoy the results.
This stunning and remarkable typeface is used for plenty of projects. Many designers use this family for book covers, institutional logos, posters, and much more. It is commonly used in fashion magazines and report articles.
Rotis is a lightweight font with bold strokes, similar to the Elephant Font family. This gorgeous typeface is also well suited to creative designs like greeting cards, infographics, and clothing, and it is used in public transportation systems. Many game designers use it in game design and development.
With our elephant-shaped word generator, you can input any text you desire and watch as the words fill the contours of the elephant's body. This tool is perfect for those looking to add a touch of whimsy to their presentations, documents, or creative projects.
In addition to its fun and playful design, our word generator also offers a range of customization options. You can choose from a variety of fonts, colors, and layouts to ensure that your word cloud is perfectly tailored to your needs.
Very nice examples, indeed. I've had the same experience with that total lack of actual understanding. What the AGI-is-nigh community doesn't get is that understanding token-ordering or pixel-ordering is to real understanding as understanding ink-distribution is to books. (And no, I did not think of that example myself; it comes from the late 19th/early 20th-century Dutch psychiatrist and writer Frederik van Eeden, in his studies that foreshadow our current understanding of the subconscious.)
I suspect that many don't realize that this "technology" is INHERENTLY flawed. It is not a glitch; it is the way it works. There is a neural network inside which performs generalizations based on data, and this will always be often or sometimes wrong. It is not "early days", it is very late (there has been no significant change since at least 1990). The basis is wrong, and we can only use it where the result doesn't matter very much. Many practitioners don't get this. They believe it will grow up. No, we need a paradigm change.
I think you are being too pessimistic and missing the obvious. Look at it this way: our species is in deep trouble (climate change, incurable diseases, etc); we create an artificial superintelligence; we then proceed to pepper the superintelligence with inane questions and silly image requests. Unsurprisingly, the superintelligence decides to mock us, as we richly deserve...
Gary, I share your dismay that AI acolytes can fail to recognise how grave these "errors of discomprehension" really are. If a human were to make qualitative errors like these, we would diagnose a serious mental pathology. These hallucinations betray a deep disconnect from reality -- and that's not a figure of speech.
Surely the simple truth is that Large Language Models do not model the real world. They are models of how the world has been represented in large volumes of text. The text is all they got (making LLMs the ultimate and purest Post-Modernists). And the text is biased, confined to those things that people care to write about.
I wondered if maybe the reason the elephant prompts were not working was that elephants are so much larger than humans. So I entered the same prompt, but replaced "elephant" with "ninja." The image generator screwed that up too: the ninja was very obvious in all the photos; in one, they were the only person in the foreground! Sometimes it made multiple ninjas.
I tried a similar one: "Draw a picture of a crowd in the square of a town. Hiding among the crowd is a ninja. Make sure it will be hard for the viewer to spot the ninja at first." This one also got it wrong; the ninja was visible immediately. In fact, in one of them the crowd was parted around the ninja, making them extra easy to see.
Hi Gary, such 'glaringly' obvious errors stem from the same source as the word hallucinations - no first-hand, bodily experience about the world. Adding more modes (images, video, audio etc.) isn't going to fix the problem. 'Multimodal' is just as clueless as the 'non'.
Coincidentally, I wrote a recent paper called 'The Embodied Intelligent Elephant in the Room' for BICA'23, arguing for embodiment :) The title pays homage to Rodney Brooks' 'Elephants Don't Play Chess' paper.
The way technologists speak about AI -- especially the soothing metaphors like "learn" and "neural" -- is training laypeople to over-estimate robots. I'm especially worried that people are led to think that robots "see" as we see.
Remember the 2016 work at Carnegie Mellon where psychedelic patterned eyeglass frames fooled face recognition neural networks? They spoofed target celebrities' faces with patterns that have nothing to do with facial features we recognise as such.
This reality gap to me is the *real* uncanny valley! We are irrationally frightened by robots that look and move like us, but what's really scary is they don't actually work like we do, not even remotely.
"Unfortunately, errors of discomprehension may soon be even more worrisome in a new context: war. OpenAI seems to be opening the door to military applications of their systems, and at least one well-funded startup is busy hooking up AI to military drones." - how so? If deep learning is "hitting the wall", if it is so poor at understanding the world, how can it be of any use for military? They will surely fail, right?
Sure, but how much damage will be done before the people pushing the military uses admit that it just isn't working? We are, after all, talking about hooking buggy and erratic AI systems up to devices whose function is to kill people, so the potential for real, tangible harm is enormous. The danger is that if we get far enough down that road that powerful people's money and prestige is tied up in military applications, then we won't be able to shut them down even if they are obviously failing appallingly.
In the military, they test new weapons and abandon projects that don't meet the requirements. Most experimental projects end like this (the robo-mule is one of them). Despite all the hype about killer robots from Boston Dynamics (like =y3RIHnK0_NE), none is used by the US military.
ChatGPT>>DALL-E gets lost on productively defining the term "camouflage", a concept that's only readily comprehensible to a being that possesses visual perception as an active processing faculty. Presumably, AI image generation has similarly intractable difficulty with productively adapting concepts like "disguise" and "trompe l'oeil". (attached image: _criticism-by_pere_borrel_del_caso.png) (Of course ChatGPT can supply a printout of the precise dictionary definitions of those terms. But it has no more comprehension of their meaning than a Xerox machine. Does a dictionary have a big vocabulary? Trompe l'langue!)