trying it on with worlding: the futility of LLMs and creativity
if we subsume meaning, morality, philosophy, culture, religion, art, and polity into one category, what would it be called?
I keep dipping into Large Language Models (LLMs) and being annoyed when not disappointed. Better than neural networks? Are data-painters better than balsa-wood ships in a bottle?
LLMs just do not operate where I am operating. If I were schizoid I would not even notice this, but as I am trying to reach out to others…
So one thing this dipping into using LLMs does do is explain why it is so hard to sell ‘worlding’ to humans. So it will help me write it better.
That is, one can use an LLM to come up with a solution where it already exists in the literature, the corpus, when it has been mapped/described in vector space. Or, one can use its failure to highlight one’s own writing deficiencies.
So, if you already have a solution in hand, and if it is actually innovative, or actually different enough to be interesting, then LLMs can be used as an index to the likely resistance of peeps to novelty, based on the momentum of what is already accepted and not yet forgotten, i.e. what is current.
I’ll link below to a chat or two where I am trying to get an LLM chat to come up with something like my ‘solution’ of ‘worlding’ or ‘to world’ in regard to the categories we have split out from/by the ‘worlding urge’ via economic as well as other less ‘practical’ social processes.
But it seems it is too much to ask of systems built on retrieving or re-flowing current solutions from the matrix… the vector space of our records. New efforts of analogy are not there yet, you see, to be utilised by LLMs. Even if you prod them with the ‘solution’, it will get spat out by the vector space conniptions. It’s all style; it’s a method of production, which can be a means of production. This is best seen in generative image AI.
The recorded vector space is not the space of possible solutions, even if all we know is that the future is not ours to see.
Google’s gemini
No idea. It produced a business report aimed at a new org chart, complete with its own title, ‘Categorizing Human Thought’. (pdf)
Perplexity
“sublime”. Actually this is a good call from the past, but not where I was heading; maybe where we came from? “Sublime” is good for a state of mind or mood, but not active enough to include the janus-like dancing of the gap/s. Receptive agency at most. (pdf)
Lex
No idea, but this is unfair, as this platform just tries to help you write stuff, so… it got a bit Magician’s Apprentice on my arse. (pdf)
Quivr
allows a choice of LLM models and bespoke ‘brains’, including one based on my website & substack, so we’ll check out the various models first and then my own, specifically, last. These didn’t provide a share option, so I have created PDFs for all of them. Available here: drive.google.com (which also includes the one above)
https://chat.quivr.app/chat/3e4418d7-51e9-4a86-8d7f-056ff4311bb3
Claude Sonnet (pdf) — more org chart rebranding of university faculties
GPT4o (pdf) — goes meta branding for a ‘unified’ look at underlying principles (tradcore blur I guess if I was a lumper not a splitter)
Groq Llama 3.1 70b (pdf) — not a bad discussion, but I guess I did not ask it to produce a verb- or agency-based framework for these categories of life
Mistral Large (pdf) — more faculty rebranding; I guess I got what I asked for, not what I intended
‘shouldy’ (my quivr brain based on ingesting my substack and here on whyweshould.loofs-samorzewski.com) (pdf)— complete fail. I have to say that when an LLM platform ingests a provided text/corpus, especially websites, it does a very shallow pass, e.g. it prioritises the recent posts… Here nothing of my ‘worlding’ is available at all. Of course, this highlights the difficulty I have had trying to get this across to fellow humans.
So, regardless of my badly formed prompt, there was no hint of my writing in the bespoke brain built by Quivr.