Machine learning maps social learning, not individual intelligence

"Machine learning maps social learning, not individual intelligence" may not appear to say very much, especially if social learning is unknown, or just hard to notice because it is like water to fish: essential but taken for granted.

Social learning is a catch-all term for the deliberate, emergent and surprising processes, intentional or unintended, that arise as humans organize their lives among others of their notice. We copy more than we invent, we norm more than we innovate, we should more than we teach. Rituals are conventions for conveniently learning and recording events and routines (brain memory work, exercises both shared and individuated).

All this is tied together with folklore, nods towards memory, and various methods of organizing it all (including folk-ontologies to classify and name, as well as record-keeping), and of transmitting it both across the population in negotiation and down the generations.

If we look at the recent mass exposure to generative AI such as large language models (LLMs are a type of machine learning model), and at what this may mean for artificial general intelligence (AGI), and we look at the “source code”, we see that LLMs ingest the records we have made and are making (text and images), not the brains. LLMs map the maps, or aggregate the maps and models. So whatever it is these augmentations are doing, it is not human brain intelligence, and certainly not a replication of the brightest among us, but the aggregate of our past records and current digital traces.

While some creatives (writers and illustrators) have recently discovered and complained about the infringement of their copyright, few have complained about the bigger story: the aggregation of the commons, to which their additions are minuscule. The commons, which includes the facility of language itself, is available to all to both learn from and contribute to, so if it is being privatized, or at least gated, by whoever can afford the capital and energy requirements, then copyright infringement is the least of our troubles, however annoying it is. (Copyright propertarians are so 1800s in their concern for the owners of printing presses.)

E.g. see “Authors outraged to discover Meta used their pirated work to train its AI systems”.

To repeat: the machine-learning aggregation of our social learning’s outputs and outcomes, which most of us have trouble even recognizing, and thus the practical privatisation of that aggregation, is a bigger problem than copyright infringement.

The mode or habitus for this activity of aggregation capitalism is, of course, social media, or at least the platform tech-companies. Current LLM technology extends the tracking of social learning beyond a consumer’s buying preferences, decisions in the marketplace, or political hot buttons (if broadly based) to include all social learning venues or ‘platforms’, that is, social media.

They have been aggregating that for years, more statistically, to prompt advertisers to advertise with them.

In any case, if this is the best we can do for AI and AGI development, then it is because we are blind to the fact that this technology has hit a local optimum, and we have no way out of it. This pothole is curiously framed by a restricted slice of human experience, that of the marketplace, as if that were all, or should be all, that exists in the world.

This is unwise. Especially as the aggregation is not controlled by the emergence of factors within that common space (the market is both a restricted commons and an unfettered anti-commons) but by agents who do not understand who they are working for. Thus the rise of narcissism and the inability to tell the self apart from the world of selves it lives among; aggregation just confirms the specialness of the narcissist.

However clever the outcomes of current aggregation technologies and algorithms, dealing with stochastically collected data supplies and the subsequent vector space of semantic maps, it is not human consciousness at work in the mimicry; a social-learning consciousness, _maybe_.

The current cross-model work is amazing, but it confirms my argument that LLMs and the like just map social learning, not the experience of life as an animal self-consciousness. They do not claw at the shadows on the walls of the cave; the Platonic forms are no closer.

The danger lies here: since we are social in our origins before each of us is ourself in any amount of individual wisdom, feeding the aggregate of all our social learning back into our social learning will lead to a “model collapse” of our ability to live socially, or indeed individually. This collapse is a risk in addition to any threat AGI poses, as an intelligence, to human life (and a more likely one). Model collapse leading to zombified social learning, NPC worlds (the world as an NPC), is more likely than the threat of a super-intelligent consciousness.
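“Model collapse” has a simple statistical skeleton: a model fitted to samples drawn from a previous model, generation after generation, drifts away from the original distribution, typically losing variance. A toy sketch (my illustration, not from the essay; the function name and parameters are invented) of that feedback loop:

```python
import random
import statistics

def collapse_demo(generations=20, sample_size=50, seed=1):
    """Toy 'model collapse' loop: repeatedly refit a normal distribution
    to a finite sample drawn from the previous generation's fit.
    Sampling error compounds, so the fitted model wanders away from the
    original distribution instead of preserving it."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the 'real' distribution
    variances = [sigma ** 2]
    for _ in range(generations):
        # Generate synthetic data from the current model...
        data = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        # ...then retrain the model on its own output.
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        variances.append(sigma ** 2)
    return variances

v = collapse_demo()
print(f"variance: gen 0 = {v[0]:.3f}, gen {len(v) - 1} = {v[-1]:.3f}")
```

The analogy to the essay’s point: a population whose social learning is retrained on aggregated copies of its own prior output inherits the same compounding drift.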

The thinking of the individual as a sovereign citizen is a clear example of a social learning pathology of worlding: quoting the USA’s constitution to a Tasmanian police officer, as to what that officer should do, does not make it the world’s constitution that they are cleverly outwitting.

What current LLMs generate is a variety of trajectories or itineraries from within a solution space, based on mapping the social learning of all of us, from recorded texts, as well as all that is produced in the digital present. It is not mimicking individual creativity or consciousness, if only because the individual is doing the same as the LLMs, with a lot less augmented infrastructure, i.e. pulling down from what is available in the pool of social learning. We draw on social learning and, in routines of embodied algorithmic ritual, produce new utterances and occasional novel forms. Or even romances.
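The idea of “trajectories from within a solution space mapped from recorded text” can be made concrete at miniature scale. A toy bigram model (my sketch, not the essay’s; the tiny corpus and names are invented, and real LLMs work on vastly larger maps) counts which word follows which in a recorded corpus, then samples itineraries through that map:

```python
import random
from collections import defaultdict

# A miniature 'commons' of recorded text (invented for illustration).
corpus = "we copy more than we invent we norm more than we innovate".split()

# Map the map: record which words the corpus says can follow which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def trajectory(start, steps, seed=0):
    """Sample one itinerary through the solution space the corpus defines."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(steps):
        nxt = follows.get(out[-1])
        if not nxt:          # dead end: no recorded continuation
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(trajectory("we", 6))
```

Every utterance the sampler produces is a path through what was already recorded; nothing in it comes from an experiencing self, which is the essay’s point writ small.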

But we are more than that. (This is the weak form of this argument.)

Even so, letting all that be privatized, or even socialized, for the amusement of ketamine fueled worshippers of syphilitic manly philosophies is not a wise thing to do.

social learning of big history ⓒ 2025 meika loofs samorzewski

Crossposted on substack.com