There are three paths to regulating AI, approached as corrections to the negative impacts of social media: an emotions division, an AI reader, and a conceptual model of the human mind.
Outputs are checked by a division that screens for anything that could be harmful or offensive to groups, or that reveals unknown capabilities of AI platforms, before it reaches end users. Search engines already do a form of this, as do major AI chatbots, but going further with emotions will ensure that outputs that can affect mental health, or carry other consequences, are not released. This emotions division can serve as a standard attachment for public AI systems, so that they are preventively monitored. It would be a general cutout for safety, not censorship.
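The gate described above can be sketched in code. This is a minimal, hypothetical illustration: the category names, the keyword lists, and the `release` wrapper are all assumptions for demonstration; a real emotions division would rely on trained classifiers rather than word matching.

```python
# Hypothetical sketch of an "emotions division" gate: candidate outputs are
# screened against harm categories before they are released to end users.
# Category names and keyword matching are illustrative assumptions only.

HARM_CATEGORIES = {
    "self_harm": {"hopeless", "end it all"},
    "group_offense": {"slur_example"},
}

def emotions_division_check(output_text: str) -> tuple[bool, list[str]]:
    """Return (safe, flagged_categories) for a candidate output."""
    lowered = output_text.lower()
    flagged = [
        category
        for category, phrases in HARM_CATEGORIES.items()
        if any(phrase in lowered for phrase in phrases)
    ]
    return (len(flagged) == 0, flagged)

def release(output_text: str) -> str:
    """Only release output that passes the emotions-division gate."""
    safe, flagged = emotions_division_check(output_text)
    if not safe:
        return f"[withheld: flagged for {', '.join(flagged)}]"
    return output_text
```

The point of the sketch is the placement of the check: it sits between generation and delivery, so flagged output is withheld rather than censored after the fact.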
The second path lets people [in groups] use an AI reader that screens for things offensive to their group, preventing an AI's output from becoming harmful or hurtful. It would also be personalized, beyond group categories, against a range of biases, ensuring that an AI platform is not watered down more than necessary by the emotional redaction of the first division.
There are individuals and groups who are offended by certain tropes and by content that can affect mental health. It will be vital to have a reader scan for certain words and sentences and, depending on the group, strip the output of them, so users receive something clean, per preference. This goes beyond social media, which exposes everyone to the same things regardless of how each mind is reached. It would also ensure that whatever negative purpose AI might be put to is detected and prevented, especially prompts asking it to repeat the same content or worse.
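A per-user reader of this kind can be sketched as follows. The profile fields, the group term lists, and the `[redacted]` token are hypothetical; the sketch only shows the shape of the idea, that redaction is driven by a user's declared group plus personal triggers rather than one global filter.

```python
# Hypothetical sketch of the personalized "AI reader": each user carries a
# group and personal triggers, and outputs are cleaned per that preference.
# All names and term lists below are illustrative assumptions.

from dataclasses import dataclass, field

GROUP_SENSITIVE_TERMS = {
    "group_a": {"trope_one"},
    "group_b": {"trope_two"},
}

@dataclass
class ReaderProfile:
    group: str
    personal_triggers: set[str] = field(default_factory=set)

def read_through(output_text: str, profile: ReaderProfile) -> str:
    """Redact group-sensitive terms and personal triggers from an output."""
    terms = GROUP_SENSITIVE_TERMS.get(profile.group, set()) | profile.personal_triggers
    cleaned = output_text
    for term in terms:
        cleaned = cleaned.replace(term, "[redacted]")
    return cleaned
```

Because the term set is the union of group terms and personal triggers, two users reading the same output can receive differently cleaned versions, which is what distinguishes the reader from a single platform-wide filter.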
Examining the workings of the mind, including how it understands and reacts, can play a decisive role in addressing the potential implications of AI. A conceptual model of the mind would show how the outputs of AI are processed by the mind and how they create affect. This way, it becomes possible to know, and then prevent, some of the negative effects of AI, since awareness is part of how the mind works. Conceptually, the human mind is the collection of all the electrical and chemical impulses of nerve cells and their interactions. It is these interactions that produce emotions, feelings, memory, and thoughts, that drive action, and the rest.
The conceptual model of the human mind can be used to develop a robust emotional layer in which checks are made before AI outputs, whether images, text, videos, or others, are generated for the public. The emotions division can become a regulatory standard ensuring that outputs are checked, so that LLMs do not veer in the wrong direction.
The AI reader would deliver AI outputs only to registered users, for whom declaring their group or triggers would be mandatory, so that any output or prompt containing words, texts, videos, or references that may be harmful or misleading is caught.
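The registration requirement can be sketched as a simple gate in front of the reader. The registry, function names, and error types are assumptions for illustration; the sketch shows only the stated rule, that a group or at least one trigger must be declared before any output is served.

```python
# Hypothetical sketch of mandatory registration for the AI reader: output is
# served only to users whose profile declares a group or triggers. The
# registry and function names are illustrative, not a real platform API.

REGISTRY: dict[str, dict] = {}

def register(user_id: str, group: str, triggers: set[str]) -> None:
    """Registration requires declaring a group or at least one trigger."""
    if not group and not triggers:
        raise ValueError("a group or at least one trigger must be declared")
    REGISTRY[user_id] = {"group": group, "triggers": triggers}

def serve_output(user_id: str, output_text: str) -> str:
    """Serve trigger-filtered output to registered users only."""
    profile = REGISTRY.get(user_id)
    if profile is None:
        raise PermissionError("the AI reader requires registration")
    cleaned = output_text
    for term in profile["triggers"]:
        cleaned = cleaned.replace(term, "[redacted]")
    return cleaned
```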
The conceptual model of the human mind would display how impulses generate thoughts, emotions, feelings, memory, and the rest, so that what AI is doing to the mind at every moment is known and some effects can be prevented [directly] at the level of the mind.
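At its simplest, such a model could be represented as a mapping from incoming impulses to the mind states they would produce, with an awareness hook that flags impulses driving unwanted states. This is a toy sketch only; the impulse names and state mappings are invented placeholders, not neuroscience.

```python
# Toy sketch of the conceptual model: impulses map to the mind states their
# interactions would produce, and an awareness check flags impulses that an
# AI output drives toward unwanted states. All names are placeholders.

IMPULSE_TO_STATES = {
    "threat_signal": ["fear", "avoidance"],
    "reward_signal": ["pleasure", "approach"],
    "novelty_signal": ["curiosity", "attention"],
}

def project_onto_mind(impulses: list[str]) -> list[str]:
    """Map incoming impulses to the mind states they would produce."""
    states: list[str] = []
    for impulse in impulses:
        states.extend(IMPULSE_TO_STATES.get(impulse, []))
    return states

def awareness_check(impulses: list[str], unwanted: set[str]) -> list[str]:
    """Flag impulses whose projected states include ones to prevent."""
    return [
        impulse
        for impulse in impulses
        if set(IMPULSE_TO_STATES.get(impulse, [])) & unwanted
    ]
```

The flagging step mirrors the essay's claim that awareness is itself preventive: once the model shows which impulses an output would drive, those can be intercepted before they act on the mind.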