A recent news article in Nature, "ChatGPT-like AIs Are Coming to Major Science Search Engines", states: "On 1 August, Dutch publishing giant Elsevier released a ChatGPT-like artificial-intelligence (AI) interface for some users of its Scopus database, and British firm Digital Science announced a closed trial of an AI large language model (LLM) assistant for its Dimensions database.
Users ask natural-language questions; in response, the bot uses a version of the LLM GPT-3.5 to return a fluent summary paragraph about a research topic, together with cited references and further questions to explore. One concern about using LLMs for search — especially scientific search — is that they are unreliable. The models don’t understand the text they produce; they work simply by spitting out words that are stylistically plausible on the basis of the data they were trained on. Their output can contain factual errors and biases and, as academics have quickly found, can make up non-existent references."
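The quote's phrase "spitting out words that are stylistically plausible on the basis of the data they were trained on" describes, at a high level, next-token prediction. A minimal sketch of that idea, using a toy bigram model in place of a real LLM (the corpus and function names here are invented purely for illustration):

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_plausible_next(counts, word):
    """Return the statistically most frequent continuation, or None if unseen."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# A tiny invented "training set": the model learns only word co-occurrence,
# not meaning.
corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(most_plausible_next(model, "the"))  # → cat
```

A real LLM replaces the bigram table with a neural network over long contexts, but the principle the critics point to is the same: the continuation is chosen for statistical plausibility, with no built-in check that it is factually true, which is why fabricated references can emerge.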
The remark that the models do not understand often makes them appear severely limited. A related criticism is that although they use human language, language itself is not intelligence, since context and nonlinguistic experience are also necessary.
One rebuttal is that in human education, the use of multiple textbooks with many worked examples is not just text; it is a vehicle of intelligence. Those who understand a topic well express it in varied ways to aid recall, comprehension, relationships, and application, delivering all of this through text and thereby helping intelligence arise in others.
Texts have no subjective experience; they do not know they exist. They are useless to other organisms, but to humans they carry intelligence. That intelligence is static, incapable of response. The same is true of text online: available, but static.
LLMs are different: they can produce some accurate information, carrying the intelligence embedded in human texts in a conversational form. They are also vast in scope, approaching expert-level performance across many fields at once, whereas a human typically needs many years to reach the pinnacle of just one.
Human expertise is usually confined to narrow areas, and experts generally do not comment on things outside their domains. AI, by contrast, has absorbed the available texts of many such domains and can provide some accurate information about all of them and more, approximating what it means to be intelligent, or at least to have the language of intelligence.
LLMs are not as intelligent as mammals, nor as intelligent as humans. They nevertheless have a seat on the center stage of human knowledge, where what matters is intelligence. Most humans are not relevant to most fields; what is required is not being human, or even an organism, but intelligence. If LLMs have it, or if their output shows that they know, they are in.
Generative AI learned from human text data, which contains a great deal of human intelligence. LLMs may lack other aspects of intelligence, but they carry enough of it to transmit it to humans. Language is far more than communication. Language carries intelligence. Language is intelligence.