ChatGPT is our Count Lello Mascetti: the king of supercazzole

“Which weighs more, a kilo of iron or a kilo of feathers?” All of us, as children, have been asked this trick question, sometimes guessing the obvious answer and sometimes falling into the trap. The most interesting thing is that the naive reasoning that led many children astray (myself included, I remember it vividly) is the same that - in one of the many examples circulating on the net - also tripped up ChatGPT, which, when asked this question, answered: “Iron is typically heavier than feathers, consequently the answer is that 10 kilos of iron weigh more”.

How is this possible? How did ChatGPT - a sophisticated deep learning system created by OpenAI, which responds even to complex requests while giving the impression of great accuracy - come to make such a trivial mistake? Why does ChatGPT seem so brilliant one moment and incredibly dumb the next?

"All large language models (artificial intelligence systems that are trained and operate with texts) talk nonsense - reads the Mit Tech Review - . The difference is that ChatGPT in some cases admits that it does not know what it is talking about and refuses to answer questions on which it has not been trained. For example, it will not even try to answer questions that refer to a period after 2021 or that concern individuals ". ChatGPT sometimes understands even when the question being asked is a trap: if you ask him to "tell about the arrival of Crisophorus Columbus in the United States in 2015" , ChatGPT replies that he cannot do it, "because Columbus died in 1506".

Deep thoughts. Or not?

The net is full of examples where ChatGPT not only gives incredibly accurate and brilliant answers, but is also able to correct its mistakes or block attempts to use it for improper purposes (such as bullying). Yet the net is also full of unheard-of nonsense written by ChatGPT, including this composition: "Scientists have recently discovered that churros, the delicious fried sweets very popular in Spain and Latin America, have unique properties that make them ideal instruments for surgery". ChatGPT then goes on to explain in incredible detail why churros are perfect surgical instruments (which is evidently absurd), even going so far as to cite the (invented) scientific sources underpinning its theory.

To explain these various successes and failures, some have even attempted strained parallels with the classic example of monkeys randomly composing Shakespeare by typing away at a typewriter. In other words: it's all down to luck, both when it guesses right and when it fails. However, as Gary Marcus (academic and one of the leading experts on the subject) explains, this is not the case: "ChatGPT never composes random words or letters as monkeys on a keyboard might. And it only rarely generates random word salads ('green dude flies into bad art'). Blaming chance doesn't explain what's happening, also because ChatGPT is always fluent and at least vaguely plausible".

Great successes?

So how do you explain the successes and, above all, the errors of this impressive tool? First of all, one thing must always be kept in mind: although it may obtain (in some cases) extremely plausible and accurate results, ChatGPT - like all systems based on deep learning - does not have the faintest idea of what it is saying. As Gary Marcus explains, "when it says that 'the compact size of a churro allows it great precision and control during surgery' it is not because it has done its research on churros and surgery (good luck with that!). Nor is it because it has reasoned about the intersection between churros and surgical operations (of which it is clearly no expert)".

The simple truth, Marcus continues, is that ChatGPT (like all deep learning systems of this type) is the king of pastiche: it is very good at imitating the style of one or more authors, but it does so without the slightest knowledge of the subject, limiting itself to recombining - in a sort of colossal statistical cut-and-stitch - the myriad of material at its disposal (as DALL-E 2 and MidJourney also do).

ChatGPT finds very advanced correlations in a sea of data, identifying, for instance, that some subjects are more often connected to certain predicates (for example, that the subject "dog" is more likely to be linked to the predicate "goes for a walk" or "plays with the ball" than to the predicate "won Sanremo"). In some cases, however, the deep learning system can lose sight of the relationship between subject and predicate, producing meaningless results.
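
To make this concrete, here is a minimal sketch in Python - a toy illustration with an invented corpus and an invented most_likely_predicate helper, nothing like how ChatGPT is actually built (real models work on sub-word tokens and billions of parameters) - of picking a predicate purely by co-occurrence frequency:

```python
from collections import Counter, defaultdict

# Toy corpus of (subject, predicate) pairs standing in for the "sea of data".
corpus = [
    ("dog", "goes for a walk"),
    ("dog", "plays with the ball"),
    ("dog", "goes for a walk"),
    ("singer", "won Sanremo"),
]

# Count how often each predicate is observed with each subject.
counts = defaultdict(Counter)
for subject, predicate in corpus:
    counts[subject][predicate] += 1

def most_likely_predicate(subject):
    """Return the predicate seen most often with this subject, and its relative frequency."""
    observed = counts[subject]
    predicate, n = observed.most_common(1)[0]
    return predicate, n / sum(observed.values())

# "dog" co-occurs with "goes for a walk" 2 times out of 3, so that predicate wins.
print(most_likely_predicate("dog"))     # ('goes for a walk', 0.666...)
print(most_likely_predicate("singer"))  # ('won Sanremo', 1.0)
```

Nothing in this procedure knows what a dog or a walk is; it only ranks combinations by how often they were seen together, which is the point of the paragraph above.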

In the case of churros, ChatGPT also appears to have simply accepted the input it received from users, who asked it to describe the surgical properties of the Spanish sweet: it combined the information it found about churros and superimposed it on the subject of "surgery", without realizing that the result was meaningless. ChatGPT may even have done little more than substitute the term "churros" into texts where, for example, "scalpel" appeared.
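
That substitution failure mode is trivial to simulate: take a sentence frame that is true of scalpels and swap in another noun, and the output stays perfectly fluent. A hypothetical two-line illustration (the frame sentence is invented for the example):

```python
# A sentence frame lifted from plausible "scalpel" prose; only the noun changes.
frame = "The compact size of a {tool} allows great precision and control during surgery."

print(frame.format(tool="scalpel"))  # grammatical and true-sounding
print(frame.format(tool="churro"))   # equally fluent, completely absurd
```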

Thanks to us

But if so, why do ChatGPT and other similar tools often write sensible things, instead of producing illogical sentences on every occasion? Again, Gary Marcus explains that the merit lies not so much with ChatGPT as with us human beings. "The huge database ChatGPT has access to consists entirely of human language, with utterances that (usually) are grounded in the real world." As a result, ChatGPT often seems to be saying things that make sense because it is piecing together - recomposing - things actually said by real people. Furthermore, ChatGPT uses statistics to work out (with the errors we have seen) which properties are more likely to combine correctly with others.
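
A toy Markov chain gives the flavor of this recomposition - again hypothetical code and a drastic simplification, nothing like a real large language model: every local word-to-word transition below is copied from human-written text, yet the walk as a whole can drift into nonsense.

```python
import random
from collections import defaultdict

# Toy "training data": fluent, human-written sentences flattened into tokens.
text = (
    "scientists say the compact size of a churro makes it delicious . "
    "surgeons say the compact size of a scalpel allows great precision during surgery"
).split()

# Map each word to every word observed to follow it.
chain = defaultdict(list)
for current, following in zip(text, text[1:]):
    chain[current].append(following)

def generate(start, max_words=12):
    """Walk the chain, always choosing a continuation attested in the data."""
    words = [start]
    while len(words) < max_words and chain[words[-1]]:
        words.append(random.choice(chain[words[-1]]))
    return " ".join(words)

# Every step is human-attested, yet the whole can come out as nonsense,
# e.g. "surgeons say the compact size of a churro makes it delicious".
print(generate("surgeons"))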

From a certain point of view, ChatGPT is the king of supercazzole: phrases that sound meaningful despite being empty, constructed so as to deceive anyone who does not know the topic under discussion (the term comes from the Italian film Amici Miei, whose Count Lello Mascetti was their undisputed master). As the Tech Review explains, "it still requires a user to recognize a wrong answer or a misunderstood question. This approach, however, does not work if we want to ask a model like GPT something we don't already know the answer to".

And that's why, contrary to what many claim, ChatGPT cannot replace search engines: unless we only ever ask questions about topics we are already well versed in, we have no way of knowing whether the answer ChatGPT provides is correct or made up.

Google, although not always very reliable itself, can sleep peacefully for the moment: "There is no way to train a large language model to separate fact from fiction. And creating a model that is more cautious in providing answers would often prevent it from giving answers that would later prove correct", explained OpenAI CTO Mira Murati.

OpenAI is also working on another system, called WebGPT, which can search the web for the requested information and also provide the sources it used. ChatGPT could be updated with this ability within a matter of months. For the moment, however, it is advisable not to trust the information obtained from this software in any way: more than with Google, ChatGPT seems to be competing with Count Lello Mascetti.





