About that OpenAI artificial intelligence too dangerous to release

19 February 2019, 5:03, by Il Disinformatico
A certain pandemonium has broken out in the world of artificial intelligence: OpenAI, a non-profit research organization funded in part by Elon Musk, has announced that it has created GPT-2, an artificial intelligence program so sophisticated and powerful that the organization has decided not to publish all of its research, showing only a few of its results to selected journalists. GPT-2, they say, would simply be too dangerous.

If you are imagining killer robots or anything of the sort, don't panic: GPT-2 has no body and is not hostile. It is just a very sophisticated text generator. Feed it a few paragraphs of text on any subject and it can carry on with that theme indefinitely, staying coherent not only linguistically but also conceptually.
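In practice, the interaction really is that simple: you supply an opening passage and the model samples a continuation, word by word. The snippet below is a minimal sketch of that prompt-and-continue loop, assuming the small GPT-2 checkpoint that OpenAI did release and the Hugging Face transformers library; the sampling parameters are illustrative guesses, not OpenAI's own setup.

# Minimal prompt-and-continue sketch, assuming the small publicly released
# GPT-2 checkpoint ("gpt2") loaded through the Hugging Face transformers
# library; the parameters below are illustrative, not OpenAI's published setup.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("In a shocking finding, scientist discovered a herd of unicorns "
          "living in a remote, previously unexplored valley, in the Andes Mountains.")
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; top_k and temperature trade coherence for variety.
output = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,
    top_k=40,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))

The example that follows was produced by the full model with a prompt of this kind.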

Beginning (typed by a human): In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

Continuation, generated by GPT-2: The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.

Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.

While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”

Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution, or at least a change in social organization,” said the scientist.

It is not the first text generator of its kind, but it is the first to produce text of a quality comparable to a human's. Asked to write a newspaper article, GPT-2 even invents figures, statistics and quoted statements attributed to people relevant to the topic. Watching it at work creates an immediate sense of unease in anyone who writes for a living or for school, from journalists to press officers to students short of ideas for an essay.

Beginning (typed by a human): Hillary Clinton and George Soros...

Continuation, generated by GPT-2: ...are a perfect match, and their agenda appears to be to create a political movement where Soros and his political machine and Clinton are two of the only major players. This is the first time Soros and Clinton have been caught on tape directly colluding in promoting the same false narrative. One of the key revelations in the leaked audio was Clinton's admission to a Russian banker that she knew about the Uranium One deal before it was approved by Congress. Clinton was shown sharing the same talking points that were originally drafted by a Fusion GPS contractor hired by an anti-Trump Republican donor. The leaked audio is the clearest evidence yet that the Clinton campaign and the Hillary Foundation colluded with Fusion GPS to manufacture propaganda against President Trump.


Beginning of a Guardian article (typed by a human): The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed “deepfakes for text” – have taken the unusual step of not releasing their research publicly, for fear of potential misuse. OpenAI, an nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT2 is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.

Continuation, generated by GPT-2: The first thing that strikes you about GPT2 is its simplicity. First, the system is built on unsupervised learning from text, which essentially means the software is trained to spot instances of certain words that, when paired with other words, trigger the system to give it a human review of them. The system also learns through example and repetition, but does not have the capability to recognize itself.

This new, artificial intelligence approach could revolutionize machine learning by making it a far more effective tool to teach machines about the workings of the language. Deep-learning systems currently only have the ability to learn something specific; a particular sentence, set of words or even a word or phrase; or what certain types of input (for example, how words are written on a paper) cause certain behaviors on computer screens.

GPT2 learns by absorbing words and sentences like food does at a restaurant, said DeepFakes’ lead researcher Chris Nicholson, and then the system has to take the text and analyze it to find more meaning and meaning by the next layer of training. Instead of learning about words by themselves, the system learns by understanding word combinations, a technique researchers can then apply to the system’s work to teach its own language.

Once the system is fully ready, DeepFakes plan to make the software openly available for machine learning researchers.

DeepFakes’ research was supported by a $400,000 grant from the OpenAI Prize competition, which aims to boost the capabilities and diversity of autonomous systems that are safe to release to the public. OpenAI is a nonprofit dedicated to building and promoting open source AI systems, or as Nicholson aptly puts it, the “last, best hope for humanity to be safe and secure on Earth.”

DeepFakes’ team consists of 15 researchers in 12 departments working together to make GPT2, which was the result of two years of work. However, the team is not releasing the code for the system — a fact that might be surprising to some if they’re not familiar with how machine learning works — because GPT2 is so powerful, Nicholson explains.

He explained: “All of our AI systems must work together to make decisions that are more accurate and useful, but one of the problems we’ve encountered lately is not knowing how to trust a system at all, especially when it is working in such a large, complex system. When a system performs its normal function, it tends to make errors because of an unforeseen problem at the deeper level of the system.”

The team hopes that the software can create AI systems that are both safe and also highly useful in some way, but ultimately they’re excited about what they’ve built. Because of its ability to understand the human language, GPT2 is able to understand how to make some of the most important changes to the world in the years to come.

Nicholson explains: “GPT2 is the kind of system that we really need to build because we’ve built a society based entirely on humans creating information and then humans interpreting that information.”

It is precisely for this reason, OpenAI's researchers say, that the recipe for building GPT-2 has been published only in part, although according to some expert commentators the decision is also motivated by a desire for publicity, and it would have been more appropriate to share the recipe with other researchers so that experts could scrutinize it, rather than dazzling journalists and raising alarm.

Be that as it may, it is fairly clear that artificial text generators almost impossible to tell apart from human writing will soon be available, and it is worth asking what the consequences might be. Great prose writers have nothing to worry about, but it will become extremely easy, for example, to flood the Internet with synthetic text.

On the one hand, this would free people to devote themselves to more creative writing instead of churning out formulaic articles and essays. But imagine Facebook, Tripadvisor or Amazon overrun by comments and reviews written by automated systems, all entirely made up, to discredit an opponent or a competitor: the damage to credibility would be enormous.

Beginning of a review (typed by a human): I hate Tom Simonite’s book.

Continuation, generated by GPT-2: Tom Simonite does not keep it simple. He doesn't give you enough info on a subject to make the reading of the book enjoyable. He has over 400 pages of footnotes, so that is a way of getting your work for a subject out of the way. And of course, you never really feel like the author has a clear vision of his subject. He does not give you enough details on how a group of people is going to come together to solve a problem or come about a solution to a problem. This book was so depressing to me, I can't even talk about it without feeling like I want to punch the kindle.

Fabricating fake news and spreading it in a thousand variants, to slip past anti-fake-news filters, would be a matter of a few clicks. Creating entire websites and online newspapers (and perhaps print ones too) full of stories that never happened and statements never made would cost next to nothing and would quickly drown out genuine content.

In other words, if artificial intelligence keeps evolving along these lines, telling true from false online will become even harder than it already is. Better to know this and be prepared, so as not to end up talking online with programs instead of people. And perhaps to rediscover the authenticity of a good face-to-face conversation.


Additional sources: Wired.com, The Guardian.
Written by Paolo Attivissimo for the blog Il Disinformatico. Freely republishable if this notice is included (details). Paypal donations are welcome.

Source: http://feedproxy.google.com/~r/Disinformatico/~3/BUT2Ic1VqGk/a-proposito-di-quellintelligenza.html