AI decodes brain signals into human speech and makes photos of people for ads: artificial intelligence news digest

Whereas earlier artificial intelligence could generate only portraits, it now creates full-length images of nonexistent people. Read about these and other recent developments in the world of AI in the digest prepared by the AI Conference Kyiv.

Magnets will bring AI closer to the efficiency of the human brain

Scientists at Purdue University (USA) have described a way of using magnetics to imitate brain activity. The development could enable more efficient training of personal robots, autonomous cars, and unmanned aerial vehicles, helping them better generalize data about different objects.

“Our stochastic neural networks try to imitate the activity of the human brain, computing through a network of neurons and synapses,” said Kaushik Roy, a member of the research group. “This allows the AI not only to store information but also to cluster data about objects in order to later draw conclusions and perform better.”
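The “clustering” Roy mentions can be illustrated with a toy example: grouping object feature vectors so that later observations can be matched against a known group. Below is a minimal k-means sketch in plain Python; the data and parameters are invented for illustration, and this is not the Purdue implementation.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means: group feature vectors into k clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, centers[i])))
            groups[idx].append(p)
        # Move each center to the mean of its group.
        for i, g in enumerate(groups):
            if g:
                centers[i] = tuple(sum(c) / len(g) for c in zip(*g))
    return centers

# Two obvious blobs of "object features": one near (0, 0), one near (10, 10).
data = [(0.1, 0.2), (0.0, -0.1), (0.2, 0.0),
        (10.1, 9.9), (9.8, 10.2), (10.0, 10.1)]
centers = kmeans(data, k=2)
```

Once the cluster centers are learned, a new observation can be assigned to the nearest one, which is the sense in which clustered data supports later conclusions.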

AI helped astronomers to understand the structure of storms on Saturn

University College London astronomers used an AI algorithm called PlanetNet to study the turbulent atmosphere of Saturn. The neural network made it possible to accurately pinpoint the location of storms and to analyze the state of the regions surrounding them.

The space probe captured images of several S-shaped storm clouds raging at Saturn’s poles. The cause of the “foul weather” was flows of frozen ammonia rising from the surface of the planet.

AI was trained to decode brain signals into human speech

Researchers at the University of California have developed an AI solution that can reproduce a human voice by analyzing the motion of the lips.

During the research, the scientists built two neural networks: the first matched brain signals to lip movements, and the second synthesized those movements into speech. Volunteers were able to repeat the sentences: listeners could recognize around 69% of the synthesized words on the recording.
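The two-stage design described above, one model mapping brain signals to lip movements and a second mapping movements to sound, can be sketched as a simple pipeline of two functions. Everything below is a hypothetical stand-in, not the actual system: the real stages were neural networks, while here each stage is a placeholder linear transform that only illustrates the composition.

```python
# Hypothetical two-stage decoding pipeline: brain signals -> lip motion -> speech.

def decode_motion(brain_signal):
    """Stage 1 (stand-in): map a brain-signal vector to lip-motion features."""
    return [2.0 * x + 0.1 for x in brain_signal]

def synthesize_speech(motion):
    """Stage 2 (stand-in): map motion features to audio samples."""
    return [0.5 * m for m in motion]

def brain_to_speech(brain_signal):
    # The key architectural idea: compose the two stages, so each one
    # can be trained and validated separately.
    return synthesize_speech(decode_motion(brain_signal))

audio = brain_to_speech([0.0, 1.0, -1.0])
```

Splitting the problem this way means the motion decoder can be trained on brain recordings alone, while the speech synthesizer can be trained on motion-to-audio data, which is far easier to collect.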

Thanks to devices that decode brain activity into synthesized speech, people who are unable to speak will have a chance to pronounce words freely and clearly.

AI learned to generate full-length portrait images of nonexistent people

Japanese developers from Data Grid created an algorithm capable of producing full-length portraits of people that do not exist.

The images generated by the AI resemble photos of real people. Moreover, the neural network can produce sequences in which the generated people move and change clothes. The Japanese project could fundamentally change the advertising market, allowing retailers to save on models and photographers.

Vkontakte developed AI that generates viral headlines

Developers of the Vkontakte social network built an algorithm that helps create fitting news headlines.

To generate a headline, the AI has to read the whole text of the article: the neural network analyzes it and compiles the headline from fragments of words used in it.
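An extractive approach of this general kind, scanning the full article, scoring candidate fragments, and assembling a headline only from words that appear in the text, can be sketched as follows. This is a generic illustration, not Vkontakte's model: the frequency-based scoring and all names are invented for the example.

```python
from collections import Counter

def extractive_headline(article, max_words=6):
    """Pick the sentence whose words are most frequent in the article,
    then keep its first few words as the headline. A crude stand-in
    for a learned extractive model."""
    words = article.lower().replace(".", " ").split()
    freq = Counter(words)
    sentences = [s.strip() for s in article.split(".") if s.strip()]

    # Score each sentence by the average frequency of its words.
    def score(sentence):
        toks = sentence.lower().split()
        return sum(freq[t] for t in toks) / len(toks)

    best = max(sentences, key=score)
    return " ".join(best.split()[:max_words])

article = ("The network generates headlines. The network reads the whole text. "
           "Editors liked the headlines.")
headline = extractive_headline(article)
```

Because every output word comes from the article itself, an extractive model cannot hallucinate facts that are absent from the text, which is one reason the approach suits news headlines.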

During trials of the solution, Vkontakte developers showed volunteers a news item together with two headlines: the original one and one generated by the neural network. The survey showed that in 45% of cases the AI-generated headline matched the original in quality, and in 15% of cases it was judged even better.
