Quo Vadis Artificial Intelligence?

Artificial Intelligence, Machine Learning, and Data Science – these terms have come to dominate the technology scene in recent years, from Silicon Valley to Shenzhen. Although there are many “book” definitions, they do not reflect the common understanding of these buzzwords.

Today’s AI is capable of: generating and editing arbitrary images and photos based on text (just type “dog on the moon in the style of Rembrandt”); creating several-second-long videos based on a submitted script; inventing new clothing and shoe styles; designing furniture; solving tasks from the International Mathematical Olympiad; imitating a human voice and facial expressions based on a short sample recording; solving recruitment tasks for programmers; generating background music; simulating the protein folding process; discovering new metamaterials; improving its own source code to create stronger AI…

From a 10-year perspective of working “behind the scenes” on algorithms and machine learning applications, one thing seems certain – artificial intelligence is getting closer and closer to us. However, the general level of awareness of the status quo in AI is drifting further and further from reality. This is a natural phenomenon: strong specialization and concentration of know-how accompany every field of science and technology. After all, who among us knows exactly how a gas turbine in a modern power plant is designed? Unlike AI, however, neither the gas turbine, nor the space rocket, nor most other technologies have the potential to replace humans in thousands of professions, redefine tens of thousands more and send ripples through the fabric of civilization.

Treating AI as a “modern gas turbine” – a technical aspect of the world that can be mentally pigeonholed as “exotic toys of engineers and scientists” or locked into an organizational silo called “Data Science” – is extremely dangerous. In private conversations, I often compare AI to a bulldozer that comes from very far away and not only levels the problems it encounters on its way but also devours them, growing and becoming ever more powerful.

Sooner or later, every enterprise and every field of life will have to face the “AI bulldozer.” It is only up to our discernment, knowledge and perspective whether we jump behind its steering wheel – or become one of the “problems” it solves.

What factors drive the development of AI?

To understand the evolution of artificial intelligence and predict the direction of its further development, it is necessary to identify the factors driving and limiting the development of this field. Most of the ideas that have succeeded in the last decade and made up what we can now call the “Deep Learning revolution” were formulated as early as the 1990s.

Back then, passionate scientists created the first prototypes of systems that processed texts or images using neural networks. They also put forward bold hypotheses and designed (though often only on paper) architectures for self-learning and self-improving systems. What is more, they identified the key problems associated with building them. Their work, however, was little known outside a group of enthusiasts fascinated by neural networks. Today, although it is not common knowledge, many of AI’s greatest successes are embodiments of ideas from the 1990s, or variations of them with small but important tweaks.

So what has changed? Why do the same ideas now make it possible to achieve things that 30 years ago remained in the realm of fantasy? Three main factors provide fuel for the development of artificial intelligence – they can both drive it and limit it. These are computing power, data, and available financial and human resources.

Factor 1: Computing power and hardware

In 2011, neural networks beat humans in an image-processing task for the first time. This was made possible by using GPU accelerators to train convolutional neural networks (which had existed much earlier). Computing power that had previously been out of reach helped start a revolution. As the technology developed, more powerful GPU accelerators allowed ever faster training of ever larger models. A decade ago, the development of GPU technology was driven by demand from gamers; today, AI demand determines what architecture flagship graphics cards will have. The latest chips are equipped with special cores for high-speed tensor operations and ship with software libraries designed for Deep Learning.
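
To make this concrete, here is a minimal, illustrative sketch – written in PyTorch, a widely used deep-learning framework chosen here purely for illustration – of how a model is placed on a GPU and trained with mixed precision, the mode of computation that tensor cores accelerate. The network and data are toy placeholders, not a real workload:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy stand-in for a real network and dataset.
model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

inputs = torch.randn(64, 512, device=device)
targets = torch.randint(0, 10, (64,), device=device)

# One mixed-precision training step: the half-precision matrix multiplications
# inside autocast are exactly the workload that modern GPU tensor cores speed up.
with torch.autocast(device_type=device,
                    dtype=torch.float16 if device == "cuda" else torch.bfloat16):
    loss = loss_fn(model(inputs), targets)

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```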

It is obvious that access to hardware sets the pace of AI development. The latest models for natural language processing have trillions of parameters. Current models for image processing are trained on clusters with more than 4,000 GPUs. A single training run can take several months, with training costs reaching millions of dollars. To date, accelerator power has doubled roughly every 18 months.
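
As a back-of-the-envelope sketch of what that cadence implies (assuming a clean 18-month doubling and an arbitrary baseline of 1.0 – illustrative numbers, not benchmark data):

```python
DOUBLING_PERIOD_MONTHS = 18  # the cadence quoted above

def relative_power(months_elapsed: float, baseline: float = 1.0) -> float:
    """Accelerator power relative to the baseline after a given number of months."""
    return baseline * 2 ** (months_elapsed / DOUBLING_PERIOD_MONTHS)

for years in (1, 3, 5, 10):
    print(f"after {years:2d} years: ~{relative_power(years * 12):.1f}x the baseline")
# after  1 years: ~1.6x, after  3 years: ~4.0x,
# after  5 years: ~10.1x, after 10 years: ~101.6x
```

Roughly an order of magnitude every five years – one reason why ideas formulated in the 1990s only became practical decades later.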

Will this always be the case?

In the next decade, hardware may become a factor holding back the development of artificial intelligence. Nvidia, which has a monopoly position in the AI market, has announced that Moore’s Law (doubling of computing power at the same price) no longer applies due to rising raw material prices and supply chain problems.

At the same time, technologies used to build AI have begun to be viewed by politicians as strategic. In August 2022, the US banned Nvidia from exporting its most advanced GPUs to China, which will cause considerable problems for the Chinese startup market and blow a hole in the US manufacturer’s budget. And since Taiwan is the crown jewel of the AI chip manufacturing chain, a potential conflict with China could slow down AI development by several years.

GPUs, however, are not the only technology that can be used for AI. Companies such as Google, Huawei, Cerebras, Graphcore and dozens of well-funded startups are working on the Holy Grail – ultra-efficient hardware for training neural networks. Some of these solutions have already been available for several years, such as Google’s TPUv3 and Huawei’s Ascend 910. Other, more ambitious approaches (e.g., optical chips) are still in the R&D stage.

For the time being, it is difficult to identify a winner or even answer the question of whether any technology will succeed in threatening Nvidia’s monopoly. The landscape will probably become clearer in 2-4 years. In addition to powerful hardware, software support and a low barrier to entry for engineers are also very important.

Factor 2: Data and its availability

AI’s gigantic successes in natural language and image processing are no accident. These are the types of data most readily available on the Internet – in virtually unlimited quantities. Since the current legal situation allows models to be trained even on copyrighted works (based on the U.S. Fair Use doctrine), the largest existing models are trained on virtually anything that can be found on the web: text and images from websites, source code from GitHub, scientific publications, and so on.

Data availability combined with computing power dictates the main directions of today’s AI development. Hence models such as GPT-3 (text generation), Copilot (program code generation), DALL-E 2 or Imagen (image generation from text) and hundreds of their applications to more specialized problems. Of course, work is still underway on these already largely “solved” problems, and applications in hundreds of fields still need to translate the technology available today into user-friendly products.

What other types of data are abundant but not yet exploited? Movies and videos. They will be the next problem “devoured” by AI in an impressive way. There are already models capable of generating several-second-long clips based on a few sentences of a script. It is likely that within a year we will see an explosion in the ability to generate and edit hyper-realistic videos based on text.

The operators of so-called SuperApps – applications that simultaneously enable communication, payments, shopping, media consumption, transportation, booking doctors’ appointments, sending and receiving shipments or playing the stock market – are in the unique position of having extensive and highly valuable data sets. The best-known examples of such applications are the Asian giants Alipay, Gojek, Grab and WeChat.

It can be assumed that in the near future many countries will see a very interesting “game of thrones” among SuperApps for a dominant market position. Local operators will clash with global ones, and the existing leaders of classic enterprise with brand-new startup players.

Other huge sources of information-dense data include logs from IoT sensors, various types of broadcast metadata, data from the (imminent) digitization of national currencies, health profiles and other strictly private records. Access to such data is virtually impossible for the private sector due to legal and national security considerations. This type of research is likely to be monopolized by state institutions once governments realize the possibilities of Artificial Intelligence.

An alternative to SuperApps and inaccessible private data is to create new ways for users themselves to generate more data points. The Metaverse and Augmented Reality – innovative formulas for user interaction with the “virtual world” – have great potential to generate unprecedented amounts of fuel for AI models. However, it is not clear whether users will be willing to adopt such solutions.

Despite the lukewarm reception of these “immersive” ideas so far, it is important to keep in mind the upcoming possibilities for AI to generate virtual reality. If TikTok can “hack” users’ dopamine systems using human-created content, what will happen when AI can generate arbitrary audio-visual content, perfectly matched to the user’s preferences?

Factor 3: Financial and human resources, and motivation

Of course, as long as data is the fuel and hardware is the engine, humans are at the wheel of artificial intelligence development. The process of digitizing society has enabled the rise of companies such as Google, Meta, Amazon, Spotify, Netflix and Uber. These companies, breaking the previous canons of business, decided to make data collection and analysis the heart of their operations.

These companies established the first commercial AI research centers, with budgets many times those of the richest universities. Thanks to the overwhelming success of these digital pioneers, it was possible to instil the idea of “Data-Driven Decision Making” in many classical enterprises. New fields of study have been created, universities have greatly expanded their educational offerings, and the global job market has changed profoundly.

Ambitions to work in artificial intelligence or Data Science are increasingly common today, not only among young people but also among experienced experts from other industries. However, the capacity of universities is limited, and from a global perspective, the education system is not keeping up with the changes needed to prepare society for the upcoming AI era.

Machine learning and Data Science are highly technical fields. They require an excellent knowledge of mathematics, computer science and statistics, and a three-month online course is no substitute for that. A phenomenon observable in the market is the centralization of talent: technological market leaders concentrate the most talented academics and engineers in their AI labs. Other companies, especially those for whom technology is not a core value, have serious problems finding competent employees. Non-technical managers, in turn, usually fail to properly assess the competence of the scientists they hire.

The result is the existence of Data Science “silos” in many companies – organizational structures whose purported task is to provide and report information based on data. Non-technical managers make business decisions using “Data-Driven” reports from these silos. Such a situation is not much different from high-level decisions made on the basis of reports from consulting firms – which often serve as a backstop for ill-advised decisions or as political ammunition for winning support for one’s own ideas.

It is possible to hypothesize that more than 50% of the “Data-Driven” decisions in non-technological companies with Data Science departments are flawed – based on bad assumptions, wrongly formulated hypotheses, ignoring external factors, or made under pressure for the specific result the manager expects. The more flawed the model or study, the more spectacular the results – and spectacular results are the key to success in the budget battle.

For the current state of affairs to change, it is necessary to adopt a “Data-Driven” or, better yet, an “Algorithmic Decision Making” methodology radically, not just superficially. This entails raising the technical competence of executives. The prospect of surrendering part of the decision-making to scientists may arouse reluctance among managers, but market forces will inexorably force these changes. AI, ML and Data Science providers come to the rescue here – because technology is the core of their business, it is easier for them to attract top talent and ensure the scientific rigor of the solutions they create.

Centralization or democratization?

The main factors dictating the pace and direction of AI development may suggest that the field is moving toward complete centralization. Only laboratories with huge budgets, access to immense data sets and powerful computing resources can train the largest models. Until recently, there was bitterness in the scientific community about this situation. Around two years ago, the largest commercial labs – Google, OpenAI, Microsoft – stopped sharing their giant models with the community and started charging for access through APIs, or even hid them completely behind a veil of secrecy.

However, this experiment in commercializing proprietary models via paid APIs has been challenged by grassroots initiatives from the scientific community. A public-private collaboration between Ludwig Maximilian University of Munich, the Large-scale Artificial Intelligence Open Network (LAION), Stability AI and Runway trained the Stable Diffusion model for text-based image generation.

The Stable Diffusion model was made available in its entirety, publicly and for free – instantly dethroning the closed models DALL-E 2 (OpenAI) and Imagen (Google) and causing a flurry of innovative applications created by enthusiasts and scientists. Within three months of the model’s publication, more than a dozen startups were building products on top of Stable Diffusion.
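
To illustrate what “made available in its entirety” means in practice, here is a minimal, illustrative sketch of running the openly published weights with the Hugging Face diffusers library. The checkpoint name and prompt are examples only – any publicly hosted Stable Diffusion checkpoint would work the same way:

```python
import torch
from diffusers import StableDiffusionPipeline

# "runwayml/stable-diffusion-v1-5" is one publicly hosted checkpoint of the
# openly released weights; a single consumer GPU is enough for inference.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a dog on the moon in the style of Rembrandt").images[0]
image.save("dog_on_the_moon.png")
```

The point is not the specific library, but the fact that anyone with a consumer GPU can reproduce, fine-tune or build products on top of the model – which is exactly what the startups mentioned above did.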

Another example of the democratization of AI is the BLOOM model (for text generation and processing) – which is an open and free alternative to the closed, paid GPT-3 (OpenAI). More than 1,000 volunteer scientists were involved in the development of the model, and computing power was lent by the French Ministry of Science and Innovation and institutes subordinate to it.
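
As before, a minimal, illustrative sketch of loading an openly published BLOOM checkpoint with the Hugging Face transformers library – the small 560-million-parameter variant is used here so the example runs on modest hardware; the full model is far larger and needs a multi-GPU server, but the access model is identical:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The smallest public BLOOM checkpoint, chosen purely to keep the example light.
model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate a short continuation of a prompt.
inputs = tokenizer("Quo vadis, artificial intelligence?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```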

Today, it seems that as long as the greatest advances in AI take place using publicly available datasets, and public institutions become increasingly aware of how important AI is to the development of the economy, democratic access to scientific and technological advances is safe. However, it is important to keep in mind that in the future, publicly available data, algorithms and models will be only the tip of the iceberg.

What will the future bring?

Only AGI – powerful artificial general intelligence – will be able to answer this question… once it is created. For now, in a wide variety of fields, even “weak AI” can easily beat humans. That is enough to revolutionize the world. Time-to-market for innovations in various areas of the economy ranges from 6 months to 20 years – that is how long it will take to fully exploit the opportunities that already exist today.

The times ahead are very exciting – let’s get to work!

Jacek Dąbrowski, Chief Artificial Intelligence Officer, Synerise
