In the last few years, AI startups have been testing new business models, trying to find a niche for generative content. They did this by actively engaging users and, more importantly, by creating APIs for their platforms. Many people have heard of, and even used, products such as Phind, ChatGPT, and Midjourney.
Recently, I've been working with related Generative AI products and tools, and researching how my colleagues use Generative AI. This led me to wonder about the following questions:
Let's deal with these questions together.
Currently, there are applications and programs in the following areas:
At a minimum, Generative AI will seriously affect industries where open source is used, including scenarios that require data generation, summarization, and contextual clarification. In imaging, it will influence the use of certain formats of images, videos, and 3D graphics.
Though quite an impressive technology, Generative AI relies on machine learning, language models, and graphical models annotated and labeled by humans. Humans are still vital in generating ideas. The idea for this article was my own, but I was inspired to write the text by using Generative AI. These models are built on human-generated content, and now we can use AI-generated content for inspiration and new ideas.
Midjourney. Description/Prompt: Ukrainian Carpathians montane meadow photograph, photorealistic 8K, HD ...
Despite advances in AI/ML, all learning is human-directed and human-assisted. Most of the data the models are trained on is publicly available. There is also a wealth of private data that is available to humans but is mostly not used to train models: internal corporate knowledge systems, closed-source databases, and libraries, for example.
The availability of ChatGPT has caused active, even heated, discussions about the expediency and ethics of using the technology in education: when passing professional certifications, when answering exams, and so on. Stack Overflow has updated its usage policies and banned the use of ChatGPT. The New York City Department of Education has blocked ChatGPT on school devices and networks.
Such assistants and tools will find their niche and significantly speed up work with data. However, the results will still need to be checked by people with relevant experience who can validate and apply the answers.
The ultimate truth of text-generative AI is that we still need people on the other side of the screen. Suppose you aren't a subject-matter expert in a specific area. At first glance, generated text may seem correct. But there are many examples where generated content contains all the needed data, acronyms, and terms, yet amounts to nonsense or includes critical mistakes.
So we need someone who can validate Generative AI output.
Many startups and applications are emerging at the intersection of different areas of content generation. Amusingly, there are even platforms that generate sites and materials for the sole purpose of launching startups.
There are many cases where generated images have been published and presented as images of real events or people. Such cases create demand for recognition tools that can validate images. Perhaps image-generation companies and projects will be able to add certain pixels to mark an image as generated. So far, there are initiatives from artists who label their images with the aim of banning their use in training AI models, for example, NO AI. Some artists are also suing for copyright infringement.
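To make the marking idea concrete, here is a minimal sketch of embedding a marker in an image's least-significant bits. This is a hypothetical, deliberately fragile scheme for illustration only, not any vendor's actual method; the `MARK` bit pattern and both function names are my own inventions.

```python
import numpy as np

# Hypothetical 8-bit marker; real provenance schemes are far more robust.
MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed_mark(pixels: np.ndarray) -> np.ndarray:
    """Write the marker into the least significant bits of the first 8 values."""
    out = pixels.copy()
    flat = out.reshape(-1)
    flat[:8] = (flat[:8] & 0xFE) | MARK  # clear each LSB, then set it from MARK
    return out

def has_mark(pixels: np.ndarray) -> bool:
    """Report whether the marker is present in the first 8 values."""
    return bool(np.array_equal(pixels.reshape(-1)[:8] & 1, MARK))

img = np.zeros((4, 4, 3), dtype=np.uint8)  # stand-in for a generated image
marked = embed_mark(img)
print(has_mark(img), has_mark(marked))  # False True
```

A scheme like this would not survive resizing or recompression, which is exactly why stronger, harder-to-strip watermarks are an open problem for image-generation platforms.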
Will tools for recognizing generated texts and images appear, and how quickly? Some tools already exist, such as the Deepfake Detection Challenge Dataset and the AI text classifier for detecting generated text. For faces, there is Fawkes, a tool that protects your privacy when you post pictures online.
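As a toy illustration of the kind of statistical signal such text detectors might look at (this is not how any real classifier works, and the function is my own invention), one naive heuristic is to measure how often word sequences repeat:

```python
from collections import Counter

def repetition_score(text: str) -> float:
    """Fraction of word trigrams that occur more than once in the text.

    A crude, illustrative signal only: repetitive output is one thing a
    detector might notice, but real classifiers are trained models.
    """
    words = text.lower().split()
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)

print(repetition_score("the model wrote the model wrote the model wrote it"))
# 0.875
```

A single statistic like this produces many false positives on human text, which is why practical detection tools combine many signals and still come with accuracy caveats.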
I assume that this is a cumulative effect of the following:
In previous years, many resources and much investment were directed to AI companies. Universities that have traditionally researched AI/ML have developed this direction further over the last 5-10 years: the number of relevant departments, students, and scientific staff has constantly increased. Commercial companies could cooperate with these universities and create their own projects and R&D.
Over the past five years, the organizers of conferences, workshops, and seminars have begun to attract more relevant speakers. Currently, most conferences and IT events/exhibitions have separate sections or zones dedicated to AI/ML.
The first is the self-imposed limitations of any available platform, which are specified in its Terms of Use. Many models filter the input text that describes what to generate. For example, restrictions apply to the creation of content that incites hatred, the formation of fakes, and materials containing explicit content. In addition, the output size is limited for image generation. For example, available size options: 256