Artificial intelligence is a term that every respectable tech company is promoting these days. But what’s the truth? Is artificial intelligence really intelligent? Should we fear a future where robots take control over humans—as depicted in movies like Terminator or The Matrix?
In reality, research on artificial intelligence began as far back as the 1950s. The recent surge in the field isn’t due to any groundbreaking new discovery, but rather to the fact that the computational capacity of computer systems (and the capital invested in AI) has reached a critical mass, allowing this decades-old technology to deliver results that are valuable and marketable to the general public.
Applications that are now considered everyday tools—such as predictive text input on mobile phones, personalized ads on various websites, or facial recognition algorithms—all fall under the umbrella of artificial intelligence.
At the same time, perhaps the greatest ‘moral panic’ has been sparked by generative AI models, which have reached new heights in recent years. The emergence of applications like ChatGPT and Google Gemini has brought certain philosophical and ethical questions back into public discourse: Can a machine possess conscious thought? Can a machine have emotions? And ultimately, can a soul be breathed into a machine?
To answer these questions with some reassurance, it’s worth becoming acquainted, at least in broad terms, with the principles by which artificial intelligence operates. AI systems are ‘trained’ on large volumes of data, which can take various forms: images, written materials, and even audio files.
This data is then labeled (for example: ‘These images show a puppy’), and the system, in an automated way, extracts features that allow a given image to match the term ‘puppy’ based on the available set of examples.
Contrary to popular belief, this process doesn’t involve interpreting the data in the human sense. The machine can only reach conclusions like: ‘This specific arrangement of pixels is a recurring element in images labeled with the word “puppy”.’ After this, the AI system can be trained to respond to a command like ‘Draw me a puppy’ by generating a collection of pixels that generally resembles a puppy.
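The labeling-and-matching process described above can be sketched as a deliberately simplified toy in Python. This is not how a real AI system is built (real models use neural networks and millions of examples); it is only an illustration of the statistical idea: tiny ‘images’ are lists of pixel brightness values, humans supply the labels, ‘training’ averages the pixel patterns seen under each label, and classification picks the label whose average pattern the new image most resembles. All names and example values here are invented for the sketch.

```python
# Toy illustration of statistical pattern matching, NOT a real AI system.
# An "image" is a list of pixel brightness values between 0 and 1.

def train(examples):
    """examples: list of (pixels, label) pairs labeled by a human.
    'Training' just records the average pixel pattern seen per label."""
    sums, counts = {}, {}
    for pixels, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(pixels))
        for i, value in enumerate(pixels):
            acc[i] += value
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def classify(model, pixels):
    """Pick the label whose average pattern is closest (squared distance).
    The machine 'recognizes' a puppy only as a recurring pixel arrangement."""
    def distance(average):
        return sum((a - p) ** 2 for a, p in zip(average, pixels))
    return min(model, key=lambda label: distance(model[label]))

# Hand-labeled examples: bright-first patterns happen to be 'puppy' here.
training_data = [
    ([0.9, 0.8, 0.1], "puppy"),
    ([0.8, 0.9, 0.2], "puppy"),
    ([0.1, 0.2, 0.9], "not puppy"),
    ([0.2, 0.1, 0.8], "not puppy"),
]
model = train(training_data)
print(classify(model, [0.85, 0.8, 0.15]))  # prints "puppy"
```

Note that nothing in the sketch ‘understands’ puppies: the label is just the name attached to the nearest recurring arrangement of numbers, which is the essence of the point made above.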
As we see, the machine doesn’t know what a puppy is in the human sense but can act as if it understands what we are asking of it. In this case, AI ultimately performs the same mechanical process of visual representation as a camera or a laptop screen. Essentially, it is simply another representational system.
And although the way AI algorithms operate differs significantly from the principles of other representational systems, we still can’t claim that the image generated by an AI system contains more meaning than an ordinary photograph (and this observation naturally extends to machine-generated text, videos, and audio content as well).
Even from such a brief and necessarily simplified overview of AI processes, it becomes clear that AI cannot generate true novelty: it operates on a statistical basis and can only synthesize existing knowledge, so we need not fear the ‘awakening’ of this technology. Nevertheless, we tend to speak about it as if it were a person; we personify it simply to make it familiar and approachable. In the end, it is humans who bring technology to life.
Let’s use artificial intelligence as a tool: to simplify our lives, to make our work more efficient. Will machines take control over our lives? Our answer might be that perhaps they already have—without us even noticing.
The images were generated with OpenAI’s ChatGPT.