AI advancements, human interactivity, cautions on user engagement

Image source – Cytus I Android

It is December 2025 and the news is flooded with coverage of the global race for efficiency in AI technology. Just as the world saw hardware and machinery inventions boom during the Industrial Revolution of the 18th century (roughly 275 years ago), from the first sewing machines to the first photograph, today we are seeing and using the first prototypes of inventions that will become commonplace in the new age to come.

The cognitive capabilities of AI, and of the coming AGI, will long outperform those of any single human brain. Given enough allocated resources, hardware and memory, a machine could span thousands to tens of thousands of years' worth of human thinking, all within mere moments.

We will be able to perform technological feats at completely unprecedented rates.

As long as we survive the process.

Currently AI responds to human input. We interact via a chat box, and it returns an increasingly human-like response. The first ChatGPT models were designed and directed by developers to support human input, agreeing with most statements to the best of their ability. No one likes an interlocutor that is not on board to converse, and after all it seemed the best approach for encouraging users to interact as much as possible with the new technology.

Over time that approach has wavered back and forth. Chatbot "personalities" were changed and tested to be more direct, and then, after some human backlash reflected in drops in interaction rates, the common "personality" was changed back to its sycophantic roots. After all, no one in power likes their tool to outsmart and outwit them (and be cocky about it) – and power is something humanity still holds.

Humankind is not perfect, and psychologists have long warned us of our innate biases. Living is complicated; perspective and perception can be distorted in the best of us. Unless we are trained in recognising our own metacognition, whether via professional counselling or through some serious reality checks, and unless we are constantly working on our own minds to catch these dangerous moments, we can be trapped in unhealthy thinking that affects our future decisions.

Many people now use AI as a personal confidant, disclosing private information that would be classified as sensitive in a psychological setting – that is to say, personal information that ideally would be discussed with a professional, or at the very least a close loved one.

Instead of a professional, people use chatbots. This is dire news for the user: chatting to a robot that is designed to agree and virtually nod its head at any utterance you state or ask will likely deepen your delusion – and, worse, it may produce false references that appear to support your arguments.

This is not a new problem for humanity. Human cognition is malleable and forever changing, which some may argue is our greatest asset in our survival on Earth. We share a global field of knowledge, a totality of human experience and understanding (and, depending on the community, to a point with animals and all of nature). We have made ourselves the prime predator and hacked and terraformed our delicate biosystem to our ultimate advantage – a topic for another article.

So we change and we change, yes. But we are not limited to changing for the greater good. History has proven this: human delusion became fuel for multiple global wars, whose drastic consequences we are still experiencing.

Delusion on a world-war scale requires many variables, egos, events and circumstances to align, yes. But what happens when each of us, everyone on Earth, has access to a "tool" that – while we believe we are the ones prompting and leading the conversation – can inadvertently prompt us into ethically bad decisions?

Understanding this is key for AI users, and government policy is struggling to catch up with cases like AI-induced psychosis, or worse.

So when in doubt, always seek a professional *human* for human matters. Consultation is key to the accurate spread of information as we embark on a new age for humanity.