
Cybernetics and AI: Bridging the Past to the Future

My fascination with engineering began in late high school, when I read an article in Wired magazine about a university professor in England who was implanting electrodes into his own body to study how human/machine systems could be controlled or influenced. Until then, my experience subjecting living systems to experiments was limited to the standard public-school variety. Soaking chicken eggs in vinegar, extracting the membrane, and observing osmosis, or cross-pollinating pea plants to observe genetic variation, was fundamental to my understanding of evolving systems. Interestingly, it never occurred to me at the time that our own bodies, being living systems themselves, are perfectly suited to experimentation and are the best place to observe a human/machine interface. The most efficient way for this professor to observe the human/machine interface was for him to be the human.



Reading about that work, and subsequently meeting that professor at a TED Talk in Chicago years later, is core to my belief that the direction of our species has been set. We are moving toward a singularity in which we won't really know the difference between organic and artificial experiences (some would say we're already there). I'd argue that the unease such a future elicits means we are not quite there yet, and the fact that we are its engineers means that the details of how, why, when, and whether we should get there are still up for debate.


That fact means our collective knowledge and wisdom absolutely need to be harnessed to enhance human capabilities through technology, thoughtfully. I came to that conclusion back then, and it led me to pursue a degree in Computer Engineering and Bioengineering, with a vision of combining these disciplines to be part of the group that shapes the way humans interact with technology.


During my studies, I met brilliant and inspiring individuals who shared my enthusiasm for understanding systems and their interactions. That shared enthusiasm sparked my interest in cybernetics, a field built on principles I found deeply relevant to a wide range of applications: from bioengineering, to designing complex technological systems, to social systems and the natural world.


The concepts in bioengineering often bring you to one of two roles: either you're engineering biology, directly manipulating living systems, or you're using biology as the engineer, allowing natural processes to guide outcomes. Cybernetics taught me that, at their core, these roles are fundamentally about understanding and influencing systems. Whether you're tweaking a genetic code, configuring a SaaS platform, or designing an AI, the principles remain the same: how do the individual parts of a system interact, and how much control, direct or indirect, can you, as the engineer, exert over the outcome? I'd like to share those pillars with you and encourage you to consider them the next time you face an industry problem or a moment to analyze dynamic interactions throughout your life. They are fundamental to the Nexus Approach and something I encourage my own teams to embrace in their work.


The Core Pillars of Cybernetics


Feedback Loops: At the heart of cybernetics is the feedback loop—a process in which systems adjust based on the information they receive from their own operations. In today’s AI-driven enterprises, feedback loops are everywhere. AI models learn from data, adjust their algorithms, and improve outcomes, creating a continuous cycle of enhancement.


Control and Communication: Cybernetics views control and communication as the foundation of all systems. In modern AI applications, this is evident in how LLMs process vast amounts of data and communicate insights to users. The balance of control—how much autonomy we give AI—is a key issue for businesses adopting these technologies.


Human-Machine Interaction: Cybernetics has always been concerned with how humans and machines coexist and complement each other. As AI becomes more pervasive, questions about the nature of this relationship—trust, dependency, and the division of labor—are more relevant than ever.
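The feedback-loop pillar can be made concrete with a minimal sketch of a negative feedback loop, the basic cybernetic cycle of measure, compare, and correct. The proportional `gain` parameter and the function names here are illustrative assumptions, not part of any particular system:

```python
def feedback_step(state, target, gain=0.5):
    """One pass of a negative feedback loop.

    The system measures its own output, compares it to a goal,
    and feeds the error back as a correction -- the core cybernetic cycle.
    """
    error = target - state       # measurement compared against the goal
    return state + gain * error  # correction proportional to the error

# Iterating the loop drives the system steadily toward its target.
state = 0.0
for _ in range(20):
    state = feedback_step(state, target=10.0)
# state is now very close to 10.0
```

The same shape appears whether the "state" is a thermostat's temperature, a model's loss during training, or a business metric being steered by policy: the loop converges only because the output is continuously fed back in.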


Applying Cybernetics to AI in Business


Either you're engineering biology, directly manipulating living systems, or you're using biology as the engineer, allowing natural processes to guide outcomes.

Understanding biological systems leads us to the idea that part of our work relies on faith. Not faith in the traditional sense, but faith as it applies to knowledge of an underlying system and the expectation that it will respond and behave in a way we are accustomed to. There's no magic here, only a misunderstanding, or lack of understanding, of the systems that govern certain work. For instance, if you know the steps a biological process takes, there's little need to expect that it won't work the same way every time, given the same initial conditions and closed-system variables.


The reason AI, in the current public understanding, is different is the unexpected nature of its interaction with certain inputs. In fact, generative AI as a technology doesn't fit those expectations, for the following reasons:


  1. Randomness in Generation: The model includes a degree of randomness when selecting words or phrases to generate a response. This means that even with the same input, the model might choose different words or structures each time.

  2. Non-deterministic Processing: While the model is guided by patterns it has learned, it doesn’t follow a strict set of deterministic rules. It might interpret or prioritize aspects of the input differently on different occasions, leading to variations in the response.

  3. Contextual Awareness: If the conversation has evolved or if there’s a broader context that’s being considered (as in a multi-turn conversation), the model might generate responses that are influenced by previous interactions, leading to differences.
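The randomness described in point 1 is commonly implemented as temperature-based sampling over the model's raw scores. The sketch below is a simplified, hypothetical illustration of that mechanism; the logit values and function names are assumptions for the example, not taken from any specific model:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=random):
    """Sample a token index from raw scores using temperature.

    Higher temperature flattens the distribution (more randomness);
    temperature near zero approaches greedy, deterministic selection.
    """
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index from the resulting categorical distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i
    return len(probs) - 1

# The same "input" (logits) can yield different tokens on each call:
logits = [2.0, 1.5, 0.3]
samples = {sample_next_token(logits) for _ in range(200)}
```

Running this repeatedly on identical input produces different choices, which is exactly why two identical prompts can yield different responses, while dialing the temperature toward zero collapses the behavior back to the deterministic pattern people expect from traditional software.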


Fundamentally, that means human beings cannot interact with this type of technology the way they have in the past. It has to be approached in a non-deterministic way, which means there's an element of dialogue, and indeed "feedback," necessary across the use of a generative AI model to which people are not accustomed.


This is why the principles of cybernetics can and should guide the responsible adoption of AI in enterprises. Feedback loops in AI must be carefully monitored to avoid reinforcing biases or leading to unintended consequences. This requires ongoing assessment of how AI systems learn and adapt, ensuring they align with business goals and ethical standards.


Control in AI systems must also be balanced. While automation can lead to efficiencies, too much autonomy could result in a loss of human oversight, introducing risks into decision-making processes. The very degree of randomness in the system encourages a hybrid approach in which human judgment and machine efficiency work in tandem. Applying this in business requires solutions that consider the interaction between AI and the human beings who use it. This is an experiment in the human/machine interface. Any company that treats these initiatives as a purely technical play will falter, and it's the reason we focus on this type of solution design in our work.


Ethical and Future-Focused Questions


Cybernetics has long addressed the ethical questions that arise from human-machine interactions. As we integrate AI into our lives, these questions only intensify:


  • Autonomy vs. Control: How much decision-making power should be handed over to machines? Where do we draw the line between AI autonomy and human control?

  • Bias and Fairness: How do we ensure that AI systems do not perpetuate or amplify existing biases? How can feedback loops be designed to minimize these risks?

  • The Nature of Work: What is the future role of humans in an AI-driven world? How can we ensure that technological advancements lead to more human creative endeavors rather than obsolescence?


We're continually emerging from Plato's Allegory of the Cave in this regard. Just as we’ve historically moved from shadows to the light of knowledge, the ethical challenges of AI compel us to step out of the cave once again, confronting and shaping the reality of a new technological era. Facing our own amplified biases, and the control we can exert on each other through the technological infrastructure we've built, should give everyone a moment of pause and focus. We need our own feedback loop of self-reflection to consider the steps we take from here on out.


Shaping the Future


We are the architects of this future. The principles of cybernetics—understanding systems, balancing control, and ensuring meaningful human-machine interactions—are more relevant than ever. As we integrate AI into every facet of our lives, we must remember that we hold the power to steer this technological revolution toward a future where humans and machines coexist in harmony, each enhancing the capabilities of the other.


Our future isn’t set in stone. With a thoughtful approach rooted in the lessons of the past, we can shape it for the better. The choices we make today will determine whether AI becomes a tool for empowerment or a force for division. Let’s choose wisely.



Incidentally, if you're interested in reading that article that pushed a young man's interest in technology and put him on a course to where he is today, it's right here: https://www.wired.com/2000/02/warwick/




Interested in learning how the Nexus Approach can help you drive your business into embracing these principles?



