Harmonizing AI and Human Expertise: Crafting Adaptive Organizations through Integrated Learning Loops

Like many of you, I've been utilising ChatGPT as a digital assistant for over a year now. I'm convinced that the future favours those who augment their capabilities with such assistants. However, there's a critical flaw in the current Large Language Model (LLM) approach: the absence of active learning.

In our current setup, the model's output is rarely 100% right in tone and style, so we refine it before final use. The downside is that those final enhancements don't immediately feed back into the system: it never learns what the right "answer" was until, perhaps much later, the model is retrained. This delay means there is a lag between ChatGPT's initial response and its assimilation of the improved content, and it is arguably one source of LLM drift. The issue is a microcosm of a larger challenge in learning organisations and systems: how to swiftly close the loop between a decision and its outcomes so that learning happens quickly and effectively.

These delays are akin to adjusting the temperature in a shower. Imagine you're trying to find the perfect water temperature, but there's a delay between when you adjust the taps and when you actually feel the change. This lag leads to over- and under-corrections: just as you might turn the hot water too high and then compensate by turning it too low, similar oscillations occur in a business system. The initial response to a problem or change might be too strong or too weak, and those adjustments ripple through the organisation. The true impact of the oscillations, much like finally finding the right shower temperature, becomes apparent only after a period of trial and error, analysis and feedback.
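To make the shower effect concrete, here is a minimal simulation of a controller that only ever reacts to stale information. The target, gain and delay values are assumptions chosen purely for illustration, not measurements of any real system.

```python
# Minimal sketch of delay-induced oscillation: the "controller" reacts to the
# temperature it felt a few steps ago, not the temperature at the tap now.
# TARGET, GAIN and DELAY are illustrative assumptions, not measured values.

TARGET = 38.0   # desired temperature (degrees C)
GAIN = 0.3      # how hard we turn the tap per degree of perceived error
DELAY = 3       # steps between adjusting the tap and feeling the change

tap = 20.0                  # current temperature at the tap
pipeline = [tap] * DELAY    # water already on its way, not yet felt

for step in range(20):
    felt = pipeline.pop(0)          # what we actually feel right now (stale)
    tap += GAIN * (TARGET - felt)   # correct on the basis of old information
    pipeline.append(tap)
    print(f"step {step:2d}: felt {felt:5.1f}, tap now {tap:5.1f}")
```

Run as written, the tap overshoots the target by several degrees and then dips below it before settling: the over-then-under correction the analogy describes. Increase GAIN or DELAY and the oscillation grows and takes longer to die away.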

  • One solution could be faster re-training cycles. Regularly updating the model with new data, including user-generated improvements, lets it incorporate recent information quickly. However, this method is resource-intensive and may not achieve real-time learning.
  • Another approach is human-in-the-loop active learning. This involves integrating human feedback directly into the learning process: users annotate or correct model outputs, and those corrections then serve as training data. This method can significantly improve the model's grasp of context and nuance, but it demands a well-structured system to gather and utilise the feedback effectively (a minimal sketch of such a capture loop follows this list).
  • Online Learning offers a more immediate solution, allowing LLMs to continuously update their parameters in response to new data. This could enable real-time adaptation to new information and corrections. Yet, this approach is technically demanding and poses challenges in maintaining the model's stability and reliability.
  • A potentially more effective strategy might be decentralised and personalised learning. Here, models learn and adapt at the individual user or client level rather than through central retraining. This could allow more individualised and responsive learning, though it raises complex issues around data management and privacy.
  • Donella Meadows observed that delays themselves are often not easily changeable; it's usually easier, she argued, to "slow down the change rate" so that inevitable feedback delays won't cause so much trouble. For example, slowing the system down to allow technologies and prices to catch up can offer more leverage than trying to eliminate delays outright. Reflecting on this, consider the pushback from the creative and design industries regarding IP in generated images. Perhaps what's needed is either legislative action or a new business model that decelerates the system, allowing it to learn from new art only after the artist has fully capitalised on their work, or at least developed a revenue model that benefits them.
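On the human-in-the-loop point above, the hard part is usually the capture system rather than the modelling. Below is a minimal sketch of what such a capture loop could look like; the names (FeedbackStore, record_correction, corrections.jsonl) are hypothetical, and the resulting log is simply raw material for whatever fine-tuning or preference-tuning pipeline an organisation already runs.

```python
# Hypothetical sketch of a human-in-the-loop capture step: the user's corrected
# version of a model output is stored alongside the original prompt so the
# correction can later be used as fine-tuning or preference data.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path


@dataclass
class Correction:
    prompt: str          # what the user originally asked for
    model_output: str    # what the model produced
    human_edit: str      # the version the human actually used
    created_at: str      # when the correction was made (UTC, ISO 8601)


class FeedbackStore:
    """Append-only log of human corrections, ready to become a retraining batch."""

    def __init__(self, path: str = "corrections.jsonl"):
        self.path = Path(path)

    def record_correction(self, prompt: str, model_output: str, human_edit: str) -> None:
        row = Correction(prompt, model_output, human_edit,
                         datetime.now(timezone.utc).isoformat())
        with self.path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(row)) + "\n")


# Usage: call this wherever the "final improved paragraph" is saved, so the gap
# between what the model wrote and what the human shipped becomes training signal.
store = FeedbackStore()
store.record_correction(
    prompt="Draft a two-line product update in our house tone.",
    model_output="We are thrilled to announce...",
    human_edit="Quick update: the new release is live...",
)
```

The point is to record the delta between model_output and human_edit at the moment the human makes it, so nothing is lost while waiting for the next retraining cycle.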

The perfect solution remains elusive. Ongoing research into advanced machine learning algorithms capable of efficient active learning is crucial, as are innovative business and revenue models for creative artworks - a new form of NFTs on steroids. These as-yet-unproven algorithms must balance the need for fresh information against the computational demands of constant updates.

For now, it's wise to keep humans integral to the learning process and to build active learning strategies into any AI transformation. Establishing internal feedback loops that feed into model retraining is a clear step forward. The greater challenge, however, lies in reducing the time lag between model iterations. To navigate this effectively, organisations should not only strengthen feedback mechanisms but also implement robust feedforward processes.

Feedforward involves anticipating future challenges and opportunities, enabling proactive adjustments before issues become entrenched. This approach, when integrated into an AI framework, can significantly enhance an organisation's agility and responsiveness. It's about creating a learning organisation where insights from both past experiences (feedback) and future predictions (feedforward) inform decision-making at all levels.
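As a rough illustration of how the two loops combine (the gains and numbers below are assumptions, not a recommended policy), the next adjustment can be sized from both the error already observed and the change a forecast says is coming:

```python
# Minimal sketch of blending feedback and feedforward: react to the error we
# have already observed AND pre-empt the change we expect before the next
# review, instead of waiting for that change to show up as a new error.

def next_adjustment(observed_error: float,
                    forecast_change: float,
                    feedback_gain: float = 0.5,
                    feedforward_gain: float = 0.8) -> float:
    """Blend a correction for what already went wrong (feedback) with a
    pre-emptive correction for what we expect to happen next (feedforward)."""
    return feedback_gain * observed_error + feedforward_gain * forecast_change


# Example with made-up numbers: demand ran 120 requests/day over capacity last
# month (feedback), and the forecast says it will rise by another 200/day
# before the next review (feedforward).
print(next_adjustment(observed_error=120.0, forecast_change=200.0))  # -> 220.0
```

A feedback-only loop would add 60 units here, discover next period that it is still behind, and keep chasing; blending in the forecast closes most of the gap before it opens.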

Incorporating these loops requires a holistic approach, where the organisation learns from every strategic shift, tactical adjustment, and operational change. This means embedding AI-driven insights into the very fabric of the business processes, allowing for a seamless blend of human expertise and machine intelligence. By doing so, an organisation not only fine-tunes its immediate AI strategies, such as the rollout of LLM solutions, but also fosters an environment conducive to continuous learning and adaptation.

Putting these processes in place now lays the groundwork for a more resilient and adaptive business system. Such a system is better equipped to learn quickly from changes, whether they are strategic overhauls, tactical pivots, or operational tweaks. In essence, by building these dual loops of learning – integrating both feedforward foresight and feedback learning – an organisation aligns its broader goals with the nuanced capabilities of AI, ensuring that its journey towards AI transformation is as effective and efficient as possible.