Generative AI: The Dawn of a New Intelligence


Hello everyone, today we’re going to discuss the exciting trends shaping the field of Generative AI.

The field of Generative AI (GenAI) has witnessed an explosion of innovation and development across 2023 and 2024. As the market evolves, it’s crucial to understand the current trends and how they are shaping the landscape of AI. This blog post delves into six key trends in GenAI: Large Language Models, Small Language Models, Large Multimodal Models, Open Source LLM models, LLM Agents, and Large Action Models.

| Trend | Key Characteristics | Examples |
| --- | --- | --- |
| Large Language Models | Enterprise Integration; Adaptability; Scalability | OpenAI GPT-4, Anthropic Claude, Google Gemini |
| Small Language Models | Domain-Specific Training; Cost Efficiency; Performance | IBM watsonx Granite, Microsoft Phi-2, Meta Llama 2, Mixtral 8x7B, BERT, FLAN |
| Large Multimodal Models | Comprehensive Understanding; Workflow Capabilities; Specialization and Cost | OpenAI Sora/GPT-4V, Google Gemini |
| Open Source LLM Models | Transparency; Customization; Community Contributions | IBM watsonx Prithvi, Microsoft Phi-2, Meta Llama 2, Mistral, Google Gemma |
| LLM Agents | Autonomous Task Execution; Integration with Plugins and Skills; Self-Healing | Microsoft Copilot, OpenAI GPTs, AutoGPT, Amazon Q |
| Large Action Models | Human Interaction Observation; Symbolic and Neural Network Integration; Adaptability | Rabbit R1 |

Large Language Models (LLM)

Large Language Models (LLMs) have become a cornerstone in the field of GenAI, particularly since the release of OpenAI’s ChatGPT on November 30, 2022. These models are capable of performing a wide range of tasks across multiple domains, leveraging massive datasets for pre-training.

Key Characteristics:

  • Enterprise Integration: LLMs can now be deployed securely within enterprise environments, easing earlier concerns around intellectual property and copyright infringement.
  • Adaptability: They can be fine-tuned with enterprise-specific data, enabling customization with minimal effort.
  • Scalability: LLMs are being deployed at scale across various industries, providing significant business value.

Examples: OpenAI GPT-4, Anthropic Claude, Google Gemini.
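
In practice, adaptability often starts with preparing enterprise data for supervised fine-tuning. As a minimal sketch (the chat-style JSONL schema below is a common convention, but exact field names vary by provider, and the helpdesk examples are hypothetical):

```python
import json

# Hypothetical enterprise Q&A pairs used as fine-tuning examples.
examples = [
    ("How do I reset my VPN token?", "Open the IT portal and choose 'Reset token'."),
    ("Where is the expense policy?", "See the Finance wiki under 'Travel & Expenses'."),
]

# Each record is one training example in a chat-style format.
records = [
    {
        "messages": [
            {"role": "system", "content": "You are an internal helpdesk assistant."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }
    for question, answer in examples
]

# Serialize one record per line (JSON Lines), the typical upload format.
jsonl = "\n".join(json.dumps(r) for r in records)
print(f"{len(jsonl.splitlines())} training records prepared")
```

A few hundred such records, drawn from internal documentation or support tickets, are often enough to noticeably steer a model's tone and domain knowledge.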

Small Language Models (SLM)

Small Language Models (SLMs) are designed to operate within narrow domains, offering specialized capabilities that outperform larger models in specific contexts.

Key Characteristics:

  • Domain-Specific Training: SLMs are pre-trained on data relevant to specific domains, making them highly effective for targeted applications.
  • Cost Efficiency: These models have a lower total cost of ownership, as they require less computational power and infrastructure.
  • Performance: Within their specialized domains, SLMs can surpass the performance of larger, more general models.

Examples: IBM watsonx Granite, Microsoft Phi-2, Meta Llama 2, Mixtral 8x7B, BERT, FLAN.
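
One driver of the lower total cost of ownership is simply memory. A back-of-the-envelope estimate (parameters times bytes per parameter, ignoring activations, KV cache, and framework overhead, so real requirements are higher) shows why a small model fits on commodity hardware while a large one needs a multi-GPU server:

```python
def weight_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Rough GPU memory needed just for the weights (fp16 = 2 bytes/param).

    This is only an order-of-magnitude sketch; inference overhead adds more.
    """
    return params_billion * 1e9 * bytes_per_param / 1e9

print(f"Phi-2 (2.7B params, fp16):  ~{weight_memory_gb(2.7):.1f} GB")
print(f"Llama 2 70B (fp16):         ~{weight_memory_gb(70):.1f} GB")
```

At roughly 5 GB versus 140 GB of weights alone, the smaller model runs on a single consumer GPU, which is exactly where the cost-efficiency argument comes from.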

Large Multimodal Models (LMM)

One of the most exciting developments in GenAI is the rise of Large Multimodal Models (LMMs): models that process and generate content across multiple modalities, breaking down the barriers between text, images, video, and audio. Imagine an AI that can generate a photorealistic image from a simple text description, compose a symphony inspired by a painting, or translate a video into multiple languages with convincing lip-syncing.

LMMs like OpenAI’s DALL-E 2 and Google’s Gemini are already showcasing astonishing capabilities. They’re not just tools for artistic expression; they’re being used to design new drugs, create personalized educational content, and even model complex scientific phenomena.

Key Characteristics:

  • Comprehensive Understanding: LMMs can analyze and synthesize information from various sources, providing a holistic understanding of content.
  • Workflow Capabilities: They support end-to-end workflows, from input processing to output generation.
  • Specialization and Cost: While these models are specialized and expensive, they represent a cutting-edge area of AI research.

Examples: OpenAI Sora/GPT-4V, Google Gemini.
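
Multimodal inputs are typically expressed as a single message whose content mixes modalities. The structure below is a hedged sketch modeled on the content-array convention used by vision-capable chat APIs; the model name and URL are placeholders, and field names differ between providers:

```python
import json

# Hypothetical request body mixing text and an image reference in one user turn.
request = {
    "model": "some-vision-model",  # placeholder, not a real endpoint
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What machine part is shown here?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/part.jpg"}},
            ],
        }
    ],
}

# The model sees both parts as a single prompt; the reply comes back as text.
print(json.dumps(request, indent=2)[:120])
```

The key design point is that the image is just another content part in the same turn, so the model can ground its answer in both modalities at once.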

Open Source LLM Models

Open Source LLM Models offer transparency and community-driven development, providing a foundation for innovation and customization.

The open-source movement is fueling a revolution in AI accessibility. Open-source LLM models like Meta’s LLaMA 2 and Stability AI’s Stable Diffusion are giving researchers, developers, and businesses the tools to create and customize powerful AI solutions without the need for massive computational resources.

This democratization of AI is accelerating innovation. It’s leading to a Cambrian explosion of new applications, from AI-powered medical diagnosis to personalized language tutors. It’s also sparking important conversations about ethics, safety, and the responsible use of AI.

Key Characteristics:

  • Transparency: These models offer insights into their training data and algorithms, allowing for greater scrutiny and understanding of biases.
  • Customization: Users can fine-tune open-source models with their own data, creating derivatives tailored to specific needs.
  • Community Contributions: Open-source models benefit from community input, enhancing their capabilities and robustness.

Examples: IBM watsonx Prithvi, Microsoft Phi-2, Meta Llama 2, Mistral, Google Gemma.
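
Customization of open-weight models is usually done with parameter-efficient methods such as LoRA, which trains a low-rank update B·A instead of the full weight matrix. A quick parameter count (illustrative dimensions, not tied to any specific model) shows why this makes fine-tuning affordable:

```python
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> tuple[int, int]:
    """Compare full fine-tuning vs LoRA for a single weight matrix.

    Full fine-tuning updates all d_out * d_in values; LoRA trains only the
    two low-rank factors B (d_out x r) and A (r x d_in).
    """
    full = d_out * d_in
    lora = d_out * rank + rank * d_in
    return full, lora

# A typical transformer projection, e.g. 4096 x 4096, adapted with rank 8.
full, lora = lora_trainable_params(4096, 4096, rank=8)
print(f"full: {full:,}  lora: {lora:,}  ({full // lora}x fewer trainable params)")
```

With 256x fewer trainable parameters per matrix, fine-tuning an open model on enterprise data becomes feasible on a single GPU rather than a training cluster.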

LLM Agents

LLM Agents are designed to autonomously complete complex tasks by planning and executing multiple steps.

Key Characteristics:

  • Autonomous Task Execution: LLM agents can break down complex, unstructured requests into manageable steps and execute them.
  • Integration with Plugins and Skills: These agents leverage available plugins and skills to perform tasks efficiently.
  • Self-Healing: They can recover from issues autonomously, ensuring continuity in task execution.

Examples: Microsoft Copilot, OpenAI GPTs, AutoGPT, Amazon Q.
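
The three characteristics above can be sketched as a toy agent loop: a planner breaks the request into steps, each step is dispatched to a registered plugin, and a failed step is retried (a crude stand-in for self-healing). Everything here is mocked; in a real agent, both planning and error recovery would be delegated to an LLM:

```python
from typing import Callable

# Plugin registry: named skills the agent can invoke.
PLUGINS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"results for '{q}'",
    "summarize": lambda text: text[:20] + "...",
}

def plan(request: str) -> list[tuple[str, str]]:
    """Mock planner: a real agent would ask an LLM to decompose the request."""
    return [("search", request), ("summarize", f"long notes about {request}")]

def run_agent(request: str, max_retries: int = 1) -> list[str]:
    outputs = []
    for tool, arg in plan(request):
        for attempt in range(max_retries + 1):
            try:
                outputs.append(PLUGINS[tool](arg))
                break
            except Exception:
                if attempt == max_retries:  # self-healing budget exhausted
                    outputs.append(f"step '{tool}' failed")
    return outputs

print(run_agent("quarterly sales trends"))
```

The separation of planner, plugin registry, and retry loop mirrors how frameworks in this space are typically structured, even though each piece here is only a few lines.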

Large Action Models (LAM)

Language is just one way humans express themselves. Large Action Models (LAMs) take AI beyond words, enabling it to interact with the world through actions.

LAMs learn by observing how we interact with digital interfaces, then transfer that learning to new contexts, automating complex workflows and even performing tasks in the physical world via robotics.

Closely related, LLM agents like AutoGPT are autonomous AI systems that can break down goals into tasks, plan their execution, and adapt to changing circumstances. They are like personal assistants on steroids, capable of booking flights, managing projects, and even conducting research.

Key Characteristics:

  • Human Interaction Observation: LAMs learn from observing how humans interact with various interfaces, such as websites and mobile apps.
  • Symbolic and Neural Network Integration: They combine symbolic reasoning with neural networks to enhance decision-making and action-taking.
  • Adaptability: LAMs can transfer their understanding of tasks to new interfaces, performing actions on behalf of users.

Example: Rabbit R1.
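
As a toy illustration of interface transfer (all interface and control names below are hypothetical): a LAM-style system records actions in abstract terms and binds them to whatever concrete interface it is driving, so a flow learned on one app can be replayed on another:

```python
# Abstract actions recorded from watching a user, independent of any one UI.
recorded_flow = [("open", "settings"), ("toggle", "dark_mode")]

# Per-interface bindings from abstract targets to concrete controls.
BINDINGS = {
    "mobile_app": {"settings": "gear_icon", "dark_mode": "appearance_switch"},
    "web_app": {"settings": "nav_settings_link", "dark_mode": "theme_checkbox"},
}

def replay(flow: list[tuple[str, str]], interface: str) -> list[str]:
    """Re-express the recorded flow against a new interface's controls."""
    binding = BINDINGS[interface]
    return [f"{verb} {binding[target]}" for verb, target in flow]

print(replay(recorded_flow, "web_app"))
```

A real LAM learns these bindings from observation rather than a hand-written table, but the transfer step, mapping abstract intent onto concrete controls, is the same idea.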

Market Landscape and Evolution

Large Language Models

Since the release of ChatGPT in late 2022, large language models have become a significant area for innovation. These models have transitioned to being more enterprise-friendly, allowing secure deployment behind enterprise firewalls and easy fine-tuning with enterprise data. This shift has made them indispensable tools for various clients, enabling large-scale production deployments.

Small Language Models

The market is also seeing a trend toward smaller language models that focus on narrow domains. These models, such as those trained on specific data sets like IBM’s Ansible scripts, can outperform larger models like GPT-4 in their specialized areas. The reduced total cost of ownership and efficient performance make them attractive for specific enterprise applications.

Large Multimodal Models

Large multimodal models are pushing the boundaries of AI capabilities. These models can process and integrate information from text, images, video, and audio, providing a comprehensive understanding of complex scenes. Despite being niche and expensive, their capabilities are expected to improve significantly throughout 2024.

Open Source LLM Models

Open-source models, though varying in licensing openness, offer significant advantages in terms of transparency and customization. Models like Meta’s and Google’s provide the code or weights used for training, allowing enterprises to fine-tune them with their own data. This trend is fostering a collaborative environment that enhances model capabilities.

LLM Agents and Large Action Models

LLM agents are transforming how tasks are executed autonomously, breaking down complex requests into actionable steps and integrating with backend systems. Large action models are evolving from traditional RPA bots by learning how humans interact with software and transferring that knowledge to new interfaces. These advancements are poised to change the industry landscape in 2024.

Towards Artificial General Intelligence (AGI)

The convergence of these trends points towards a future where AI transcends its current limitations and approaches artificial general intelligence (AGI). AGI is the hypothetical ability of an AI system to understand or learn any intellectual task that a human being can. While we’re not there yet, the rapid progress in GenAI suggests that AGI may be closer than we think.

Imagine an AI that can not only understand complex scientific literature but also design experiments, interpret results, and even propose new theories. Or an AI that can collaborate with humans on creative projects, generating ideas, refining concepts, and producing finished works of art, music, or literature.

Fig 1 - Picture generated with Generative AI

The Future: A Symphony of Intelligence

The future of generative AI is not just about machines that can mimic human creativity or automate tasks. It’s about a new kind of intelligence, a symphony of senses and actions that can augment our own abilities, solve complex problems, and open up new frontiers of knowledge and experience.

This future is not without its challenges. We need to address issues like bias, misinformation, and the potential for misuse. But if we can navigate these challenges responsibly, the potential of generative AI is limitless. It could lead to breakthroughs in medicine, education, scientific discovery, and countless other fields. It could even help us tackle some of the most pressing global challenges, from climate change to poverty.

The era of generative AI is just beginning. It’s a time of tremendous excitement and potential. As we continue to explore this new frontier, one thing is clear: the future of AI is not just intelligent; it’s generative.

Fig 2 - Picture generated with Generative AI


The trends in Generative AI are reshaping the AI landscape, offering innovative solutions across various domains. From the versatility of large language models to the specialization of small language models, the comprehensive capabilities of multimodal models, the transparency of open-source models, the autonomy of LLM agents, and the adaptability of large action models, each trend represents a significant leap forward. As these technologies continue to evolve, they promise to deliver unprecedented value and efficiency across industries, paving the way for a future where AI is seamlessly integrated into everyday business operations.

