Understanding Generative AI
Artificial Intelligence (AI) has made remarkable strides in recent years, revolutionizing various fields. Within AI, an exciting new approach called generative AI has emerged, taking a leap forward by enabling machines to create entirely new content, such as images, music, and even investment insights.
Image: created by DALL·E 2
A closer look at recent breakthroughs in AI reveals substantial changes in how machine learning and deep learning models are trained. While traditional models handled data in a discriminative way, learning mappings from individual data points to labels, newer models have evolved to grasp a deeper understanding of the data by considering its distribution and meta-level properties. This shift signifies a more holistic approach in which models analyze data at a higher level, incorporating a broader and more powerful perspective on its characteristics.

This progress in training methods has allowed modern AI models to go beyond learning and inferring from a given dataset. By learning the structures and patterns inherent in the training data, they can produce new data with comparable characteristics. These models are known as generative AI models.

Generative AI refers to a class of algorithms and models that can create original and authentic content. Rather than relying solely on predefined patterns, generative AI systems learn from vast datasets and then generate new outputs that resemble the original training data. These outputs are not mere replicas but unique creations with a touch of creativity: generative AI allows machines to go beyond mimicking existing patterns and venture into the realm of imagination.
How is Generative AI Different?
In a recent talk, aisot’s Senior Deep Learning Engineer, Dr. Thomas Asikis, compared generative and discriminative AI models to the art world: “A modern generative model is often trained by interacting with a discriminative model. You can think of generative models as artists and of discriminative models as art critics. Generative models typically try to impress discriminative models while the discriminative models try to be unimpressed.”
Unlike pure discriminative AI models, which map inputs directly to outputs, generative AI models learn the statistical relationships between inputs and outputs in order to model the joint probability distribution P(X,Y). This allows generative models to create new samples from the learned distribution rather than simply classify or label samples.

For example, generative adversarial networks (GANs) are able to generate highly realistic synthetic images after training on large datasets of real images. The generator model in a GAN learns to map random noise to outputs that match the overall distribution of training images. Meanwhile, the discriminator model tries to classify outputs as real or fake. The two models are pitted against each other until the generator can reliably fool the discriminator. Notably, this generative modeling process does not require the initial training images to be labeled or classified.

Generative models like GANs and VAEs (variational autoencoders) are enabling breakthroughs in image generation, audio synthesis, and drug discovery. However, discriminative models that directly map inputs to outputs remain essential for prediction-focused tasks like image classification. Generative models are not automatically better than discriminative models; the choice between the two depends on an application's specific goals and data availability.
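The distinction can be illustrated with a toy sketch. The code below (a deliberately simple illustration, not a GAN) fits a generative model of the joint distribution P(X,Y) over a tiny made-up discrete dataset by counting, then samples brand-new (x, y) pairs from it; the discriminative counterpart only needs P(Y|X), enough to predict a label but not to generate data:

```python
import random
from collections import Counter

# Tiny made-up dataset of (feature, label) pairs -- purely illustrative.
data = [("sunny", "walk"), ("sunny", "walk"), ("rainy", "bus"),
        ("rainy", "bus"), ("rainy", "walk"), ("sunny", "bus")]

# Generative view: estimate the full joint distribution P(X, Y) by counting.
joint_counts = Counter(data)
total = sum(joint_counts.values())
joint_p = {pair: c / total for pair, c in joint_counts.items()}

def sample_pair(rng):
    """A generative model can sample entirely new (x, y) pairs from P(X, Y)."""
    pairs, weights = zip(*joint_p.items())
    return rng.choices(pairs, weights=weights, k=1)[0]

def predict_label(x):
    """Discriminative view: only P(Y | X) is needed to pick the likeliest label."""
    cond = {pair: p for pair, p in joint_p.items() if pair[0] == x}
    return max(cond, key=cond.get)[1]

rng = random.Random(0)
new_samples = [sample_pair(rng) for _ in range(5)]
print(new_samples)
print(predict_label("rainy"))  # most likely label given x = "rainy"
```

In a real GAN the counting step is replaced by a neural generator trained against a neural discriminator, but the end capability is the same: drawing new samples that follow the learned distribution.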
Use of Generative and Discriminative Models in Asset Management
Generative and discriminative AI models both enable important capabilities for asset managers, albeit through different modeling approaches. Generative models learn the complex joint probability distributions underlying financial markets and economic variables. Models like generative adversarial networks (GANs) and variational autoencoders (VAEs) analyze relationships between a diverse set of factors including asset prices, volatility, trading volume, macroeconomic indicators, sentiment, and more. By modeling these multivariate distributions, generative AI can synthesize entirely new simulated scenarios for stress testing portfolio strategies. Asset managers can assess how their investments and systems would fare under rare events or crisis situations that resemble historically observed conditions but are not identical. Having access to a large bank of generative AI-produced simulations allows for more rigorous risk management. Beyond stress testing, generative models can also create new training data to augment limited historical samples for building other ML models.
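As a minimal sketch of scenario synthesis (using a multivariate Gaussian as the generative model rather than a GAN or VAE, and with made-up return figures), the code below fits the model to historical returns of two hypothetical assets and draws synthetic stress-test scenarios from it:

```python
import math
import random

# Hypothetical daily returns for two assets (illustrative numbers only).
returns_a = [0.010, -0.004, 0.006, -0.012, 0.008, 0.002, -0.006, 0.004]
returns_b = [0.006, -0.002, 0.004, -0.010, 0.005, 0.001, -0.004, 0.003]

n = len(returns_a)
mu_a = sum(returns_a) / n
mu_b = sum(returns_b) / n

# Sample variances and covariance of the two return series.
var_a = sum((x - mu_a) ** 2 for x in returns_a) / (n - 1)
var_b = sum((y - mu_b) ** 2 for y in returns_b) / (n - 1)
cov_ab = sum((x - mu_a) * (y - mu_b)
             for x, y in zip(returns_a, returns_b)) / (n - 1)

# Cholesky factor of the 2x2 covariance, used to correlate Gaussian noise.
l11 = math.sqrt(var_a)
l21 = cov_ab / l11
l22 = math.sqrt(var_b - l21 ** 2)

def sample_scenario(rng):
    """Draw one synthetic (return_a, return_b) scenario from the fitted model."""
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    return (mu_a + l11 * z1, mu_b + l21 * z1 + l22 * z2)

rng = random.Random(42)
scenarios = [sample_scenario(rng) for _ in range(10_000)]

# Simple stress metric: worst simulated equal-weight portfolio return.
worst = min((ra + rb) / 2 for ra, rb in scenarios)
print(f"worst simulated portfolio return: {worst:.4%}")
```

A GAN or VAE plays the same role as the Gaussian here, but can capture fat tails, volatility clustering, and other non-Gaussian features that simple parametric models miss.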
In contrast, discriminative AI models directly learn mappings from inputs to outputs for prediction tasks. Feedforward neural networks trained by backpropagation are commonly used in finance for price forecasting, algorithmic trading signals, and identifying overpriced assets. Discriminative models excel at leveraging complex patterns in historical data to make specific predictions about the future. Their representational power and ability to capture nonlinear relationships make them well-suited to the noisy and dynamic nature of financial markets. In summary, generative AI provides the ability to synthesize new scenarios, while discriminative AI delivers powerful predictive modeling; both are invaluable for modern data-driven asset management.
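To make the discriminative side concrete, here is a minimal sketch (not any production model): a single logistic neuron, the simplest feedforward unit, trained by gradient descent on cross-entropy loss to predict next-period direction from two standardized lagged-return features on synthetic momentum data:

```python
import math
import random

# Synthetic task: predict next-period direction (1 = up) from two
# standardized lagged-return features. Purely illustrative data.
rng = random.Random(0)
data = []
for _ in range(200):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    label = 1.0 if z1 + z2 > 0 else 0.0  # direction follows recent momentum
    data.append(((z1, z2), label))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Single logistic neuron trained by stochastic gradient descent:
# the simplest discriminative mapping from inputs to a prediction.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.1
for epoch in range(100):
    for (x1, x2), y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        err = p - y  # gradient of cross-entropy loss w.r.t. the logit
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

def predict_up_probability(x1, x2):
    return sigmoid(w1 * x1 + w2 * x2 + b)

accuracy = sum(
    (predict_up_probability(x1, x2) > 0.5) == (y == 1.0)
    for (x1, x2), y in data
) / len(data)
print(f"in-sample accuracy: {accuracy:.0%}")
```

Real forecasting models stack many such units into deep networks and validate out of sample, but the core discriminative recipe is the same: learn a direct mapping from input features to a predicted output.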