
Generative AI Notes Class 12 AI (843) | Easy and Quick Revision

You’ve landed in the best place for Generative AI notes for Class 12! All aligned with the latest CBSE syllabus. Your Board prep just got a lot easier!

Generative AI

  • Generative AI is a branch of Artificial Intelligence that creates new content such as text, images, audio, and more.
  • It works by learning patterns from existing data and generating new outputs similar to its training examples using machine learning algorithms.
  • Examples of Generative AI include ChatGPT, Gemini, Claude, and DALL·E.

Working of Generative AI

  • Generative AI learns patterns from data and autonomously generates similar samples using deep learning.
  • It operates using neural networks that help understand complex and intricate patterns in data.
  • It is used for generating different types of content such as images, text, and more.
  • Important models used in Generative AI include Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).

Generative Adversarial Networks (GANs):

  • GANs are a type of neural network architecture used in Generative AI.
  • They consist of two networks: a generator and a discriminator.
  • The generator creates new data samples such as images or text (fake data), while the discriminator evaluates these samples to determine whether the data is real or fake.
  • The generator tries to produce data that looks real, while the discriminator tries to detect fake data.
  • Both networks compete with each other in a process called adversarial training.
  • Through this competition, GANs gradually improve and generate highly realistic outputs.
  • GANs are used in image generation, style transfer, and data augmentation.
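The adversarial loop described above can be sketched in a few lines. Below is a minimal, illustrative 1-D "GAN" in NumPy: the generator is just a line a·z + b trying to mimic data from N(3, 1), and the discriminator is a tiny logistic classifier. The learning rate, target distribution, and step count are made-up choices for the sketch, not values from any real system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: fake = a*z + b, trying to mimic real data from N(3, 1)
gen = {"a": 1.0, "b": 0.0}
# Discriminator: logistic classifier D(x) = sigmoid(w*x + c)
disc = {"w": 0.0, "c": 0.0}
lr = 0.05

for step in range(2000):
    z = rng.standard_normal(64)            # random noise input
    fake = gen["a"] * z + gen["b"]         # generator's samples
    real = rng.normal(3.0, 1.0, 64)        # real data samples

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(disc["w"] * x + disc["c"])
        grad = p - label                   # gradient of BCE w.r.t. the logit
        disc["w"] -= lr * np.mean(grad * x)
        disc["c"] -= lr * np.mean(grad)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator
    p = sigmoid(disc["w"] * fake + disc["c"])
    grad_fake = (p - 1.0) * disc["w"]      # chain rule through D
    gen["a"] -= lr * np.mean(grad_fake * z)
    gen["b"] -= lr * np.mean(grad_fake)

# The generator's offset b should drift toward the real mean (3.0)
# purely through competition with the discriminator.
print(round(gen["b"], 2))
```

Note that neither network ever sees the "answer" directly: the generator improves only because the discriminator keeps getting better at catching it.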

Variational Autoencoders (VAEs):

  • VAEs are neural network models designed to learn patterns from data in a structured, probabilistic way.
  • They consist of two parts: an encoder and a decoder.
  • The encoder converts input data into a compressed form called a latent space.
  • Latent space is a compressed representation of the original data.
  • The decoder converts this latent space back into the original data format.
  • Unlike GANs, which learn through competition, VAEs learn the underlying structure of the data and generate new samples by sampling from the latent space.
  • VAEs are used for data generation, anomaly detection, and filling missing data.
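The encoder → latent space → decoder flow can be sketched as follows. The weights below are hand-picked toy values (not a trained model) just to show the three steps, including the "reparameterization trick" that keeps the sampling step differentiable during training.

```python
import numpy as np

rng = np.random.default_rng(42)

def encoder(x):
    """Toy encoder: compress a 4-D input into the mean and log-variance
    of a 2-D latent Gaussian (weights are hand-picked, not trained)."""
    W_mu = np.array([[0.5, 0.0],
                     [0.0, 0.5],
                     [0.5, 0.0],
                     [0.0, 0.5]])
    mu = x @ W_mu
    log_var = np.zeros(2)          # fixed unit variance, for simplicity
    return mu, log_var

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps so the sampling step stays
    differentiable during training (the reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decoder(z):
    """Toy decoder: expand the 2-D latent vector back to 4-D data space."""
    W = np.array([[1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0]])
    return z @ W

x = np.array([1.0, 2.0, 1.0, 2.0])   # original data point
mu, log_var = encoder(x)             # compress into latent space
z = reparameterize(mu, log_var)      # sample a latent point
x_hat = decoder(z)                   # reconstruct in the original format
print(mu, x_hat.shape)
```

Sampling different points near `mu` and decoding them is exactly how a trained VAE generates new, similar data.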

Comparison of GANs and VAEs:

  • Both GANs and VAEs are powerful generative models.
  • GANs are better for generating highly realistic visual outputs.
  • VAEs are better for structured data generation and interpretable latent spaces.

Generative and Discriminative Models

Discriminative Models

  • Discriminative models are used to distinguish between different classes or categories of data.
  • They focus on learning the boundary between classes based on input features.
  • These models do not generate new data; they only classify or predict labels.
  • They answer questions like “Which category does this data belong to?”
  • Example: Classifying an email as spam or not spam based on words and patterns.
  • They are mainly used for classification tasks in machine learning.
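A minimal sketch of the idea: the model below learns only a boundary (a threshold) between two classes, so it can answer "which category?" but cannot generate a new email. The "spam-word score" feature values are made up for illustration.

```python
# Made-up "spam-word score" features for a handful of emails
spam_scores     = [4.1, 5.0, 3.8, 4.6]
not_spam_scores = [0.2, 1.1, 0.5, 0.9]

def mean(xs):
    return sum(xs) / len(xs)

# Learn only the boundary: place it midway between the two class means
boundary = (mean(spam_scores) + mean(not_spam_scores)) / 2

def classify(score):
    """Answer 'which category?' -- nothing here can generate a new email."""
    return "spam" if score > boundary else "not spam"

print(classify(4.0), classify(0.3))
```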

Generative Models

  • Generative models learn the underlying distribution of data.
  • They try to understand how the data is formed and then generate new similar data.
  • These models can create new samples like images, text, or audio.
  • They are based on mathematical concepts such as probability and statistics.
  • They help in handling large datasets by generating meaningful new data.
  • Example: Creating new images of faces that look real but do not exist in reality.
  • They are mainly used for data generation and modeling complex data patterns.
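A generative model, by contrast, learns each class's distribution, so the same model can both classify (via Bayes' rule) and sample brand-new data. The sketch below is a toy Gaussian Naïve Bayes on made-up 1-D "spam score" data.

```python
import math
import random

random.seed(0)

# Made-up 1-D "spam score" data for each class
spam     = [random.gauss(4.0, 1.0) for _ in range(200)]
not_spam = [random.gauss(0.0, 1.0) for _ in range(200)]

def fit(xs):
    """Learn the class's distribution: its mean and standard deviation."""
    mu = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    return mu, sd

params = {"spam": fit(spam), "not_spam": fit(not_spam)}

def gauss_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def classify(x):
    """Bayes rule (equal priors): pick the class most likely to produce x."""
    return max(params, key=lambda c: gauss_pdf(x, *params[c]))

def generate(cls):
    """Because we modelled the distribution, we can also sample NEW data."""
    mu, sd = params[cls]
    return random.gauss(mu, sd)

print(classify(3.8), classify(0.2))
```

The `generate` function is the key difference: a discriminative model has no equivalent, because it never learned how the data itself is formed.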

Differences between Generative AI and Discriminative AI:

  • Purpose (what is it for?): Generative AI helps create things like images and stories and can spot unusual patterns; it learns from data without needing to be told precisely what to do. Discriminative AI determines what something is, or which class it belongs to, by looking at its features; it is good at telling different things apart and making decisions based on that.
  • Models (what are they like?): Generative AI uses methods like making models compete or predicting patterns to create new content. Discriminative AI learns rules to separate data and recognize patterns, such as distinguishing between a dog and a cat.
  • Training focus (what did they learn during training?): Generative AI tries to understand what makes the data unique and how to generate similar but new data. Discriminative AI focuses on learning decision boundaries or rules that separate data based on features.
  • Application (how are they used in the real world?): Generative AI is used in creating artworks, generating story ideas, and detecting unusual patterns in data. Discriminative AI is used in facial recognition, speech recognition, and classification tasks like spam detection.
  • Examples of algorithms used: Generative: GANs, VAEs, LLMs, DBMs, autoregressive models, Naïve Bayes, Gaussian Discriminant Analysis. Discriminative: Logistic Regression, Decision Trees, SVM, Random Forest.

Applications of Generative AI

Image Generation

  • Involves creating new images based on patterns learned from existing datasets.
  • AI models analyze features of input images and generate new images with similar characteristics.
  • Produces visuals that resemble previously seen images, such as realistic or artistic outputs.
  • Example: Generating new cat images based on training data.
  • Tools/Examples: Canva, DALL·E, Stability AI, Stable Diffusion.
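A heavily simplified sketch of "learning patterns and generating similar images": treat each pixel position as a distribution fitted to tiny 2×2 training images, then sample a new image from it. Real systems like Stable Diffusion are far more sophisticated; this only illustrates the learn-then-sample idea, and all pixel values are invented.

```python
import random

random.seed(7)

# Three tiny 2x2 "training images" (grayscale values between 0 and 1)
train = [
    [[0.9, 0.1], [0.1, 0.9]],
    [[0.8, 0.2], [0.2, 0.8]],
    [[1.0, 0.0], [0.0, 1.0]],
]

def pixel_stats(i, j):
    """Learn the pattern at one pixel position: its mean and spread."""
    vals = [img[i][j] for img in train]
    mu = sum(vals) / len(vals)
    sd = (sum((v - mu) ** 2 for v in vals) / len(vals)) ** 0.5
    return mu, sd

# Generate a new image by sampling each pixel around its learned pattern
new_image = [[min(1.0, max(0.0, random.gauss(*pixel_stats(i, j))))
              for j in range(2)] for i in range(2)]
print(new_image)
```

The generated image is new (it appears in no training example) yet shares the bright-diagonal pattern the training set exhibits.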

Text Generation

  • Involves generating written content that sounds like it is written by humans.
  • AI learns from large amounts of text data to understand language patterns.
  • Produces meaningful and context-based sentences or stories.
  • Example: AI writing a story that feels human-authored.
  • Tools/Examples: ChatGPT (OpenAI), Perplexity, Google Bard (Gemini).
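The learn-language-patterns-then-generate idea long predates LLMs in the form of n-gram models. The toy bigram model below learns which word follows which in a tiny made-up corpus and generates new text; it is a sketch of the principle, not of how ChatGPT actually works.

```python
import random
from collections import defaultdict

random.seed(0)

# A tiny corpus standing in for the huge text datasets real models use
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Learn the language "pattern": which word tends to follow which
follows = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1].append(w2)

def generate(start, n_words=6):
    """Generate text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(n_words):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

LLMs replace the bigram table with a neural network conditioned on far more context, but the generation loop (predict, sample, append, repeat) is the same.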

Video Generation

  • Involves creating new videos by learning from existing video data.
  • AI can generate animations, visual effects, or realistic video scenes.
  • Produces videos that look authentic and professionally created.
  • Example: AI generating a movie-like scene.
  • Tools/Examples: Google Lumiere, Deepfake algorithms.

Audio Generation

  • Involves generating new audio such as music, voices, or sound effects.
  • AI learns from existing audio recordings to create new sound patterns.
  • Produces music or speech that sounds natural and realistic.
  • Example: AI composing a song that sounds like a real band performed it.
  • Tools/Examples: Meta Voicebox, Google MusicLM.

LLM – Large Language Model

  • A Large Language Model (LLM) is a deep learning-based model used for Natural Language Processing (NLP) tasks.
  • It can perform tasks such as text generation, text classification, question answering, and language translation.
  • LLMs are called “large” because they are trained on massive datasets containing huge amounts of text and code, sometimes including trillions of words.
  • The performance of an LLM depends on the quality and size of the training data used.
  • These models are widely used in conversational AI systems and language-based applications.
  • Example tasks include chatting with users, summarizing text, and translating languages.

Transformers in LLMs:

  • Transformers are a type of neural network architecture that has revolutionized Natural Language Processing (NLP), especially in Large Language Models (LLMs).
  • They help in efficient learning of complex language patterns and relationships within large amounts of text data.
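The core operation inside a Transformer is scaled dot-product attention. Below is a minimal NumPy version, using random vectors as stand-ins for real token embeddings; the sizes (3 tokens, dimension 4) are arbitrary illustration choices.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each token's output is a weighted
    mix of all value vectors, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # pairwise token similarities
    weights = softmax(scores)         # each row sums to 1
    return weights @ V, weights

# 3 toy "tokens" with embedding size 4 (random stand-ins for embeddings)
rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((3, 4)) for _ in range(3))
out, weights = attention(Q, K, V)
print(out.shape)                      # one context-aware vector per token
```

Because every token attends to every other token in one matrix operation, Transformers capture long-range relationships in text far more efficiently than earlier sequential architectures.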

Some leading Large Language Models (LLMs):

  • OpenAI’s GPT-4o: Multimodal model that processes and generates both text and images.
  • Google’s Gemini 1.5 Pro: Supports multimodal capabilities for text, image, and speech understanding.
  • Meta’s LLaMA 3.1: Open-source model optimized for efficient performance in various AI tasks.
  • Anthropic’s Claude 3.5: Focuses on safety and interpretability in language model interactions.
  • Mistral AI’s Mixtral 8x7B: Uses sparse mixture of experts for better performance with smaller model size.

Applications of LLMs:

Text Generation:

  • LLMs are used for text generation tasks like content creation, dialogue generation, story writing, and poetry writing.
  • They generate coherent and context-based text from given prompts.
  • They can translate natural language descriptions into working code.
  • They help in autocompleting text and generating sentence or paragraph continuations (e.g., email auto-completion, writing tools).

Audio Generation:

  • LLMs do not directly generate audio signals.
  • They support audio generation through text-to-speech (TTS) systems.
  • LLMs generate text scripts or descriptions that are converted into natural-sounding speech by TTS systems.

Image Generation:

  • LLMs are used for image captioning tasks.
  • They generate textual descriptions or captions for images.
  • They do not directly create images but help in understanding visual content through text.

Video Generation:

  • LLMs help in video-related tasks by generating textual descriptions or scripts.
  • These descriptions can be used for subtitles, captions, or scene summaries.
  • This improves video accessibility and searchability.

Limitations of LLM:

  • Processing text requires significant computational resources, leading to high response time and costs.
  • LLMs optimize for fluent, natural-sounding language rather than factual accuracy, so they can state incorrect or misleading information with high confidence (often called hallucination).
  • LLMs may memorize specific details instead of generalizing, leading to poor adaptability.

Risks associated with LLM:

  • Since LLMs are trained on internet text, they may exhibit biases, and there are concerns about data privacy when personal information is processed.
  • Using sensitive data in training can unintentionally reveal confidential information.
  • Carefully designed or misleading inputs (adversarial prompts) may cause harmful or illogical outputs.

Future of Generative AI:

  • The future of AI focuses on developing advanced architectures that go beyond current capabilities while ensuring ethical and responsible use.
  • Generative AI will help solve complex problems in fields like healthcare and education.
  • It will improve Natural Language Processing (NLP) tasks such as multilingual translation.
  • It will expand in multimedia content creation like text, images, audio, and video.
  • Human-AI collaboration will increase, with AI acting as a supportive partner across different domains.

Ethical and Social Implications of Generative AI:

Deepfake Technology:

  • Deepfake technology raises concerns about the authenticity of digital content.
  • Tools like DeepFaceLab and FaceSwap can create fake images, audio, and videos.
  • This can reduce trust in media and increase misinformation.
  • Example: Deepfake videos can misuse a person’s face without consent, causing privacy violations and reputational harm.

Bias and Discrimination:

  • Generative AI models can show bias against certain groups.
  • This can increase social inequality and reinforce stereotypes.
  • Example: AI hiring systems like HireVue may reflect bias based on past hiring data, affecting diversity and fairness.

Plagiarism:

  • Using AI-generated content as personal work raises ethical concerns.
  • It affects intellectual property rights and academic honesty.
  • If AI output closely matches copyrighted content, it may lead to legal issues.

Transparency:

  • It is important to clearly disclose the use of generative AI.
  • Lack of transparency can reduce trust and accountability.
  • Not informing about AI usage can affect academic and professional credibility.

Citing Sources with Generative AI:

  • Intellectual Property: Proper attribution must be given to AI-generated content to respect original creators and follow copyright laws.
  • Accuracy: AI-generated information should be verified for reliability, and primary data sources should be cited whenever possible to maintain credibility.
  • Ethical Use: AI tools should be acknowledged, and context for generated content should be provided to ensure transparency and ethical usage.

Citation Example:

  • Treat the AI as the author and mention the tool name (e.g., Bard) as a Generative AI tool.
  • Use the date when the AI-generated content was received, not the tool’s release date.
  • Optionally include the prompt used to generate the response for reference.

Example (APA style):

  • Bard (Generative AI tool). (2024, February 20). How to cite generative AI in APA style.
  • (Optional): “Prompt: Explain how to cite generative AI in APA style.”
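The citation pattern above can be captured in a small helper. The function name and fields below are illustrative, not a standard API:

```python
from datetime import date

def cite_ai(tool, accessed, title, prompt=None):
    """Build an APA-style reference for AI-generated content.
    'accessed' is the date the content was received, not the tool's
    release date; the prompt line is optional."""
    ref = f"{tool} (Generative AI tool). ({accessed:%Y, %B %d}). {title}."
    if prompt:
        ref += f' Prompt: "{prompt}"'
    return ref

print(cite_ai("Bard", date(2024, 2, 20),
              "How to cite generative AI in APA style"))
```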
