In this episode, Vishnu and Alex reflect on Wisecube’s 8-year journey and over 15 years of experience in AI and NLP. They discuss their early search engine work built on TF-IDF, building knowledge graphs with Orpheus, addressing LLM reliability with Pythia, key milestones in AI development, and the evolution of NLP. Topics include the Eliza effect, real-world healthcare and research applications, computer-aided coding (CAC), drug discovery, and Wisecube's recent acquisition by John Snow Labs. They also look at the future of NLP and AI in healthcare.

Alex Thomas's book, "Natural Language Processing with Spark NLP: Learning to Understand Text at Scale": https://www.amazon.com/Natural-Language-Processing-Spark-NLP/dp/1492047767

Timestamps:
00:00 Introduction and Personal Notes
01:13 Wisecube is Now Part of John Snow Labs!
02:15 History and Evolution of NLP
03:27 Early Search Engine Projects
07:55 CAC (Computer-Aided Coding) Healthcare Project
18:05 Drug Discovery Research
28:12 Knowledge Graphs and Orpheus/Pythia Projects
35:51 Future Outlook and Conclusion

Available on:
• YouTube: https://youtube.com/@WisecubeAI/podcasts
• Apple Podcast: https://apple.co/4kPMxZf
• Spotify: https://open.spotify.com/show/1nG58pwg2Dv6oAhCTzab55
• Amazon Music: https://bit.ly/4izpdO2

Follow us:
- John Snow Labs: https://www.johnsnowlabs.com/?utm_source=acquisition&utm_medium=link&utm_campaign=wisecube
- LinkedIn: https://www.linkedin.com/company/wisecube/

#AI #NLP #LLM #MachineLearning #KnowledgeGraphs #ArtificialIntelligence #DataScience #HealthcareAI #StartupJourney #AIResearch #DrugDiscovery #NaturalLanguageProcessing
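As background for the search-engine segment, here is a minimal TF-IDF retrieval sketch using scikit-learn. It only illustrates the general idea of TF-IDF ranking; the toy corpus, query, and settings are invented for this example and are not connected to Orpheus or any project discussed in the episode.

```python
# Minimal TF-IDF retrieval sketch (illustrative only; not code from the episode).
# Requires scikit-learn: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for documents a search engine might index.
docs = [
    "Knowledge graphs link entities extracted from biomedical text.",
    "TF-IDF weights terms by frequency in a document and rarity across the corpus.",
    "Drug discovery pipelines mine clinical trials and research papers.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(docs)              # one sparse TF-IDF vector per document

query_vector = vectorizer.transform(["tf-idf search over documents"])
scores = cosine_similarity(query_vector, doc_vectors)[0]  # similarity of the query to each doc

# Rank documents by relevance to the query, highest score first.
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.3f}  {docs[idx]}")
```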
--------
38:20
LLM Fine-Tuning: RLHF vs DPO and Beyond
In this episode of Gradient Descent, we explore two competing approaches to fine-tuning LLMs: Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO). Dive into the mechanics of RLHF, its computational challenges, and how DPO simplifies the process by eliminating the need for a separate reward model. We also discuss supervised fine-tuning, emerging methods like Identity Preference Optimization (IPO) and Kahneman-Tversky Optimization (KTO), and their real-world applications in models like Llama 3 and Mistral. Learn practical LLM optimization strategies, including task modularization to boost performance without extensive fine-tuning.

Timestamps:
Intro - 0:00
Overview of LLM Fine-Tuning - 00:48
Deep Dive into RLHF - 02:46
Supervised Fine-Tuning vs. RLHF - 10:38
DPO and Other RLHF Alternatives - 14:43
Real-World Applications in Frontier Models - 22:23
Practical Tips for LLM Optimization - 25:18
Closing Thoughts - 36:05

References:
[1] Training language models to follow instructions with human feedback https://arxiv.org/abs/2203.02155
[2] Direct Preference Optimization: Your Language Model is Secretly a Reward Model https://arxiv.org/abs/2305.18290
[3] Hugging Face Blog on DPO: Simplifying Alignment: From RLHF to Direct Preference Optimization (DPO) https://huggingface.co/blog/ariG23498/rlhf-to-dpo
[4] Comparative Analysis: RLHF and DPO Compared https://crowdworks.blog/en/rlhf-and-dpo-compared/
[5] YouTube Explanation: How to fine-tune LLMs directly without reinforcement learning https://www.youtube.com/watch?v=k2pD3k1485A

Listen on:
• Apple Podcasts: https://podcasts.apple.com/us/podcast/gradient-descent-podcast-about-ai-and-data/id1801323847
• Spotify: https://open.spotify.com/show/1nG58pwg2Dv6oAhCTzab55
• Amazon Music: https://music.amazon.com/podcasts/79f6ed45-ef49-4919-bebc-e746e0afe94c/gradient-descent---podcast-about-ai-and-data
• YouTube: https://youtube.com/@WisecubeAI/podcasts

Our solutions:
- https://askpythia.ai/ - LLM Hallucination Detection Tool
- https://www.wisecube.ai - Wisecube AI platform for large-scale biomedical knowledge analysis

Follow us:
- Pythia Website: https://askpythia.ai/
- Wisecube Website: https://www.wisecube.ai
- LinkedIn: https://www.linkedin.com/company/wisecube/
- Facebook: https://www.facebook.com/wisecubeai
- Twitter: https://x.com/wisecubeai
- Reddit: https://www.reddit.com/r/pythia/
- GitHub: https://github.com/wisecubeai

#FineTuning #LLM #RLHF #AI #MachineLearning #AIDevelopment
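For readers who want to see how DPO removes the separate reward model, here is a minimal PyTorch sketch of the DPO objective from the paper in reference [2]. It is illustrative only; the tensor names, beta value, and dummy inputs are assumptions for the example, not code discussed in the episode.

```python
# Minimal sketch of the DPO loss (per arXiv:2305.18290); illustrative, not production code.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each input is the summed log-probability of a full response under the
    trainable policy or the frozen reference model, shape (batch,)."""
    # How much the policy prefers each response relative to the reference model.
    chosen_margin = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_margin = beta * (policy_rejected_logps - ref_rejected_logps)
    # Train directly on preference pairs: no separate reward model, no RL loop.
    return -F.logsigmoid(chosen_margin - rejected_margin).mean()

# Dummy log-probs just to show the call; in practice these come from model forward passes.
batch = 4
loss = dpo_loss(torch.randn(batch), torch.randn(batch),
                torch.randn(batch), torch.randn(batch))
print(loss.item())
```

In RLHF, the same preference data would first train a reward model, and the policy would then be optimized against it with PPO; the sketch shows why DPO is often described as collapsing those two stages into one supervised-style loss.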
--------
37:36
The Future of Prompt Engineering: Prompts to Programs
Explore the evolution of prompt engineering in this episode of Gradient Descent. Manual prompt tuning — slow, brittle, and hard to scale — is giving way to DSPy, a framework that turns LLM prompting into a structured, programmable, and optimizable process. Learn how DSPy’s modular approach — with Signatures, Modules, and Optimizers — enables LLMs to tackle complex tasks like multi-hop reasoning and math problem solving, achieving accuracy comparable to much larger models. We also dive into real-world examples, optimization strategies, and why the future of prompting looks a lot more like programming.

Listen to our podcast on these platforms:
• YouTube: https://youtube.com/@WisecubeAI/podcasts
• Apple Podcasts: https://apple.co/4kPMxZf
• Spotify: https://open.spotify.com/show/1nG58pwg2Dv6oAhCTzab55
• Amazon Music: https://bit.ly/4izpdO2

Mentioned Materials:
• DSPy Paper - https://arxiv.org/abs/2310.03714
• DSPy official site - https://dspy.ai/
• DSPy GitHub - https://github.com/stanfordnlp/dspy
• LLM abstractions guide - https://www.twosigma.com/articles/a-guide-to-large-language-model-abstractions/

Our solutions:
- https://askpythia.ai/ - LLM Hallucination Detection Tool
- https://www.wisecube.ai - Wisecube AI platform for large-scale biomedical knowledge analysis

Follow us:
- Pythia Website: https://askpythia.ai/
- Wisecube Website: https://www.wisecube.ai
- LinkedIn: https://www.linkedin.com/company/wisecube/
- Facebook: https://www.facebook.com/wisecubeai
- Twitter: https://x.com/wisecubeai
- Reddit: https://www.reddit.com/r/pythia/
- GitHub: https://github.com/wisecubeai

#AI #PromptEngineering #DSPy #MachineLearning #LLM #ArtificialIntelligence #AIdevelopment
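To give a taste of "prompts as programs," here is a minimal sketch in the spirit of the DSPy docs linked above. The model identifier is a placeholder (a real API key would be needed), and exact class names can differ between DSPy versions, so treat this as an assumption-laden illustration rather than canonical DSPy code.

```python
# Minimal DSPy-style sketch: declare what you want (a Signature), wrap it in a Module,
# and let the framework construct the actual prompt. Illustrative only.
import dspy

# Configure the LM backend (placeholder model id; requires the matching API key).
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class ShortAnswer(dspy.Signature):
    """Answer a question with a short, factual answer."""
    question = dspy.InputField()
    answer = dspy.OutputField(desc="one short sentence")

# ChainOfThought adds an intermediate reasoning step before producing the answer field.
qa = dspy.ChainOfThought(ShortAnswer)

prediction = qa(question="Why does TF-IDF down-weight very common words?")
print(prediction.answer)
```

From here, a DSPy optimizer such as BootstrapFewShot can compile the module against a small set of labeled examples, which is the "optimizable" part of the workflow the episode emphasizes.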
--------
35:36
Agentic AI – Hype or the Next Step in AI Evolution?
Let’s dive into Agentic AI, guided by the "Cognitive Architectures for Language Agents" (CoALA) paper. What defines an agentic system? How does it plan, leverage memory, and execute tasks? We explore semantic, episodic, and procedural memory, discuss decision-making loops, and examine how agents integrate with external APIs (think LangGraph). Learn how AI tackles complex automation — from code generation to playing Minecraft — and why designing robust action spaces is key to scaling systems. We also touch on challenges like memory updates and the ethics of agentic AI. Get actionable insight…

🔗 Links to the CoALA paper, LangGraph, and more in the description.
🔔 Subscribe to stay updated with Gradient Descent!

Listen on:
• YouTube: https://youtube.com/@WisecubeAI/podcasts
• Apple Podcast: https://apple.co/4kPMxZf
• Spotify: https://open.spotify.com/show/1nG58pwg2Dv6oAhCTzab55
• Amazon Music: https://bit.ly/4izpdO2

Mentioned Materials:
• Cognitive Architectures for Language Agents (CoALA) - https://arxiv.org/abs/2309.02427
• Memory for agents - https://blog.langchain.dev/memory-for-agents/
• LangChain - https://python.langchain.com/docs/introduction/
• LangGraph - https://langchain-ai.github.io/langgraph/

Our solutions:
- https://askpythia.ai/ - LLM Hallucination Detection Tool
- https://www.wisecube.ai - Wisecube AI platform can analyze millions of biomedical publications, clinical trials, protein and chemical databases.

Follow us:
- Pythia Website: https://askpythia.ai/
- Wisecube Website: https://www.wisecube.ai
- LinkedIn: https://www.linkedin.com/company/wisecube/
- Facebook: https://www.facebook.com/wisecubeai
- X: https://x.com/wisecubeai
- Reddit: https://www.reddit.com/r/pythia/
- GitHub: https://github.com/wisecubeai

#AgenticAI #FutureOfAI #AIInnovation #ArtificialIntelligence #MachineLearning #DeepLearning #LLM
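To make the CoALA vocabulary concrete, here is a small, library-free Python sketch of an agent decision loop that keeps semantic, episodic, and procedural memory separate. The structure only loosely mirrors the paper's framing; every name, the toy planner, and the two "skills" are illustrative assumptions, not anything prescribed by CoALA or LangGraph.

```python
# Toy agent loop in the spirit of CoALA's memory split; illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Memory:
    semantic: Dict[str, str] = field(default_factory=dict)     # facts the agent knows
    episodic: List[str] = field(default_factory=list)          # record of past steps
    procedural: Dict[str, Callable[[str], str]] = field(default_factory=dict)  # skills it can run

def plan(task: str, memory: Memory) -> List[str]:
    # Stand-in for an LLM planner: pick the known skills whose names appear in the task.
    return [name for name in memory.procedural if name in task] or ["noop"]

def run(task: str, memory: Memory, max_steps: int = 5) -> None:
    # Decision loop: plan, act, then write what happened into episodic memory.
    for step, action in enumerate(plan(task, memory)[:max_steps]):
        skill = memory.procedural.get(action, lambda _: "nothing to do")
        observation = skill(task)                 # act (could be an external API call)
        memory.episodic.append(f"step {step}: {action} -> {observation}")

memory = Memory(
    semantic={"domain": "biomedical literature"},
    procedural={
        "search": lambda task: f"searched the literature for: {task}",
        "summarize": lambda task: "wrote a short summary of the results",
    },
)
run("search and summarize recent clinical trials", memory)
print("\n".join(memory.episodic))
```

The procedural dictionary is also a crude stand-in for the "action space" the episode mentions: what the agent can do is exactly the set of skills registered there, which is why designing that space carefully matters as systems scale.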
--------
40:43
LLM as a Judge: Can AI Evaluate Itself?
In the second episode of Gradient Descent, Vishnu Vettrivel (CTO of Wisecube) and Alex Thomas (Principal Data Scientist) explore the innovative yet controversial idea of using LLMs to judge and evaluate other AI systems. They discuss the hidden human role in AI training, the limitations of traditional benchmarks, the strengths and weaknesses of automated evaluation, and best practices for building reliable AI judgment systems.

Timestamps:
00:00 – Introduction & Context
01:00 – The Role of Humans in AI
03:58 – Why Is Evaluating LLMs So Difficult?
09:00 – Pros and Cons of LLM-as-a-Judge
14:30 – How to Make LLM-as-a-Judge More Reliable?
19:30 – Trust and Reliability Issues
25:00 – The Future of LLM-as-a-Judge
30:00 – Final Thoughts and Takeaways

Listen on:
• YouTube: https://youtube.com/@WisecubeAI/podcasts
• Apple Podcast: https://apple.co/4kPMxZf
• Spotify: https://open.spotify.com/show/1nG58pwg2Dv6oAhCTzab55
• Amazon Music: https://bit.ly/4izpdO2

Our solutions:
• https://askpythia.ai/ - LLM Hallucination Detection Tool
• https://www.wisecube.ai - Wisecube AI platform for large-scale biomedical knowledge analysis

Follow us:
• Pythia Website: www.askpythia.ai
• Wisecube Website: www.wisecube.ai
• LinkedIn: www.linkedin.com/company/wisecube
• Facebook: www.facebook.com/wisecubeai
• Reddit: www.reddit.com/r/pythia/

Mentioned Materials:
- Best Practices for LLM-as-a-Judge: https://www.databricks.com/blog/LLM-auto-eval-best-practices-RAG
- LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods: https://arxiv.org/pdf/2412.05579v2
- Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena: https://arxiv.org/abs/2306.05685
- Guide to LLM-as-a-Judge: https://www.evidentlyai.com/llm-guide/llm-as-a-judge
- Preference Leakage: A Contamination Problem in LLM-as-a-Judge: https://arxiv.org/pdf/2502.01534
- Large Language Models Are Not Fair Evaluators: https://arxiv.org/pdf/2305.17926
- Is LLM-as-a-Judge Robust? Investigating Universal Adversarial Attacks on Zero-shot LLM Assessment: https://arxiv.org/pdf/2402.14016v2
- Optimization-based Prompt Injection Attack to LLM-as-a-Judge: https://arxiv.org/pdf/2403.17710v4
- AWS Bedrock: Model Evaluation: https://aws.amazon.com/blogs/machine-learning/llm-as-a-judge-on-amazon-bedrock-model-evaluation/
- Hugging Face: LLM Judge Cookbook: https://huggingface.co/learn/cookbook/en/llm_judge
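For a sense of what an LLM-as-a-judge setup looks like in code, here is a minimal, provider-agnostic Python sketch. The prompt wording, the 1-5 scale, and the stubbed call_llm function are assumptions made for the example; they are not a recommendation from the episode or from the guides listed above.

```python
# Minimal LLM-as-a-judge sketch; provider-agnostic and illustrative only.
import re
from typing import Callable

JUDGE_PROMPT = """You are an impartial evaluator.
Question: {question}
Reference answer: {reference}
Candidate answer: {candidate}

Rate the candidate's factual accuracy against the reference on a 1-5 scale.
Reply in the form: SCORE: <number>  REASON: <one sentence>."""

def judge(question: str, reference: str, candidate: str,
          call_llm: Callable[[str], str]) -> int:
    """call_llm is any function that sends a prompt to an LLM and returns its text reply."""
    reply = call_llm(JUDGE_PROMPT.format(
        question=question, reference=reference, candidate=candidate))
    match = re.search(r"SCORE:\s*([1-5])", reply)
    return int(match.group(1)) if match else 0   # 0 marks an unparseable verdict

# Stubbed model so the sketch runs without an API; swap in a real client call here.
fake_llm = lambda prompt: "SCORE: 4  REASON: Mostly matches the reference."
print(judge("What does DPO remove from RLHF?",
            "The separate reward model.",
            "It removes the reward model.", fake_llm))
```

Requiring a fixed SCORE/REASON format and parsing it defensively is one of the practical reliability measures the episode touches on; the linked papers cover the harder problems, such as position bias, preference leakage, and prompt-injection attacks on judges.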
“Gradient Descent” is a podcast that delves into the depths of artificial intelligence and data science. Hosted by Vishnu Vettrivel (Founder of Wisecube AI) and Alex Thomas (Principal Data Scientist), the show explores the latest trends, innovations, and practical applications in AI and data science. Join us to learn more about how these technologies are shaping our future.