Making Large Language Models Uncool Again
Our upcoming fireside chat is with a true luminary: Jeremy Howard, co-founder of fast.ai, former Chief Scientist at Kaggle, and creator of the ULMFiT approach on which all modern language models are based.
In this fireside chat, Jeremy joins Hugo Bowne-Anderson, Outerbounds’ Head of Developer Relations, to talk about the current state of LLMs, how to get beyond the intense hype to deliver actual value, the existential threat posed by increasing concentration in vendor models, why we need more OSS LLMs, and how AI education will need to change.
They’ll discuss:
- What on earth we’ve just witnessed at OpenAI, and why it’s important (and why it’s not!),
- Cost-benefit analyses of using closed models (such as ChatGPT and Claude) versus OSS models (such as Llama 2 and Mistral),
- Key concerns around regulatory capture and the future of OSS LLMs,
- What LLM tools and techniques you need to know to future-proof yourself in this rapidly changing landscape,
- How AI education will need to change to make LLMs uncool again, so that many people can actually use them!
And much, much more.
00:00 Prelude
01:45 The fireside chat begins
04:12 What the hell happened at OpenAI recently?!
12:31 The dual governance structure of OpenAI
14:52 Will OpenAI win or lose?
17:45 Is OpenAI losing good for competition and society?
20:18 Should we fear AGI? And other x-risk questions (and answers!)
23:02 The human labour and ghost work behind LLMs
25:04 What is Q*? Or "Is Q* to LLMs what LLMs are to Search?"
33:06 How to navigate the trade-offs and tensions between OSS LLMs and Vendor APIs!
36:13 Which OSS LLMs is Jeremy most excited about?
43:35 Jeremy's thoughts on the future of large LLMs vs small, fine-tuned LLMs
47:45 The risk to foundation models of regulatory capture and the EU AI Act
55:34 Should we really be scared of killer robots? Or sentient AI?
1:00:20 Should AI tools be centralized or decentralized?
1:05:08 Is AI more like DNA printing or WMDs, in terms of risk?
1:09:42 So how DO we make LLMs uncool again, Jeremy?!
1:15:09 Will closed models always be ahead of open models?
https://www.youtube.com/watch?v=6LXw2beprGI&t=3414s