Decoding Language Model Pre-training Datasets

Eye on AI
Published on 09 Nov 2023

Welcome to episode 151 of the ‘Eye on AI’ podcast. In this episode, host Craig Smith sits down with Asa Cooper Stickland, a postdoctoral researcher at NYU working at the forefront of language model safety. The conversation takes us through the complexities of AI situational awareness, the potential for consciousness in language models, and the future of AI safety research.

Craig and Asa delve into the nuances of AI situational awareness and how it differs from sentience. Drawing on his background in NLP and AI safety from the University of Edinburgh, Asa shares insights from his postdoctoral work at NYU, discussing a collaborative paper that has garnered attention for its take on situational awareness in large language models (LLMs). We explore the economic drivers behind creating AI with such capabilities and the role of scaling versus algorithmic innovation in reaching that milestone.

We also examine the concept of agency in LLMs, the challenges of post-deployment monitoring, and how effective current measures are at detecting situational awareness. To wrap things up, we break down the importance of source trustworthiness and a model’s ability to discern reliable information, a critical aspect of AI safety and functionality, so make sure to watch till the end.
