A very informative, frank and comprehensive discussion on the current state of LLM interpretability. The part concerning faithfulness, and whether we can "trust" the way a model appears to "think" through specific problems, is explained especially well, particularly with regard to how models arrive at an output when prompted to verify a result.
I very much appreciated the honesty about what is currently not fully understood regarding how LLMs arrive at a specific output, and about their attempts to make this more verifiable. It makes sense, considering that among the frontier LLM labs, Anthropic appears to expend some of the most (public) effort on in-depth understanding rather than chasing performance goals.
I found this part particularly well put, and liked how they emphasized that even when terms such as "thinking" are used in the context of LLMs, this should not be misconstrued to mean that what they are talking about can be mapped onto the way we are familiar with the term from our human, lived experience:
> I think for me the “do models think in the sense that they do some integration and processing and sequential stuff that can lead to surprising places”? Clearly yes, it'd be kind of crazy from interacting with them a lot for there not to be something going on. We can sort of start to see how it's happening. Then the “like humans” bit is interesting because I think some of that is trying to ask “what can I expect from these” because if it's sort of like me being good at this would that make it good at that? But if it's different from me then I don't really know what to look for. And so really we're just looking to understanding, where do we need to be extremely suspicious or are starting from scratch in understanding this and where can we sort of just reason from our own, very rich experience of thinking? And there I feel a little bit trapped because as a human, I project my own image constantly onto everything like they warned us in the Bible where I'm just like this piece of silicon, it's just like me made in my image where to some extent it's been trained to simulate dialogue between people. So, it's going to be very person-like in its affect. And so some “humanness” will get into it simply from the training, but then it's like using very different equipment that has different limitations. And so, the way it does that might be pretty different.
> To Emmanuel's point, I think we're in this tricky spot answering questions like this because we don't really have the right language for talking about what language models do. It's like we're doing biology, but before people figured out cells or before people figured out DNA. I think we're starting to fill in that understanding. As Emmanuel said, there are these cases now where we can really just go read our paper. You'll know how the model added these two numbers. And then if you want to call it human-like, if you want to call it thinking, or if you want to not, then it's up to you. But the real answer is just find the right language and the right abstractions for talking about the models. But in the meantime, currently we've only 20% succeeded at that scientific project. To fill in the other 80%, we sort of have to borrow analogies from other fields. And there's this question of which analogies are the most apt? Should we be thinking of the models like computer programs? Should we be thinking of them like little people? And it seems to be like in some ways that thinking of them like little people is kind of useful. It's like if I say mean things to the model, it talks back at me.
I would hope this discussion from top-level experts may finally put to rest a common delusion I’ve encountered, both online and offline (spanning industry members, lecturers, students and of course regular people), wherein some assume they fully understand how LLMs work at every level, which unfortunately no one currently does. Any answer beyond "we do not have enough information yet and more research is very much needed" is sadly far too optimistic. Not holding my breath though, even less so for social media comments.
Even worse, of course, is the argument that "LLMs must work like (human) brains and by proxy be conscious because some output is similar to what humans might produce", which is akin to "this artifact looks like a modern thing (if you ignore a significant number of details not serving your interpretation), therefore we had hyperdiffusion/ancient aliens/power-plant pyramids/ancient plane spaceships"...
On another note, there are few things more nerdy, in the traditional meaning of the term, than a VC-backed, multi-billion-dollar company still relying on a Brother HL-L2400DW for its modest printing needs.