Can someone smarter than me explain what this is about?
Skimming through the paper, here's my take.
Someone previously found that the cross-attention layers in text-to-image diffusion models capture the correlation between input text tokens and the corresponding image regions, so you can use them to segment the image: which pixels contain "cat", for example. However, this segmentation was rather coarse. The authors of this paper found that also using the self-attention layers leads to a much more detailed segmentation.
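To make that a bit more concrete, here's a minimal sketch of what I imagine the attention readout looks like, assuming you've already hooked the UNet and collected the attention probabilities at some denoising step. The tensor shapes, the head-averaging, the number of propagation iterations, and the threshold are my assumptions, not the paper's exact recipe:

```python
import torch

def coarse_mask_from_cross_attention(cross_attn, token_idx):
    # cross_attn: (heads, H*W, num_text_tokens) attention probabilities from
    # one cross-attention layer. Average over heads and pick the column for
    # the token of interest ("cat") to get a coarse relevance map over the
    # latent pixels, shape (H*W,).
    return cross_attn.mean(dim=0)[:, token_idx]

def refine_with_self_attention(coarse, self_attn, iters=2):
    # self_attn: (heads, H*W, H*W) attention probabilities. Propagating the
    # coarse map along the self-attention graph sharpens it: pixels that
    # attend strongly to "cat" pixels become "cat" pixels too.
    A = self_attn.mean(dim=0)          # (H*W, H*W), rows sum to 1
    mask = coarse
    for _ in range(iters):
        mask = A @ mask                # diffuse relevance across the image
    return mask

# Stand-in tensors for a toy 32x32 latent grid and a 77-token prompt.
heads, hw, n_tok = 8, 32 * 32, 77
cross_attn = torch.rand(heads, hw, n_tok).softmax(dim=-1)
self_attn = torch.rand(heads, hw, hw).softmax(dim=-1)

coarse = coarse_mask_from_cross_attention(cross_attn, token_idx=5)  # e.g. "cat"
refined = refine_with_self_attention(coarse, self_attn)
binary = (refined > refined.mean()).reshape(32, 32)                 # crude threshold
```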
They then extend this to video by using the self-attention between two consecutive frames to determine how the segmentation changes from one frame to the next.
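If I'm reading it right, that propagation step is conceptually just a matrix product: attention computed with frame t+1 as queries and frame t as keys carries the mask forward. A toy sketch, where the tensor layout is my assumption:

```python
import torch

def propagate_mask(cross_frame_attn, prev_mask):
    # cross_frame_attn: (H*W, H*W) head-averaged attention with frame t+1 as
    # queries and frame t as keys; rows sum to 1.
    # prev_mask: (H*W,) soft mask for frame t.
    # Each pixel in frame t+1 inherits the mask values of the frame-t pixels
    # it attends to, which is what carries the segmentation forward.
    return cross_frame_attn @ prev_mask

hw = 32 * 32                                      # toy 32x32 latent grid
attn = torch.rand(hw, hw).softmax(dim=-1)         # stand-in attention
next_mask = propagate_mask(attn, torch.rand(hw))  # soft mask for frame t+1
```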
Now, text-to-image diffusion models require a text input to generate an image in the first place. From what I can gather, they limit themselves to semi-supervised video segmentation, meaning the first frame has already been segmented by, say, a human or some other process.
They then run an "inversion" procedure which tries to generate text that causes the text-to-image diffusion model to segment the first frame as closely as possible to the provided segmentation.
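My guess is that this looks a lot like textual inversion: keep the model frozen and optimize a pseudo-token embedding by gradient descent until the mask it produces on frame 0 matches the given annotation. A sketch under that assumption; `mask_from_embedding` is a hypothetical, differentiable wrapper around the frozen model plus the attention readout above, and BCE is just one plausible loss:

```python
import torch

def invert_prompt(mask_from_embedding, target_mask, dim=768, steps=500, lr=1e-2):
    # mask_from_embedding(emb): hypothetical wrapper that runs the *frozen*
    # diffusion model on frame 0 with learnable token embedding `emb` and
    # returns the resulting soft mask, shape (H*W,). It must be differentiable
    # so gradients reach `emb`; the model weights themselves never change.
    emb = torch.randn(dim, requires_grad=True)        # learnable pseudo-word
    opt = torch.optim.Adam([emb], lr=lr)
    for _ in range(steps):
        pred = mask_from_embedding(emb)
        loss = torch.nn.functional.binary_cross_entropy(
            pred.clamp(1e-6, 1 - 1e-6), target_mask)  # match the given mask
        opt.zero_grad()
        loss.backward()                               # only `emb` gets gradients
        opt.step()
    return emb.detach()
```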
With the text in hand, they can then run the earlier segmentation propagation steps to track the segmented object throughout the video.
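Put together, the tracking loop would then be something like the sketch below. `cross_frame_attention` is a hypothetical hook returning the head-averaged attention of one frame's latents over the previous frame's inside the frozen model; `propagate_mask` and the inverted embedding `emb` come from the sketches above:

```python
def track_video(frames, first_mask, emb, cross_frame_attention, propagate_mask):
    # frames: list of video frames; first_mask: (H*W,) annotation for frame 0;
    # emb: the pseudo-token embedding recovered by the inversion step.
    masks = [first_mask]
    for t in range(1, len(frames)):
        attn = cross_frame_attention(frames[t], frames[t - 1], emb)  # (H*W, H*W)
        masks.append(propagate_mask(attn, masks[-1]))  # carry the mask forward
    return masks
```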
The key here is that the text-to-image diffusion model is pretrained, and not fine-tuned for this task.
That said, I'm no expert.
> Can someone smarter than me explain what this is about?
I think you can find the answer under point 3:
> In this work, our primary goal is to show that pretrained text-to-image diffusion models can be repurposed as object trackers without task-specific finetuning.
Meaning that you can track objects in videos without using specialised ML models for video object tracking.
All of these emergent properties of image and video models lead me to believe that the evolution of animal intelligence around motility and visually understanding the physical environment might be "easy" relative to other "hard steps".
The more complex an eye gets, the more the brain evolves alongside it, handling not just the physics and chemistry of optics but also rich feature sets for predator/prey labels, tracking, movement, self-localization, distance, etc.
These might not be separate things. These things might just come "for free".