One thing I didn’t emphasize in the post: this work started partly from thinking about how black-box generative models might be audited under emerging regulations like the EU AI Act, where access to model internals or weights can’t be assumed.
Instead of aiming for human-readable explainability, ONTOS looks at whether it’s possible to leave behind reproducible, quantitative traces of structural stability during generation — something closer to audit evidence than a narrative justification.
I don’t claim this says anything about factual correctness or ethics. The narrower question is: was this generation process structurally stable and predictable, or already collapsing internally, even if the output still looks fluent on the surface?
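To make "reproducible, quantitative traces" concrete: the sketch below is not ONTOS's actual metric, just a toy illustration of the general idea, assuming the auditor sees only per-step output probabilities (black-box-compatible). It uses token-distribution entropy as a stand-in stability signal and hashes the trace so the same run always produces the same audit record.

```python
import hashlib
import json
import math

def entropy(probs):
    """Shannon entropy (nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def stability_trace(step_distributions):
    """Build a reproducible audit record: per-step entropy plus the
    largest step-to-step jump, hashed so the evidence is tamper-evident."""
    entropies = [round(entropy(d), 6) for d in step_distributions]
    record = {
        "per_step_entropy": entropies,
        "max_jump": round(max(
            (abs(b - a) for a, b in zip(entropies, entropies[1:])),
            default=0.0), 6),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["trace_hash"] = hashlib.sha256(payload).hexdigest()
    return record

# Toy run whose distribution flattens mid-generation: the output text
# could still read fluently, but the trace records the internal shift.
dists = [
    [0.9, 0.05, 0.05],   # confident
    [0.85, 0.1, 0.05],   # still stable
    [0.34, 0.33, 0.33],  # near-uniform: possible internal collapse
]
trace = stability_trace(dists)
print(trace["max_jump"] > 0.5)  # large entropy jump flags instability
```

The point of the hash is the "audit evidence" framing: two parties re-running the same logged generation can verify they are looking at the same structural record without any narrative interpretation.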
I’m curious whether people see structural monitoring like this as complementary to existing safety / compliance approaches, or fundamentally limited in ways I might be missing.