The benchmarks in your README.md state that it is several times faster for those operations; are they a lie?
Well, several times faster, but not compelling enough to say "use this". For me personally it was an exploratory project to review litellm and its internals.
The LLM docgen (in this case Claude) has been overenthusiastic due to my incessant prodding :D.
I appreciate you doing this and sharing it. I had a similar experience with Rust and a tokenization library (BERTScore) and realized it was better to let the slightly worse method stand, because the effort was not worth the long-term maintenance.
Measure before implementing "improvements"; you'll develop a sense over time of what is taking too long.
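A minimal sketch of that kind of measurement, assuming a stand-in pure-Python hot path (count_tokens here is a placeholder, not the project's actual API):

    import timeit

    def count_tokens(text: str) -> int:
        # Placeholder for the existing "slow" path you suspect needs a rewrite.
        return len(text.split())

    text = "a representative workload " * 1_000

    # Time the candidate hot path before optimizing: if a call is already
    # microseconds, a native rewrite will mostly buy you binding overhead.
    per_call = timeit.timeit(lambda: count_tokens(text), number=1_000) / 1_000
    print(f"~{per_call * 1e6:.1f} us per call")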
Interesting write-up.
Is this whole post and GitHub repo LLM-generated slop?
The core is real; the rest is the narrative of nudging LLMs to behave :). If you strip out the noise and just run the benchmark, that's proof enough.
The interesting bit was that the binding overhead dominates, which makes this shim not much of a performance bump.
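One way to see that effect (a sketch under assumptions, not the project's code; native_count is a hypothetical stand-in for the extension call): time a tiny input against a large one, and whatever fixed per-call cost remains is roughly the binding overhead.

    import timeit

    def native_count(text: str) -> int:
        # Hypothetical stand-in for the Rust binding; swap in the real call.
        return len(text)

    small, large = "x", "x" * 100_000

    t_small = timeit.timeit(lambda: native_count(small), number=10_000) / 10_000
    t_large = timeit.timeit(lambda: native_count(large), number=10_000) / 10_000

    # If t_small is close to t_large, the fixed per-call (binding) cost
    # dominates, and a faster native core barely moves the total.
    print(f"fixed ~{t_small * 1e6:.2f} us, marginal ~{(t_large - t_small) * 1e6:.2f} us")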
The commit history has 5 commits, 3 of them from 1 day ago, and all of them add 1000+ lines.
Definitely looks like it.
Well, I would counter that by saying most code has been autocompleted for a while. At this point in software development history, discussing the size of commits is a moot point :).