No, absolutely not. I run local models to have those conversations.
Having read the myriad AWS data protection agreements I would feel comfortable running bedrock hosted models. Others may feel differently.
Nice.
What software do you use to run LLMs locally?
I tried Ollama but found requests a bit slow since it doesn't seem to fully utilize my resources. LM Studio has been better for me. It uses the upstream GGML implementation, which feels more optimized. My problem with LM Studio is that they've added so many options and features lately that it takes a lot of configuration.
What's your favorite open model for writing and coding these days?
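For anyone curious how the local setup above works in practice: LM Studio can expose the loaded model through a local OpenAI-compatible server (by default on port 1234). A minimal sketch, assuming that default port and endpoint; the URL and model name are placeholders you'd adjust for your own setup:

```python
import json
import urllib.request

def build_request(prompt, model="local-model"):
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,  # LM Studio serves whatever model is currently loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_local(prompt, url="http://localhost:1234/v1/chat/completions"):
    """Send the prompt to the local server; nothing leaves the machine."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Since the server speaks the same protocol as the OpenAI API, most existing client libraries also work by just pointing the base URL at localhost.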
>... with OpenAI, ...
About that:
https://arstechnica.com/tech-policy/2025/08/openai-offers-20...
I don't think this article is relevant to what we're discussing here.
It says that The New York Times (NYT) is suing OpenAI for alleged copyright infringement, claiming that ChatGPT was trained on its articles.
What am I missing?
If someone were to build a paid, privacy-preserving wrapper of ChatGPT, it may not see as much traction because ChatGPT is free.
Not a wrapper of ChatGPT. I mean no cloud AI at all for sensitive data. A different approach: local AI. A desktop app where you can download the open models you want and run them locally. 100% private, no data ever leaves your computer or network. The real question is whether companies would pay for team features on top of that.
No, but I've done it by mistake. I wanted ChatGPT to proofread a letter and uploaded it without thinking over the consequences. It's very easy to do if you aren't careful. Keep that in mind.
I'm kinda meh about it.
The first thing to keep in mind is the illusion of transparency. You might internally know that something is wrong or exploitable or you've made an obvious mistake, but that's generally much less obvious to others.
The second thing to keep in mind is that we are currently in a crisis of attention. There's too much to think about and do nowadays, and there is a gigantic lack of motivated actors to act on that information. You could consider it the dual of the illusion of transparency: the illusion of motivation. Other people, by and large, just do not give a damn, because they can't and don't have time for it.
Even a nation state that wanted to spy on everyone's private information would immediately find itself with too much nonsense to sift through and not enough time to follow through even on surface-level information, let alone leaks that require some sort of sophisticated synthesis over two or three disparate pieces of info.
Lastly, there's the difficulty of exploitation. You know how projects and code seem easy until you try them, and it turns out that actually this is taking forever and it barely works? The whole devil-in-the-details thing.
Well, that applies to exploits as well. It's easy until you try it, and then you hit the Swiss cheese model of success, where random stuff doesn't line up correctly and your workflow breaks.
AI surveillance btw barely changes any of this calculus.
I usually try to anonymize whatever is going on: personal conversations with key details removed or slightly modified, no real person or company names at all, etc.
Code I only run through the zero-retention API accounts anyway.
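The anonymize-before-sending habit described above can be partially automated. A minimal sketch: the regex patterns and the placeholder tokens here are illustrative assumptions, not a complete scrubber (real PII detection needs much more than this):

```python
import re

# Illustrative patterns only: emails and US-style phone numbers.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "<phone>"),
]

def scrub(text, names=()):
    """Replace emails/phones with placeholders, plus any caller-supplied names."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    for name in names:  # known people/companies the caller wants stripped
        text = re.sub(re.escape(name), "<name>", text, flags=re.IGNORECASE)
    return text
```

The caller still has to supply the names to strip, which matches the manual step in the comment above: the hard part is knowing which details are identifying in the first place.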
For those who said NO, have you looked into alternative providers like PrivateGPT?
I think the solution is local AI, not another cloud AI.
A big NO
No. I didn't want to share it before, and I don't want to now just because they built an AI UI on top of the data grabber.
My sensitive data? nah
Your sensitive data? load it up bruh
:))
10 years ago I was reluctant to use a smartphone browser, because it was obvious everything was tracked and profiled: on mobile there was no ad blocker, no possibility to edit the hosts file. Now I use the mobile browser for everything without a second thought. Give it 3-5 years.
I don't think we're comparing apples to apples here. Using a mobile browser certainly opened the door to more tracking, but at the end of the day it was mostly about ads and profiling. With AI, the scale is very different. These systems can learn far more about you from the questions you ask and the data you provide. That makes privacy and data protection a much bigger concern than ever before.
Yes for most things, no for passwords and secret keys. Even I do not trust myself with those sometimes.