The sources bother me. When I get an answer related to programming and I'm not sure about it, I check the docs, so a good link to the docs would be a great help.

Microsoft Copilot should do better because it's attached to a search engine, but it sucks at giving citations. Often it gives the right answer with a totally wrong citation, which is frustrating.

Google's AI results in search are so bad that I try not to look at them at all. If I ask a simple question like "Can I use characters in my base to do operations in Arknights?" I get a wrong answer, citation or not.
As for context, my take with agentic coding assistants is that if you let the context get longer, it will eventually get confused and start going in circles. It often seems to code brilliantly at the beginning but pretty soon loses the thread. The answer is to just start a new session; if there's something you want to carry over from the old one, cut and paste it into the documentation and tell the new session to look at it.
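For what it's worth, here's a minimal sketch of that handoff habit in Python; the file path and function name are my own invention, not any assistant's API, just the kind of thing I'd keep in the repo:

```python
# handoff.py -- append the decisions worth keeping from a finished
# assistant session to a docs file the next session can be pointed at.
# (docs/SESSION_NOTES.md and save_handoff are hypothetical names.)
from datetime import date
from pathlib import Path

NOTES = Path("docs/SESSION_NOTES.md")

def save_handoff(decisions: list[str]) -> None:
    """Append a dated bullet list of decisions to the notes file."""
    NOTES.parent.mkdir(parents=True, exist_ok=True)
    with NOTES.open("a", encoding="utf-8") as f:
        f.write(f"\n## Session handoff {date.today()}\n")
        for d in decisions:
            f.write(f"- {d}\n")

if __name__ == "__main__":
    save_handoff([
        "Use the v2 parser; v1 chokes on nested quotes.",
        "Auth tokens are refreshed in middleware, not per request.",
    ])
```

Then the first instruction in the fresh session is just "read docs/SESSION_NOTES.md before you do anything."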
As for bias, I'd say that truth is the most problematic concept in philosophy; simply introducing the idea of "the Truth" (worse than just "the truth") impairs the truth. See "9/11 Truther."

Look at Musk's misadventures with Grok. I'd love to see an AI trained with a viewpoint like "principled conservative," but that's not what Musk wants. One moment he's BFF with Donald Trump, the next minute Trump is one of Epstein's pedophiles. To satisfy Musk it would have to always know whether we are at war with Eurasia or Eastasia today.