A few enterprising hackers have started projects to do counter-surveillance against ICE, hoping to protect their communities through clever use of technology.
The online communities are typically great. If you get really stuck, LLMs can be nice for dealing with your specific confusion.
Edit: … but it’s better to ask the community so others can benefit from the answer.
Please no. Absolutely not. An LLM is absolutely not “nice for dealing with confusion”; it is the very opposite.
Please do consider people’s effort, articles, and attributions, and actually learn and organize your knowledge. Please do train your mind and your self-confidence.
You can’t rely on LLMs for actual answers to technical things, but they can help avoid a huge amount of wasted time and effort: the back-and-forth, going in circles, and talking around or past the issue that you see in threads everywhere in these kinds of niche expert communities. Besides, maybe my question has already been answered.
Sometimes I don’t know the specific terms or framing, am missing context, or am trying to get from A to C without knowing that B even exists, never mind how (or whom) to ask about it. If I can accelerate the process of clearing that up, I can go to the right human expert or community with a much better handle on what I’m actually looking for and how to ask for it.
Thank you, but I do disagree. You cannot know whether the LLM’s “result” includes all the required context, and you won’t ask it to clarify, because the output gives you no sign of what is missing; in the end you miss the knowledge and waste the time, too.
How can you be sure the output includes what is relevant? Will you ever re-submit the question to an algorithm without even knowing that re-submitting is required, when there is no indication of it? That is: the LLM simply did not include what you needed, did not include the important context surrounding it, and did not even tell you which authors to question further. No attribution, no accountability, no sense, sorry.
I’m not sure we disagree. I agree that LLMs are not a good source for raw knowledge, and it’s definitely foolish to use them as if they’re some sort of oracle. I already mentioned that they are not good at providing answers, especially in a technical context.
What they are good at is gathering sources and recontextualizing your queries based on those sources, so that you can pose your query to human experts in a way that will make more sense to them.
You’re of course entirely within your rights to avoid the tech, as it comes with many pitfalls. Many of these models are damn good at gathering info from real human sources, though, if you can be concise with your prompts and resist the temptation to swallow their “analysis”.
You mention wasted time and effort, going in circles, and talking past and around issues and questions. I think a lot of people underestimate that this is why people go to AI in the first place: asking for help can be genuinely unbearable sometimes.
Exactly; just look at the drop-off on Stack Overflow recently.
That’s hilarious. I remember the scene back in the day was more like, “if you don’t know, get fucked, because nobody is going to be responsible for your incompetent bullshit.”
Oh, how times have changed.
They still have plenty of snobbish gatekeeping; it just exists at a higher level. Entry-level knowledge is abundant, but once you seek a community with more specialized expertise, the IRC channels will be private and password-protected, and you had better have contributed to a novel exploit or the like…