One persistent piece of slopaganda you’ll hear is this:
“It’s just a tool. What matters is how you use it.”
This isn’t a new tack. The same justification has been applied to many technologies.
Leaving aside Kranzberg’s first law, large language models are the very antithesis of a neutral technology. They’re imbued with bias and political decisions at every level.
There’s the obvious problem of where the training data comes from. It’s stolen. Everyone knows this, but some people would rather pretend they don’t know how the sausage is made.
But if you set aside how the tool is made, it’s still just a tool, right? A building is still a building even if it’s built on stolen land.
Except with large language models, the training data is just the first step. After that you need to traumatise an underpaid workforce to remove the most horrifying content. Then you build an opaque black box that end-users have no control over.
Take temperature, for example. That’s the setting that controls how much randomness a large language model applies when picking the next token from its probability distribution. Dial the temperature too low and the tool will parrot its training data too closely, making it a plagiarism machine. Dial the temperature too high and the tool generates what we kindly call “hallucinations”.
Either way, you have no control over that dial. Someone else is making that decision for you.
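If it helps to see what that dial actually does, here’s a minimal sketch of temperature-scaled sampling. It isn’t any vendor’s actual API; the function name and the `logits` array (one unnormalised score per candidate token) are hypothetical, stand-in names for illustration only.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Illustrative sketch: pick a token index from raw scores, scaled by temperature.

    `logits` is a hypothetical array of unnormalised scores, one per token.
    """
    rng = rng or np.random.default_rng()
    # Lower temperature sharpens the distribution (more deterministic,
    # closer to regurgitating the most likely continuation); higher
    # temperature flattens it (more random output).
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)
```

In a hosted chatbot, that `temperature` argument is set somewhere you can’t see, which is the point being made here.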
A large language model is as neutral as an AK-47.
I understand why people want to feel in control of the tools they’re using. I know why people will use large language models for some tasks—brainstorming, rubber ducking—but strictly avoid them for any outputs intended for human consumption.
You could even convince yourself that a large language model is like a bicycle for the mind. In truth, a large language model is more like one of those hover chairs on the spaceship in WALL·E.
Large language models don’t amplify your creativity and agency. Large language models stunt your creativity and rob you of agency.
When someone uses a large language model, that is indeed an example of tool use. But the large language model isn’t the tool.