@anders I think my primary tools for augmenting my own thinking are sketching, note-taking and writing (plus some tools, similar to what you describe, that help me get an overview of my own notes and sketches). And in some sense reading books. I believe my inability to trust the output of ChatGPT, and the lack of transparency about what it is trained on, make it difficult for me to find a good use for it. 😬
I do understand the idea of using it for critical thinking and for generating statements to be judged and reasoned about, as the following post explains, but I rarely see this type of critical thinking, even in the examples provided by peers in my industry who use it in the workplace. The amount of trust many people place in the output, and the way they treat it as the conclusion of analytical work, is a problem…
https://www.cambridge.org/elt/blog/2023/03/30/enhancing-learners-critical-thinking-skills-with-ai-assisted-technology/
When we augment with a telescope, it is because we have recognised a need to see further into space. When we augment with a hearing aid or speaker, it is because we have recognised a need to hear better. And we can immediately validate whether they work for this purpose.
It is not entirely clear to me what part of human thinking is augmented by ChatGPT, or how I can judge whether that is actually happening. I.e. I would like to see the need more clearly described, and I would like to see the result of that need being met more clearly validated. Can everyone think better with a tool like ChatGPT, in the same way everyone with vision can see further with a telescope?
For me, a microscope and a hearing aid can eliminate misunderstanding, as they help me see or hear more clearly. Meanwhile, a tool like ChatGPT can amplify misunderstanding, as it is a statistical representation of a source that is unclear to me, and evidently biased.
I know you didn’t say ChatGPT itself was augmenting thinking, as you used the word catalyst. But I did find it interesting to think of it as an augmentation tool, since ChatGPT can clearly be used to augment all the wrong things (disinformation, bias, depression, etc.).