
I mentioned fairness yesterday in terms of using tools, and it got me thinking about ChatGPT. I’m not going into the mechanics of it – I don’t know enough about the technology – but its impact has been immediate.
Everywhere I look, people seem to be talking about how it’ll be the game changer for workplace learning. Having used it for a few months, I can see some benefits. It summarises topics quickly and provides good basic information about pretty much anything you throw at it. Unlike the SME in your organisation, it’s pretty much available 24/7 and won’t distance itself from you when you ask it a question.
- It does, however, seem to be a bit flaky. Ask it to cite references and they’re often incorrect, which immediately creates mistrust of it as a tool.
- It is only as good as the questions you ask, and it takes some practice to learn how to ask the right questions to get better responses.
- It can be wrong. I asked it about integrating informal learning in the workplace and it suggested ‘setting goals’. I challenged this language:
Shouldn’t learning goals be ‘agreed’ and not ‘set’?
Author and chat.openai.com
Yes, you are correct. Learning goals should be agreed upon, rather than simply set by management or imposed upon employees. This approach to goal-setting is more collaborative and recognizes the importance of employee engagement and ownership in the learning process.
So what is it good for? Right now I think it’s an exceptional search engine for things that can help you do your job. Ask it about coaching and you’ll get a basic response. Ask it to expand on different parts of that response, challenge what comes back, and you’ll end up with rich content.
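If you’d rather script that kind of follow-up conversation than keep retyping it into the chat window, a rough sketch using the openai Python package might look like the one below. I’m assuming the package (v1.x), the model name and the exact questions here, so treat it as illustrative rather than a recipe.

```python
from openai import OpenAI

# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY in the environment.
client = OpenAI()

def ask(messages):
    """Send the running conversation and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=messages,
    )
    return response.choices[0].message.content

# Start with a broad question, then push back on the parts that matter to you.
conversation = [{"role": "user", "content": "Summarise the key elements of workplace coaching."}]
first_reply = ask(conversation)
conversation.append({"role": "assistant", "content": first_reply})

# Challenge the response rather than accepting it at face value.
conversation.append({"role": "user", "content": "Expand on how a coach agrees goals with a coachee, rather than setting them."})
print(ask(conversation))
```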
The best use of it is as a tool to develop rubrics for assessment. Here’s an example:
Develop a rubric to assess people’s understanding of ChatGPT
Sure, here is a possible rubric to assess people’s understanding of ChatGPT:
| Criteria | Level 1 (Low) | Level 2 (Medium) | Level 3 (High) |
| --- | --- | --- | --- |
| Awareness | The person is not aware of what ChatGPT is or what it does. | The person has heard of ChatGPT but has a limited understanding of what it does or how it works. | The person is familiar with ChatGPT and has a good understanding of its capabilities and limitations. |
| Functionality | The person does not understand how ChatGPT generates responses or how to use it effectively. | The person has a basic understanding of how ChatGPT generates responses and how to use it for simple tasks. | The person understands how ChatGPT generates responses and can use it effectively for a variety of tasks. |
| Accuracy | The person does not understand that ChatGPT is not always accurate and can make mistakes. | The person is aware that ChatGPT can make mistakes but does not know the extent of its accuracy. | The person understands that ChatGPT is not always accurate and knows how to assess the quality of its responses. |
| Limitations | The person does not understand the limitations of ChatGPT and may expect it to perform tasks that it cannot do. | The person has a basic understanding of the limitations of ChatGPT but may overestimate its capabilities. | The person understands the limitations of ChatGPT and knows how to use it appropriately for tasks that it can handle. |
| Critical Thinking | The person does not apply critical thinking skills when using ChatGPT or evaluating its responses. | The person applies some critical thinking skills when using ChatGPT or evaluating its responses. | The person applies advanced critical thinking skills when using ChatGPT or evaluating its responses. |

To assess people’s understanding of ChatGPT, you could ask them questions related to each criterion and assign a score based on their response. For example, you could ask:
- What is ChatGPT, and what does it do?
- How does ChatGPT generate responses, and what are some of its limitations?
- Can you give an example of a task that ChatGPT can perform well and one that it cannot do?
- When using ChatGPT, how do you assess the quality of its responses, and what critical thinking skills do you apply?
Based on their answers, you could assign a score of 1-3 for each criterion, where 1 is low understanding, 2 is medium understanding, and 3 is high understanding. You could then add up the scores to get an overall assessment of their understanding of ChatGPT.
Author and chat.openai.com
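If you did want to tot up scores against that rubric, the arithmetic is simple enough to put in a few lines of Python. The criteria come from the rubric above; the scores are made up purely for illustration.

```python
# Criteria taken from the rubric above; each is scored 1-3.
CRITERIA = ["Awareness", "Functionality", "Accuracy", "Limitations", "Critical Thinking"]

# Example scores for one hypothetical person (not real data).
scores = {"Awareness": 3, "Functionality": 2, "Accuracy": 2, "Limitations": 1, "Critical Thinking": 3}

total = sum(scores[c] for c in CRITERIA)
maximum = 3 * len(CRITERIA)
print(f"Overall understanding: {total}/{maximum}")  # e.g. "Overall understanding: 11/15"
```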
If I were developing assessment tools in workplace learning, this is where I’d be creative with the tool. I’d be thinking about the core elements we’d expect people to know and seeing where ChatGPT was sending them. I’d be looking closely at the references it suggests and what it’s recommending. I’d be challenging the tool to be more sophisticated in its responses by asking more complex questions. As I saw mentioned the other day, as a language tool it needs clear natural language to communicate effectively.
I wouldn’t, however, expect it to do everything for me. In the example above on rubrics there are still elements of nuance a person has to bring to the activity.
And you can’t automate nuance.
How would you use ChatGPT? Please let me know in the comments.