Links I refer to in this episode
Vikki: [00:00:00] Hello and welcome to The PhD Life Coach. And this week I have another guest with me, and I think this one is going to be super fun because it's quite controversial and interesting and very, very topical. So welcome. We have Jessica Parker here from the AI company Moxie. Welcome, Jessica.
Jessica: Thank you, Vikki. I'm excited to be here. You know, I listened to your podcast several months ago when you met with Alison Miller, who was the owner of The Dissertation Coach and now runs The Academic Writers' Space. She's a really close friend and colleague of mine, and I really enjoyed that episode, so I'm honored to be here. Thanks for having me.
Vikki: Fantastic to have you. So we're going to be talking today guys about AI and AI's use in academia and the controversies and the misconceptions and essentially all the things that academics need to know, but before we get into that, maybe tell people a little bit more about you. Obviously you've got this connection with dissertation coach and the Moxie company as well, so tell people a bit more about that.
Jessica: Yeah, so I will just put this out there as a disclaimer. I am not a computer scientist. I am not an AI expert. I think of myself as an advocate and a skeptic. So my goal is to really try to understand AI, in terms of its capabilities and limitations and helping guide my students and my clients on how to use it ethically and responsibly. Uh, but I started doing generative AI research about a year ago, and before that, I was a health care researcher. I worked in Boston for two large universities managing some large scale inter-professional health care grants.
I got pretty burnt out on academia. My dad got sick and I came home to take care of him. And I thought, you know, what can I do to try to bridge this gap in my career? And I started a consulting company, Dissertation By Design. That was in 2017, and originally it was just me working with all the clients. I primarily specialized in working with health care disciplines, really just giving them guidance on research design and data interpretation.
And then my team grew, and that's how I ultimately met Alison Miller, the owner of The Dissertation Coach. And we really bonded, so when she decided she wanted to retire from The Dissertation Coach and focus on The Academic Writers' Space, it just seemed like a natural fit for me to take over. So that happened in January of this year, and that was big.
So I still manage both of those companies. But from an AI perspective, like most of the world, I started using ChatGPT-3.5 in, like, March 2023. And I will never forget the moment I first started using it. I was just in shock. I could not believe how well it approximated human-like conversation.
And I had both awe and then just, I felt like I had an existential crisis. I immediately thought about like, well, what does this mean for research and my industry and learning and society and just all the things. And so I'm very curious. So I set about learning. I just immediately dove in to YouTube videos and LinkedIn.
I started trying to find thought leaders and just teaching myself as much as I could to understand it. I also still supervise doctoral students at a university in Boston, and so I wanted to think about how they might be using it and how to guide them. So one of the first things I did last summer was start trying to create my own generative AI tools. And that's kind of what sparked this whole journey. Like, I never set out to found a tech company. I was a very non-technical person up until recently. And so I think this has been as much a surprise to me as anyone else who knows me.
Vikki: I love that. I love that. So then you just decided to build one.
Jessica: You know, naivete is a good thing. I think if you had told me then all the challenges I would run into, I might've thought twice, but here I am navigating it.
Vikki: One of my recent episodes, I was thinking about the 10 different qualities that I think we need to be good bosses to ourselves and ambitious was one of them. And I love that just getting immersed in something and seeing an opportunity and going for it, regardless of kind of what your original background was, building on the expertise that you've got now. I think that's amazing.
Jessica: Yeah, I think it can be like a curse and a superpower. It's like, I'm really good at focusing and solving problems. And then sometimes I can get like completely immersed in something and lose myself in it. And you know, so like my family checks in on me and they're like, we haven't heard from you. And it's usually because I've just discovered some new capability and I'm like building some new application or something like that. Yeah.
Vikki: I mean, so tell us about what you've been building.
Jessica: Yeah, so Moxie, we started out really focusing on using generative AI for formative feedback, and I wanted to solve a problem. I'm a very pragmatic person, but the first problem I wanted to solve was a problem I have with my doc students and my clients, and it's this need that they have where they want feedback on really long academic texts. You know, we think about 40 page lit reviews or 100 page research proposals.
And typically they haven't planned ahead and they need it last minute, and I'm limited in my time and resources. And I thought, you know, can generative AI provide some sort of formative feedback on aspects of their ideas or their writing? And it can. That was the first research study I did, with an applied linguist. We evaluated ChatGPT's capabilities and limitations for automated writing evaluation, and we looked at complexity, accuracy, [00:06:00] and fluency of the writing. And we came to a conclusion then, and it's evolved, but we've kind of stuck to it, which has been interesting given how much we've learned: we already had tools available before generative AI that are really good at looking at accuracy. Think about rule-based systems like Grammarly or the spell checker in Microsoft Word; those are good at accuracy. Whereas generative AI, what we found is that if you use it appropriately, and it's not going to immediately do this well, it can really help with the complexity and fluency of the writing.
And I believe, and what I've seen, and I know to be true, is when you give it enough context and you're using your critical thinking skills when you're engaging with an AI chatbot, you can increase depth and complexity in your writing.
And so that's really what we set out to do. And so the first suite of AI tools we created were tools I was using with my doc students. So I did a study alongside of them, a participatory research study to understand, like, their experiences with it, and it was wonderful. I made it clear that the generative AI was not grading them. It was not a summative assessment. It was just meant to help them get some preliminary feedback from something that I created using the same criteria I would be using to evaluate their work and try to close the gap on their own before submitting their work to me.
And so they loved that. They felt like it gave them a bit more autonomy in the learning process, and I noticed that it was reinforcing learning, because it was using the criteria I provided it, the criteria I was teaching the students. So Moxie is really mostly about formative feedback. We don't create tools to write for the user. People don't come to Moxie to, like, generate their lit review, or if they do, they quickly realize we're not for that. It's more like you have to bring something to the table, and then Moxie acts as a collaborator or a thought partner with you to develop your work further.
Vikki: Amazing. And you said that the students liked that and they found it useful. Tell me a bit more about what they kind of got out of that.
Jessica: Yeah, so, some of the things I heard early on, and I'm now on my fourth semester with this, so every time I'm sort of tweaking and experimenting. What I started noticing in the discussion boards, and the students weren't aware of it, I became aware of it, and then we did a focus group, so then they became more aware of it, is I noticed more metacognition. So they were thinking more about their process, and I had intentionally built the chatbot to do that, to force them to think about the process, not the product, and to recall concepts. This was an academic writing course, and I was exposing them to new concepts, such as anthropomorphism, or precision and coherence in writing. And these were concepts that they were not familiar with. And so getting that feedback maybe ten times from an AI tool before submitting it to me gave them ample opportunity to see those concepts reinforced. And then I would develop the tools to encourage reflection, and then I required reflection in their papers to understand how they used it. So I started seeing these signs of metacognition and cognition, where they're recalling and using the concepts that they're learning in the discussion boards, and normally I would see that much later in the semester, so that was a good sign.
And what the students liked about it is, it was available any time of day, it never gets tired, and they're not afraid to ask it questions. Sometimes I don't know a student is struggling until I see their first assignment or until they reach out to me, but through interaction with the chatbot, the students don't have to admit what they don't know or come to me right away.
Because maybe, you know, there's that power dynamic. So they appreciated that they had the opportunity to ask the dumb questions that maybe they're too afraid to ask me. That was something they liked, but the biggest thing they appreciated was feeling like they could try to improve their work well before they submitted it to me. So it gave them a bit more control over that process.
Vikki: Hmm. I love that. So one of the things I've noticed with AI, so I've only used the bog-standard free commercial ChatGPT, and I've used it for a few worky bits, and liked some bits of it, not others. We talked a little bit before we started recording that I've used it to develop some examples to use in a workshop, for example, but then I haven't liked it when I've tried to do summaries of summaries of my podcasts into short articles and things, and I didn't like the way that worked. But one of the things that I noticed is that I learned a lot about my own thinking by thinking about how to give it enough instructions to do something well, if you see what I mean.
Because we all hopefully know by now that if you say in chat GPT, you know, write a paragraph on photosynthesis, it'll chug something out. But if you say, write a paragraph on photosynthesis that's at the level of a graduate student, including, I don't know what recent research there is on photosynthesis, bad example, but you know what I mean, you know, giving it more and more context and more and more instruction, the better quality output you get, and for me, I think a lot of the benefit is in actually learning what you're exactly asking for in the first place. And I wonder whether that's something you see with the writing and the feedback.
Jessica: Yes. There's this age-old computer science principle that I learned, which is garbage in, garbage out. And that still holds true for generative AI. [00:12:00] So the more you give it, the more likely you're going to get what you're looking for out of it. And I was actually reading something recently that I think captures this really well. All the frontier models, like ChatGPT by OpenAI, Claude by Anthropic, Gemini by Google, Llama by Meta, are trained on everything on the internet. And the internet is a decontextualized and frictionless environment, and these are general-purpose tools. And so they're good at doing just a little bit of everything kind of okay. But when you give it all of that instruction, so like my prompts are sometimes a page long.
Like I was just working on one of my prompts for synthesis, and it requires me to have a lot of clarity about exactly what I want to evaluate. And so, interestingly, through writing the prompts I've actually improved my rubrics and my evaluation criteria, 'cause it's helped me see what's unclear. And that's one of the ways I use generative AI a lot.
Just as an educator. So this is not even with Moxie, but I will take an assessment criteria or rubric or template, I'll feed it into, say, Claude, and I'll say: What's unclear? Imagine you're a first-year PhD student who has no knowledge of these concepts. Which of these instructions might be a bit vague? How do I need to elaborate? Should I give some examples and sentence starters? And so it just helps me really improve a lot of those instructions and templates and rubrics for my students.
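(A quick aside for readers who want to script the rubric-review step Jessica describes rather than pasting into a chat window: below is a minimal sketch using the Anthropic Python SDK. The model alias, file name, and prompt wording are illustrative assumptions, not Moxie's actual setup.)

```python
# Minimal sketch: asking Claude to flag unclear instructions in a rubric.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
# The model alias, file name, and prompt wording are illustrative only.
import anthropic

client = anthropic.Anthropic()

with open("rubric.txt", encoding="utf-8") as f:  # hypothetical rubric file
    rubric = f.read()

prompt = (
    "Imagine you're a first-year PhD student with no knowledge of these concepts. "
    "Read the rubric below and tell me: which instructions are vague or unclear, "
    "where I need to elaborate, and where examples or sentence starters would help.\n\n"
    f"RUBRIC:\n{rubric}"
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model alias; use whichever model you have access to
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

print(response.content[0].text)
```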
I also use it as a thought partner, and this is what I encourage my students and my clients to do. You know, we as humans have a lot of assumptions and biases. I mean, writing a positionality statement is a common assignment for a first-year PhD student, because they're learning about positionality. Well, you can brainstorm and thought-partner with a chatbot and have it point out what some of your assumptions and your biases may be by having it role-play. It doesn't mean that you take everything for truth and at face value. It gives you so many more opportunities to do those things where maybe before you had to have a human available, and not everyone has that human available to thought-partner with.
Vikki: Yeah, for sure. My brain is now pinging in about 50 different directions, but I feel like, for the purposes of me fully understanding and everyone else fully understanding, can you just tell us a bit more about how it even works and therefore, you know, what it's good at and what it's less good at?
Jessica: Yeah, no, that's a good question, especially because I think with the news hype and the media, expectations are not aligned with reality. And so a lot of people do not understand it; they just think it's magic. The easiest way to explain what a large language model does, and I'm not talking about an image generator here, I'm talking in particular about text content, is that it's like a mathematical model of communication. Artificial intelligence is an umbrella term that encompasses machine learning and deep learning, which includes neural networks, and large language models and generative AI are grouped together under that.
Ultimately, these models have been trained on vast amounts of data, so much data, numbers you've never heard of, crazy amounts of data, everything on the internet. Through learning and seeing all of that data and seeing how words are paired together, it creates a database, and we call that a vector database.
And so a word like apple could be the fruit or it could be the company. And the way it knows the difference is based on the words that are surrounding it. So when you ask a question to ChatGPT, if you say, tell me about Apple's products, it's going to know from the word "products" that Apple means the company. So it just puts words together in a vector database and it uses numbers. So it's just a mathematical model of communication.
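(To make the vector idea concrete for readers: here is a toy Python sketch with made-up two-dimensional vectors, where the average of the surrounding words' vectors decides which sense of "apple" is closest. Real models learn their vectors from data and use hundreds or thousands of dimensions; every number here is invented purely for illustration.)

```python
# Toy sketch of how surrounding words can disambiguate "apple".
# All vectors are invented for illustration; real models learn them from huge
# corpora and use hundreds or thousands of dimensions.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Two possible "senses" of the word apple, as made-up vectors.
senses = {
    "apple (the fruit)":   [1.0, 0.1],
    "apple (the company)": [0.1, 1.0],
}

# Made-up vectors for a few context words.
context_words = {
    "products": [0.2, 0.9],
    "iphone":   [0.1, 1.0],
    "orchard":  [0.9, 0.1],
    "pie":      [1.0, 0.2],
}

def disambiguate(context):
    # Average the context-word vectors, then pick the closest sense.
    vecs = [context_words[w] for w in context]
    avg = [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]
    return max(senses, key=lambda sense: cosine(senses[sense], avg))

print(disambiguate(["products", "iphone"]))  # -> apple (the company)
print(disambiguate(["orchard", "pie"]))      # -> apple (the fruit)
```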
Vikki: So what implications does that have for what people should be using it for in academia?
Jessica: Well, the first is that it's not rule-based. So up until now, we've all thought of technology in terms of software. Software is programmed and it's rule-based, so it's predictable. We can identify where something went wrong; we just go find that code and we fix it, because it's not following the rule we gave it. Generative AI is not rule-based. It produces something new and original, even if it's slightly different every time. So it's not pulling complete sentences from somewhere, so it's not paraphrasing or plagiarizing. It's generating something new each time. And it's not following rules, so it's less predictable. That's why you and I might ask ChatGPT or Claude the exact same question, and it might give us a slightly different response, which is why context is so important. I use the example of, if you were to go into ChatGPT and say, what color is the sky? and just leave it at that, it's likely to predict that the word is blue. It's just a prediction model. But if you give it context and say, the color of the sky is blank, it's raining today, it's going to predict a different word, like grey. And so that's where the context is important, but it's still not predictable.
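(Jessica's sky example is essentially conditional probability: the model picks whichever next word is most likely given the whole prompt. A toy sketch, with hand-written probability tables standing in for what a real model learns, might look like this.)

```python
# Toy "next-word prediction" with hand-written probability tables.
# A real language model learns these distributions over an enormous vocabulary;
# the numbers here are invented purely to show how context changes the prediction.
next_word_probs = {
    "the sky is": {"blue": 0.70, "grey": 0.15, "clear": 0.15},
    "it's raining today, the sky is": {"grey": 0.75, "dark": 0.15, "blue": 0.10},
}

def predict_next_word(prompt):
    distribution = next_word_probs[prompt]
    # Pick the highest-probability continuation (real models often sample instead).
    return max(distribution, key=distribution.get)

print(predict_next_word("the sky is"))                      # -> blue
print(predict_next_word("it's raining today, the sky is"))  # -> grey
```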
Like, the more you add on, the more complex the task is. So that's why they call it a black box. It's really difficult to trace any issues. Like, I was recently reading a study where they looked at, I think they used ChatGPT, and they used it to evaluate different essays by students who were white, Black, and Asian American, and it scored them all very differently, and it was stricter in its grading for Asian Americans compared to Black and white students. And you [00:18:00] can't go into the system and figure out exactly why and how that happened. That's very different from software. And so everything we know about the software paradigm, which we're all used to, does not apply with generative AI. And that's really hard, I think, for people to understand. That means it's not a hundred percent accurate.
It's not a fact checker. I hear a lot of people using it like they would Google, where you go to Google and ask a question and you expect it to link to a source and give you a fact. That's not what ChatGPT is made for. It might get it right, but it's not a fact checker. It's just predicting the next word. It doesn't have Truth. So I think that is important for people to understand. And I think that's really challenging to wrap your head around, because it's so good. It's so confident in its responses. When you look at the language, it uses a lot of boosters, which makes it sound even more confident.
So for someone who's not an expert, it comes across as the truth. And unless you question it, even then it's still predicting the next word. It's not thinking about your response or your question if that makes sense. So people using it like Google for fact checking is, I don't like to say right or wrong, but that's not the best use of a large language model.
What's also challenging is what we're starting to see with this idea of summarizing. For instance, you've probably noticed in Google now, when you ask a question, it uses Gemini and it'll summarize and attempt to answer the question for you at the top, and it will link to its sources. But large language models are not the best at summarizing. If you just tell a person to summarize, that person is going to choose what they're going to focus on in that summary. Think about summarizing a whole research article: I might really value the methods and put more emphasis on the methods. So unless you're telling it exactly what to focus on in that summary, it's making those choices for you. And so what we start to see is this simplification bias, which is really problematic in research. And I've been cautioning people about that quite a bit.
An example of simplification bias would be, especially with these AI research assistants, if you ask it a question, like you put in your research question, it'll summarize maybe the top 10 papers and attempt to answer that question. If you really go through each of those sources, a lot of times it will get it wrong. And that's because it's not great at knowing what to focus on. It's not a human. It's not looking at that research through the same lens that you would, based on your experience and your perspective and maybe the theory that you're using.
So I, I feel like people are going to get in trouble with this simplification bias and that's something that concerns me quite a bit.
Vikki: Definitely. And yeah, you see people on Twitter and things talking about tools that, you know, will take the 50 articles you need to read and put them into tabulated form.
They don't usually say the words "so you don't need to read the original", but it's kind of implied sometimes that that's why this will save you so much time. And it is really concerning that it doesn't have that element of having gone through your brain and been filtered against the things that you think are important or the things you're focusing on this time.
Jessica: Well, I want to touch on that, because I think you're hitting on something important, and it bothers me that the marketing language we're seeing is all about speed and efficiency. I don't know if Microsoft still uses this language, but when they released their first, like, generative educational product, they used the phrase "teaching speed", which is really interesting.
To me, it seems obvious, but I do find myself having to say this: as a researcher, when you're an expert in something, or you're becoming an expert, you don't go get your PhD for speed and efficiency. There's friction in learning. That doesn't mean it has to be more painful than it needs to be, but I do worry about this focus on speed and efficiency, because it does send the wrong message. I don't think that I'm conducting research any faster than I was, but that wasn't my goal. And I think that surprises people when I talk about it. The goal, the way I see it, isn't to do your research faster or your lit review faster. Maybe you can do some things a bit more efficiently. Manually, I used to build out literature matrices in Word; now I can speed that up. I can, you know, use it to just make that table, but I'm still having to read every article. So it's not saving time, it's just shifting my time. I'm just spending my time on different things. And I think if people can think about it that way, then that would be, I think, a healthier way to approach it.
I see what you mean and I hear it all the time. And I think that's where sometimes expectations are not aligned with reality.
Vikki: Yeah. But there's two different versions of reality as well, isn't there, in the sense that there's the reality of what it's actually good at and what it should, in inverted commas, be used for. But there is also the reality, and maybe this is worse in undergrads, one would hope, but I'm sure it filters through, of what people will actually just use it for. And sort of believing the truth of both of those, I think, is actually really challenging. Because we can say, you know, it's the same as we'd say to undergrads: things are a lot easier if you turn up to lectures and you talk to your tutors and so on, and then they try and do it from the video recordings anyway. We can say it [00:24:00] works a lot better and this is what it's intended for, but if it roughly does the job, then there's going to be a chunk of people for whom that's very attractive, and that kind of tempts them over. Even if they know it's not perfect, it's done.
Jessica: I mean, we're seeing that. I have mixed feelings on this. On one hand, I've been in rooms where there are conversations about how all the admissions essays now are AI-generated. One part of me wants to give humans the benefit of the doubt and say I think that's a sign of low AI literacy. I also believe that as long as the focus is on the grade and there are deadlines, there's always going to be cheating. What I think is great about this moment for educators, and I try to talk to faculty about this, is shifting our focus away from AI detection, because the detectors are very unreliable, to instead rethinking, which is a hard discussion because it requires a lot of effort and work, how we are evaluating learning. And personally, you know, for me, it's been a big shift, but process over product has helped me address some of these issues.
Now, I would not want to be an English comp professor at a university. Like, that's a whole other thing to tackle that I think is really challenging. But I do like to remind folks that writing technologies have been around for a long time. There have been concerns, like with the printing press and with the development of phones and texting, that we would lose our ability to write. And we've navigated that before, and I think we will again. We're just still very early in the process, and there's a lot of education that needs to happen in terms of just AI literacy.
Vikki: Yeah, I think one of the positives is that it's going to force us to teach things that people were perhaps expected to just implicitly pick up. Because when I think about novice academics, I'm thinking about sort of, you know, the end of undergraduate, beginning of postgraduate, that sort of level where they're doing their lit reviews and things, but they're still at the kind of beginnings of knowing how. When they're doing that in a beginnery way, it's not that different from what AI does, in my opinion. You know, they're reading stuff, and they're kind of trying to say what it says in slightly different words, and, like, summarize it more or less accurately and combine it up with summaries of other articles and try and smush that into something vaguely coherent. You know, this is with all respect; we've all been through that stage. And I think we've sort of, I don't know, maybe we've been lazy with just how things have been taught, but getting people to understand the difference between that and filtering literature through the particular lens that you're trying to look at it through, and bringing your perspectives, and comparing things that aren't usually brought together, and all those interesting things you can do to produce a good piece of work, those are the bits that AI, at the moment at least, is less good at.
But in order for students to see, or academics to see, what it can't do, they have to understand that actually the way they're doing it isn't the kind of advanced version either. Does that make any sense? Because I think, like with reading too, you know, I spend my life trying to share with people that if, when you read an article, you start at the beginning and read to the end and your goal is just to have read it, you've missed a trick. You need to be going into it with: why am I reading this? What is the purpose? Am I looking at the methods? Am I trying to understand the take-home message of it? Am I trying to see what argument they're making? Which bits of it are going to give me that?
And yeah, you'll read the whole thing at some point, but I'm a big fan of getting people to jump around in an article, reading for all those things. And so I'd love to hear your thoughts on whether there's a role, when we're understanding the limitations of what AI does, for better understanding the limitations of what we as humans do at the sort of beginnings of our academic careers.
Jessica: Yeah, I think you're exactly right, and you're on to something. For example, I think back to the first research study I ever did as a graduate student. It was my senior year, and it was all abstracts. I was reading abstracts because it was so overwhelming. I started with too broad a search. Like, how am I supposed to get through all these articles? I wasn't searching appropriately, and then it was just reading abstracts. And that's what I see now when I look at simplification bias with AI systems: a lot of the information it's pulling is from the abstract of the article, which is what we know a lot of students do.
And so I, I see the point that you're making and this is where I have a hard time answering because my answer kind of depends on the context with the student. So some of it is like the level of expertise. I'm going to go back to a discussion about writing and try to, like, connect my ideas. I do this webinar that students really like, and I talk about a top down versus a bottom up approach to writing.
And experts typically have this, like, top-down approach, because we already know the field. We come to the table with a thesis, an idea, an argument, and we go find what we need to build that argument. And therefore our voice tends to come through more in our writing. Whereas a student who doesn't yet know the field kind of has to [00:30:00] go from the bottom up and look at all this evidence, and then there's the pressure to, like, figure out what the gap is and what the question is, and they don't have their voice yet.
And then there are levels. You don't just go from novice to expert. Like, we think of Bloom's taxonomy, and you gradually improve your expertise over time. When I think about a first-year PhD student, first semester, coming in, I don't know that I want them using AI for any of these things. But if I have a student who's gone through their coursework and they've demonstrated their ability to synthesize literature, critique literature, choose an appropriate research design, then I think that's a really good point to introduce them to these tools. Now, does that mean the first-year, first-semester PhD student isn't using it? I feel like those are things that, to some extent, we can't control, other than trying to educate them and helping them understand how using AI shortcuts might be hampering their abilities and their skills later on.
I think a really interesting conversation that I'm starting to hear, and that I don't have any answers for, I mostly just have questions at this point, is around, like, what are the skills that are going to be needed? Because Anthropic's Claude, they just released a video, if you haven't seen it, about its computer use capability. It's a full AI agent system that can run on your computer where you give it a goal. You could tell it to conduct an entire lit review for you, and it'll go find all of the literature, it'll execute all the tasks by going online, locating it, storing it where you want it stored, putting the information in an Excel spreadsheet. So it is able to work across software platforms on your device, and it can execute all of these tasks in a row.
And that's already here. So we have agents already, and then how advanced are those going to be? And the questions I'm starting to hear from faculty and higher ed, some of them are big questions, are about, like, how are we going to keep up with the workforce and stay relevant, to make sure that we're producing students who have skills that are valued by the workforce when this technology is evolving so quickly? What does research even look like in five years? If AI is able to really accurately conduct a thorough lit review and come to the same conclusions as humans, what is the role of the researcher then? Are we going to have fewer experts? Will it free up our time for more creative problem solving? Will writing even be the medium for expressing these ideas?
I mean, NotebookLM already has the ability to turn an article into a conversational podcast. So those are such interesting questions that I do not know the answer to, that I feel like everyone is just speculating on. And I think anyone who claims to have all the answers is not being honest, because the reality is that even the top AI experts who are building these models still have a lot of these questions, and we don't know.
Quick interjection. If you're finding today's session useful, but you're driving or walking the dog or doing the dishes, I want you to do one thing for me after you've finished. Go to my website, thephdlifecoach.com, and sign up for my newsletter. We all know that we listen to podcasts and we think, oh, this is really, really useful.
I should do that. And then we don't end up doing it. My newsletter is designed specifically to help you make sure you actually use the stuff that you hear here. So every week you'll get a quick summary of the podcast, you'll get some reflective questions, and you'll get one action that you can take immediately to start implementing the things we've talked about.
My newsletter community also have access to one session a month of online group coaching, which is completely free, but you have to be on the email list to get access. They're also the first to hear when there are spaces on my one-to-one coaching, or when there are other programs and workshops that you can get involved with.
So after you've listened, or even right now, make sure you go and sign up.
Vikki: Yeah. So with the formative feedback, because I think that's fascinating, how do you balance up the added benefits that brings? And I don't think anyone listening will underestimate how useful that is. One of the biggest issues I deal with with my clients is their frustrations over not getting feedback, and when I coach academics, their frustrations with the requirements to be giving feedback on everything. And one of the things that I coach on quite a bit, particularly when I'm working with students, is how students can generate their own ability to evaluate things and their own ability to reassure themselves without seeking approval from their supervisor.
Now, I'm never discouraging them from getting feedback. Obviously, feedback's the fastest way to learn, and we'll talk about that more in a minute. I do see this sort of dependence on if my supervisor tells me it's good, I'll believe it, rather than being able to, like, reassure themselves or to troubleshoot their own work in a meaningful way.
And I'd be interested to hear your perspective on whether the AI stuff helps them to develop that skill to do it themselves, or whether it just makes them dependent on a bot to reassure them instead of a tutor to reassure them.
Jessica: Yeah, that's a good question that I get a lot. And I think we're still figuring out the implications of over-reliance, of using it as a crutch. This is where I think AI literacy becomes so important. Part of AI literacy is functional, just understanding capabilities and limitations. Critical AI literacy requires the user, in this case a student, to not just take all of the feedback. Sometimes it gets it wrong. It's maybe 95% on point. Sometimes [00:36:00] it leaves things out, it focuses on the wrong things. Again, it's not a rule-based system. So the way I train my students to use it, and what I say when I talk to educators about having their students use AI for formative feedback, is to teach the students right away not to believe it all to be true. They have to critically think about what that feedback is. So it's not the same as getting feedback from me, where they take it all to be a hundred percent truth.
Vikki: I mean, not if I coach them, they don't. I teach them to read supervisor comments critically as well!
Jessica: Yeah, my doc students are more what I'm referring to; they really value my feedback. So that's an interesting question that I was wondering about in the beginning: are they even going to value this feedback, because it's not me? And I found that because I had designed the tools, and they know that I added the criteria that I was using, they trusted it more than just trying to go to ChatGPT and saying, give me feedback based on this rubric. But that's more of a trust issue, not so much how they're using it.
Critical AI literacy involves not just uploading your paper, getting the feedback, and then walking away with that initial feedback and trying to implement it. The real value, and I just published an article with my students that I could share with you to link, is meaning negotiation. So meaning negotiation happens with second language learners, and I have this theory about academic writing, that it's a non-native language for everyone, and so there are elements of second language learning that we can see in those who are learning academic writing for the first time. And that's something we noticed when we studied my students' chat conversations, because they shared them with us: the students who are getting the most benefit out of it follow up. There's lots of turn-taking, asking for clarification. Can you pull another excerpt for me? Can you explain that for me? Can you create an analogy to help me understand that a bit more? Just like you would if you were learning a language, where you're asking lots of follow-up questions and for explanations. Having that meaning negotiation with the AI is a part of critical AI literacy.
I don't think all students are going to do that, but I think that's part of our job of teaching them how to use it responsibly: helping them understand what it means to, like, have a conversation and negotiate with it, not just take it all to be true and then do it. You also have to use your brain. I mean, that's why I think there's this expectation, because of the media and how it's reporting on AI, that it's some quick fix and that it's going to require less effort. But we're dealing with PhD students, and these are really complex problems that are being solved.
And so there's no shortcut around using those critical thinking skills. And so if a student is going into it thinking, I'm going to write this paper faster, you know, I say, it's actually probably going to take you longer because I'm going to make you reflect on how you use this tool. But hopefully you're learning more and you have a higher quality product at the end where you thought through all of the ethical considerations that maybe you would have missed in that first draft or, um, done a more thorough critical appraisal of the evidence than maybe you would have done in that first draft for me.
Vikki: Have you seen any differences in the emotional responses to feedback from the bot rather than from people? Because one of the things I see a lot is clients who procrastinate submitting something to their supervisor because they're worried their supervisor is going to tell them it's rubbish, and all those things. Is it just as bad? Do your students worry about the bot criticizing them, or do they care less because it's not you?
Jessica: Yeah, that was one of our findings, and this is a small sample, but we have seen validation of these findings in the literature elsewhere. The students described, and they didn't realize they were describing it, that was part of my role as the researcher, teasing that out, bypassing that, like, affective state where you can shut down because the feedback is personal. On the flip side of that, sometimes the AI would validate their ideas, and so that would stop them from ruminating and second-guessing. Like, if enough times they've gotten the feedback that this is coherent, they've achieved paragraph unity or whatever it may be, then they stop ruminating on it and their confidence increases and they move on. Yeah, my students viewed it, and we hear this all the time, as this neutral machine that's giving me something valuable.
It's not all 100 percent true, but there's something I can take away from this to improve my work. And sometimes it's validating your ideas, and sometimes it's giving critical feedback, but you don't have that emotional shutdown that you have when you get it from your advisor, because you feel embarrassed or ashamed that you produced work that got that type of criticism.
Vikki: I want to take you back to something you said earlier about the biases that there can be in anything that's based on stuff from the internet, right? How do you manage that in the context of giving formative feedback?
Jessica: Yeah, we as humans have a lot of biases, so of course these models are also going to have biases. And yeah, when you're not aware of them, there are a lot of dangers there. There are more medium and small language models coming out for specific use cases to try to address some of these issues.
It's complicated, but I'm encouraged by the growing field of research that's happening to try to understand the biases and teach others how to mitigate them. But the first step is understanding that the biases are present and reflecting on your own biases and how that might be reflected in the output.
Vikki: Yeah. Cause I mean, it's not like, when a human gives feedback on a piece of work, that it's not biased by many of the same things, even if we tell ourselves we're trying not to be. So it's not like [00:42:00] there's a kind of gold standard. I think sometimes when people are talking about all of this, there's this sort of implied gold standard of human marking, where it's, you know, accurate and replicable and all of those things.
Which we all know isn't true, but I think sometimes when it's, maybe it is the lack of AI literacy, but when it's coming from a machine, you almost, if you don't know these things, you can sort of assume that it's being more objective than it is being.
Jessica: For sure. And with what you just asked, I see a lot of different sorts of debates taking place, and I sort of sit in the middle where, no, I do not believe we should be using AI for summative assessment and grading students and having that final say on a student's grade. And some people will use that argument to say we shouldn't be using it at all. And then I come back and say, well, as humans, are you sitting down and grading the student and thinking about cultural differences in writing styles, or are you just grading according to the rubric?
So it's not a binary response. It really depends on the learning outcomes, the level of the learner. I mean, I think what's amazing is we're starting to see AI products come out that help neurodivergent learners with dyslexia, ADHD, and so there's so much potential there. And it's not a should-we-do-it-or-should-we-not; it's more of a how, and first we have to understand the capabilities and limitations before we make that decision.
Vikki: Yeah, I think I've mentioned on one of these before, but there are several tools now for people with ADHD where it'll break tasks down into their constituent parts and things. And that's a model of it that I think can be really, really useful, because it's not actually doing any of the work, but it's helping you to take what feels like an insurmountable task and break it down into chunks, which I know is something that even people who are neurotypical can find really challenging too.
And I think that's one of my take-homes with AI: I actually think that the skills we'll need to develop to use AI well are skills that would make us better academics even if we never used AI. So, hearing you talk about feedback, one of the things my clients and I often discuss, and I do supervisor training as well as coaching people, and I think this is done badly on both sides, is that students say, can you give me feedback on this 40-page lit review, and the supervisor tries to give feedback, whatever feedback is, on a 40-page lit review.
I get so many students who tell me that their supervisors will only read a polished final draft. They won't read anything before that and things, which I think is ludicrous. Sorry, supervisors, but it is. Um, and well, when I say, what feedback are you looking for? They're saying, I want them to tell me whether it's good enough or not.
And we often talk about all the different levels of feedback you can ask for, in terms of, you know, am I making an argument that broadly sounds like it makes sense, with some evidence to back it up? Does it feel like it's in the right sort of order, so that it follows one from the next? And all these sorts of things.
And so the stuff that you've had to put into designing it, and that your students are now having to use in order to ask it the right questions, feels like the sort of thing that would be really useful for students to ask their supervisors that specifically, and for supervisors to be as focused. Because presumably when you ask Moxie, do the paragraphs flow coherently from one to the next, it doesn't start correcting typos and things the way a supervisor often gets distracted.
Jessica: Yeah. Yeah. Every student is different, so I don't want to generalize, but I did find that instead of those vague requests, give me feedback, or can you pre-grade this, or can you just take a look at this really quick before I submit it, you know, in six hours, for grading, a lot of times they were coming to me and they actually already had an idea of what they were struggling with.
I kind of expected that, but I wasn't sure. And so when you're using, and this isn't just Moxie, like if you were to create your own tool using a rubric and you were to consistently have that criteria, you start to notice patterns, like, I consistently struggle with passive voice in my writing. My hope is that if students are starting to see that feedback again and again from the AI, it'll help them ask more targeted questions of their supervisor, versus just this generic request. But I understand what you're saying; I do that all the time too. I don't consult with individual clients anymore, but that was one of my approaches. I'd say, you know, you can't just ask me to read this whole thing. I need you to tell me: what were you struggling with? What's top of mind for you? And I do think AI can come in handy that way.
Vikki: I'm going to also take you back. So you started to talk about it, but I think it'd be useful to go in more depth in terms of when it's useful for people to start using AI, because one of the times where I've tried to use AI and found it quite limited is where I really wanted it to sound like me.
So I have my podcast transcripts. Everyone listening, there will be one of these. I have all my podcast transcripts and I'd love to turn them into short articles. And I started doing it myself, but I'm coming up to my hundredth episode, and a podcast ends up being about 8,000 words. So it's a substantial body of work.
And so I messed around with quite a few different versions of AI. And I even tried, you know, you see these guidelines online where you're like, here's five [00:48:00] pieces of my writing, try and edit this one into a short thing in the same style as that. And maybe I'm not giving good prompts, or maybe I'm not finding the right AI models, but in my experience it made me sound very, I call it, kind of generic internet-y.
Very sort of, this is a game changing fact, kind of thing. Um, and so I've sort of, at the moment, at least divided my life into things that I can ask AI to do. You know, I've got these four things in my fridge. Are there any recipes that build from that or whatever? Happy days. Fine. I can do that.
Versus things that, at the moment, I won't: writing my emails, writing my podcasts, writing anything that I want to sound like my voice. And one of the things it made me reflect on is that that entirely depends on the fact that, at the moment, I am capable of writing in a voice that feels like my voice.
And that's true whether I'm doing this more kind of chatty stuff or, you know, in my academic life, where I've got tons of academic publications and stuff, and I know what I sound like there too. And I just wondered what that's like for people who are at the beginnings of their career, and whether this will stop them learning what their voice is if they've only ever had an AI voice, if you see what I mean.
Jessica: Yeah, I've heard there's this debate going around: am I starting to sound more like the AI, or is it starting to sound more like me? Which is it? From the perspective of low-stakes tasks, so in your example, you know, you're summarizing transcripts, one of my most common low-stakes tasks is maybe I'm creating notes for a LinkedIn post where I'm bringing together, you know, a lot of different ideas, and I'll make, like, a long bulleted list.
So that's low stakes. There's a lot more editing involved. So I find that instead of spending all that time on the writing, I'm now doing the editing. So I don't expect it to produce something that I'm just going to copy and paste into the YouTube description or my LinkedIn posts. So for those low stakes tasks, it's like shifting my time from where I was doing a lot of the writing to now I'm doing a lot of the just quick and dirty drafting. And then a lot of my time is spent editing. So I let the AI put together all of that, like connective tissue. And sometimes I edit a lot of it out. Um, then I think about high stakes tasks in terms of what are the boundaries of when we should use it and when we can't.
And I'm just going to use some examples because I, I don't have any sort of rules of thumb, if you will, other than if you don't know how to do it yourself, like analyze data using a statistical test, then please don't use AI for it. Cause you have no way of evaluating whether it's accurate or not. So that's kind of a rule of thumb I have, especially if it's high stakes.
But from the perspective of a novice, let's say a researcher who maybe doesn't have their voice yet, I think about different scenarios. So, fear of the blank page: now you can just put your ideas into AI and brainstorm with it. You know, I think about lit review outlines. What are potential outlines given the argument I have: problem-cause-solution, you know, thematic, whatever it may be.
And then you can sort of take those suggestions and instead of starting on a blank page, you have some headings to start with. Like, I don't think that that is problematic or cheating. It requires you to have some clarity about your problem, going into it, to ask the right questions, to get what you want out of it.
I think it is problematic to rely on it to like identify literature gaps for you or choose your research design or develop your IRB application and then you don't have to think about informed consent. Like, these are really important decisions that we make in the research process. And if we want to protect the integrity of research, I think the human has to be steering, we have to be in control and the AI is just sometimes our copilot.
When it's appropriate. But I tend to just tell my students, like, do not use it if you don't know how to do it yourself. If you have no clue how to select a research design, please do not ask ChatGPT to select a research design for you. On the other hand, if you feel confident you've selected it, but maybe you don't know if you've justified it well, and you know how to ask that question, I think that's perfectly appropriate, because you've still made those decisions.
Those are still your ideas. Now, that is very different than saying, here's my entire lit review, edit it for grammar, spelling, punctuation. Because what's likely going to happen is, well, it's unpredictable, but usually what happens when you ask that is you don't get just your lit review edited for grammar. There's going to be changes, there's going to be shifts in language that you might not notice unless you're reading every word.
Vikki: Hmm. Yeah, and I think it's really, you know, you were talking about affect before, I think just remembering the role of emotions in all of this is super important because I think for us at the kind of career stage we're at, what you just said makes absolute total sense. There's things I know how to do, it's fine, I can tell whether it's done it well or not, I can tweak it, da da da. Other things, more of a copilot, totally get that. My concern, I guess, is that all of that makes absolute sense, but when a student is panicking and doesn't think they know how to do any of it, and it has to be done because there's deadlines coming and all of those things, I worry that it becomes self reinforcing, right, that because they ask too much of AI, but they [00:54:00] kind of get through, right, they're not going to get amazing anything, but it's, it's all right, it gets done.
They go to the next milestone in their PhD or whatever, but now they're even more sure they can't do it for themselves. Um, And I'm just, I just think it's going to be really important, and it sounds like you are doing this, it sounds, I think it's going to be really important to remember the, and I say this with due respect to the students because it's true of all of us, the kind of lack of rationality sometimes in the choices that we make when we're feeling pressured or when we're feeling unconfident in our own abilities to analyze these things. It's not just a kind of really cognitive cost benefit analysis that people are making decisions from with these things. Yeah.
Jessica: Yeah. Ethan Mollick calls it, like, the temptation of the button. And I think it's so true. If you haven't read it, whoever's listening, Ethan Mollick is a professor at Wharton Business School here in the US, and he's like a thought leader on generative AI and innovation in higher ed. And he has a Substack that I love. Comes out every week; I read it. One of his posts that resonated most with me was called, like, setting time on fire and the temptation of the button.
Like, are we going to have a crisis of meaning? And right at the beginning it has a screenshot of Google Docs, it was in beta at the time, where there was a little button that just said, like, help me write. And I was like, what are we going to value now? Are we even going to value writing anymore? And that's when I felt like I was having an existential crisis.
Cause I'm like, I don't know. I mean, it is tempting to push the button if you haven't done any work and it's due at midnight. And it's either that or an automatic zero. We're already seeing it. We're already seeing evidence of that. And I don't know that there is a way to prevent it because AI detectors don't work. They're not reliable at all. If you haven't used one, just try putting in some of the work that you wrote well before AI existed and you'll see that they're not reliable. So AI detectors are not the way. I think it's going to cause a real shift in how we think about how we're evaluating learning and it's not going to happen overnight and it's going to be really rocky.
There's going to be implications that we can't wrap our head around. Just like we had no idea what the implications of like social media would be on, you know, mental health and isolation. I think there's a lot of implications. We don't, we have no idea. I think what's scary is that it's out there. Students are using it. More students are using it than faculty are using it. And then how do we navigate that? And I don't have the answer. I'm like, I don't know. Yeah, I still have deadlines. I still expect my students to write their own work. I still know that they're going to be tempted to press the button because it's there.
It's very tempting. Um, but again, and maybe this is overly optimistic or naive, but I do feel that as we learn more about this technology, it'll become a lot clearer how to manage those concerns. I mean, I do believe knowledge is power. That was why I set about learning about AI: honestly, my first thought was that I felt very threatened by it.
Like, am I going to have a company? Are my doc students even going to be writing dissertations in five years? What does this even mean for my entire professional life? And I've come a long way since then. But I think there are a lot of faculty and a lot of folks who feel very threatened, and it's leading to just a shutting-down mentality, sort of ostrich head in the sand.
And, um, and we know that that is not going to work. But I think just to kind of try to answer your question, we need to talk to students. Like, I think a student's voice is really important in all of this, um, and helping us understand how to address these concerns that we're having.
Vikki: Yeah. One thing it made me think of, and you mentioned interdisciplinarity before, and I come from a very interdisciplinary background, so I love pinging off into different disciplines. One thing it made me think of a lot is all the research around illegal drugs in sport. So I was a sports scientist in my academic background, and there were a couple of people there, Professor Maria Kavussanu and Professor Ian Boardley at my old university, who do a lot of research around the decision-making process that athletes go through at the point where they decide whether they are or aren't going to take illegal drugs.
So it's performance-enhancing drugs we're talking about here. And there's some really, really interesting stuff around the sort of moral disengagement that's involved in believing that other people do it too, believing that your reasons for doing it are sufficient to justify the breaking of the rules.
And I know AI isn't always breaking the rules, so I'm not, like, doing direct comparisons, but I think there's some really interesting stuff there around how people go from being sure that they wouldn't do these things, to maybe sometimes, to now actually being regular users and relying on it for performance enhancement.
And I'm sure I'm less familiar with the kind of criminology literature and stuff, but I'm sure there'll be parallel literature around how people make and justify those sorts of decisions. And I wonder whether it would be interesting to look at the parallels, because we make decisions around where boundaries sit as to what's acceptable and what's not, and in what circumstances. Because what they're doing with the performance-enhancing drugs work is seeing [01:00:00] if they can identify young athletes that they need to intervene with earlier, trying to figure out which are the ones that are heading that way early enough that you can intervene and sort of scoop them up and bring them back to safety, sort of thing.
Jessica: Yeah, I mean, I would imagine. I went down this rabbit hole a while ago, so it's not fresh in my head, but I did start looking at the literature on plagiarism. Dr Sarah Eaton is a scholar in Calgary in Canada. She's done a lot of work on academic integrity and plagiarism, and she has this post-plagiarism framework that I find really fascinating. She asserts that at some point soon, human-AI hybrid writing will be the norm and that our standard rules of plagiarism will no longer apply, and that just got me interested in plagiarism.
So I went down this rabbit hole trying to understand plagiarism, and some of the things that I learned were around what I mentioned earlier about cultural differences. So there's inadvertent plagiarism, there's mosaic plagiarism, and then with cheating overall, a lot of it does come down to circumstance. It's very situational. And then, yes, you get away with it and then you sort of push the limits the next time, but ultimately it comes down to our incentives and our rewards.
Like if the focus is on meeting the deadline and getting the good grade, and that's what we're rewarding, then that is more likely to create that situation where you're tempted to cheat or plagiarize. And so it causes you to question the systems that are in place that are reinforcing this behavior.
And that makes me think about institutions and ethical guidelines. So what does our community, our academic community, accept or reject? And I don't think we know right now. Like we saw, I think the NSF, or maybe it was the NIH, originally said absolutely no generative AI can be used to develop a grant proposal, and then they shifted it to acknowledgement.
I would imagine that, given some time, we'll have more institutional guidance on what the standards are, the ethical standards for the academic community. But I think you're right, I think there are parallels. But in some ways I feel that higher education is due for a closer look at how we are incentivizing students to get the grade versus actually learn. I mean, in the US our standardized test scores are abysmal; reading comprehension is at its lowest ever. And so in that way I think it's good. It's forcing us to really rethink some of these systems that are in place.
Vikki: Yeah. Raising some really important, big issues. Thank you so much. This has ended up being a monster-sized episode, and I love it, and I could have carried on talking to you for so much longer. But thank you so much. You've mentioned a couple of things already that I will link in the show notes, so listeners, look out for those. But if people want to know more specifically about you and Moxie, where can they look?
Jessica: Yeah, so Moxie, our website is moxielearn.ai. I'm on LinkedIn as Jessica L. Parker. I do most of my thought leadership on LinkedIn, but we publish our research on Moxie's website. And I also have a ResearchGate profile for Moxie and our lab, because we are actively studying generative AI in research contexts.
Vikki: Amazing. And spell Moxie for people?
Jessica: M O X I E.
Vikki: Moxie. Perfect. Thank you so much for coming. It's going to be so much food for thought. People listening, let me know your thoughts. You can reply to my newsletter. If you're not signed up for my newsletter, make sure you are. You can just go to my website, thephdlifecoach.com, or you can find me on Instagram at The PhD Life Coach. Tell me what you're thinking. Are you using AI? What scares you? What do you want to know more about? And who knows, we might talk about it in a future episode. Thank you so much for coming, Jessica. Thank you everyone for listening, and I will see you next week.
Thank you for listening to the PhD life coach podcast. If you liked this episode, please tell your friends, your colleagues, and your universities. I'd appreciate it if you took the time to like, leave a review, give me stars, stickers, and all that general approval as well. If you'd like to find out more about working with me, either for yourself or for people at your university, please check out my website at thephdlifecoach.com. You can also sign up to hear more about my free group coaching sessions for PhD students and academics. See you next time.