
Article: Monday, 1 July 2024

There’s a mix of opinions about using generative artificial intelligence (GenAI) for work. Is it acceptable to use a bit of smart help or not? It appears that people often think they use it for inspiration (which is OK), but that other people rely on it to do all the work (which is not considered OK) – and this bias has implications for marketing, policymaking and education. The evidence comes from research by RSM associate professors Mirjam Tuk and Anne-Kathrin Klesse and PhD candidate Begum Celiktutan in their paper ‘Acceptability Lies in the Eye of the Beholder: Self-Other Biases in GenAI Collaborations’, which will be published in a forthcoming issue of the International Journal of Research in Marketing.
 

“The answer to the question of whether GenAI usage is acceptable is not clear-cut. Instead, it depends on whether we are making this decision for ourselves or for others. Interestingly, it seems acceptable to use GenAI for ourselves but less so for others,” says Dr Mirjam Tuk. “This is because people tend to overestimate their own contribution to output, such as application letters or student assignments, when they co-create it with GenAI, because they believe they used GenAI only for inspiration rather than to outsource the work.”

 

Double standards 

Since the launch of ChatGPT in 2022, and the other dialogue-based platforms that followed it, the acceptability of using GenAI has been heavily debated, with some people highly in favour and others highly against it. The work by Celiktutan and colleagues speaks directly to this debate by documenting a systematic difference between the way people evaluate their own work with GenAI and the way they evaluate other people’s. The results suggest that creators (e.g. students) assign more credit to themselves when co-creating work (e.g. a student assignment) with GenAI than evaluators (e.g. teachers or other students) assign to them for the same work. In fact, people seem to apply double standards when judging their own co-creations with GenAI and those of others: lenient on themselves but harsher on others.


How much work did ChatGPT do?

The researchers studied almost 5,500 participants across nine studies covering a variety of tasks, ranging from job applications and student assignments to brainstorming and creative tasks. Half of the participants created (or imagined creating) an output with ChatGPT, whereas the other half considered someone else creating the same thing, potentially with the help of ChatGPT.

Afterwards, all participants estimated how much ChatGPT had contributed to the output, as a percentage. In some studies, the researchers also measured how acceptable participants thought it was to use GenAI for that task, and used a ChatGPT detector to gauge how accurately participants assessed the contributions to the output.

 

For inspiration or outsourcing? 

When evaluating their own output, participants thought they contributed around 54 per cent to the output, and ChatGPT contributed 46 per cent. 

But when evaluating others’ output, participants thought the other person contributed only about 38 per cent while ChatGPT contributed 62 per cent. These differences in inferred contribution stem from differences in inferred usage behaviour: people perceive themselves as using GenAI for inspiration, but think that other people use it to outsource the task. These inferences then prompt people to think it is perfectly acceptable to use GenAI themselves, but less so for others.

 

Implications from the research

Uncovering this bias has the following implications for marketing and sales professionals, for policymaking, and for education:  
 

Marketing and sales

Transparency in the use of GenAI is crucial to prevent clients from questioning the value of marketing professionals, who may otherwise be perceived as overly reliant on GenAI.

Policymaking

Policymakers need to consider human evaluators' biases in addition to the capabilities and data of GenAI.

Education

Awareness of the self-other bias is essential for both educators and students. Clear guidelines for the use of GenAI should reflect the perspectives of creators and evaluators.

The value of professionals 

Using GenAI is common in marketing and sales; its appeal is increased efficiency in marketing tasks. The downside is that it could call into question the added value of service providers such as marketing research agencies, influencer marketers and content marketers. Clients may assume that these providers use GenAI to assist with, or even completely outsource, their tasks, thus questioning the value of employing them.

To prevent such inferences, marketing professionals may want to be transparent about the value they bring to the industry as this can highlight their competitive advantage above and beyond GenAI tools.   

For policymakers, it is not enough to focus only on the outcomes that GenAI can produce and its underlying data and algorithms. It is also important to consider the humans evaluating the output. The research team suggests: “Policymakers should be aware that their stance in the debate may differ depending on whether they are creators or evaluators.”

The findings also offer valuable insights for education:  “Both educators and students need to be aware of the bias we documented, and understand that it influences people’s perceptions of acceptability – that outsourcing tasks to ChatGPT is perceived as unacceptable but using it for inspiration is more acceptable.” 


 
Creators and evaluators 

Formulating clear guidelines for using ChatGPT should take into account the perspectives of both creators and evaluators, as well as the different types of behaviour in using it.

“Overall, our results stress the importance of transparency for everyone involved. Transparency leaves less room for interpretation and also highlights the added value of a human creator,” commented Dr Tuk. 

 

dr. M.A. (Mirjam) Tuk
Associate Professor of Marketing
Rotterdam School of Management (RSM)
Erasmus University Rotterdam
Dr. A. (Anne-Kathrin) Klesse
Associate Professor
Rotterdam School of Management (RSM)
Erasmus University Rotterdam
B. (Begum) Celiktutan
PhD Candidate
Rotterdam School of Management (RSM)
Erasmus University Rotterdam

Erasmus Centre for Data Analytics (ECDA)

This article was made within the Psychology of AI lab. It examines the human aspects of AI adoption and interpretation and is part of the Erasmus Centre for Data Analytics (ECDA). This interdisciplinary group studies consumer acceptance of AI, employee beliefs about technological replacement, and analysts' interpretations of data.


Your contact for more information:
Danielle Baan

Science Communication and Media Officer

Erika Harriford-McLaren

Corporate Communications & PR Manager
