A survey by PRWeek, an industry-leading source of news and analysis, found that the top five tasks in which AI assists communicators are speech-to-text transcription, data analysis, SEO improvement, research, and crafting press releases.

In another study, by consultancy Sandpiper and PRovoke Media, up to 86% of global communicators said they are optimistic about advances in generative AI. However, 85% of respondents are concerned about potential legal and ethical issues arising from these technologies.

This edition of Muse delves into some of the ethical implications of using AI in communication.

Factual errors and misinformation

While content generated by natural language processing tools may appear polished and elaborate, it is likely to be fabricated if the AI cannot access the up-to-date information needed to answer a given prompt.

This is often called ‘hallucination’, whereby AI generates seemingly plausible yet factually incorrect or irrelevant outputs. It stems from the model’s lack of real-world understanding and the limitations of its training data.

In addition, seemingly reasonable AI-generated content often lacks credibility because the tools do not cite the sources on which they base their outputs.

Consequently, if AI-generated content is not screened and edited, communicators could disseminate factually incorrect or unverified information to stakeholders, inadvertently misinforming audiences. Such an ethical lapse could erode trust between organisations and their audiences.

Inherent biases in algorithms

PwC, a global professional services provider, has noted that natural language processing (NLP) models have been found to exhibit racial, gender and disability biases.

Natural language processing systems are trained in part on user queries covering numerous topics, and those queries can carry users’ stereotypes and discriminatory attitudes. According to the online media company Business Insider, this may include sexism, racism and a failure to reflect the progress of social movements.

As a result, generative AI may reproduce oversimplified stereotypes and exclusionary language in its responses to certain inputs. Failing to recognise these biases may lead to content that excludes or even offends specific audiences, putting your professional relationships at risk.

Unintended privacy breaches

Because ChatGPT and other AI tools collect user inputs to improve their outputs, users who unknowingly key in sensitive data or third-party information may breach confidentiality.

In a survey conducted by Ragan Communications, a leading provider of communication intelligence, professional development and training, and The Conference Board, a global non-profit think tank, data security and privacy ranked as the third most significant challenge in AI adoption. In addition, nearly half of the communicators surveyed are concerned about the potential disclosure of proprietary knowledge.

These ethical dilemmas necessitate guidelines on how communicators and business leaders can maintain ethical practices while integrating AI for productivity. Continue reading our next article to find out more.