The Rise of AI Chat Misconduct: When Employees Face Harassment Through Digital Tools


AI chat misconduct refers to the harmful, unauthorized, or unethical use of AI chatbot tools such as ChatGPT, Google Gemini, and Microsoft Copilot. While these tools can help employees complete tasks at work, they can cause real harm when misused.

California has some of the nation’s strongest anti-harassment laws, including the Fair Employment and Housing Act (FEHA), and they extend to the digital workplace. If you’re harassed through chat, email, Slack, Teams, or other workplace collaboration tools, California law protects you. Deepfake content and digital harassment are treated as seriously as in-person misconduct.

Knowing your rights as an employee is essential. Explore the changing landscape of AI chat misconduct and digital harassment to see what you and other employees can and cannot do in your workplace’s digital environment.

Understanding AI-Driven Harassment in the Workplace

Identifying AI-driven harassment is essential. This form of harassment happens in four key ways.

Automated Trolling

Automated trolling, also known as dogpiling, occurs when someone uses AI to generate dozens or even thousands of fake reports or hateful messages, with the goal of overwhelming or silencing the victim.

Imagine a pregnant manager reprimands a salesperson. The worker tells other salespeople, and soon they’re asking ChatGPT for “Top 10 Mean Jokes About Pregnant Women” lists and sharing them in a work email chain.

Eventually, the manager sees the jokes and feels uncomfortable, which affects her work. She steps back from her job duties because she doesn’t want to be face-to-face with her team. That’s an example of how automated trolling works.

Deepfakes

Deepfakes are audio clips, images, or videos that are not real but appear to be. In workplace harassment, this fake media is intended to humiliate or intimidate the victim.

Consider the example of the pregnant manager. An employee uses AI to create a deepfake video depicting her in a sexually explicit way and posts the link on the company’s Instagram page. It’s not real, but it appears real, embarrasses her, and quickly spreads to clients who don’t realize it’s fake, harming her reputation.

Manipulated Content

AI can be used to falsify records or to make it appear that a worker did something inappropriate. Manipulated content may result in unfair disciplinary action against an employee.

Circling back to the pregnant manager: she leaves a voice message asking the worker to come to her office. The salesperson uses AI to splice in inflammatory remarks that sound exactly like the manager, then emails the doctored message to human resources. HR reprimands the manager for remarks she never made and adds a disciplinary strike to her record.

Voice Impersonation

Voice impersonation involves using AI to clone another worker’s voice and then using that cloned voice to send inappropriate messages to others. The previous example illustrates how incredibly damaging voice impersonation can be.

It can also be used to sow distrust with clients. A fake message sounds real to an important client, who takes offense and refuses to work with the company unless the manager is fired. The manager loses her job over something she never said.

How AI-Driven Harassment Impacts Victims

The impact of AI-driven harassment is substantial. If a company hasn’t updated its employment contracts or handbooks to cover AI-specific misconduct, employees may be left unsure about what constitutes harassment and what does not.

Because AI is easy to access and its capabilities keep expanding, workers keep finding new ways to use it. Clear policies help deter misconduct, but not every company has taken the time to update them, leaving gray areas.

AI-driven harassment can damage the victim’s mental health, reputation, and even career. While deepfakes and voice impersonations are eventually proven fake, the damage is already done. There will always be people who refuse to believe it was fake.

S. 146, aka the Take It Down Act, was signed into law in May 2025, and its takedown requirements go into effect in 2026. The law requires platforms to remove deepfakes within 48 hours of notification. That means fake audio, images, or videos can stay online for up to two days, which is plenty of time for harm to happen.

In 2024, a Maryland high school athletic director created deepfake audio recordings of the principal making derogatory, racist comments. The school district removed the principal even though forensic experts found the recordings were fake. The community turned on him, adding to his distress.

The principal sued the district and the athletic director for defamation, libel, and slander. The athletic director was sentenced to four months in jail. The principal and school district settled several months later.

What Employers Should Do

Your employer is responsible for harassment by managers or supervisors. They’re also responsible for the conduct of coworkers and third parties, like contractors or vendors, if they knew (or should have known) about the situation and didn’t act promptly and appropriately.

Employers should include updated guidelines on AI-driven harassment and digital misconduct in employee handbooks. California’s Civil Rights Department adopted AI regulations in 2025, which took effect on October 1, 2025. One rule governs the use of AI and automated decision systems (ADS) in employment matters, such as hiring and promotions.

The Equal Employment Opportunity Commission has likewise determined that harassment and discrimination policies extend to digital spaces such as email, instant messaging, virtual work environments, official social media accounts, and other technologies and services used by companies, including AI and other digital platforms. California incorporates digital workplace environments into its laws against workplace discrimination and harassment.

California also regulates the use of ADS and other AI tools for rating performance, monitoring communications, and judging a job applicant’s qualifications. If an AI tool discriminates against a protected class, its use may violate the FEHA. It can also create a hostile work environment, which is likewise against the law.

Actions to Take as a California Employee

What should you do if you believe your rights have been violated and your employer hasn’t responded appropriately? Take screenshots of any harassing content, images, or messages, and note the date, time, and user ID. Act quickly to capture evidence, as it can be deleted easily. Additionally, keep a journal of each incident.

Report the incident to your human resources department or the party listed in your employee handbook. Do this in writing so you have proof that you filed the complaint. Keep paper copies, and back up digital copies on a flash drive or in a private account on a cloud service like Dropbox.

Next, contact Shegerian & Conniff or fill out the online contact form. Our employment law attorneys evaluate your workplace harassment complaint during a free consultation and advise you on your next steps.
