
Artificial intelligence – consider this


May 19, 2023

Artificial intelligence (AI) is a rapidly emerging and evolving technology, and its use in business is becoming increasingly prominent as a way to improve efficiency and productivity.

It has been integrated into a variety of technologies that monitor computer networks, perform complex data analysis, provide customer support and generate business documents at speeds no human could hope to achieve.

While AI-powered platforms and solutions can help your business stand out, here are a few things to consider before jumping right in:

Is the information you enter into an AI-powered content generator considered confidential or proprietary, and does the provider offer a reasonable expectation or guarantee of confidentiality and privacy?

Imagine this scenario – you need help drafting a business plan or new product rollout, so you visit an online AI-powered content generator and enter your proprietary business or product information.

You receive a well-drafted business plan or product rollout document in return, and a short time later, you learn your idea has been rolled out by a would-be competitor.

The online software provider offered no guarantee of confidentiality and placed the information you entered for sale online.

Does your company own the content generated by artificial intelligence, or does it become part of the public domain?

According to Naruto v. Slater – a 2015 court case over the ownership of a selfie taken by a crested black macaque – only humans can hold a copyright; a monkey like Naruto cannot, and by extension, neither can a machine.

While this particular court case doesn’t definitively address ownership of AI-generated content, it demonstrates that the matter of content ownership is far from settled.

Given the quality of AI-generated text, audio and video content (think deepfakes), could there be a bad actor on the other end of that email, voicemail or video call requesting some form of action?

While not commonplace yet, cyber-trickery with audio or video deepfakes is a real problem.

In early 2020, deepfake audio and forged email were used to persuade an employee of a global bank that a bad actor was a director of a customer company.

The employee subsequently transferred funds at the impersonator’s request.

Scenarios like these can easily encourage someone to view artificial intelligence-based technologies in a negative light.

However, there are steps you and your co-workers can take to help mitigate AI-related hazards, including:

Regularly remind employees to be mindful of what they enter into AI platforms, because it may become public information – especially if they’re using free accounts.

Encourage employees using AI-powered content generators to put their personal touch on the work after receiving AI-generated output.

Their personal touches to the work may provide a legal argument for ownership should a challenge arise.

Continually educate every employee about the truly legitimate appearance of AI-generated email, voice and video deepfakes.

Train employees to consistently ask whether they were expecting a request and to independently verify requests for action through a trusted channel.

Sometimes it’s obvious, but other times it’s not.

Share real-life examples and run occasional tests with employees so that verification becomes second nature.

Along similar lines, consider educating vendors and service providers about which channels your company will use to make requests, and ask them to confirm requests through a trusted channel.

Outside of the human element:

Keep your IT security measures up to date and employ a vulnerability management system to prioritize remediation of threats based on anticipated impact.

Maintain an up-to-date email filtering system to remove as much malicious mail as possible before employees even see it.

Employ multi-factor authentication.

Consider using an endpoint detection and response tool, which typically uses some form of artificial intelligence or machine learning to eliminate threats.
While fears of cyber-intrusions, attacks and AI deceptions may keep your IT employees up at night, an ounce of prevention (and awareness) is worth a pound of cure.

Barb Streubel is the chief information officer for Green Bay-headquartered KI, which manufactures innovative furniture and movable wall system solutions for education, healthcare, government and corporate markets.

