5 Factors for an Effective A.I. Policy

Whether you know it or not, whether you’ve approved it or not, your employees are already using A.I. tools.

You can demand they stop…or you can help them do it better.

How to Effectively and Responsibly Use A.I. in Your Business

The Age of Artificial Intelligence (A.I.) is upon us. There are free tools and paid tools, built-in tools, and add-ons. Businesses are already using it to make their teams more efficient. And according to recent surveys, your employees are, too, officially or not.

There are obvious benefits to A.I., along with some risks and plenty of uncertainty. So how do you move your company forward safely and effectively?

At Infinity, the answer is simple: create a policy.

Policies for us are helpful guidelines. We use them to lay out the What, Why, and How of practically any situation.

They can be as loose or restrictive as you like, and they are living documents, meant to be updated and revised as we learn and do more.

What Should You Include in Your Company’s A.I. Policy?

The following 5 factors will give you a framework to help your business use A.I. effectively and efficiently. There may be much more you want to include depending on your level of familiarity with A.I. or your specific compliance requirements, but these 5 will get your team up and running with some helpful boundaries.

1. Your Overall Goal

Before you write a word of the policy, think about what you’re trying to accomplish. What is the goal?

  • Are you more concerned with limiting use or encouraging experimentation?
  • Do you want to form a test team or committee to vet A.I. tools and processes before rolling out to employees?
  • Do you want to address requests on a case-by-case basis?

There’s no one ‘right’ way to handle A.I. Every company and potential use is different. The point is to pick a place to start and go from there. Ignoring it or waiting will only put you at risk of serious problems later.

2. The Human Element

Once you have your goal set, think specifically about your team.

In which roles or areas of their work is A.I. assistance acceptable? Maybe more importantly, in which is it not?

And the big, bold, unmissable factor here: Make sure your employees know that ultimate responsibility for anything A.I.-generated or revised or supplemented still falls on them.

As an example of something you’re probably already familiar with, think of spellcheck. You may not consider that A.I., but it is a form of it. And while it can point out words it thinks you have spelled incorrectly or sometimes used incorrectly, it can also mess up names and change your intended meaning if you let it. That’s why there is a built-in review step for you to go through the suggestions. You have final say. You have ultimate control.

To put it another way:

ChatGPT will not get fired if an inaccurate proposal gets sent and accepted by a client at a loss of thousands of dollars.

Bard, or Gemini, will not get suspended without pay for an inappropriate blog, email, or report summary that goes to all your stakeholders.

Canva, Adobe Express, or Microsoft Designer will not get fined for copyright abuse because their images were not original.

A human must review and approve his or her own work. A.I. tools can save time, improve quality, and augment ideas or concepts, but they are not perfect and cannot be held responsible. Your employees must be.

3. Data Security

Ensure you talk about the information your employees may and may not enter into an A.I. tool. Remind them that there should be no expectation of privacy or security with any publicly available A.I. tools. 

For example, you might allow your team to use a specific company’s name in order to pull information about that company for a proposal. But you might not want your team to upload a list of all your clients’ names because you don’t know how or where it might be stored or accessed by others.

And it should go without saying (but it never hurts to include in a policy) that absolutely no personally identifiable information or proprietary company information should be entered into public A.I. tools.

Additionally, if you learn about particularly risky or unreliable tools that you don’t want your employees using for any reason, include those in your policy. You can go further and ask your I.T. team to block access to various websites. Ideally, you will include your I.T. team in the larger A.I. discussion so they can provide you with guidance from their experience and possibly even help you with this policy.

4. Unintentional Bias

You can tie this in to the human factor above or leave it on its own, but it should definitely be included.

A.I. has shown its tendency for bias based on the information and code used to build it. That bias can be hard to spot and can result in distorted and potentially harmful outputs. It’s important to look out for any kind of bias and, as in the human element above, to remember that A.I. is not perfect. It needs to be double-checked.

This excerpt from an IBM article on bias shows a clear example:

As a test of image generation, Bloomberg requested more than 5,000 AI images be created and found that, “The world according to Stable Diffusion is run by white male CEOs. Women are rarely doctors, lawyers or judges. Men with dark skin commit crimes, while women with dark skin flip burgers.” Midjourney conducted a similar study of AI art generation, requesting images of people in specialized professions. The result showed both younger and older people, but the older people were always men, reinforcing gender bias of the role of women in the workplace.

What can you do about bias? That’s the larger question many are working to answer.

At an individual level, we can work to avoid bias by ensuring our data is complete and fully representative, by working in well-rounded teams that can identify as a whole what one or two individuals might miss, and by continuously checking both our information in and our results out.

Your policy should include a reminder and warning about potential bias, as well as how you expect your team to handle it.

5. Documentation and Communication

An A.I. use policy itself is a strong start to documentation. But depending on how you want your team to use A.I., there may be a lot more to add to it.

Part of your policy may include how you want your team to document the tools and ways they use A.I. It may include what they felt worked well and what didn’t. Maybe how much time it saved, or what new ideas or unexpected results came out of it.

Keep in mind, your policy should be a living document. Its true effectiveness will only come from your employees sharing what they learn and avoiding what others have already found doesn’t work well.

 

As we’ve mentioned in related articles about this, A.I. is here to stay. As a leader in your business, it falls to you to shape how your company and employees will use it…to your advantage or to your detriment.

If you have any questions about A.I. or want help with your tools or policy, please reach out. We’d love to speak with you.