The AI use-cases are mounting up for small and medium enterprises (SMEs). But with the AI Ethics community laser-focused on big-tech and government, a big chunk of business is left out of the discussion. So, if you’re leading an ambitious SME or start-up looking to use AI, this post is for you.
If you’ve been attending AI conferences recently, you’ll probably have noticed sessions on AI Ethics. The story is straightforward: AI, and in particular machine and deep learning (ML & DL), brings risks. For example, unintentionally disadvantaging or excluding groups, or being unable to explain your decisions (even to yourself). None of us wants that, but you probably found the discussions high-level and lacking concrete steps on what you can do (unless you happen to be a national government or globe-spanning big tech).
Let’s help you out.
Below are 6 golden tips on what you can do if the tools supporting your business are bundling AI, or you’re looking to see how data and machine learning can give your company the edge.
No degree in ethics required.
1. Get the knowledge
You won’t get far if you don’t understand the basics of responsible use. Make sure you and your leaders have a ground-level understanding of the technology and the key risks.
Important: AI is NOT just smarter computers. Different type of tech, different type of risk.
You need to know: the concepts of data privacy (what it is, why it matters); the basics of ML/DL technology (how it differs from regular IT); “AI Ethics” concepts such as fairness, bias and transparency; and how to manage the risk. TechInnocens offers a Masterclass that fits the bill perfectly.
2. Know your obligations
Harms from AI/ML technology are context-specific. The same data, model or system can have profoundly different effects on people depending on where and how it is used. Some industries will have existing regulation on data use and model risk. Most will have specific regulations and obligations around interactions with customers and the wider public. Knowing your obligations well, across your leadership team, will allow you to identify risk earlier. Be aware that regulations will never answer every question on appropriate use, so don’t rely on them alone. Which brings us to the next point.
3. Involve your market
Insights into what you can do are not the same as agreeing what you should do. As perceptions on acceptable use vary according to the culture, beliefs and values of the affected individual, it makes absolute sense to seek input from varied social groups and backgrounds to see what they think. You might not have the finances to run major market analysis and surveys, but you can run fairly light-touch scenarios with current customers.
Wondering if that new technology is going to be well accepted? Try asking. You may be surprised at the opinions you receive on no-no’s and trade-offs.
4. Set your values, your red-lines, your discussion points
Decisions on appropriate use need to represent your company and what you and your team stand for. So the discussions and decisions are best made with your team, not in isolation. Gather some example use-cases, some potential solutions and the input from external voices, and hold a session to land your principles around the use of the tech. Where are your no-go’s? Where are you cautious? What needs close attention? Where are you happy for people to make their own calls? Having spent time understanding your regulatory and legal obligations, and having asked outsiders what they think, you’ll be well positioned to turn this into a Code or Policy that you’re proud to stand by.
5. Improve your procurement due-diligence
Your AI tech is likely to come from third parties. Whether it’s purchased, bundled, developed for you, or a service that you use, this is where you need to turn your code into action.
Form a list of questions to ask vendors, for example: how was the model created? Where’s the data from? What trade-offs did they make to get it working? Can we see how it works? What happens if things go wrong? Make these questions part of your due diligence when selecting a new product. If your tech lead feels confident, you can write these yourself. If not, there are plenty of published question sets you can re-use, or TechInnocens can help you out when things are getting lost in translation to tech.
You should also run the questions against your current systems to find risks. Triage these into:
- Aligned – Systems and vendors that align to the code you developed.
- Mitigate on opportunity – Where harms can be avoided, but only through active governance that means the cost/benefit no longer stacks up. Fix or replace these when the opportunity arises.
- Act Now – Where the technology crosses a red line, or there is a serious risk of harm being caused. These need dealing with as a priority.
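The questions-then-triage flow above can be sketched as a few lines of code. This is purely illustrative: the questions, the yes/no findings and the decision rule are hypothetical examples, not an established standard, so adapt them to your own code of practice.

```python
# Illustrative sketch of vendor due-diligence triage (hypothetical example).
# The questions and decision rule below are assumptions for demonstration,
# not a standard - tailor them to the code/policy you set in step 4.

VENDOR_QUESTIONS = [
    "How was the model created, and where is the training data from?",
    "What trade-offs were made to get it working?",
    "Can we see how it works (explainability)?",
    "What happens if things go wrong?",
]

def triage(crosses_red_line: bool, serious_harm_risk: bool,
           needs_active_governance: bool) -> str:
    """Map due-diligence findings onto the three triage categories."""
    if crosses_red_line or serious_harm_risk:
        return "Act Now"                    # deal with as a priority
    if needs_active_governance:
        return "Mitigate on opportunity"    # fix when cost/benefit allows
    return "Aligned"                        # matches your code; no action

# Example: a system that needs hands-on governance but crosses no
# red lines lands in the middle category.
print(triage(crosses_red_line=False, serious_harm_risk=False,
             needs_active_governance=True))  # Mitigate on opportunity
```

Even kept this simple, writing the rule down forces you to agree, as a team, what counts as a red line before a vendor conversation starts.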
If a system or vendor gets flagged, don’t assume you need to go elsewhere or go without. Have a chat. Many vendors include functionality to disengage the AI, or can help you in other ways.
6. Keep the governance light-touch, pro-active and aligned to your values
Don’t immediately dive into ethics committees, Chief Ethics Officers, hiring AI Ethicists and the like. Keep the governance active, participatory and focused on resolving dilemmas. By keeping the conversation going with your team on acceptable use, you’ll be better placed to identify risks, hear from the organisational edge on actual impact, and navigate a course that everyone buys into. Avoid tick-lists that don’t provide insight into the risk. For bigger companies, identify a “go-to” role who can field questions from staff and guide projects on the topic. If yours is a smaller organisation, include the risks in leadership meetings and set aside all-hands time when big decisions need making. Bring in external parties when topics need more focus or understanding.
By taking a few steps to know your position, understand the technology and put some guiderails in place, you’ll radically improve your chances of staying onside with customers and the public when using frontier technology. It’s a process, not a one-off effort, and like most aims it needs ongoing attention to succeed. But, importantly, it doesn’t need a huge outlay on boards or new teams. Keep it light-touch, authentic and transparent.
The whole set of steps can be set up in a few weeks, taking only a few hours of your time. It won’t guarantee you catch every misuse or mishap, but it will put you back in the driver’s seat for managing this sort of risk. If you’d like a bit more help getting set up, TechInnocens has got your back. Drop us a mail or get in contact: whether it’s a conversation to get you started or you’d like us to take the task off your hands, we can be there.