SMEs and Start-ups: Use AI and keep the trust

The AI use-cases are mounting up for small and medium enterprises (SMEs). But with the AI Ethics community laser-focused on big-tech and government, a big chunk of business is left out of the discussion. So, if you’re leading an ambitious SME or start-up looking to use AI, this post is for you.

If you’ve been attending any AI conferences recently, you’ll probably have noticed sessions on AI Ethics. The story is straightforward: AI, and in particular Machine Learning and Deep Learning (ML & DL), brings risks. For example, unintentionally disadvantaging or excluding groups, or being unable to explain your decisions (even to yourself). None of us wants that, but you probably found the discussions high-level and lacking concrete steps on what you can do (unless you happen to be a national government or globe-spanning big tech).

Let’s help you out.

Below are 6 golden tips on what you can do if the tools supporting your business are bundling AI, or you’re looking to see how data and machine learning can give your company the edge.

No degree in ethics required.

1. Get the knowledge

You won’t get far if you don’t understand the basics of responsible use. Make sure you and your leaders have a ground-level understanding of the technology and the key risks.

Important: AI is NOT just smarter computers: different type of tech, different type of risk.

You need to know: data privacy concepts (what it is, why it matters); the basics of ML/DL technology (how it differs from regular IT); “AI Ethics” concepts such as fairness, bias and transparency; and how to manage risk. TechInnocens offers a Masterclass that fits the bill perfectly.

2. Know your obligations

Harms from AI/ML technology are context specific. The same data, model or system can have profoundly differing effects on people depending on where and how it is used. Some industries will have existing regulation on data use and model risk. Most will have specific regulations and obligations related to interactions with customers and the wider public. Knowing your obligations well, across your leadership team, will allow you to better identify risk. Be aware that regulations will never answer all the questions on appropriate use, so don’t rely on them alone. Which brings us to the next point.

3. Involve your market

Insights into what you can do are not the same as agreement on what you should do. As perceptions of acceptable use vary with the culture, beliefs and values of the affected individual, it makes absolute sense to seek input from varied social groups and backgrounds. You might not have the finances to run major market analysis and surveys, but you can run fairly light-touch scenarios with current customers.

Wondering if that new technology is going to be well accepted? Try asking. You may be surprised at the opinions you receive on no-no’s and trade-offs.

4. Set your values, your red-lines, your discussion points

Decisions on appropriate use need to represent your company and what you and your team stand for. So the discussions and decisions are best made with your team, not in isolation. Gather some example use-cases, some potential solutions and the input from external voices, and have a session to land your principles around the use of the tech. Where are your no-go’s, where are you cautious, what needs close attention, and where are you happy for people to make their own calls? Having spent time understanding your regulatory and legal obligations, and having asked outsiders what they think, you’ll be better positioned to make this a Code or Policy that you’re proud to stand by.

5. Improve your procurement due-diligence

Your AI tech is likely to come from third parties. Whether it’s purchased, bundled, developed for you, or a service that you use, this is where you are going to need to turn a code into action.

Form a list of questions to ask vendors, e.g.: how was the model created? Where’s the data from? What trade-offs did they make to get it working? Can we see how it works? What happens if things go wrong? Make these questions part of your due diligence when selecting a new product. If your tech lead feels confident, you can write these yourself. If not, there are plenty you can re-use (e.g.), or TechInnocens can help you out when things are getting lost in translation to tech.

You should also run the questions against your current systems to find risks. Triage these into:

  • Aligned – Systems and vendors that align to the code you developed.
  • Mitigate on opportunity – Where harms can only be avoided through active governance, but the cost/benefit of immediate change doesn’t stack up; address these when the opportunity arises.
  • Act Now – Where the technology crosses a red line, or there is a serious risk of harm being caused. These need dealing with as a priority.
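The triage above can be sketched as a simple script. This is a minimal illustration only, not a real assessment tool: the questions, category names and decision rules are all assumptions made for the example.

```python
# Minimal sketch of triaging current systems/vendors against a company code.
# The questions and the rule ordering below are illustrative assumptions.

RED_LINE_QUESTIONS = [
    "Does the system cross any of our stated red lines?",
    "Is there a serious risk of harm being caused?",
]
GOVERNANCE_QUESTIONS = [
    "Does avoiding harm depend on active governance on our side?",
]

def triage(answers: dict[str, bool]) -> str:
    """Classify a system or vendor into one of the three triage buckets."""
    # Red-line or serious-harm answers take priority: deal with these first.
    if any(answers.get(q, False) for q in RED_LINE_QUESTIONS):
        return "Act Now"
    # Harms avoidable only through active governance: fix when convenient.
    if any(answers.get(q, False) for q in GOVERNANCE_QUESTIONS):
        return "Mitigate on opportunity"
    return "Aligned"

# Example: a vendor whose harms need active governance but crosses no red line.
print(triage({GOVERNANCE_QUESTIONS[0]: True}))  # Mitigate on opportunity
```

In practice the value is in asking the questions consistently, not in the automation; a spreadsheet with the same three buckets works just as well.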


If a system or vendor gets flagged, don’t assume you need to go elsewhere or go without. Have a chat. Many vendors include functionality to disengage the AI features, or can help you in other ways.

6. Keep the governance light-touch, pro-active and aligned to your values

Don’t immediately dive into ethics committees, Chief Ethics Officers, hiring AI Ethicists and the like. Keep the governance active, participatory and focused on resolving dilemmas. By keeping the conversation going with your team on acceptable use, you’ll be better placed to identify risks, hear from the organisational edge about actual impact, and navigate a course that everyone buys into. Avoid tick-lists that don’t provide insight into the risk. For bigger companies, identify a “go-to” role who can field questions from staff and guide projects on the topic. If yours is a smaller organisation, include the risks in leadership meetings and set aside all-hands time when big decisions need making. Use external parties when topics need more focus or understanding.



By taking a few steps to know your position, understand the technology and put in place some guiderails, you’re going to radically improve your chances of staying onside with customers and the public when using frontier technology. It’s a process, not a one-off effort, and like most aims it needs attention to ensure success. But, importantly, it doesn’t need huge outlay on boards or new teams. Keep it light-touch, authentic and transparent.

The whole set of steps can be set up in a few weeks, taking up only a few hours of your time. It won’t guarantee you catch every misuse or mishap, but it will put you back in the driver’s seat for managing this sort of risk. If you’d like a bit more help getting this set up, TechInnocens has got your back. Drop us a mail or get in contact: whether it’s a conversation to get you started or you’d like us to take the task off your hands, we can be there.

Terms and Conditions: Emerging Technology Quick Scan Assessment

This assessment tool and all associated documentation has been prepared by Maior Natu Pty Ltd ACN 649 020 039 as trustee for Sancus Trust ABN 46 507 741 242 trading as TechInnocens (TechInnocens) and is provided to you on the following conditions:


  • this tool and documentation is strictly confidential and is solely for your own use and that of your professional advisers. It must not be provided to any other party without the prior written consent of TechInnocens, which may be withheld in the absolute discretion of TechInnocens;
  • the content in this tool and documentation does not constitute advice (including tax, legal or ethical advice);
  • you should consider the appropriateness of the information contained in this tool and documentation and make your own decisions based on your individual and/or corporate objectives and needs. You should obtain independent legal, financial and/or other professional advice, as appropriate, relevant to your individual and/or corporate needs before making a decision based on this information;
  • you acknowledge that TechInnocens is the owner of the intellectual property owned or used in connection with this tool and documentation, including without limitation: any patent, know-how, trade mark, service mark, copyright, invention, design, trade secret or confidential information, and any other intellectual property or rights whether registered or not used in connection with or forming part of any business of TechInnocens (Intellectual Property);
  • you hereby disclaim any interest (implied or otherwise) that you may have or may be assumed to have in the Intellectual Property;
  • TechInnocens has the right to deal with the Intellectual Property in any way whatsoever, including to assign or licence the Intellectual Property to any third party; and
  • you agree not to make any claim against TechInnocens in relation to the Intellectual Property.


Statements in this tool and documentation are made only as of the date of usage of the tool unless otherwise stated. TechInnocens is not responsible for providing updated information to you. Neither TechInnocens nor its officers make any representation or warranty as to, or take responsibility for, the accuracy, reliability or completeness of the information contained in this tool and documentation. Nothing contained in this tool and documentation, nor any other related information made available to you, is, or shall be relied upon as, a promise, representation, warranty or guarantee, whether as to the past, present or the future.


To the maximum extent permitted by law, TechInnocens and its officers disclaim all liability that may otherwise arise from reliance upon this tool and documentation, or due to any information contained in this tool and documentation being inaccurate, or due to information being omitted from this tool and documentation, whether by way of negligence or otherwise. Neither TechInnocens, its officers nor any other person guarantees the performance of any proposed information referred to in this tool and documentation. You must accept sole responsibility associated with the use and/or reliance of the material in this tool and documentation, irrespective of the purpose for which such use or results are applied.


The copyright in all information contained in this tool and documentation is owned by or licensed to TechInnocens.  Except as expressly permitted, no information may be copied, reproduced, transmitted or re-distributed. All rights reserved.