Embracing AI for your business, ethically
The AI revolution is unfolding at exponential speed and will undoubtedly boost economic growth and reshape the way we work. AI and AI-powered automation are here to stay, and companies have no choice but to embrace these tools or risk falling behind. At the same time, AI raises legitimate ethical and moral concerns. Will humans be replaced by machines? Should AI algorithms be trusted to make important decisions? As an executive, how do you incorporate AI into your business in an ethical, responsible way?
AI is reshaping the way we work. For the better.
Despite the concerns and uncertainties it generates, AI has proven a powerful force, driving productivity gains and technological progress across a wide range of sectors. In business, automation has already transformed the way both elementary and complex tasks are performed. Companies are using AI-powered software to automate significant portions of their operations, eliminating repetitive and tedious tasks and freeing their employees to be more productive and to focus on more strategic work. According to a McKinsey study, AI could lift productivity by 20% over the next decade.
Chatbots – one of the most widely used AI technologies – are capable of handling increasingly complex tasks, from answering basic customer requests to ordering business supplies. While we won’t be having intelligent conversations with chatbots anytime soon, these virtual assistants have the potential to cut business costs by $8 billion by 2022 (Juniper Research). As AI advances, they will let us automate a whole range of business tasks – from extracting intelligence reports to sales and customer service – making us far more productive in the long run.
Concerns about AI and its implications are well-founded.
Put simply, AI can be defined as technology designed to replicate intelligent human behavior. In science fiction, it is often depicted as the rogue robot that turns against humans to destroy them. While we’re far removed from an “I, Robot” scenario, AI is expected to radically affect the future workplace: McKinsey estimates that 5% of occupations could be fully automated, while 30% of activities in 60% of all occupations could be partly automated. The future of work will therefore largely be defined by our ability to work alongside highly evolved machines.
To function optimally, AI requires massive amounts of data, which raises questions of data collection and confidentiality. How do we approach privacy under the new AI paradigm? Will regulations such as the GDPR be sufficient to govern the use of data for AI? How do we regulate the exploitation of mass data without hindering the development of AI? The same questions apply to the use of facial recognition algorithms in surveillance cameras. How do we make the most of these groundbreaking technologies while preserving the customer’s anonymity?
What’s more, machine learning algorithms are increasingly relied upon to provide recommendations and make important decisions – from aiding the hiring process and predicting crime to determining what we see on social media. With many reported incidents of AI bias at some of the biggest tech players – Google, LinkedIn, and Microsoft, to name a few – there is mounting concern over AI’s ability to make fair recommendations. The big question is: how can we, as a society, make the technology fairer and more inclusive?
How can business leaders embrace AI ethically?
One way responsible executives can grapple with the digital shift is by preparing their employees for the new workplace. They should encourage and train their workforce, across all functions, to raise their skills, adapt to new technology, and take on higher-value responsibilities. The workers who succeed under the AI paradigm will be those who have learned to bridge the gap between humans and technology.
While many of the policies regulating AI will have to be decided at the governmental and institutional levels, tech leaders and entrepreneurs will be the ones setting the tone for how AI should be handled. Case in point: Google’s CEO acknowledged the tech giant’s responsibility to set an example, and the company released its highly publicized AI principles, charting a responsible path toward the future of AI for businesses worldwide.
Being at the top can be a lonely place, and every good leader knows to surround themselves with executives who possess a strong sense of ethics and the willingness to question the implications of AI. Leaders at the forefront of AI should not shy away from discussing its disruptive potential as well as its limitations, or from taking a clear stance on how they plan to approach and implement the technology.
Employees should also be included in the conversation, and workers at every level can and should question their company’s involvement with AI. Google’s decision to pull out of Project Maven was a direct result of some 4,000 employees speaking up against the use of AI for military purposes – which goes to show how far a few ‘minority’ voices can carry when united by a common belief.